text: string (lengths 16 to 1.79M)
label: int64 (values 0 to 10)
nov betti numbers of certain sum ideals joydip saha indranath sengupta and gaurab tripathi a bstract in this paper we compute the betti numbers for ideals of the form xy j where x and y are matrices and j is the ideal generated by the minors of the matrix consisting of any two rows of x i ntroduction let k be a field and xij i j n yj j n be indeterminates over k n let r k xij yj denote the polynomial algebra over let x denote an n n matrix such that its entries belong to the ideal h xij i n j n i let y yj be the n column pnmatrix let xy denote the ideal generated by the polynomials gj xji yi j n which are the minors or entries of the n n matrix xy the primality primary decomposition and betti numbers of ideals of the form xy have been studied in and with the help of bases for xy ideals of the form xy j are particularly interesting because they occur in several geometric considerations like linkage and generic residual intersection of polynomial ideals especially in the context of syzygies resolved the ideal xy m n x where x is a generic matrix and y is a generic matrix proved certain properties for the ideals of the form xy x where x is a generic symmetric matrix and y is either generic or generic alternating we say that i and j intersect transversally if i ij suppose that resolves and resolves minimally it is interesting to note that if i and j intersect transversally then the tensor product complex f g mathematics subject classification primary secondary key words and phrases basis betti numbers transversal intersection mapping cone the first author thanks ugc for the senior research fellowship the second author is the corresponding author who is supported by the the research project sponsored by the serb government of india the third author thanks csir for the senior research fellowship joydip saha indranath sengupta and gaurab tripathi resolves j minimally see lemma therefore it is useful to know if two ideals intersect transversally especially when one is trying to compute minimal free resolutions and betti numbers for ideals of the form i j through iterated techniques see n otation m ain t heorem xin e if x is generic and i j let xij xjn if x is generic symmetric and i j let xii xij xin e xij xij xjj xjn and the eij let gij denote the set of all minors of x eij denote the ideal generated gij x our aim in this paper is to prove the following theorem theorem let x xij be either the generic or the generic symmetric matrix of order let i j eij hgi gj i are given by the total betti numbers for x n n n n i i ni for i n and bn n let k let p denote the total betti number for the eij gj gl i such that lk n ideal x k and lt is the smallest in the set n i j for every t they are given by p p for p n k and n eij are in particular the total betti numbers for the ideal xy x p reliminaries determinantal ideals we recall some useful results on determinantal ideals pertaining to our work we refer to for detailed discussions on these theorem let k be a field and let xij i m j n be indeterminates over let a xij be the matrix of indeterminates and im a denotes the ideal generated by the maximal minors of a the set of maximal minors of a is a universal basis for the ideal im a sum ideals proof see the complex we present the relevant portion from the book here let f rf and g rg be free modules of finite rank over the polynomial ring the complex of a map f g or that of a matrix a representing is a complex df df en symf g f symf g f d g f f f here symk g is the symmetric power of g and m homr m r the map dj 
are defined as follows first we define a diagonal map symk g x ui ui u i as the dual of the multiplication map g symk g in the symmetric algebra of next we define an analogous diagonal map f f f x v vi vi i as the dual of the multiplication in the exterior algebra of f theorem the complex is a free resolution of iff grade ig f g where ig denotes the g g minors of the matrix a representing proof see mapping cone we present the relevant portion from the book here let r be the polynomial ring let be a map of complexes of finitely generated the mapping cone of is the complex with differential defined as follows let wi ui with and d ui i for each theorem let m be an ideal minimally generated by the polynomials fr set mi fi i for i thus m mr for each i we have the short exact sequence mi if resolutions of and mi are known then we can construct a resolution of by the mapping cone construction joydip saha indranath sengupta and gaurab tripathi proof see construction in basis and transversal intersection of ideals lemma let hn r be such that with respect to a suitable monomial order on r the leading terms of them are mutually coprime then hn is a regular sequence in proof see lemma in lemma suppose that x is either generic or generic symmetric the eij with respect a suitable set gij is a basis for the ideal x monomial order proof we choose the lexicographic monomial order given by the following ordering among the variables xst if s t s t and yn xst for all s we now apply lemma in for the matrix x t and for k definition let t r be a set of monomials we define supp t i j xij divides m for some m t k yk divides m for some m t if t m then we write supp m instead of supp m lemma let be a monomial ordering on let i and j be ideals in r such that m i and m j denote unique minimal generating sets for their leading ideals lt i and lt j respectively then i j ij if supp m i m j in other words the ideals i and j intersect transversally if the set of variables occurring in the set m i is disjointed from the the set of variables occurring in the set m j proof let f i j we show that f ij now f i j implies that f i and therefore lt f lt i hence mi m i such that mi lt f similarly there exists monomial mj m j such that mj lt f given that mi and mj are of disjoint support we have mi mj lt f and this f proves that lt f lt ij we replace f by f lt now the proof mi mj lt f lt f follows by induction since lt f m i mj sum ideals homological lemmas lemma let i and j be graded ideals in a graded ring r such that i j i j suppose that and are minimal free resolutions of i and j respectively then is a minimal free resolution for the graded ideal i j proof consider the short exact sequence i r and tensor it with over we get the exact sequence j j the terms on the left are since r is a flat r module moreover the kernel of the map from j is i j therefore if and only if i i by the corollary of theorem proved in implies that tori for all i therefore hi tori for all i and j this proves that resolves i j the resolution is minimal since and are minimal lemma let a a be an exact sequence of free modules let be invertible matrices of sizes respectively then q q is also an exact sequence of free modules proof the following diagram is a commutative diagram is free modules and the vertical maps are isomorphisms rao q therefore is exact q r q rao q r a a is exact since corollary let c b a be an exact sequence of free modules let be invertible matrices of sizes respectively then p bp ap joydip saha indranath sengupta and gaurab tripathi is 
also an exact sequence of free modules c b proof consider the sequence if we take and i and apply lemma we get that the sequence bp is exact we further note that the entire p bp a a quence is exact as well since im b bp im and is invertible let us now consider the sequence a we take i and apply lemma to arrive at our conclusion lemma let n be an exact sequence of free r modules let aij denote the i j entry of an suppose that alm for some l and m ali for i m and ajm for j let be the matrix obtained by deleting the row from the matrix obtained by deleting the column from and an the matrix obtained by deleting the row and column from an then the sequence r r an r is exact proof the fact that the latter sequence is a complex is self evident we need to prove its exactness by the previous lemma we may assume that l m for we choose elementary matrices to permute rows and columns and these matrices are always invertible now due to exactness of the first complex we have an this implies that the first umn of which implies that im im therefore the right exactness of is preserved by a similar argument we can prove that the left exactness of is preserved from if x ker a let x denote a tuple with entries n then x ker an there exists y such that y x it follows that y x proving the left exactness of an by a similar argument we can prove the right exactness of an lemma let a be q p matrix over r with aij for some i and j let c be a p s matrix and b a r q matrix over there exist an invertible q q matrix x and an invertible p p matrix y such that sum ideals i xay kj and xay ik that is xay at the i j spot p ii y c kl ckl for k j and y c jl cjl ait ctl p iii bx kl bkl for l i and bx ki bki atj bkt proof i we prove for aij the other case is similar we take y ejk and x eki where ekl denotes the matrix e with ekl ett and eut for u t and u t k l ii and iii are easy to verify lemma let a be q p matrix c be a p s matrix and b a r q matrix over the matrices a b and c satisfy property pij if they satisfy the following conditions aij aik m for k j and akj m for k i bki m for k r cjl m for l the matrices xay bx and y c satisfy property pij if a b c satisfy property pij proof this follows from the above lemma since aik and akj belong to m inimal eij hgi gj i free resolution of x lemma let x be generic or generic symmetric let i eij n i ht x fij ii the complex minimally resolves the ideal x proof i we show that given by fk xik xj xjk xi k n form a regular sequence let us first assume that x is generic we take the lexicographic monomial order induced by the following ordering among the variables e and xin xjn not appearing in x the variables yk are smaller than then lt fk xik xj and hence joydip saha indranath sengupta and gaurab tripathi gcd lt fk lt fl for every k therefore is a regular e n on the other hand sequence by lemma and hence ht x e n by theorem in hence ht x e n ht x if x is generic symmetric then we have to choose the lexicographic monomial order induced by xii xij i xi xin xjj xc ij j xj xjn fij and the variables yp are smaller than and variables xkl not appearing in x xjn fij is n which is the maximum hence the ii the height of x fij complex minimally resolves the ideal x lemma let x be generic or generic symmetric let i j then eij hgi i x eij hgi i that is the ideals x eij and hgi i intersect x transversally proof let x be generic we choose the lexicographic monomial order given by the following ordering among the variables xst if s t s t and yn xst for all s then by lemma eij the set of all minors forms a 
basis for the ideal x eij doesn t involve the inclearly the minimal generating set m lt x determinates xin and yn whereas lt gi xin yn hence the supports of eij and m lt gi are disjoint therefore by lemma we m lt x are done let x be generic symmetric once we choose the correct monomial order the rest of the proof is similar to the generic case suppose that i j n n we choose the lexicographic monomial order given by the following ordering among the variables yn n xnn n xst for all other s suppose that i j n n we choose the lexicographic monomial order given by the following ordering among the variables yn xii xij i xi xin xjj j xj xjn xst for all other s sum ideals eij hgi i gj lemma let x be generic and i j then x eij hgi i xin i if x is generic symmetric and i j then x gj i xii xin p proof x be generic we have xit gj xjt gi xit xjk xik xjt yk fij hgi i gj moreover x eij hgi i hence xin i x xin i and gj xin i the ideal xin i being a eij hgi i gj the prime ideal it follows that xin i x proof for the generic symmetric case is similar r esolution of the sum ideals eij our aim is to construct a minimal free resolution for the ideal x eij and hgi i intersect gn i we have proved that the ideals x eij hgi i can therefore be resolved transversally see the ideal x eij minimally by theorem we have also proved that the ideal x hgi i and the ideal hgj i have linear quotient see therefore the ideal eij hgi gj i can be resolved by the mapping cone construction a x minimal free resolution can then be extracted from this resolution by apeij hgi gj i plying lemma next we will show that the ideal x intersects transversally with the ideal i if is the minimum in the eij set n i j see lemma therefore the ideal x hgi gj i can be resolved minimally by theorem proceeding in this eij gj gl i manner we will be able to show that the ideals x k and i intersect transversally if lk n and is the smallest in the set n i j lk see lemma eij this finally gives us a minimal free resolution for the ideal x hgi gj i with n and lt i j for every let us assume that x is generic and i and j the proofs for the general i and j with i j would be similar according to the aforesaid scheme the proofs in the case when x is generic symmetric would be similar as well comments for general i j and the symmetric case have been made whenever necessary joydip saha indranath sengupta and gaurab tripathi i the minimal free a minimal free resolution for x is given by the complex which is resolution of x the following e x ek n where ek rk and for each k n the map ek is defined as x x x x for every ordered k tuple with n and for every j k a minimal resolution of i is given by r r i and i intersect transversally by lemma therethe ideals x i is given fore by lemma a minimal free resolution for x by the tensor product complex i ek x such that ek is the map defined as j j eik eik j eik i by mapping now we find a minimal free resolution for x cone let ck k we have proved in lemma that x i i which is minimally resolved by the koszul sum ideals complex let us denote the koszul complex by where is the kth differential we first construct the connecting map let n n n us write fk r k and ck rk k the map fk ck is defined as x eik yj j let us choose the lexicographic ordering among the k tuples ik n such that ik n in order to write an ordered basis for r k we define lexicographic ordering among the tuples k j j for j k and k n to order the basis elements for n n n rk moreover in the free module ck rk r k we order n the basis elements in such a way that those for 
rk appear first the matrix representation of with respect to the chosen ordered bases is the following ak n n n n k k n n k k nk nk n n k k theorem the following diagram commutes for every k fo k co k proof it suffices to prove the statement for a basis element of without loss of generality we consider we first compute x x x n x ys es joydip saha indranath sengupta and gaurab tripathi we now compute n x ys es n x x ys es s k ej x ej n x x ys es n x ys es k ej x ej x n x ys es n x ys es k ej x ej x n x ys es k ej k ej x ej x n x ys es x ej sum ideals hence the mapping cone m gives us the resolution for x i as described in however this resolution is not minimal we now construct a minimal free resolution from m i has been constructed in a free resolution for the ideal x which is given by d k dk n n n such that dk ck r rk r k and dk let us recall that the map is the differential in the i the map is the differential in the koszul free resolution for x resolution for i and is the connecting homomorphism between the complexes defined in let us order bases for and ck with respect to the lexicographic ordering finally we order basis for dk in such a way that the basis elements for appear first followed by the basis elements for ck therefore the matrix representation for the differential map dk is given by a the entries in the matrices representing and can only belong to the maximal ideal hxij yj i since both are differentials of minimal free resolutions the block matrix a has also elements in the maximal ideal hxij yj i the only block which has elements outside the maximal ideal hxij yj i is in the identity block appearing in therefore it is clear from the matrix representation of the map dk that we can apply lemma repeatedly to get rid of hence we get a minimal free resolution and the joydip saha indranath sengupta and gaurab tripathi i are total betti numbers for the ideal x n n n n n n n n k k k k n n for k n k k k bn n eij gn i a minimal free resolution for x lemma let gk gk k n where is the defined in the list of notations in section the set of all minors of x gk i with respect set gk is a basis for the ideal x to a suitable monomial order proof we take the lexicographic monomial ordering in r induced by the following ordering among the indeterminates xnn xtt yn xst for other s then we observe that for every s lt gs is coprime with lt gt for every t k t s and also coprime with lt h for every h therefore moreover by lemma is a basis for x we only have test the s s h and s h for h pn we can write s yk and note that lt lt s for every k hence s we note that if i then the leading terms of and are mutually coprime and therefore p s next the expression s ys shows that s similarly if s sum ideals then the leading terms of and are mutually coprime and therefore s the proof for s is similar to that of s remark the corresponding result for i j in general would be the following lemma let gi j k gij gi gj k n n and lt is the smallest in the set n eij defined i j gij denotes the set of all minors of x in the list of notations in section the set gi j k is a basis for the eij gk i with respect to a suitable monomial order ideal x proof while proving this statement with i j arbitrary we have to choose the following monomial orders the rest of the proof remains similar suppose that x is generic we choose the lexicographic monomial ordering in r induced by the following ordering among the indeterminates xnn xc cii jj x yn xin xjn xst for all other s if x is generic symmetric we choose the lexicographic monomial 
ordering in r induced by the following ordering among the indeterminates xnn xc cii yn jj x xii xij i xi xin xjj j xj xjn xst for all other s gk i and intersect transverlemma the ideals x sally for every k n gk i such proof suppose not then there exists x e that gk i let us choose the same monomial order on r as defined in lemma upon division by elements of gk we may further assume that lt h lt for every h gk since gk gk i by lemma on is a basis for the ideal x gk i and therefore lt h the other hand x lt for some h gk since lt h and lt are mutually coprime a contradiction joydip saha indranath sengupta and gaurab tripathi remark the corresponding result for i j in general would be the eij hgi gj gl i and hgl i intersect following the ideals x k transversally if lk n and is the smallest in the set n i j lk for every k n the proof is essentially the same as above after we use the lemma proof of theorem part of the theorem has been proved in we now prove part under the assumption i j let the minimal gk i be by lemma free resolution of x is and lemma the minimal free resolution of x given by the tensor product of and r r and that is precisely with kp lp and p let p p n k denote the total betti number for the ideal gk i then the total betti numbers p p x are given by n k for the ideal x p p for p n k and n the proof for general i j follows similarly according to the strategy discussed in the beginning of section eij gn i in particular the total betti number p for the ideal x are given by p p for p and n example we show the betti numbers at each stage for n and n r eferences bruns kustin miller the resolution of the generic residual intersection of a complete intersection journal of algebra conca emanuela de negri elisa gorla universal bases for maximal minors international mathematics research notices sum ideals eisenbud geometry of syzygies ny gimenez sengupta and srinivasan minimal graded free resolution for monomial curves defined by arithmetic sequences journal of algebra johnson on equations defining veronese rings arch math basel on the vanishing of tor in regular local rings illinois matsumura commutative ring theory cambridge university press ny peeva graded syzygies london limited saha sengupta tripathi ideals of the form xy saha sengupta tripathi primality of certain determinanatal ideals department of mathematics rkm vivekananda university belur math howrah india address discipline of mathematics iit gandhinagar palaj gandhinagar gujarat india address indranathsg department of mathematics jadavpur university kolkata wb india address gelatinx
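The arguments above repeatedly reduce questions about ideals of the form (XY) + J to Groebner basis computations with respect to carefully chosen lexicographic orders, together with the criterion that two ideals intersect transversally when the minimal generators of their leading-term ideals have disjoint supports. The following SymPy sketch only illustrates this kind of computation in the smallest case, a generic 2x2 matrix X and a 2x1 column y; the variable names, the chosen lex order and the use of SymPy are assumptions of the example and are not taken from the paper.

from sympy import symbols, groebner

# Toy case (illustration only): X is a generic 2x2 matrix of indeterminates
# and y is a 2x1 column of indeterminates.
x11, x12, x21, x22, y1, y2 = symbols('x11 x12 x21 x22 y1 y2')

# Entries of the product X*y, i.e. the generators of the ideal (Xy).
g1 = x11*y1 + x12*y2
g2 = x21*y1 + x22*y2

# The single 2x2 minor of X; for n = 2 this generates the ideal J of minors
# of the two rows of X.
minor = x11*x22 - x12*x21

# Lex Groebner basis of (Xy) + J; listing the symbols in this order fixes the
# lex order x11 > x12 > x21 > x22 > y1 > y2 for the computation.
G = groebner([g1, g2, minor], x11, x12, x21, x22, y1, y2, order='lex')
print(G)

Inspecting the leading terms of such a basis is how one checks, in small cases, whether the leading-term generators of two ideals involve disjoint sets of variables, that is, whether the two ideals intersect transversally in the sense used above.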
label: 0
nov superrigidity of actions on finite rank median spaces elia fioravanti abstract finite rank median spaces are a simultaneous generalisation of finite dimensional cat cube complexes and real trees if is an irreducible lattice in a product of rank one simple lie groups we show that every action of on a complete finite rank median space has a global fixed point this is in sharp contrast with the behaviour of actions on infinite rank median spaces the fixed point property is obtained as corollary to a superrigidity result the latter holds for irreducible lattices in arbitrary products of compactly generated groups we exploit roller compactifications of median spaces these were introduced in and generalise a construction in the case of cube complexes we provide a reduced class that detects group actions with a finite orbit in the roller compactification even for cat cube complexes only second bounded cohomology classes were known with this property due to as a corollary we observe that in gromov s density model random groups at low density do not have shalom s property hf d contents introduction preliminaries median spaces and median algebras bridges the haagerup class haagerup class and elementarity of actions the main statement elementarity and shalom s property hf d superrigidity the superrigidity result homomorphisms to coarse median groups appendix a structure of ubs s references elia fioravanti introduction a metric space x is median if for any three points x there exists a unique point m x such that d xi m d m xj d xi xj for all i j simple examples are provided by real trees and rn with the metric under a finite dimensionality assumption to each connected median space b the spaces x and x b are bix corresponds a canonical cat space b lipschitz equivalent and isometries of x induce isometries of x for instance to rn with the metric we associate rn with its euclidean distance more elaborate examples of median spaces are provided by simply connected cube complexes satisfying gromov s link condition in this case we b obtain a median space x by endowing each cube with the metric and x is the corresponding cat cube complex median spaces generally display wilder features than cube complexes as like real trees they can be essentially objects note in this regard that the class of median spaces is closed under ultralimits these also preserve a notion of dimension usually called rank see section for a precise definition despite finite rank median spaces retain many of the good combinatorial properties of cube complexes in addition to the cat metric there is a notion of boundary compatible with the median property and many groups of isometries contain free subgroups we would expect many known results for cat cube complexes to extend to finite rank median spaces without significant complications for instance to name a few there is however a notable exception to this pattern general group actions on median spaces do not have a clear connection with the existence of codimension one subgroups such close similarities between cube complexes and general median spaces can be ascribed to the existence of a collection w of walls these encode the geometry of the space in the same way as hyperplanes do in cat cube complexes the set w should not be thought of as discrete and needs to be endowed with a measure that encodes the thickness of sets of walls indeed the concept of median space is in a certain sense dual to the notion of space with measured walls these extend spaces with walls which are the dual 
viewpoint on cat cube complexes our main theorem is a superrigidity result for irreducible lattices in products g g of locally compact compactly generated groups namely under weak assumptions of every action of on a finite rank median space x essentially arises from continuous actions gi y yi on median spaces of lower rank for cube complexes this was known due to superrigidity of actions on finite rank median spaces in the more general context of cat spaces similar results were obtained long ago in unfortunately applying these to a median space x only provides actions of the factors gi on cat subspaces b these subspaces might bear no relation to the median structure on zi x x this might seem like an irrelevant subtlety but it is on the contrary key to the fixed point properties that this paper provides as an illustration of this consider an irreducible lattice it acts properly and cocompactly on the cat space in particular it is to a product of surface groups which can easily be recognised as cubulated it was shown in that acts properly and coboundedly on an infinite rank median space the group is moreover coarse median in the sense of thus it should appear particularly striking that every action of on a connected finite rank median space fixes a point this follows from our superrigidity result see corollary d below the proof of our superrigidity theorem follows a very similar outline to monod s this is mostly hidden in our application of theorem from thus we believe it is important to highlight the analogy here we have already mentioned that to each finite rank median space x one b however one can also can associate a finite dimensional cat space consider the infinite dimensional cat spaces w and h here h is a close relative of w and will be introduced below retracing the proof of monod superrigidity in our context we would first induce a continuous action of g on the infinite dimensional cat space b see for a definition then we would prove that a x b splits as a product zk where the action subspace of x of g on the factor only depends on the projection to gi finally we b the information gained about y x b would carry back to y x our application of shalom s machinery instead constructs a continuous action of g on the infinite dimensional cat space h again one then proves a splitting theorem for this action and carries the gained insight back to y h our main contribution lies in transferring information back and forth between the actions y h and y x indeed shalom s machinery can only be set in motion once we have a nonvanishing reduced cohomology class for y h similarly once we have a superrigidity statement for y h we need to translate it back to y x we now describe our results in greater detail a cohomological characterisation of elementary actions each median space x has a distinguished collection h of subsets called halfspaces every wall gives rise to two halfspaces and each halfspace arises from a wall the collection h is equipped with a measure see in the case of cube complexes one recovers the usual notion of halfspace and is simply the counting measure elia fioravanti given a topological group g and an isometric action g y x one naturally obtains a unitary representation g u h and a cocycle b g h which will be referred to as the haagerup cocycle this construction is and appears for instance in if g y x has continuous orbits and b are continuous thus b induces a reduced continuous cohomology class b g in we introduced a notion of elementarity for actions on median spaces namely we 
say that g y x is roller elementary if g has at least one finite orbit within a certain compactification x the roller compactification roller elementarity implies the existence of a finite orbit in the visual b compactification of the cat space x if g y x is an isometric action with continuous orbits roller elementarity can be described in terms of the reduced cohomology class b theorem a let x be a complete finite rank median space the haagerup class b g vanishes if and only if g y x is roller elementary theorem a extends various known results in the case of simplicial trees it appears in for cat cube complexes the implication roller nonelementary b is implicit in in the authors construct a family of bounded cohomology classes detecting roller elementarity in cat cube complexes we remark that theorem a equally holds if we replace h with any lp h p although it is slightly simpler to exploit the richer structure of hilbert spaces in its proof our superrigidity result only relies on the implication of theorem a that yields b but we believe the full statement of theorem a to be of independent interest the proof of the other implication turns out to be quite technical and requires a careful study of the structure of ubs s in median spaces these are a generalisation of the simplices in hagen s simplicial boundary of a cat cube complex most of these details will be relegated to the appendix superrigidity of actions once we have a nontrivial reduced cohomology class as provided by theorem a we can apply machinery namely theorem in to obtain superrigidity results let x be a complete finite rank median space its roller compactification x is partitioned into components the subset x x forms a full component every other component is a complete median space of strictly lower rank this aspect of x shares some similarities with refined boundaries of cat spaces given a component z x a median subalgebra of z is a subset y z that is itself a median space with the restriction of the median metric of z equivalently the median map m z z takes y into y we are now ready to state our main superrigidity result superrigidity of actions on finite rank median spaces theorem b let x be a complete finite rank median space let be a uniform irreducible lattice in a product g g of compactly generated locally compact groups with suppose y x is a roller nonelementary action there exist a finite index subgroup a invariant component z x and a closed median subalgebra y z where the action y y extends to a continuous action y y for some open finite index subgroup remarks theorem b also applies to nonuniform lattices as long as they are this is a technical condition that implies finite generation and ensures that theorem in still holds irreducible lattices in g are if each gi is the group of ki points of a semisimple almost ki simple ki linear algebraic group defined over some local field ki further examples of nonuniform lattices include minimal groups over sufficiently large finite ground fields these can be regarded as irreducible lattices in the product of the closed automorphism groups of the associated buildings theorem b should be compared to shalom s superrigidity result for actions on simplicial trees theorem in if x is a simplicial tree y is always a subcomplex of x and the complications in the statement of theorem b reflect phenomena that do not happen in the world of trees however as soon as we leave the context of rank one median spaces our result is optimal even if one restricts to cat square complexes see examples 
and we remark that when x is a general cat square complex the median algebra y might not be a subcomplex of x or z we can take g and z x as long as x does not split as a nontrivial product of median spaces and g has no finite b see theorem below orbits in the visual compactification of x however even in this case the action in general extends only to a proper median subalgebra of x for cat cube complexes the superrigidity result of is slightly more general than theorem b as it applies to all nonuniform lattices this is due to the use of bounded cohomology namely theorem in rather than reduced cohomology our strategy of proof was hinted at on page of fixed point properties for irreducible lattices unlike automorphism groups of cat cube complexes the isometry group of a median space needs not be totally disconnected still it is possible to exploit theorem b to derive a fixed point property for irreducible lattices in connected groups elia fioravanti given a locally compact topological group q we denote the connected component of the identity by we say that q satisfies condition if is amenable or has shalom s property hf d see section for a definition in particular all and t groups satisfy condition theorem let x be a complete finite rank median space let be a irreducible lattice in a product g with suppose that every gi is compactly generated and satisfies condition every action y x is roller elementary if does not virtually map onto z every action y x has a finite orbit within x if moreover x is connected every action y x has a global fixed point when x is a real tree theorem c also follows from theorem in we remark that every group that virtually maps onto z admits a roller elementary action on rn with unbounded orbits for some n corollary let x be a complete connected finite rank median space let be an irreducible lattice in a connected higher rank semisimple lie group every action y x fixes a point an analogous result for cat cube complexes was proved in if each simple factor of g has rank at least two then has property t and corollary d follows from theorem in the assumption that x have finite rank is essential for corollary d to hold if at least one simple factor gi g is locally isomorphic to o n or u n n the lattice admits an action on an infinite rank median space with unbounded orbits moreover if all gi s are locally isomorphic to o ni ni then even admits a proper and cobounded action on an infinite rank median space homomorphisms to coarse median groups coarse median spaces were introduced in as an attempt to formulate a coarse notion of nonpositive curvature they have recently received a lot of attention and proved instrumental to striking results such as a group is said to be coarse median if its cayley graphs are coarse median spaces examples of finite rank coarse median groups include hyperbolic groups cubulated groups fundamental groups of closed irreducible not modelled on nil or sol mapping class groups and more generally all groups that are hhs we will be mainly interested in equivariantly coarse median groups if we view coarse median groups as a generalisation of groups that are hhs equivariantly coarse median groups generalise hierarchically hyperbolic groups hhg in particular hyperbolic groups cubulated groups and mapping class groups also are equivariantly coarse median of finite rank superrigidity of actions on finite rank median spaces more precisely we say that a group h is equivariantly coarse median if it is equipped with a finite generating set s h and a coarse 
median h h such that ds ha hb hc a b c c for all elements h a b c h here ds denotes the word metric induced by note that this definition does not depend on the choice of equivariantly coarse median groups have already been considered in under a different name if h is a coarse median group of finite rank every asymptotic cone of h is endowed with a equivalent median metric as an example in asymptotic cones of mapping class groups the median geodesics are limits of hierarchy paths when h is equivariantly coarse median the median metric on each asymptotic cone is preserved by the action of the ultrapower of given a group and an infinite sequence of pairwise homomorphisms h we can apply the construction the result is an isometric action on a median space x with unbounded orbits this is obtained as the canonical median space equivalent to an asymptotic cone of along with theorem c this implies the following corollary let h be an equivariantly coarse median group of finite rank let be as in the second part of theorem there exist only finitely many pairwise homomorphisms corollary e applies in particular to the case when is an irreducible lattice in a connected higher rank semisimple lie group when has property t this result already appears in and if in addition h is hhg a much stronger statement is provided by corollary d of note that haettel s method can not be applied in our context as lattices in products of rank one groups do admit nonelementary actions on hyperbolic spaces also compare theorem in for the case of hyperbolic h and an arbitrary product g of locally compact second countable groups we remark that a stronger conclusion can be reached if h acts freely on a complete finite rank median space indeed the following is an immediate consequence of theorem f in proposition let h be a group admitting a free action on a complete finite rank median space x suppose that every action y x is roller elementary then every homomorphism h factors through a virtually abelian subgroup of proposition f applies for instance to the case when has no free subgroups has property hf d or satisfies the hypotheses of the first part of theorem in particular if is an irreducible lattice in a connected higher rank semisimple lie group every homomorphism h has finite image this should motivate a certain interest in groups acting freely on complete finite rank median spaces if a group acts freely on a finite dimensional elia fioravanti cat cube complex it clearly falls into this class however it is unclear at this stage whether these are the only finitely generated examples see for partial results in this direction note that the infinitely generated group q even admits a proper action on a rank two median space which splits as a product of a simplicial tree and the real line see example in however since q is a divisible group all its elements must act elliptically on any possibly cat cube complex even within finitely generated groups actions on median spaces tend to be more flexible than actions on cat cube complexes for every group h we can consider dimf m h the minimum rank of a complete median space x admitting a free action of h if h does not act freely on any complete median space we set dimf m h restricting to cat cube complexes we can similarly define dimf c h and if we only consider metrically proper actions we obtain dimpm h and dimpc thus dimpc q dimf c q while dimpm q and dimf m q we remark that dimf m h dimcm h and dimpm h dimpm h for many finitely generated groups for instance dimf c h if and only if 
h is free on the other hand by work of rips dimf m h if and only if h is a free product of free abelian and surface groups excluding a few nonorientable surfaces see theorem in one can use the same observation to construct free actions of various raags on median spaces of rank strictly lower than the dimension of the salvetti complex considering more general actions we mention that there exist finitely generated groups admitting actions on real trees with unbounded orbits but whose actions on simplicial trees and in fact even finite dimensional cat cube complexes must have a global fixed point shalom s property hf d and random groups theorem a also allows us to prove that various groups do not have property hf d the latter was introduced in a topological group g has property hf d if every unitary representation with g has a finite dimensional subrepresentation property hf d is trivially satisfied by every locally compact group with property t but also at the opposite end of the universe of groups by a large class of amenable groups this includes polycyclic groups lamplighter groups and all connected locally compact amenable groups an example of an amenable group without hf d is provided by the wreath product z o z we prove the following see proposition below for a more general result corollary let be a discrete group with property hf d if acts freely and cocompactly on a cat cube complex x then is virtually abelian property hf d has been studied almost exclusively within the class of amenable groups where it happens to be a invariant it was a key ingredient implicitely or explicitely in recent more elementary superrigidity of actions on finite rank median spaces proofs of gromov s theorem on groups of polynomial growth it has moreover interesting applications to the study of embeddings into hilbert spaces property hf d is inherited by uniform lattices and it is stable under direct products and central extensions being satisfied by groups that fall into two extremely different classes namely amenable and kazhdan groups it is reasonable to expect a wide variety of groups with property hf d however it seems that no answer is known to the following question question does every finitely generated group with property hf d virtually split as a direct product of an amenable group and finitely many groups with property t does every word hyperbolic group with property hf d also satisfy property t corollary g and the results of imply that random groups at low density do not satisfy hf d corollary with overwhelming probability random groups at density d in gromov s density model do not have property hf d note however that at density d random groups are kazhdan hence satisfy property hf d acknowledgements the author warmly thanks brian bowditch caprace indira chatterji yves cornulier thomas delzant mark hagen masato mimura narutaka ozawa romain tessera pierre pansu alain valette for helpful conversations the author expresses special gratitude to cornelia and talia for contributing many of the ideas of this paper this work was undertaken at the mathematical sciences research institute in berkeley during the fall program in geometric group theory where the author was supported by the national science foundation under grant no and by the gear network part of this work was also carried out at the isaac newton institute for mathematical sciences cambridge during the programme curvature group actions and cohomology and was supported by epsrc grant no the author was also supported by the clarendon fund and 
the merton moussouris scholarship preliminaries median spaces and median algebras let x be a metric space given points x y x the interval i x y is the set of points z x that lie between x and y that satisfy d x y d x z z y we say that x is a median space if for all x y z x there exists a unique point m x y z that lies in i x y y z z x the median map m x x that we obtain this way endows x with a structure of median algebra most definitions in the theory of median spaces can also be given for arbitrary median algebras elia fioravanti we will follow this approach in introducing the necessary notions the reader can consult for more background on median spaces and algebras in a median space i x y z i x y z m x y z this can be taken as a definition of intervals in general median algebras if m m is a median algebra we say that a subset c m is convex if i x y c whenever x y the intersection of a finite family of pairwise intersecting convex sets is always nonempty this is known as helly s theorem see theorem in a subset h m is a halfspace if both h and m h are convex we will denote the set of halfspaces of m by h m or simply by h when there is no ambiguity halfspaces h k are said to be transverse if no two distinct elements of the set h k are comparable in the poset h equivalently the intersections h k h k are all nonempty given a h we write for h h a a subset h is said to be an ultrafilter if any two halfspaces in intersect and h t for instance for each x m the set h h x h is an ultrafilter given subsets a b m we write h h h b h a and h we refer to sets of the form h x y m as halfspace intervals if c c m are disjoint and convex the set h is nonempty see theorem in in particular if and only if the points x y m coincide a subset h is inseparable if whenever j h satisfies h j k for h k we have j given a subset a h its inseparable closure is the smallest inseparable subset of h that contains a it coincides with the union of the sets h for h k a a wall is a set of the form w h with h h we say that h and h are the sides of the wall w separates subsets a b m if either h or lies in h we denote by w w the set of walls separating a and b and by w m or simply w the set of all walls of the median algebra m a wall is contained in a halfspace k if one of its sides is a wall w is contained in disjoint halfspaces if and only if and w if a side of the wall is transverse to a side of the wall we say that and are transverse the rank of the median algebra m is the maximum cardinality of a set of pairwise transverse walls various alternative and equivalent definitions of the rank can be found in proposition of we remark that m has rank zero if and only if it consists of a single point if x is a median space of finite rank r the topological dimension of every locally compact subset of x is bounded above by r see theorem and lemma in if moreover x is complete and connected x is b the visual equivalent to a canonical cat space x b is finite dimensional by proposition in boundary of x superrigidity of actions on finite rank median spaces b yielding a homomorevery isometry of x extends to an isometry of x b every convex subset of x is also convex in x b phism isom x isom the converse is not true the euclidean convex hull of the points and in is not even a median subalgebra halfspaces in finite rank median spaces are fairly proposition let x be a complete median space of finite rank every halfspace is either open or closed possibly both moreover if hk is a chain of halfspaces with hk we have k the following is a simple but 
extremely useful observation given ultrafilters h m and h k we either have h k or k h or h and k are transverse along with dilworth s theorem this yields the following lemma let m be a median algebra of finite rank r and let h be ultrafilters we can decompose t t ck where k r and each ci is nonempty and totally ordered by inclusion every infinite subset of contains an infinite subset that is totally ordered by inclusion if c m is a subset and x m a gate for x c is a point y c such that y i x z for every z c gates are unique when they exist if a gate exists for every point of m we say that c is in this case we can define a m c by associating to each point of m the unique gate the to c is always a morphism of median algebras and satisfies w w x for every x m subsets are always convex but the converse is not always true for every y z m the interval i y z is with gateprojection x m x y z a proof of the following statements can be found in proposition let c c m be the sets h h m h h h c and h c are all naturally in bijection there exists a pair of gates a pair x of points x c and c such that x and x in particular we have h h the set c is with moreover if c c we have c c c and in particular if c c we have a median algebra m m endowed with a hausdorff topology is said to be a topological median algebra if the median map m m m is continuous here we equip m with the product topology median spaces always provide elia fioravanti topological median algebras indeed the median map m is in that case in compact median algebras and complete median spaces a subset is gateconvex if and only if it is closed and convex moreover are continuous in median spaces are even let x be a complete finite rank median space in we endowed b and a measure usually denoted just the set h with a b b as measurable sets unlike here we simply refer to the elements of b the map h h sending each halfspace to its complement is measure preserving every inseparable subset of h is measurable in particular all ultrafilters are measurable and for all x y x we have d x y almost every halfspace h h is thick both h and have nonemtpy b and differ from their counterparts in interior note that in general b proposition let x be a complete finite rank median space and h an ultrafilter such that for some x x there exists y x such that thus x can be equivalently described as the collection of all ultrafilters on h that satisfy for some x x we identify ultrafilters whose symmetric difference is considering the space of all ultrafilters on h we obtain a set x in which x embeds a structure of median algebra can be defined on x by setting m we endow x with a topology such that ultrafilters h converge to h if and only if lim sup is we refer to x as the roller compactification of x proposition the roller compactification x is a compact topological median algebra the inclusion x x is a continuous morphism of median algebras with dense convex image in general x is not open in x and the inclusion x x is not a homeomorphism onto its image the roller boundary is defined as x x a point of x can in general be represented by several distinct ultrafilters with null symmetric differences however for each x there is a unique preferred ultrafilter representing this should be seen as a generalisation of the ultrafilters when x x we can extend each halfspace h of x to a halfspace e h of x such that e e h x h indeed it suffices to define h x h when x we save the notation h for the set h x instead of the analogous subset of h x if y x is a closed median subalgebra the 
restriction of the metric of x turns y into a complete median space with rank y rank x moreover superrigidity of actions on finite rank median spaces lemma there is a canonical morphism of median algebras y x proof we write hy h h x h y y intersecting with y gives a map p hy h y lemma in implies that p is surjective thus for every ultrafilter h y there is a unique ultrafilter h x such that and p hy applying this to canonical ultrafilters yields the required embedding given ultrafilters h we set d we refer to d as the extended metric on x as it satisfies all the axioms of a metric even though the value is allowed note that for points of x this is the same as the original median metric on x a component z x is a maximal set of points having finite pairwise distances components are convex subsets of x one component always coincides with x x all other components are contained in the following appears in proposition let x be a complete median space of finite rank let z be a component and let d denote the extended metric on x the metric space z d is a complete median space of rank at most r every thick halfspace of z is of the form e h z for a unique h h if c x is closed and convex the closure of c in x is and naturally identified with the roller compactification of c thus the notation c is not ambiguous we denote by x c the corresponding gateprojection it extends the usual x if h x is an ultrafilter representing the point x the set h c is an ultrafilter on h c and represents similarly if z is a component the closure of z in x is gateconvex and naturally identified with the roller compactification z the x z satisfies x z in terms of ultrafilters takes the point of x represented by h x to the point of z represented by h z h z the intersection makes sense as by part of proposition almost every halfspace of z arises from a halfspace of x let be a group an isometric action y x is said to be roller elementary if there exists a finite orbit within x the action is roller minimal if rank x and does not preserve any proper closed convex subset c roller elementarity implies but is in general much stronger than the b when x is existence of a finite orbit in the visual compactification of a cat cube complex an action is roller minimal if and only if in the terminology of it is essential and does not fix any point in the visual b boundary of x neither roller elementarity nor roller minimality implies the other one however roller minimal actions naturally arise from roller nonelementary ones elia fioravanti proposition let x be a complete finite rank median space with an isometric action y x either y x fixes a point or there exist a component z x and a closed convex subset c z such that y c is roller minimal a closed convex subset c z x always gives rise to a here we introduced the measurable decomposition h hc t sets hc h h c e h c e and h h c e h note that by part of proposition the measure spaces hc and h c are isomorphic we say that an action y x is without wall inversions if there do not exist g and h h such that gh by proposition any action of a connected complete finite rank median space is without wall inversions the following was proved in compare with proposition let x be a complete finite rank median space with thick halfspaces h let y x be a roller minimal action without wall inversions there exists g such that h and d there exists g such that gk h k and d gk when g is a topological group all isometric actions g y x will be implicitely required to have continuous orbit maps equivalently the 
homomorphism g isom x is continuous where we endow isom x with the topology of pointwise convergence we remark that isom x is a hausdorff sequentially complete topological group as soon as x is complete if and are median algebras the product median algebra is defined as m where m here pi denotes the projection onto the factor if and are median spaces we endow the product with the metric namely d the median algebra associated to the median space d is just the product median algebra arising from and a median space x is said to be irreducible if it is not isometric to any nontrivial product proposition let xk be irreducible complete finite rank median spaces consider the product x xk we have a measurable partition h x t t hk where each hi is canonically identified with h xi if h hi and k hj with i j the halfspaces h and k are transverse every isometry of x permutes the members of the partition the product isom isom xk sits inside isom x as an open finite index subgroup every closed convex subset c x is of the form ck where each ci is a closed convex subset of xi the roller compactification x is naturally identified with the product median algebra xk with the product topology superrigidity of actions on finite rank median spaces proof for part and the first half of part see corollary and proposition in we conclude the proof of part by showing that isom isom xk is open in isom x choose points xi yi xi for each i and a real number such that d xi yi for all i denote by pi x xi the projection onto the factor let x x be the point with coordinates xi we also consider the points zi x such that pj zi xj for all i j and pi zi yi suppose that f isom x is such that d f x x and d f zi zi for all i we claim that f hi hi for all i this implies that the product of the isometry groups of the factors is open in isom x suppose for the sake of contradiction that for indices i j we have f hi hj this induces an isometry f xi xj such that pj f f pi see in particular we have d xj f xi d x f x and d xj f yi d zi f zi thus d xi yi d f xi f yi a contradiction irreducibility of the factors plays no role in part so it suffices to consider the case k let and be the projections of c to and if x and y there exist u and v such that the points x x v and y u y lie in it is immediate to observe that the point x y lies in i x y finally part is lemma in halfspaces hn form a facing n if they are pairwise disjoint we say that h k h are strongly separated if h k and no j h is transverse to both h and see for the following proposition let x be an irreducible complete finite rank median space let h be a thick halfspace if x admits a roller minimal action without wall inversions there exist thick halfspaces h such that and are strongly separated if x admits a roller nonelementary roller minimal action without walls inversions h is part of a facing of thick halfspaces for every n every complete finite rank median space can be isometrically embedded into its barycentric subdivision x this is a complete median space of the same rank see when x is the of a cat cube complex with the metric the space x is given by the of the customary barycentric subdivision we have a natural homomorphism isom x isom x given any isometric action y x the induced action y x is without wall inversions we write h instead of h x there is an inclusion preserving map p h h it is surjective isom x and its fibres have cardinality at most two we have h if and only if h is an atom for in this case we refer to each element of h as a hemiatom see for the following lemma 
elia fioravanti lemma let x be a complete finite rank median space an action y x is roller elementary if and only if the induced action y x is the sets and inherit a median algebra structure from the median space r in particular we can consider the product median algebras k and k for every k for every point x x x b there exists a canonical subset c x x it is isomorphic k to for some k via an isomorphism that takes x to the b centre the intersection c x c x x is in x k and corresponds to the subset k for x x we set b c x c x x see for more details lemma let x be a complete finite rank median space every infinite convex subset c x intersects x proof let x c be a point minimising r rank c x if r we have b x c x otherwise there exists a point y c that does not lie in c x b b the z of y to c x lies in c x x in particular c z is contained in a face of c x and has strictly lower rank a contradiction since z i x y in particular we can obtain the following extension of lemma lemma if y x is roller nonelementary and roller minimal so is the action y x proof suppose for the sake of contradiction that c x is a nonempty closed convex subset by corollary in there exists a component w x with w c note that c w is unbounded by corollary in since y x is roller nonelementary by lemma the component w is the barycentric subdivision of a component z x and lemma implies that c z since y x is roller minimal we must have z x and c x x hence x c by part of proposition in we conclude that c x let us now fix a basepoint x x where x is a complete finite rank median space the following discussion is independent of our choice of x a diverging chain of halfspaces is a sequence hn such that d x hn and hn for each n we use the same terminology for the set hn n given a ubs for is an inseparable subset containing a diverging chain of halfspaces given ubs s we say that is almost contained in if the halfspaces in lie at uniformly bounded distance from x this is denoted by if and the ubs s are equivalent and we write we denote the equivalence class of h by and the set of all equivalence classes of ubs s for by u the relation descends to a partial order on u a ubs is said to be minimal if superrigidity of actions on finite rank median spaces is a minimal element of u a minimal ubs is equivalent to the inseparable closure of any diverging chain that it contains we define a directed graph g as follows the vertex set of g is identified with the set of minimal elements of u given diverging chains hm and kn in and respectively we draw an oriented edge from to if almost every hm is transverse to almost every kn but not vice versa this is independent of the choices involved a subset a g is said to be inseparable if every directed path between vertices in a only crosses vertices in a the following can be found in proposition let x be a complete median space of finite rank r and the graph g has at most r vertices and contains no directed cycles the poset u is isomorphic to the poset of inseparable subsets of g ordered by inclusion the isomorphism maps u to the set of equivalence classes of minimal ubs s almost contained in in particular the set u is finite given a ubs and a set of representatives of all equivalence classes of minimal ubs s almost contained in we have sup d x h if is a minimal ubs we denote by the subset of halfspaces k that are not transverse to any diverging chain of halfspaces in we say that is reduced if we say that is strongly reduced if we can write t t ck for some k where each ci is totally ordered by inclusion and 
contains a diverging chain of halfspaces consider the median spaces in figures and both are subsets of with the restriction of the metric in both cases the ubs is minimal figure shows that can be reduced but not strongly reduced in figure the ubs is strongly reduced and exhibits how the decomposition t t ck can require k lemma let x be a complete finite rank median space consider x x and let be a minimal ubs the subset is a reduced ubs equivalent to there exists a strongly reduced ubs contained in all its s are strongly reduced if is strongly reduced it is reduced proof if h k lie in and k is transverse to a diverging chain of halfspaces then h is transverse to an infinite subchain this implies that is inseparable moreover can not contain a diverging chain or would contain two inequivalent ubs s this proves part to obtain part we decompose t t ck as in lemma let a be the union of the sets ci that do not contain a diverging chain elia fioravanti figure figure superrigidity of actions on finite rank median spaces there exists d such that d x h d for every h a the set h d x h d is a strongly reduced ubs and so are its s regarding part decompose t t ck where each ci is totally ordered by inclusion and contains a diverging chain if there existed k we would have k ci for some i in particular k would not be transverse to a diverging chain in ci since is minimal k would not be transverse to any diverging chain in a contradiction given we denote by x the group of isometries that fix let be the kernel of the action of x on u it is a subgroup of x if is a ubs we denote by the subgroup of x that fixes we can define r by the a mula g g g this is a homomorphism and only depends on the equivalence class if are a set of representatives for g we can also consider the full transfer homomorphism rk proposition let x be a complete finite rank median space consider the subgroup is open in x and the full transfer homomorphism rk is continuous every finitely generated subgroup of ker has a finite orbit in x if x is connected every finitely generated subgroup of ker fixes a point before proving the proposition we will need to obtain the following lemma note that for every point and every halfspace h the set h k k h is a ubs lemma for every thick halfspace h and every there exists a neighbourhood u of the identity in x such that h h and h for all g u proof pick a point x x with d x h in a neighbourhood of the identity of x we have x if k h h and y is the of x to k we have y by part of proposition since k and x thus d y g y d k we conclude that for every k h with d k there exists a neighbourhood vk of the identity in x such that k h for all g vk decompose h t t ck as in lemma let ki be the union of all k ci with d k if nonempty it is a halfspace and d ki elements of h h not contained in any k form a subset of measure at i p most h let v be the intersection of the vki for ki if g v the set h h has measure at most and consists of halfspaces at uniformly bounded distance from x x it now suffices to consider u v v proof of proposition we only need to prove that is open and that is continuous the rest of the statement is contained in theorem f of elia fioravanti for every v g there exists hv such that the vertices w g with w h are precisely v and those that are at the other end of an incoming edge at indeed given a diverging chain in a ubs representing v almost every halfspace in the chain can be chosen as hv let ai be the set of vertices v g such that there exists no directed path of length i in g that ends at note 
that by proposition we have ak g for some k we will show that the subgroup ki x that fixes ai g pointwise is open in x and that for every ai the homomorphism ki r is continuous we proceed by induction on i setting x the base step is trivial if i let ai vs and consider the halfspaces hvs setting h lemma provides a neighbourhood u of id in x such that for all j and all g u a minimal ubs almost contained in projects to an element of or to vj hence we have gvj vj for every g u since by the inductive hypothesis is open so is ki continuity of the transfer characters is obtained with a similar argument bridges let a median algebra m m and two subsets be fixed throughout this section all the following results have analogues in section of denote by m ci the to ci we will refer to the sets are gates for are gates for as the shores of and respectively by part of proposition these coincide with and hence they are the map is a bijection with inverse if m arises from a median space x this is an isometry as are the bridge is the set g g b i i the union is disjoint because if is a pair of gates for we have i xi for i this follows from part of proposition and the observation that i ci xi proposition the bridge b is and w b w w t w for any pair of gates the bridge is canonically isomorphic to the product i this is an isometry if m arises from a median space proof pick a pair of gates set i i and consider the morphism of median algebras m i if is another superrigidity of actions on finite rank median spaces pair of gates the projection provides an isomorphism i i mapping each to xi this observation and the decomposition above imply that the restriction is bijective the map m b is surjective and it is a by proposition in by part of proposition and the discussion above every wall of b arises either from a wall of m cutting or from a wall of m cutting i the latter correspond to w by part of proposition so we are left to show that w w w this follows from the fact that when m arises from a median space x the measure induces a measure b on the set w in this case the fact that is an isometry follows from the decomposition of w b above and the observation that b w d x y for all x y x we can extend the notion of strong separation to arbitrary subsets of median algebras we say that and are strongly separated if they are disjoint and w w note that the condition w w alone already implies that consists of at most one point in a median space two halfspaces are strongly separated in the sense of section if and only if their closures are strongly separated according to the definition above see lemma below for a stronger result proposition implies that two disjoint sets are strongly separated if and only if their shores are singletons this yields the following result corollary let m be strongly separated there exists a unique pair of gates and we will also need the following lemma let h k h be strongly separated as halfspaces the closures h k of e h ek in x are strongly separated as subsets of x proof since h k have disjoint closures in x the sets h and k are disjoint see the proof of theorem in by proposition it then suffices to prove that the shore s h is a singleton suppose for the sake of contradiction that s contains distinct points and let k be their projections to k in particular given j the argument at the beginning of the proof shows that the closures of j and h inside x intersect nontrivially by lemma in almost every j intersects considering complements we conclude that almost every j is transverse to h and 
similarly to since d there exists such a j contradicting the fact that h k are strongly separated the haagerup class let x be a median space g a topological group and p given a banach space e we denote by u e the group of linear isometries of elia fioravanti an isometric action g y x corresponds to a measure preserving action g y h we obtain a continuous representation g u lp h when p we simply write we will use interchangeably the notations g and g lp h to denote continuous cohomology given x x we can consider the continuous bx g lp h defined by bx g g it satisfies kbx g kp d x gx we will refer to bx as a haagerup cocycle the cohomology class bx g does not depend on the point x and we will simply denote it by b the action g y x has bounded orbits if and only if the affine action g y lp h induced by bx fixes a point this follows for instance from the theorem for p and from theorem a in in the case p thus we have b if and only if the action g y x has bounded orbits we can also consider the projection b of b to reduced continuous cohomology which carries more interesting geometrical information see theorem a in the introduction we will refer to b g as the haagerup class of g y x the choice of p here is not particularly relevant and the same discussion could be equally carried out for any other p with few complications see remark we conclude this section by collecting a few straightforward lemmata for later use let h be a hilbert space and g u h a continuous unitary representation lemma if h g is open and functoriality induces an injective map g h h h lemma given a decomposition h the projections onto the two factors induce g h g g recall that we denote by x the barycentric subdivision of x and by i x x the standard inclusion we write h instead of h x every isometric action g y x also induces a continuous representation g u h lemma let x be complete and finite rank and let g y x be an isometric action the projection p h h induces an isometric embedding h h and a monomorphism g g taking the haagerup class of g y x to the haagerup class of g y x proof the fact that is an isometric embedding follows from the observation that injectivity of follows from lemma applied to h and its orthogonal complement finally if x x and bx is the corresponding haagerup cocycle for g y x the cocycle bx is the haagerup cocycle for g y x relative to the point i x x superrigidity of actions on finite rank median spaces haagerup class and elementarity of actions the main statement let x be a complete finite rank median space and let g y x be an isometric action of a topological group the goal of this section is to prove theorem a by lemmata and it suffices to consider the case when g y x is without wall inversions this will be a standing assumption throughout the rest of the section lemma suppose that x is irreducible and that g y x is roller minimal and roller nonelementary there exists a free subgroup h g such that has no vectors and h y x has unbounded orbits proof part of proposition in provides a free subf group h g and a measurable partition h hh with ghh hgh for all g h it is immediate from the construction of h that it acts on x with unbounded orbits if there existed a sequence of almost invariant vectors fn in h say with kfn we could define functions fn h by fn h kfn it is immediate to check that kfn for every n and for every g h kgfn fn kfn h kfn x k gfn kfn x k gfn fn kgfn fn thus the regular representation of h would contain vectors implying amenability of h see theorem in this is a contradiction we can 
already prove the only if half of theorem a proposition if g y x is roller nonelementary we have b proof note that by functoriality of reduced cohomology it suffices to consider the case when g has the discrete topology thus we do not need to worry about continuity issues we proceed by induction on r rank x when r all actions are roller elementary assume for the rest of the proof that r we can also assume that g y x is roller minimal indeed if c z x are the subsets provided by proposition the action g y c is again without wall inversions and rank c r by proposition the induces an orthogonal decommeasurable partition h hc t position of l h and a splitting g h g h c g elia fioravanti if and are the orthogonal projections of h onto the two direct summands we can write bx bx bx for every x x note that the x c maps x to a point the cocycle bx is precisely the haagerup cocycle for the action g y c by part of proposition in particular if we can conclude that b we thus assume in the rest of the proof that x if x is irreducible lemma provides i h g such that h y x has unbounded orbits and has no vectors the first condition implies that b the second condition and theorem in thus yield b in particular b if instead x splits as a nontrivial product and j g is the subgroup preserving this decomposition it suffices to show that j b writing instead of proposition and lemma imply h h h hence if x we have bx the action y x is roller nonelementary since is in g thus up to exchanging the two factors y is roller nonelementary since rank r the inductive hypothesis guarantees that and this concludes the proof before proving the rest of theorem a we need to obtain a few more results lemma if the of x is finite the of is open proof suppose that t and d for all i where d is the extended metric on x by proposition in we can find xi yi x such that denoting by the projection to ii i xi yi we have d in a neighbourhood u g of the identity element we have d gxi xi d gyi yi and d for all i if g u we have d d gxi xi d gyi yi if in addition we had we would have as a consequence d d d which would contradict our choice of ii we conclude that u is contained in the stabiliser of which must be open in the proof of the following fact is rather lengthy and technical it will be carried out in appendix a proposition let and k x be a compact set of isometries acting trivially on u there exists a point xk x such that coincides up to a null set with t t where each is a strongly reduced minimal ubs superrigidity of actions on finite rank median spaces if g k we have whenever g and k whenever g k if i j and g k we have given points x x and a ubs we can define a function r as h h the dependence on the point x is not particularly relevant so we do not record it in our notation we can consider the sets h h c in appendix a we will obtain the following result see lemma proposition suppose that is minimal and strongly reduced let k x be a compact set of isometries such that for all g as c the functions i h g id c converge to in h uniformly in g if instead for all g k they converge to the function we are finally ready to complete the proof of theorem a proof of theorem a by proposition it suffices to consider the case when g has a finite orbit in x and by lemmata and we can actually assume that g fixes a point x if x we have b suppose instead that by proposition an open subgroup g acts trivially on u by lemma it suffices to consider the case g fix x x for every and every compact subset k g we need to construct a function h such that kbx g for 
all g considering the point xk x provided by proposition it suffices to find a function h such that kbxk g for all g k and then set if g k considering all equalities up to null sets we have t t t in particular since by construction whenever i j k g t introducing the notation for subsets a h we can rewrite x x bxk g k g k k k g k k elia fioravanti for c consider the function gc k x k c c proposition shows that it suffices to take gc for large remark theorem a also holds for the analogous class in g for every p indeed lemma applies to any decomposition of a banach space into closed subspaces in the proof of lemma a closed complement to lp h within lp h is always provided by the subspace of functions on h that take opposite values on hemiatoms theorem in holds for representations in general banach spaces finally if f is a free group p f has no f vectors for every p the value of p also has little importance for most of the material in appendix a note however that proposition fails for p one needs to consider functions with a quicker decay in that case elementarity and shalom s property hf d let x be a complete finite rank median space and let g be a topological group our main result on property hf d is the following proposition if g has property hf d every isometric action g y x is roller elementary we will need the following lemma lemma suppose that x is irreducible and let g y x be a roller nonelementary roller minimal action let e h be a measurable subset such that for all g then e is either or proof without loss of generality we can assume that g y x is without wall inversions as lemma allows us to pass to the barycentric subdivision x if necessary now suppose that e is such a set and e since x has finite rank we can find a thick halfspace h such that replacing e with e if necessary the set eh e k h k h satisfies a eh by part of proposition the halfspace h is part of a facing with n e by proposition there exist gn g such that h h gn h are a facing the sets eh eh gn eh are pairwise disjoint and contained in e up to null sets however their union has measure na e a contradiction proof of proposition suppose for the sake of contradiction that g y x is roller nonelementary without loss of generality we can assume that x has minimal rank r among complete median spaces admitting roller nonelementary actions of in particular x must be irreducible see the proof of proposition by proposition we can also assume that g y x is roller minimal theorem a guarantees that g and since superrigidity of actions on finite rank median spaces g has property hf d there exists a finite dimensional subrepresentation v h we will construct a measurable subset of h with positive finite measure thus violating lemma let fk be measurable functions on h whose equivalence classes in h form an orthonormal basis of v define for c n o ec h h fk h c since in the definition of ec it suffices to look at s lying in a countable dense subset of we conclude that ec is measurable if h ec we must have h kc for some i hence ec if c is sufficiently small we have ec given g g there exist real numbers p i j k such that outside a measure zero set we have fi j gfj for every p if h ec gec we must have fi h j gfj h for some i we conclude that gec for all g corollary let be a discrete group with property hf d if acts freely and cocompactly on a cat cube complex x then is virtually abelian proof cocompactness of the action implies that x is finite dimensional by propositions and there exists a subgroup and a normal subgroup n c consisting of 
elliptic elements such that is abelian since acts freely n is trivial recall that in gromov s density model random groups at density d are nonelementary hyperbolic with overwhelming probability together with theorem in corollary then immediately implies the following result corollary with overwhelming probability random groups at density d in gromov s density model do not have property hf d superrigidity the superrigidity result let x be a complete median space of finite rank r and y x an action by isometries of a discrete group lemma suppose h form a facing triple let ki h and be strongly separated for i there exists a point z x such that m z whenever kei proof let c be the intersection of the closures of e inside x it is nonempty closed and convex given points kei set m m by convexity we have i e hence m e permuting the indices we obtain m in particular denoting by the gate projection x c we have m m m the closures of e and eki in x are strongly separated by lemma let xi be the shore of and set z m by corollary elia fioravanti we have xi hence m z no matter which points kei we have chosen lemma suppose that x is irreducible and assume that y x is roller nonelementary and roller minimal given f h consider n s f gn gn g f g f there exists z x such that for every gn s f there exists n such that gn z z for all n n proof by part of proposition the barycentric subdivision x of x is an irreducible median space of the same rank the action y x is without wall inversions roller nonelementary and roller minimal by lemma as usual we write h for h x the function f h induces f h with s f s f we approximate f by a linear combination f of characteristic functions of halfspace intervals h h xk such that kf f k kf proposition implies that h all halfspace intervals have finite measure so there exists h such that xi yi for all propositions and provide a thick halfspace h h such that and are strongly separated in particular h contains every wall in the set w w xk propositions and also provide such that h and h are strongly separated and such that and h are strongly separated we can assume without loss of generality that d h as by proposition the quantity d h diverges as n goes to infinity thus we can choose elements such that h h and for all i the halfspaces and h are strongly separated and at distance at least we denote by s h the support of the function f and by p the set of all points xi yi let d be the maximum distance from of a point of p and d dd let us fix an integer m d and g such that f f k kf k and kgf f k kf we prove that h a straightforward repeated application of the triangle inequality yields kgf f k k and f f k k thus s gs and s s let w be a wall corresponding to a halfspace in s since w is contained in both h and h and y x is without wall inversions we conclude that h h a similar argument shows that h gh let u be the wall corresponding to if u is contained in h we either have h h or the former case immediately yields h h while the latter leads to a contradiction as h and gh intersect superrigidity of actions on finite rank median spaces if instead u is not contained in h it is contained in by strong separation let l m be minimum such that contains u we have h h since h h h h let k be the side of w that is contained in since either k or lies in s there exists q p such that q k hence m l d h d q d q d and m l d by strong separation and minimality of l the wall u is contained in h hence h h since otherwise would again violate the fact that h gh now consider the intersection c of the closures in 
x of the halfspaces e it consists of a single point since any j h with ej c and c would have to be transverse to almost all h violating strong separation strong separation also implies that actually lies in x x given gn s f we can assume removing a finite number of elements if necessary that kgn f f k kf k for all let n m be a natural number such that kgn f f k kf k for all n n m when n n m we have shown that gn h h thus we have gn gn h in this case strong separation implies that consists only of halfspaces whose corresponding walls are contained in this shows that lim sup we conclude that gn in the topology of x for every gn s f we finally construct the point z x let j m be thick halfspaces of x so that and j are strongly separated and ej part of proposition provides a facing triple consisting of m we choose thick halfspaces h such that and ji are strongly separated for i by proposition we can find hi such that hi j ji let z x be the point provided by lemma applied to j and m in particular we have z m hence z x since the set s f is closed under conjugation by elements of we have gn hi hi for all gn s f hence given gn s f there exists n such that for every n n we have gn ej and gn hi eji for i thus gn z gn m m gn gn gn z in the rest of the section we consider a locally compact group g and a lattice any borel fundamental domain u g defines a cocycle g u such that gu g u u we say that is if is finitely generated and u can be chosen so that z g u du g u here du is the haar measure on u and denotes the word length with respect to a finite generating set s integrability does not depend elia fioravanti on the choice of uniform lattices are always and a few nonuniform examples were mentioned in the introduction see for more details and examples we assume that g splits as a product g where each gi is compactly generated and we also require the lattice g to be irreducible to project densely into each factor gi consider a unitary representation u h we denote by the subspace of invariant vectors let c h be a for we will make use of the following result of shalom in an essential way see page and theorem in for a proof theorem suppose that is and that there exist closed subspaces hi h i where the restriction u hi extends to a continuous representation g u hi that factors through the projection pi g gi furthermore there are cocycles ci hi i such that c and c represent the same class in h the following is a version of our theorem b under stronger hypotheses theorem suppose that is and x is irreducible let y x be roller nonelementary and roller minimal there exists a closed median subalgebra y x where y y extends to a continuous action g y y moreover g y y factors through a projection p i g gi proof we have h by theorem lemma implies that has no nonzero invariant vectors thus theorem provides a subspace hi h where the action of extends to a continuous action of g factoring through a projection pi g gi pick any f hi and consider the set s f introduced in lemma any sequence gn with pi gn id lies in s f thus lemma implies that the set n o y x x gn pi gn id we have gn x x is nonempty note that y is a median subalgebra of x thus the restriction of the metric of x gives y a structure of median space since y is a closed subset of x it is a complete median space finally proposition in provides a continuous extension to g of y y and it factors through pi the assumption that y x be roller minimal and roller nonelementary can be replaced with the stronger requirement that have no finite orbit in b see proposition in 
the visual boundary of the cat space x the homomorphism g isom y provided by theorem is continuous with respect to the topology of pointwise convergence we remark however that remains continuous even if we endow isom y with the topology mentioned in remark below this will be a key point in our proof of theorem superrigidity of actions on finite rank median spaces remark in the proof of theorem lemma actually yields that the smaller set n o x x gn pi gn id gn x x n is nonempty thus g isom y is continuous with respect to the topology on isom y that is generated by stabilisers of points of in the statement of theorem we can always take y to be the closure of in x this topology on isom y might seem a lot finer than the topology of pointwise convergence to clarify this phenomenon we mention the following fact without proof let w be an irreducible complete finite rank median space admitting a roller nonelementary roller minimal action then there exists a dense convex subset c w such that for every x c the stabiliser of x is open for the topology of pointwise convergence on isom w this essentially follows from lemma relaxing the hypotheses of theorem we obtain theorem b for all lattices corollary suppose that is let y x be roller nonelementary there exist a finite index subgroup a component z x and a closed median subalgebra y z where the action y y extends to a continuous action y y for an open finite index subgroup proof we proceed by induction on rank x when the rank is zero there is nothing to prove so we assume that the statement holds for all median spaces of rank at most r by proposition there exists a closed convex subset d of a component w x such that y d is roller minimal and roller nonelementary if w we have rank d r and we conclude by the inductive hypothesis thus we can assume that w x let d dk be the splitting of d into irreducible factors if k the result follows from theorem if k let be a finite index subgroup preserving the splitting of d up to permuting the factors we can assume that y di is roller nonelementary for i s and roller elementary for i a further finite index subgroup fixes a point di for each i s we denote by zi di the component containing note that is a irreducible lattice in an open finite index subgroup of since rank di r for each i s the inductive hypothesis yields a finite index subgroup an open finite index subgroup g and a closed median subalgebra yi of a component zi di where the action of extends to a continuous action of let be the intersection of all and be the intersection of all for i the set y ys d is a closed median subalgebra of zk which is a component of d in particular zk is a closed convex subset of a component z x the action y y trivially extends to a continuous action y y elia fioravanti figure we now describe two examples that illustrate how in theorem the space y can not be taken to coincide with x nor with a convex subset example in corollary it can not be avoided to pass to the finite index subgroup even when the action is roller minimal example the actions that we consider are actually on cat square complexes since groups play an important role in the construction of the two examples we briefly recall a few facts regarding their construction given an integer n we denote by tn the tree and by an the group of even permutations on n elements we fix a legal colouring on tn a way of associating an integer in n to every edge of tn so that we see all n integers around each vertex in particular we have a bijection iv lk v n for every vertex let u an 
isom tn be the subgroup of isometries g such that igv g v an for every vertex v of tn we denote by u an the intersection of u an with the subgroup of isom tn generated by edge stabilisers if n the subgroup u an has index in u an see proposition in the subgroup u an is closed in isom tn in particular it is locally compact second countable and compactly generated by theorem and proposition in by theorem in there exists a uniform irreducible lattice u u for every integer k for the next two examples we fix such a lattice let u be the projections into the two factors and set u this is an irreducible lattice in the open index subgroup u u of u u let be the homomorphism with kernel example given any tree t we can blow up every edge to a square as in figure thus obtaining a tree of squares adjacent squares only share a vertex if t has no leaves each square has a pair of opposite vertices that are shared with other squares and a pair of opposite vertices that are not shared the space t is a complete rank two median space in which t embeds as a median subalgebra edges of t correspond to diagonals joining shared pairs of vertices of a square superrigidity of actions on finite rank median spaces we can embed isom t isom t by extending each isometry of t so that the restriction to each square is orientation preserving let isom t be the isometry that fixes pointwise the image of the embedding t t and acts on each square as a reflection in a diagonal we have id and isom t isom viewing u as a homomorphism into isom we can define a homomorphism isom by we denote by the composition of this map with the embedding isom isom the action y induced by is roller nonelementary and roller minimal since the action y induced by is as is irreducible theorem guarantees a continuous extension of y y to u for some median subalgebra of indeed one can take y to be the image of however y can not be taken to be a convex subspace or even a subcomplex of indeed y would be forced to be the whole as this is the only convex subset of the action y does not extend to u u by factoring via this is because whenever elements gn satisfy gn id the sequence gn must diverge however y also does not extend by factoring through we have u but is contained in the closed subgroup isom isom and is not example choose an element g and consider the action y given by x y x g y since the action u y does not preserve any proper closed subtree the same holds for the action of part of proposition then implies that does not leave any proper closed convex subset of invariant note that no component of is preserved by as this would correspond to a fixed point for y hence to a fixed point for u y we conclude that y is roller minimal and the same argument also shows that it is roller nonelementary one can easily check that y can be extended to an action of the whole by setting x y y g x for all this action also is roller minimal and roller nonelementary we will show however that there exists no isometric embedding j y of a median space y such that the action on y extends continuously to u u by factoring through one of the factors let j y be a embedding note that j y is entirely contained in a component z in particular the previous discussion shows that z by lemma in each wall of j y arises from a wall of a wall of one of the two factors see part of proposition since the two factors are exchanged by g we conclude that y splits as with y y preserving this decomposition and g exchanging and elia fioravanti suppose for the sake of contradiction that y y extends to 
an action of u u by factoring through one of the two factors as in example we see that the extension can not factor via however since is dense in u and preserves the splitting y part of proposition implies that an extension factoring through would also preserve the splitting y this contradicts the fact that g exchanges and we conclude the section by proving theorem proof of theorem we begin by observing that part follows from part and proposition now suppose for the sake of contradiction that admits a roller nonelementary action on x as in the proof of proposition we can assume that x is irreducible and that y x is roller minimal theorem then yields a factor gi a closed median subalgebra y x and actions gi y y and y y without loss of generality we can assume that y is the closure of inside x as in remark stabilisers of points of are open in gi thus the identity component must fix pointwise as is dense in y the entire action y y vanishes and gi y y descends to an action of the group gi since gi satisfies condition proposition above and corollary in imply that the action gi y y is roller elementary however by lemma the actions y y and gi y y are roller nonelementary a contradiction homomorphisms to coarse median groups we defined equivariantly coarse median groups in the introduction here we simply prove corollary proof of corollary fix a ultrafilter on n and let be the corresponding ultrapower of we endow h with a word metric ds arising from a finite generating set s given rn we denote by h the asymptotic cone obtained by taking all basepoints at the identity and as sequence of scaling factors let d denote the metric that ds induces on h it is a geodesic metric and it is preserved by the natural action y h if the coarse median on h induces a structure of finite rank median algebra on h see section in we denote by m the corresponding median map the action y h is by automorphisms of the median algebra structure by propositions and in we can endow h with a median metric dm that is bilipschitz equivalent to d and preserved by the furthermore the median algebra structure associated to dm is given by the map now suppose for the sake of contradiction that there exist pairwise nonconjugate homomorphisms h for n these correspond to a homomorphism hence to an action on every asymptotic cone of h that preserves the median metric dm the construction superrigidity of actions on finite rank median spaces provides us with a sequence such that modifying each within its conjugacy class if necessary the induced action y h has no global fixed point this however contradicts theorem appendix a structure of ubs s let x be a complete median space of finite rank we fix points x x and let be a minimal reduced ubs lemma let t g x be an isometry satisfying consider the ubs g if g we have g r and g r h h for all h if g we have g r and g r h h for all h proof observe that we fix a diverging chain kn in if h the halfspaces g i h with i r all lie in and can not be pairwise transverse thus there exists i r such that either g i h h or g i h hence for every h we either have g r h h or g r h suppose that g r for some set hk g kr for each k a cofinite subchain of g kn is a diverging chain in as hence for each k we have g kn if n is sufficiently large since is reduced in particular hk kn we conclude that each hk lies in proposition guarantees that hk is a diverging chain let be the inseparable closure of hk it is a ubs equivalent to and it satisfies g r observe that for each k kr g g kr g kr g kr d hk since d hk for some k we 
conclude that g the same argument applied to g shows that g if there exists with g r this proves part and shows that g r h h for all h if g in the latter case for every h we have h g r h kn for sufficiently large n since is reduced thus g r h and g r corollary let g x be an isometry define as in the previous lemma and consider the ubs g if g we have if g we have in the rest of the appendix we also consider a compact subset k x such that for every g lemma there exists a constant k such that every h with d x h lies in for every g if is a minimal reduced ubs such that and for all g k there exists a ubs that is disjoint from for all g elia fioravanti proof let hn be a diverging chain in with thick for every g k a cofinite subchain of g hn is contained in as since is reduced there exists n g so that g hn g by proposition we can assume that h g and lemma provides a neighbourhood u g of g in k such that h g for all u g in particular hn g for all u g there exist gk k such that k u u gk if n is the maximum of the n gi we have hn for all g let be the inseparable closure of hn if m n and g k we have hm ghn for every sufficiently large n since is reduced this shows that is contained in for all g since there exists a constant such that every h with d x h lies in to prove part let c be the supremum of distances d x h for h we have c since let m be the maximum distance d x gx for g k and consider c max c k m we define to be the set of h with d x h c if there existed h for some g k we would have g h and d x g h k thus part implies that g h g h and this contradicts the fact that h and d x h recall that we have introduced the function h r defined by the formula h h and the sets h h c observe that k h whenever h k in particular is measurable we have h h d x h for all h h we say that is small if otherwise is large if x is a cat cube complex every ubs is large an example of a small ubs in a rank two median space appears in figure of if is small we have h for every isometry h fixing note that the supremum of is precisely lemma let hn be halfspaces with hn for all n then hn if and only if hn is a diverging chain of halfspaces proof the fact that hn if hn is a diverging chain follows from the fact that is reduced for the other implication let km be a diverging chain in since h d x h it suffices to consider the case when is small for every m the set j j km has measure am by proposition for large n we have hn am hence there exists j km such that j h in particular hn km since m is arbitrary this shows that hn is a diverging chain lemma for every c the set is a ubs for all c we have rc proof since is reduced any h contains almost every halfspace in any diverging chain in this provides a diverging chain in inseparability follows from the monotonicity of superrigidity of actions on finite rank median spaces to prove part we decompose t t ck as in lemma if h k ci and h k we have c h h hence lemma in implies that the inseparable closure of ci has measure at most we conclude that kc rc lemma assume that for all g for all h and g k we have x gx g g h h d x gx in particular for some constant k for every c we have proof indeed g h h x h g h d x g x h d x gx h and h h h d x gx h h g d x gx h h g d x gx g h g we then take to be the maximum of d x gx g for g k this exists due to proposition regarding part observe that if h gh h gh gh c lemma assume that is strongly reduced for every d there exists a constant d such that h k for all h k with d x h and d x k if is a ubs and d x k d for all k we have h h for all h with d x h d 
proof decompose t t ck where each ci is totally ordered by inclusion and contains a diverging chain pick halfspaces ki ci with d x ki by part of lemma the halfspace ki can not be transverse to a diverging chain in thus part of lemma guarantees that the halfspaces of that are transverse to ki lie at uniformly bounded distance from x we conclude that there exists such that every h with d x h is contained in each ki hence also in each k with d x k this proves part part is an immediate consequence in the rest of the section we assume that is minimal and strongly reduced lemma suppose that g for all g elia fioravanti there exists a constant k such that for every halfspace h with d x h and every g k we have gh h there exists a constant k such that we have g and whenever c and g proof we first observe that given a minimal ubs and an isometry g x with we have gh h h g g h h g h h g g h x h h g g since h g as note that h x g h g g h thus gh h equals h x h h g g h h k gh k t now consider for each g k the set g g as in corollary we have g g and by part of lemma there exists a constant such that each h with d x h lies in g for every g by part of lemma if d x gh we have g gh g h k g g gh k by part of lemma if d x h and d x gh we have gh h g gh g h we conclude that gh h whenever d x h m where m is the maximum of the distances d x gx for g we now prove part let be as in part part of lemma provides a constant such that each h with d x h lies in g for every g let be the supremum of the values h for h with d x h max by lemma we have if c any h satisfies d x h max hence gh h c and gh in particular g observe that g and by our choice of the constant thus conversely it is clear that and g consider now for c the functions fc c in h superrigidity of actions on finite rank median spaces lemma assume that for all g for every there exists a constant k such that k g id fc for all g k and all c if instead for all g k we have k g id fc for c proof observe that g id fc c c c c c we will analyse the three summands separately by lemma we have h h h c c c c c for each h by part of lemma r c c c c by part of lemma there exists a constant such that each h with d x h lies in for every g k in particular if k for some g k then k d x k by part of lemma if c we have and c c m c c c where m is the maximum of g for g k which exists by proposition finally by part of lemma and part of lemma c cr c c if instead for all g k the previous discussion shows that for large c we have k g id fc the conclusion follows by applying lemma let be a ubs and let be pairwise inequivalent reduced ubs s representing all minimal equivalence classes of ubs s almost contained in for every i set there exist i increasing sequences cn such that i for all i k k is inseparable for all n cn cn proof we proceed by induction on if k the lemma is immediate suppose that k without loss of generality we can assume that corresponds to a vertex with no incoming edges in the full subgraph of g elia fioravanti with vertices fix we will construct c i satisfying the inseparability condition i k pick a diverging chain hn in each up to replacing hn with a cofinite subchain we can assume that for each i k and each m k i the halfspace hm is transverse to hn for almost every by lemma there exists c k such that k is contained in the inseparable k closure of hn as a consequence for every h k and every i i k the halfspaces h and hn are transverse for almost every halfspaces lying in the inseparable closure of but in neither of the are at uniformly bounded distance from x by part 
of proposition say that these distances are bounded above by m enlarge m so that all j with d x j m lie in k by part of proposition there exists a ubs contained in such that are all the equivalence classes of minimal ubs s almost contained in the inductive hypothesis lemma imply that we can find c i so that is inseparable and d x h m for c all h i with i k now if k were not inseparable there would exist j h such that ku j kv for halfspaces ku u and kv v but j would not lie in any of the i in particular u v and k u v observe that v k otherwise kv k would not be transverse to diverging chains in thus u k moreover for all i i k the halfspace j must be transverse to hn for almost every n since v k and j does not lie in which is inseparable since d x j d x kv m we have j the fact that each is reduced implies that j by our choice of m we have j k a contradiction lemma there exists a ubs such that every ubs with is of the form for some y x up to a null set proof let be pairwise inequivalent minimal ubs s representing all minimal elements of u we can assume that they are all reduced by lemma halfspaces in that are transverse to a diverging chain in each lie at uniformly bounded distance from x by part of proposition say that these distances are bounded above by m let consist of all h with d x h m it is a ubs equivalent to observe that if is a ubs equivalent to and e h for halfspaces h k then h indeed otherwise h would not contain any halfspace of by inseparability and it would therefore be transverse to a diverging chain in each of the hence d x k d x h m a contradiction superrigidity of actions on finite rank median spaces now given consider the set t h observe that is an ultrafilter indeed since and it suffices to check that h k whenever h and k if such halfspaces were disjoint we would have e h and h contradicting the observation we made above since h finally by lemma in we can decompose t for some set with in particular t since and are disjoint we have hence proposition implies that there exists y x such that is null thus up to measure zero we are now ready to prove proposition proof of proposition let be pairwise inequivalent ubs s representing all minimal elements of the poset u by lemma we can assume that each is strongly reduced up to replacing each with a smaller ubs part of lemma guarantees that we can assume that and for all i j and g by part of lemma we have g for all k c and all g k with g while we have g if g lemma provides constants k ci such that is inseparable thus is a ubs equivalent to by part of proposition we conclude by lemma enlarging the constants ci if necessary so that this is possible by lemma references goulnara arzhantseva graham niblo nick wright and jiawen zhang a characterization for asymptotic dimension growth jacek brodzki sarah campbell erik guentner graham niblo and nick wright property a and cat cube complexes funct bachir bekka pierre de la harpe and alain valette kazhdan s property t volume of new mathematical monographs cambridge university press cambridge jason behrstock cornelia and mark sapir addendum median structures on asymptotic cones and homomorphisms into mapping class groups proc lond math soc jason behrstock cornelia and mark sapir median structures on asymptotic cones and homomorphisms into mapping class groups proc lond math soc mladen bestvina degenerations of the hyperbolic space duke math mladen bestvina and mark feighn stable actions of groups on real trees invent uri bader and alex furman boundaries rigidity of representations and lyapunov 
exponents uri bader tsachik gelander and nicolas monod a fixed point theorem for spaces invent elia fioravanti martin bridson and haefliger metric spaces of curvature volume of grundlehren der mathematischen wissenschaften fundamental principles of mathematical sciences berlin jason behrstock mark hagen and alessandro sisto hierarchically hyperbolic spaces ii combination theorems and the distance formula jason behrstock mark hagen and alessandro sisto hierarchically hyperbolic spaces i curve complexes for cubical groups geom jason behrstock mark hagen and alessandro sisto quasiflats in hierarchically hyperbolic spaces marc burger and shahar mozes groups acting on trees from local to global structure inst hautes sci publ marc burger and shahar mozes lattices in product of trees inst hautes sci publ marc burger and nicolas monod continuous bounded cohomology and applications to rigidity theory geom funct brian bowditch coarse median spaces and groups pacific j brian bowditch invariance of coarse median spaces under relative hyperbolicity math proc cambridge philos brian bowditch rigidity properties of the mapping class groups preprint brian bowditch some properties of median metric spaces groups geom caprace amenable groups and hadamard spaces with a totally disconnected isometry group comment math indira chatterji and cornelia median geometry for spaces with measured walls and for groups indira chatterji cornelia and haglund kazhdan and haagerup properties from the median viewpoint adv yves cornulier and pierre de la harpe metric geometry of locally compact groups volume of ems tracts in mathematics european mathematical society ems winner of the ems monograph award indira chatterji talia and alessandra iozzi the median class and superrigidity of actions on cat cube complexes j with an appendix by caprace caprace and alexander lytchak at infinity of finitedimensional cat spaces math caprace and nicolas monod isometry groups of nonpositively curved spaces structure theory j cherix florian martin and alain valette spaces with measured walls the haagerup property and property t ergodic theory dynam systems caprace and bertrand simplicity and superrigidity of twin building lattices invent caprace and bertrand of twin building lattices geom dedicata montserrat and ilya kazachkov limit groups over partially commutative groups and group actions on real cubings geom superrigidity of actions on finite rank median spaces caprace and michah sageev rank rigidity for cat cube complexes geom funct yves de cornulier romain tessera and alain valette isometric group actions on hilbert spaces growth of cocycles geom funct yves de cornulier romain tessera and alain valette isometric group actions on banach spaces and representations vanishing at infinity transform groups patrick delorme des unitaires des groupes de lie et produits tensoriels continus de bull soc math france dilworth a decomposition theorem for partially ordered sets ann of math thomas delzant and pierre py cubulable groups talia the boundary and cat cube complexes elia fioravanti roller boundaries for median spaces and algebras elia fioravanti the tits alternative for finite rank median spaces talia and alain valette the sequence for graphs of groups property t and the first number victor gerasimov of groups and actions on cubings in algebra geometry analysis and mathematical physics russian novosibirsk pages izdat ross akad nauk sib otd inst novosibirsk erik guentner and nigel higson weak amenability of cat groups geom dedicata gromov 
asymptotic invariants of infinite groups in geometric group theory vol sussex volume of london math soc lecture note pages cambridge univ press cambridge alain guichardet sur la cohomologie des groupes topologiques ii bull sci math thomas haettel higher rank lattices are not coarse median algebr geom thomas haettel hyperbolic rigidity of higher rank lattices haglund isometries of cat cube complexes are mark hagen the simplicial boundary of a cat cube complex algebr geom mark hagen corrigendum to the simplicial boundary of a cat cube complex haglund and paulin de groupes d automorphismes d espaces courbure in the epstein birthday schrift volume of geom topol pages geom topol coventry mark hagen and tim susse hierarchical hyperbolicity of all cubical groups marcin kotowski and kotowski random groups and property t s theorem revisited lond math soc elia fioravanti bruce kleiner a new proof of gromov s theorem on groups of polynomial growth amer math aditi kar and michah sageev ping pong on cat cube complexes comment math bernhard leeb a characterization of irreducible symmetric spaces and euclidean buildings of higher rank by their asymptotic geometry volume of bonner mathematische schriften bonn mathematical publications bonn mathematisches institut bonn florian martin reduced of connected locally compact groups and applications j lie theory ashot minasyan new examples of groups acting on real trees j nicolas monod superrigidity for irreducible lattices and geometric splitting amer math bogdan nica group actions on median spaces amos nevo and michah sageev the poisson boundary of cat cube complex groups groups geom graham niblo nick wright and jiawen zhang a four point characterisation for coarse median spaces yann ollivier sharp phase transition theorems for hyperbolicity of random groups geom funct yann ollivier and daniel wise cubulating random groups at density less than trans amer math narutaka ozawa a functional analysis proof of gromov s polynomial growth theorem paulin outer automorphisms of hyperbolic groups and small actions on in arboreal group theory berkeley ca volume of math sci res inst pages springer new york bertrand construction de en de acad sci paris i bertrand integrability of induction cocycles for groups math martin roller poc sets median algebras and group actions an extended study of dunwoody s construction and sageev s theorem preprint university of southampton michah sageev ends of group pairs and curved cube complexes proc london math soc yehuda shalom rigidity of commensurators and irreducible lattices invent yehuda shalom harmonic analysis cohomology and the geometry of amenable groups acta jan spakula and nick wright coarse medians and property property t and kazhdan constants for discrete groups geom funct rudolf zeidler coarse median structures and homomorphisms from kazhdan groups geom dedicata
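For reference, the Haagerup cocycle used in the discussion of the Haagerup class above can be written compactly as follows. This is a hedged restatement under one standard set of conventions, not a quotation of the paper: the separating-halfspace set sigma(x,y), the halfspace measure nu, the representation pi and the signs in the cocycle are choices made here for concreteness, and the paper's own normalization may differ by a constant.

% Hedged restatement of the Haagerup cocycle under assumed conventions:
% sigma(x,y) is the set of halfspaces containing y but not x, nu is the
% halfspace measure, and pi the induced representation on L^p(H, nu).
\[
  \sigma(x,y) \,=\, \{\mathfrak{h}\in\mathscr{H} \,:\, x\notin\mathfrak{h},\ y\in\mathfrak{h}\},
  \qquad
  b_x(g) \,=\, \mathbf{1}_{\sigma(x,\,gx)} \,-\, \mathbf{1}_{\sigma(gx,\,x)} \,\in\, L^p(\mathscr{H},\nu),
\]
\[
  b_x(gh) \,=\, b_x(g) \,+\, \pi(g)\,b_x(h),
  \qquad
  \|b_x(g)\|_p^p \,=\, \nu\bigl(\sigma(x,gx)\bigr) + \nu\bigl(\sigma(gx,x)\bigr).
\]
% Up to the normalization of nu, the right-hand side of the norm identity is
% the distance d(x, gx), matching the estimate quoted for b_x in the text.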
4
massive mimo performance comparison of beamforming and multiplexing in the terahertz band sayed amir ming and mahbub oct school of computer science and engineering university of new south wales sydney australia csiro sydney australia email this paper we compare the performance of two main mimo techniques beamforming and multiplexing in the terahertz thz band the main problem with the thz band is its huge propagation loss which is caused by the tremendous signal attenuation due to molecule absorption of the electromagnetic wave to overcome the path loss issue massive mimo has been suggested to be employed in the network and is expected to provide tbps for a distance within a few meters in this context beamforming is studied recently as the main technique to take advantage of mimo in thz and overcome the very high path loss with the assumption that the thz communication channel is los and there are not significant multipath rays on the other hand recent studies also showed that the absorbed energy by molecules can be reradiated immediately in the same frequency such signal is correlated with the main signal and can provide rich scattering paths for the communication channel this means that a significant mimo multiplexing gain can be achieved even in a los scenario for the thz band our simulation results reveal a surprising observation that the mimo multiplexing could be a better choice than the mimo beamforming under certain conditions in thz communications i ntroduction to respond to the huge increasing demand for the wireless data traffic recently the terahertz thz band thz is envisioned to make tbps wireless link feasible in spite of the wide unused bandwidth in this spectrum the high propagation loss is the main issue of using such spectrum thus the potential applications of the thz link are limited to short range communications such as nanosensors wireless communications and wireless personal area networks moreover part of the radio signal attenuation at the thz frequencies is due to molecular absorption which is frequency selective and increases the total loss to more than db for some frequencies at distance basically to overcome the very high path loss the transmit power could be largely increased unfortunately this is not feasible with the current technology and it is limited to a few of mw alternately channel gain can be significantly improved by means of the beamforming technique indeed due to the very small footprint of a large number of antennas at the thz band beamforming using very large scale multiple input multiple output mimo systems has been considered in the field as a practical solution which can provide up to db channel gain at thz however beamforming comes at the cost of system complexity and signaling overhead where the transmitter should receive the channel state information continuously and align the beam to the receiver on the other hand to achieve a significant mimo beamforming gain in high frequency spectrum the beam would become very narrow which is sometimes described as a pencil beam this makes beamforming vulnerable to any mobility because it is difficult to perform beam in a very short time interval another approach to take advantage of mimo is the mimo multiplexing technique while the beamforming technique strives to focus the transmission energy and achieve a large channel gain in a specific direction the multiplexing technique builds it strength on creating parallel information channels however the multiplexing gain is significant only when there are 
enough multipath signal components in a rich scattering environment because of the huge path loss thz communication is usually assumed to be applied in as a los dominant channel and thus the research focus has been on beamforming rather than multiplexing however recent studies show that in the channel medium molecules absorb and the electromagnetic energy in thz band which transforms the los channel into a environment the is usually considered as noise but the theorical model shows it is highly correlated to main signal in this paper we will theoretically investigate the thz channel capacity for both cases of beamforming and multiplexing in a mimo we find that the multiplexing technique can provide a considerable capacity gain in comparison with the beamforming technique on certain conditions also in some other conditions where the beamforming yields a higher capacity the multiplexing technique is still preferable choice due to its easier implementation note that in this work we assume a multiplexing technique using a blind precoding scheme without channel state information csi in contrast the beamforming technique always requires accurate csi to smartly direct its energy in the spatial domain the rest of the paper is structured as follows in section ii we present the molecular absorption model for the calculation of channel transfer function section iii analyzes the mimo channel model considering the molecular followed by simulation results in section iv finally we conclude the paper in section ii c hannel model and mimo capacity the molecular absorption model defines how different species of molecules in a communication channel absorb energy from the electromagnetic signals and how they them back to the environment this section first explains the concept of absorption coefficient used to characterize the absorption capacity of a given molecule species followed by the attenuation and models that are built upon this coefficient molecular absorption coefficient the medium absorption coefficient k f at frequency f is a weighted sum of the molecular absorption coefficients in the medium which can be formulated as k f n x mi ki f where ki f is the molecular absorption coefficient of species si on condition of temperature t and and pressure ki f can be obtained from hitran in this work to get the values of k f we will use some predefined standard atmosphere conditions and their corresponding ratio of molecules in the air which are tabulated in attenuation of radio signal the attenuation of the radio signal at the thz frequencies is due to spreading and molecular absorption in more detail the spreading attenuation is given by molecular the existing molecules in communication medium will be excited by electromagnetic waves at specific frequencies the excitement is temporary and the energy level of molecules will come back to a steady state and the absorbed energy will be in the same frequency these waves are usually considered as noise in the literature molecular absorption is not white and its power spectral density psd is not flat because of the different resonant frequencies of various species of molecules the psd of the molecular absorption noise that affects the transmission b of a signal snabs is contributed by the atmospheric noise sn x and the noise sn as addressed in b x snabs f d sn f d sn f d c b sn f d kb f d c x sn f d pt f f d d where k f is the absorption coefficient of the medium at frequency f is the reference temperature kb is the boltzmann constant pt f is the power spectral 
density of the transmitted signal and c is the speed of light the first term in which is called sky noise and defined in is independent of the signal wave however the noise in is highly correlated with the signal wave and can be considered as a distorted copy of the signal wave thus equation can be revised as the received power of the reradiated signal by molecules at the receiver by c pr a f d pt f f d d since the phase of the wave depends on the phase of molecular vibration which varies from molecules to molecules the received power in this case is affected by a large number of photons thus we assume a uniformly distributed random phase for the received signal with its power given by channel transfer function the channel transfer function for a single los channel is given by d c f f d d d c f d e e d where k f is the absorption coefficient of the medium at frequency f thus the los received power at the receiver becomes c pr los f d pt f f d then the partial channel transfer function resulted from the molecular absorption and excluding the los component can be represented by s c f d f d d c f d e d aspread f d d c where c is the speed of light the attenuation due to molecular absorption is characterized as aabs f d ek f hence the total channel transfer function is the superposition of the partial channel transfer functions which is written as f d f d f d f d c d d d f c f d d mimo channel model and capacity in this paper we consider a mimo system that is consisted of nt transmitting antennas and nr receiving ones the received signal vector y at nr receiving antennas can be formulated as y n where x is the transmitted signal vector form nt transmitting antennas and n is an nr vector with independent noises with variance is the channel matrix where each of its elements is a complex value denoting the transfer coefficient associated with the jth transmitter antenna and the ith receiver antenna note that can be obtained from for frequency f and distance dij the capacity of mimo channel can be written as c det inr p nt where p is total transmitting power and i is the identity matrix since the determinant of inr can be computed by the product of the eigenvalues of the matrix the mimo capacity can thus be written in the form of a product of eigenvalues as x p where denotes singular values of the matrix and hence the squared singular values denotes the eigenvalues of the matrix each of the characterize an equivalent p information channel where is the corresponding ratio snr of the channel at the receiver note that denotes the number of which for beamforming technique it is equal to one and in multiplexing technique it could be the rank of with min nr nt however because we use blind precoding and uniform power allocation for multiplexing technique nt therefore equation is valid for uniform power allocation at the transmitter p furthermore the equivalent channel snr should meet a minimum receiver threshold to be reliably detectable by the receiver in this paper we assumed db as the snr threshold and uniform power allocation at the transmitter the main difference between beamforming and multiplexing techniques is how to tune or exploit the eigenvalue distribution in more details beamforming technique aims to maximize to improve the channel snr for a single data stream while in the multiplexing technique a uniform eigenvalue distribution is preferable in this way multiplexing technique can utilize parallel data streams through mimo and maximize the data rate the complexity of beamforming comes from 
eigenvalues tuning because it means the channel state information csi should be measured and sent back to the transmitter periodically for optimum precoding this also results in a protocol overhead in the channel on the other hand multiplexing gain can take advantage of eigenvalue value distribution even with a blind precoding this is more beneficial when there is a rich scattering environment in the channel in next section we will discuss how the can provide a rich scattering environment iii a nalysis on the channel with molecular absorption to analyze the mimo channel capacity and characterize the scattering richness of channel quantitatively lets decompose and normalize channel transfer function as h f d r k hlos f d k r ha f d k where h hlos and ha are normalized with corresponding channel gain because of uniformly distributed random phase of received signal elements of ha are independent and identically distributed complex gaussian random variables with zero mean and unit magnitude variance k is the ratio of powers of the los signal and the components and if we assume the channel distance is much longer than antenna space it can be obtained by f d pr los f d pr a f d f d this is same as the rician channel model where the k is called rician equivalently shows how much channel is rich in term of scattering and multipath rays equation shows k is a function of absorption coefficient of channel medium k f and the distance between transmitter and receiver d so that a longer distance and a higher absorption result smaller k as shown in figure the capacity of mimo channel considering rician is studied in several works authors in showed the lower bound of rician channel expected capacity for large number of antennas is the expected capacity of channel considering only nlos component r e c h e c ha k p e c h e c f d ha where e denotes the expectation it is clear that the lower band is a increasing function of absorption coefficient such that k k emin c emin c a b mimo capacity using beamforming c mimo capacity using multiplexing fig is an increasing function of distance and absorption coefficient for both multiplexing and beamforming techniques the performance gain is affected by the capacity is calculated for mimo system iv s imulation and discussion simulation in this section to evaluate the molecular absorption impact on thz mimo capacity we consider a simple n n mimo system with a square uniform arrays where at both transmitter and receiver the spacing s is equal to half of the wavelength and the channel distance is moreover we consider uniform power allocation to transmitter arrays operating in an los scenario the default values of the parameters are listed in table i and different values will be explained when necessary since we apply random phases on nlos components created by molecular we conduct the evaluation of the mimo capacity with molecular for times and show the average result we use the online browsing and plotting which is based on hitran databases to generate absorption coefficients for different single gas or some predefined standard gas mixture of the atmosphere at sea level as shown in table ii since the water molecules play main roles in a normal air environment at thz bands we use the highest and lowest water ratio in table ii the usa model high latitude winter and usa model tropics the corresponding absorption coefficients in thz bands have been shown in figure for an ambient temperature of k and a sea level pressure of atm for a tropic atmosphere the water ratio is 
higher than that of the winter atmosphere and thus we can see a significant increase in the absorption coefficient among these two gas mixtures in our simulation we assume a constant transmit power over the entire frequency spectrum and display the mimo capacity in for thz bands we consider a mimo with antennas at each side in a uniform square planar array our aim is to compare the beamforming and multiplexing http table i simulation parameters transmitter and receiver distance d spacing s transmitter arrays angle receiver arrays angle number of arrays on each side n transmit power noise power m wave length dbm dbm techniques in different channel conditions first we calculate the channel capacity for beamforming while the is totally ignored in the channel next the beamforming capacity is when the is taken into account finally the multiplexing gain is calculated with and without the consideration of in all scenarios capacity is obtained by in the first step the simulation is run at ghz with the practical range of absorption coefficient over the thz spectrum as shown in figure it should be noted that the actual value of absorption coefficient at ghz is shown in figure the beamforming and multiplexing techniques capacity is calculated for a range of m distance and a mw transmit power secondly the channel is simulated for two different transmit power and three distances with realistic absorption coefficients our assumption on the transmit power is based on current technology and a previous work on thz massive mimo furthermore distances have been chosen to cover various application scenarios for example thz nanosensors are considered to communicate in a very short distance in the order of cm or less while thz communications are also nominated to provide terabit per second ultra high video communication link at around m distance for home entrainment devices like tv or virtual reality vr in addition longer distances to a few meters characterize wireless personal or local networks simulation results are presented in figure b the mimo capacity the figure illustrates how the channel is transformed from a los dominant channel to a rayleigh channel and how it effects on the mimo beamforming and multiplexing capacity gain as can be seen in figure the beamforming gain is decreasing when the absorption coefficient increases which is because in the very high absorption the channel is not los dominant anymore and there is significant nlos signal component generated by molecule or equivalently lower in contrast figure shows the multiplexing technique takes advantage of higher absorption to reach a huge data rate however the low snr limit the multiplexing gain in longer distances so that it drops sharply to zero beyond in figure more results for the thz spectrum with realistic absorption coefficients will be presented table ii atmosphere standard gas mixture ratio in percentage for different climates usa usa usa usa usa model model model model model mean latitude summer mean latitude winter high latitude summer high latitude winter tropics the mimo capacity the transmit power and distance the channel attenuation including molecular attenuation in and spreading attenuation in is illustrated in figure while the spreading attenuation is increasing linearly in db with distance and frequency the molecular attenuation is also increasing with distance but is frequency selective for example while the total loss at is db for ghz the total attenuation at ghz is db at m and it grows to db at m which is mostly because 
of very high absorption of water molecules in the channel medium at this frequency note that the channel atmosphere for this case is from tropic data where the ratio of water molecules in the air is more than as shown in table ii figure and illustrate the capacity of the investigated transmission techniques for a cm distance the transmit power is increased from mw in figure to mw in figure it can be seen that a huge performance difference exists between multiplexing and beamforming thanks to the tremendous multiplexing gain provided by the rich scattering environment due to molecule furthermore in very high absorption frequencies which existing studies consider as infeasible windows for thz communications a significant capacity improvement can be observed this is because more absorption leads to more which transforms a los dominant channel to rayleigh channel the details can be found in section iii where we have discussed about how the decreases the and creates a rich scattering environment to sum up the improves the multiplexing gain which is fundamentally supported by a better eigenvalue distribution and channel matrix rank in mathematical analysis in figure and the distance is increased to with a relatively large distance for thz communications it can be seen the beamforming gain is comparable with the multiplexing gain however we can see the multiplexing gain in high absorption windows such as ghz is significantly higher than the rest of spectrum for a mw transmit power it is a different story for a mw transmit power where the capacity drops to zero in high p absorption windows because the equivalent snr of most parallel channels created by the multiplexing technique is less than db and practically such parallel channels are useless because the receiver can not reliably detect the received signals such results are not surprising since it has been shown in several works on conventional communication band that the multiplexing performance drops dramatically in low snr however considering the implementation challenges of beamforming the multiplexing technique might still be a preferable choice for frequency up to thz for example it can be observed in figure at thz the capacity is and co co co co co for the multiplexing and beamforming techniques respectively finally figures and present the results for a m distance for such a distance path loss leads to a very low reception snr and thus the beamforming performance is significantly better than the multiplexing performance it is wellknown that beamforming technique is not very effective where there are strong multipath rays thus it is observed that in very high absorption frequency windows the beamforming performance drops sharply it is not only because of receiving strong nlos rays caused by molecule but also due to los signal attenuation note that the multiplexing technique can take advantage of same windows in high snr as we discussed above for figure c onclusion in this paper we compared the beam forming and multiplexing techniques of mimo in the terahertz band we showed in high snr high transmit power or lower distance the multiplexing technique can provide a considerable capacity gain compared with beamforming however for beyond a few meters such as meters there should be enough transmitting power possibility to use multiplexing technique otherwise the capacity drops to zero where the beamforming technique can still provide effective spectrum efficiency at the cost of complexity and protocol overhead our theoretical model also showed 
that the re-radiation of molecules in the thz band can be helpful for massive mimo systems to improve the channel performance using the multiplexing technique the re-radiation can provide significantly strong multipath components to achieve a full spatial multiplexing gain where the receiver is in sufficient snr coverage it means some very high absorption frequency windows which have formerly been regarded as not feasible for communication might be more preferable choices for mimo in certain applications
[figure: mimo channel performance in a tropic atmosphere: (a) absorption coefficient for the usa model tropics (high h2o) and usa model high latitude winter (low h2o) mixtures, (b) signal attenuation, and (c)-(h) capacity versus frequency for beamforming and multiplexing, with and without re-radiation, at the different transmit powers and distances]
references
akyildiz and jornet realizing mimo communication in the terahertz band nano communication networks vol pp
zarepour hassan chou and a adesina semon sensorless event monitoring in wireless nanosensor networks acm transactions on sensor networks tosn vol no
akyildiz jornet and han terahertz band next frontier for wireless communications physical communication vol pp
kokkoniemi and juntti a discussion on molecular absorption noise in the terahertz band nano communication networks
jornet and akyildiz modulation for terahertz band communication in nanonetworks ieee transactions on communications vol no pp may
jornet montana fundamentals of electromagnetic nanonetworks in the terahertz band dissertation georgia institute of technology
rothman gordon babikov barbe et al the molecular spectroscopic database journal of quantitative spectroscopy and radiative transfer vol pp
barron molecular light scattering and optical activity cambridge university press
tse and viswanath chapter mimo i spatial multiplexing and channel modeling fundamentals of wireless communication pp
farrokhi foschini lozano and valenzuela processing with multiple transmit and receive antennas ieee communications letters vol no pp
lebrun faulkner shafi and smith mimo ricean channel capacity an asymptotic analysis ieee transactions on wireless communications vol no pp
gesbert shafi shiu smith et al from theory to practice an overview of mimo coded wireless systems ieee journal on selected areas in communications vol no pp
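To make the attenuation model of Section II concrete, the short Python/NumPy sketch below evaluates the spreading loss (4*pi*f*d/c)^2, the molecular-absorption loss exp(k(f)*d) with k(f) formed as the mole-fraction-weighted sum of per-species coefficients, and the ratio of LOS to re-radiated power that later acts as the Rician K-factor. The function names are mine, and the numeric values (0.3 THz, 1 m, k(f) = 0.1 m^-1) are illustrative placeholders rather than values taken from HITRAN or from the paper.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def absorption_coefficient(mole_fractions, species_coefficients):
    """Medium absorption coefficient k(f): mole-fraction-weighted sum of the
    per-species coefficients k_i(f) (e.g. looked up from HITRAN)."""
    return sum(m * k for m, k in zip(mole_fractions, species_coefficients))

def spreading_loss(f, d):
    """Free-space spreading attenuation (4*pi*f*d/c)^2."""
    return (4.0 * np.pi * f * d / C) ** 2

def absorption_loss(k_f, d):
    """Molecular absorption attenuation exp(k(f)*d)."""
    return np.exp(k_f * d)

def rician_k_factor(k_f, d):
    """Ratio of LOS power to re-radiated power.

    With the LOS power proportional to exp(-k d) and the re-radiated power
    proportional to (1 - exp(-k d)), both behind the same spreading loss,
    the ratio reduces to exp(-k d) / (1 - exp(-k d)); it shrinks as k(f)
    or d grows."""
    t = np.exp(-k_f * d)
    return t / (1.0 - t)

# Illustrative numbers: 0.3 THz carrier, 1 m link, k(f) = 0.1 m^-1
f, d, k_f = 0.3e12, 1.0, 0.1
total_db = 10 * np.log10(spreading_loss(f, d) * absorption_loss(k_f, d))
print("total loss (dB):", round(float(total_db), 1))
print("K-factor:", round(float(rician_k_factor(k_f, d)), 2))
```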
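The beamforming/multiplexing comparison of Section III comes down to how the squared singular values of the channel matrix are used. The sketch below, again only a rough illustration, draws a Rician channel H = sqrt(K/(K+1)) H_los + sqrt(1/(K+1)) H_a and compares (i) single-stream beamforming capacity on the dominant eigenmode against (ii) blind uniform-power multiplexing in which streams whose per-stream SNR falls below a detectability threshold are discarded. The 64x64 array, K = 0.1, 20 dB total SNR, the rank-one unit-modulus LOS construction and the 0 dB threshold are all assumptions of mine (the threshold value quoted in the paper did not survive extraction).

```python
import numpy as np

def rician_channel(nr, nt, k_factor, rng):
    """H = sqrt(K/(K+1)) H_los + sqrt(1/(K+1)) H_a, with H_a i.i.d. CN(0, 1)
    and an illustrative rank-one, unit-modulus LOS component."""
    h_los = np.exp(1j * 2 * np.pi * rng.random((nr, 1))) @ \
            np.exp(1j * 2 * np.pi * rng.random((1, nt)))
    h_a = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    return np.sqrt(k_factor / (k_factor + 1)) * h_los + np.sqrt(1 / (k_factor + 1)) * h_a

def capacity_beamforming(h, snr_total):
    """Single data stream carried on the dominant eigenmode of H."""
    lam_max = np.linalg.svd(h, compute_uv=False)[0] ** 2
    return float(np.log2(1 + snr_total * lam_max))

def capacity_multiplexing(h, snr_total, snr_threshold_db=0.0):
    """Blind precoding with uniform power over the Nt antennas; parallel
    channels below the per-stream SNR threshold are treated as unusable."""
    nt = h.shape[1]
    lam = np.linalg.svd(h, compute_uv=False) ** 2       # squared singular values
    per_stream_snr = snr_total / nt * lam
    usable = per_stream_snr[per_stream_snr >= 10 ** (snr_threshold_db / 10)]
    return float(np.sum(np.log2(1 + usable)))

rng = np.random.default_rng(0)
h = rician_channel(64, 64, k_factor=0.1, rng=rng)       # small K: strong re-radiated scattering
print("beamforming  (bits/s/Hz):", round(capacity_beamforming(h, snr_total=100.0), 1))
print("multiplexing (bits/s/Hz):", round(capacity_multiplexing(h, snr_total=100.0), 1))
```

Sweeping k_factor from large to small values reproduces the qualitative trend described around the figures: beamforming dominates in the LOS-dominant regime, while multiplexing benefits from strong re-radiation as long as the per-stream SNR stays above the threshold.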
7
an emptiness algorithm for regular types with set operators arxiv nov lunjin lu and john cleary department of computer science university of waikato hamilton new zealand phone lunjin jcleary abstract an algorithm to decide the emptiness of a regular type expression with set operators given a set of parameterised type definitions is presented the algorithm can also be used to decide the equivalence of two regular type expressions and the inclusion of one regular type expression in another the algorithm strictly generalises previous work in that tuple distributivity is not assumed and set operators are permitted in type expressions keywords type emptiness prescriptive type introduction types play an important role in programming languages they make programs easier to understand and help detect errors types have been introduced into logic programming in the forms of type checking and inference or type analysis or typed languages recent logic programming systems allow the programmer to declare types for predicates and type errors are then detected either at compile time or at run time the reader is referred to for more details on types in logic programming a type is a possibly infinite set of ground terms with a finite representation an integral part of any type system is its type language that specifies which sets of ground terms are types to be useful types should be closed under intersection union and complement operations the decision problems such as the emptiness of a type inclusion of a type in another and equivalence of two types should be decidable regular term languages called regular types satisfy these conditions and have been used widely used as types most type systems use tuple distributive regular types which are strictly less powerful than regular types tuple distributive regular types are regular types closed under tuple distributive closure intuitively the tuple distributive closure of a set of terms is the set of all terms constructed recursively by permuting each argument position among all terms that have the same function symbol this paper gives an algorithm to decide if a type expression denotes an empty set of terms the correctness of the algorithm is proved and its complexity is analysed the algorithm works on prescriptive types by prescriptive types we mean that the meaning of a type is determined by a given set of type definitions we allow parametric and overloading polymorphism in type definitions prescriptive types are useful both in compilers and other program manipulation tools such as debuggers because they are easy to understand for programmers type expressions may contain set operators with their usual interpretations thus the algorithm can be used to decide the equivalence of two type expressions and the inclusion of one type expression in another the introduction of set operators into type expressions allows concise and intuitive representation of regular types though using regular term languages as types allow us to make use of theoretical results in the field of tree automata algorithms for testing the emptiness of tree automata can not be applied directly as type definitions may be parameterised for instance in order to decide the emptiness of a type expression given a set of type definitions it would be necessary to construct a tree automaton from the type expression and the set of type definitions before an algorithm for determining the emptiness of an tree automaton can be used when type definitions are parameterised this would make it necessary to 
construct a different automaton each time the emptiness of a type expression is tested thus an algorithm that works directly with type definitions is desirable as it avoids this repeated construction of automata attempts have been made in the past to find algorithms for regular types to our knowledge dart and zobel s work is the only one to present decision algorithms for emptiness and inclusion problems for prescriptive regular types without the tuple distributive restriction unfortunately their decision algorithm for the inclusion problem is incorrect for regular types in general see for a counterexample moreover the type language of dart and zobel is less expressive than that considered in this paper since it doesn t allow set operators and parameterised type definitions set constraint solving has also been used in type checking and type inference however set constraint solving methods are intended to infer descriptive types rather than for testing emptiness of prescriptive types therefore they are useful in different settings from the algorithm presented in this paper moreover algorithms proposed for set constraint solving are not applicable to the emptiness problem we considered in this paper as they don t take type definitions into account the remainder of this paper is organised as follows section describes our language of type expressions and type definitions section presents our algorithm for testing if a type expression denotes an empty set of terms section addresses the of the algorithm section presents the complexity of the algorithm and section concludes the paper some lemmas are presented in the appendix type language let be a fixed ranked alphabet each symbol in is called a function symbol and has a fixed arity it is assumed that contains at least one constant that is a function symbol of arity the arity of a symbol f is denoted as arity f may be considered as the set of function symbols in a program let t be the set of all terms over t is the set of all possible values that a program variable can take we shall use regular term languages over as types a type is represented by a ground term constructed from another ranked alphabet and called type constructors it is assumed that thus a type expression is a term in t the denotations of type constructors in are determined by type definitions whilst and have fixed denotations that will be given soon several equivalent formalisms such as tree automata regular term grammars and regular unary logic programs have been used to define regular types we define types by type rules a type rule is a production rule of the form c where c are different type parameters and t where the restriction that every type parameter in the righthand side of a type rule must occur in the lefthand side of the type rule is often referred to as type preserving and has been used in all the type definition formalisms note that overloading of function symbols is permitted as a function symbol can appear in the righthand sides of many type rules we def s denote by the set of all type rules and define c is a restricted form of term grammar example let s nil cons and nat even list defines natural numbers even numbers and lists where nat s nat even s s even list nil cons list where for instance nat s nat is an abbreviation of two rules nat and nat s nat is called simplified if in each production rule c is of the form f such that each for j n is either in or of the form d and we shall assume that is simplified there is no loss of generality to use a simplified set 
of type rules since every set of type rules can be simplified by introducing new type constructors and rewriting and adding type rules in the spirit of example the following is the simplified version of the set of type rules in example s nil cons nat even odd list and nat s nat even s odd odd s even list nil cons list a type valuation is a mapping from to t the instance r of a production rule r under is obtained by replacing each occurrence of each type parameter in r with list cons list is the instance of list cons list under a type valuation that maps to let def ground r r t f f ground is the set of all ground instances of grammar rules in plus rules of the form f for every f given a set of type definitions the type denoted by a type expression is determined by the following meaning function def t def def def def t e def f tn i ti ei en gives fixed denotations to and and are interpreted by as set intersection set union and set complement with respect to t denotes t and the empty set example let be that in example we have nat s s s even s s s s s s s s s s s s s s s list cons s nil cons s s s nil the lemma in the appendix states that every type expression denotes a regular term language that is a regular type we extend to sequences of type expressions as follows def def hei e where is the empty sequence is the infix sequence concatenation operator hei is the sequence consisting of the type expression e and is the cartesian product operator as a sequence of type expressions can be thought of consisting of zero instance of we use to denote the sequence consisting of zero instance of and define we shall call a sequence of type expressions simply a sequence a sequence expression is an expression consisting of sequences of the same length and and the length of the sequences in a sequence expression is called the dimension of and is denoted by let and be sequence expressions of the same length def def def t t z times a conjunctive sequence expression is a sequence expression of the form where for i m are sequences emptiness algorithm this section presents an algorithm that decides if a type expression denotes the empty set with respect to a given set of type definitions the algorithm can also be used to decide if the denotation of one type expression is included in the denotation of another because is included in iff is empty we first introduce some terminology and notations a type atom is a type expression of which the principal type constructor is not a set operator a type literal is either a type atom or the complement of a type atom a conjunctive type expression c is of the form li with li being a type literal let be a type atom f defined below is the set of the principal function symbols of the terms in def f f f ground let f define def i f ground we have tk i f tk both f and are finite even though ground is usually not finite the algorithm repeatedly reduces the emptiness problem of a type expression to the emptiness problems of sequence expressions and then reduces the emptiness problem of a sequence expression to the emptiness problems of type expressions tabulation is used to break down any possible loop and to ensure termination let o be a type def expression or a sequence expression define empty o o two reduction rules we shall first sketch the two reduction rules and then add tabulation to form an algorithm initially the algorithm is to decide the validity of a formula of the form empty e where e is a type expression the first reduction rule rewrites a formula of the form into a 
conjunction of formulae of the following form reduction rule one empty where is a sequence expression where is applied to type expressions but not to any sequence expression it is obvious that a type expression has a unique modulo equivalence of denotation disjunctive normal form let dnf e be the disjunctive normal form of empty e can written into e empty c each c is a conjunctive type expression we assume that c contains at least one positive type literal this doesn t cause any loss of generality as c for any conjunctive type expression we also assume that c doesn t contain repeated occurrences of the same type literal let c where and are type atoms def the set of positive type literals in c is denoted as pos c i m while the set of complemented type atoms are denoted as def neg c j n lit c denotes the set of literals occurring in by lemma in the appendix empty c is equivalent to c f empty c c the intuition behind the equivalence is as follows c is empty iff for every function symbol f the set of the sequences tk i of terms such that f tk c is empty only the function symbols in c f need to be considered we note the following two special cases of the formula a if c f then the formula is true because true in particular f thus if pos c then c f and hence the formula is true b if for some neg c then and thus has no effect on the subformula for f when in order to get rid of complement operators over sequence subexpressions the complement operator in is pushed inwards by the function push defined in the following def push push def push ek i h i def push z z for k it follows from de morgan s law and the definition of that push substituting push for in the formula gives rise to a formula of the form the second reduction rule rewrites a formula of the form to a conjunction of disjunctions of formulae of the form formula is written into a disjunction of formulae of the form empty reduction rule two where be a conjunctive sequence expression in the case k by lemma in the appendix empty can be decided without further reduction if then empty is true because otherwise empty is false because in the case k empty is equivalent to k empty def where letting with being the j th component of note that is a type expression and empty is of the form algorithm the two reduction rules in the previous section form the core of the algorithm however they alone can not be used as an algorithm as a formula empty e may reduce to a formula containing empty e as a leading to nontermination suppose f a null and null f null clearly empty null is true however by the first reduction rule empty null reduces to empty hnulli which then reduces to empty null by the second reduction rule this process will not terminate the solution inspired by is to remember in a table a particular kind of formulae of which truth is being tested when a formula of that kind is tested the table is first looked up if the formula is implied by any formula in the table then it is determined as true otherwise the formula is added into the table and then reduced by a reduction rule the emptiness algorithm presented below remembers every conjunctive type expression of which emptiness is being tested thus the table is a set of conjunctive type expressions let and be def conjunctive type expressions we define lit lit since ci ci l implies and hence empty implies empty adding tabulation to the two reduction rules we obtain the following algorithm for testing the emptiness of prescriptive regular types let bcf c c push def etype e etype e def etype e dnf e 
conj c def etype conj c if pos c neg c true true if c f otherwise c f bc c def eseq dnf conj def eseq conj true if k false if k j if k equation initialises the table to the empty set equations and implement the first reduction rule while equations and implement the second reduction rule etype and etype conj test the emptiness of an arbitrary type expression and that of a conjunctive type expression respectively eseq tests emptiness of a sequence expression consisting of sequences and and operators while eseq conj tests the emptiness of a conjunctive sequence expression the expression of which emptiness is to be tested is passed as the first argument to these functions the table is passed as the second argument it is used in etype conj to detect a conjunctive type expression of which emptiness is implied by the emptiness of a tabled conjunctive type expression as we shall show later this ensures the termination of the algorithm each of the four binary functions returns true iff the emptiness of the first argument is implied by the second argument and the set of type definitions tabling any other kind of expressions such as arbitrary type expressions can also ensure termination however tabling conjunctive type expressions makes it easier to detect the implication of the emptiness of one expression by that of another because lit c can be easily computed given a conjunctive type expression in an implementation a conjunctive type expression c in the table can be represented as lit c the first two definitions for etype conj c in equation terminates the algorithm when the emptiness of c can be decided by c and without using type definitions the first definition also excludes from the table any conjunctive type expression that contains both a type atom and its complement examples we now illustrate the algorithm with some examples example let type definitions be given as in example the tree in figure depicts the evaluation of etype by the algorithm nodes are labeled with function calls we will identity a node with its label arcs from a node to its children are labeled with the number of the equation that is used to evaluate the node abbreviations used in the labels are defined in the legend to the right of the tree though a b a and b are syntactically different type expressions the evaluation returns true verifying consider etype conj b a we have b a as lit a lit b thus by equation etype conj b a true etype a etype a etype conj a eseq c a eseq a eseq conj a eseq conj c a true etype b a etype conj b a true legend a n b n c hn fig evaluation of etype n example let type definitions be given as in example the tree in figure depicts the evaluation of etype list by the algorithm the evaluation returns false verifying list indeed list nil the rightmost node is not evaluated as its sibling returns false which is enough to establish the falsity of their parent node etype a etype a etype conj a eseq a eseq hb ai a eseq conj a false legend a list at b at fig evaluation of etype list at example the following is a simplified version of the type definitions that is used in to show the incorrectness of the algorithm by dart and zobel for testing inclusion of one regular type in another let a b g h and g g g a h b h a b h h a let t g h h a b a t and t see example in for more details so this is verified by our algorithm as follows let and by applying equations and in that order we have etype etype conj by equation we have etype eseq eseq eseq where we choose not to simplify expressions such as so as to make the 
example easy to follow by applying equations and we have both eseq true and eseq true so etype eseq let to show etype false it suffices to show eseq conj false by equation because dnf and etype eseq figure depicts the evaluation of eseq conj the node that is linked to its parent by a dashed line is not evaluated because one of its siblings returns false which is sufficient to establish the falsity of its parent it is clear from the figure that etype conj false and hence etype false correctness this section addresses the correctness of the algorithm we shall first show that tabulation ensures the termination of the algorithm because the table can only be of finite size we then establish the partial correctness of the algorithm etype conj etype etyp etyp conj etype conj eseq eseq eseq eseq eseq conj eseq conj eseq conj true false false legend fig evaluation of etype conj termination given a type expression e a type atom in e is a type atom in e that is not a of any type atom in the set of type atoms in e is denoted by tla e for instance letting e nat ree tla e list nat t ree we extend tla to sequences def s by tla ek i tla ei given a type expression the evaluation tree for etype contains nodes of the form etype e etype conj c eseq and eseq conj in addition to the root that is etype only nodes of the form etype conj c add conjunctive type expressions to the table other forms of nodes only pass the table around therefore it suffices to show that the type atoms occurring in the first argument of the nodes are from a finite set because any conjunctive type expression added into the table is the first argument of a node of the form etype conj c the set rta of type atoms relevant to a type expression is the smallest set of type atoms satisfying tla rta and if is in rta and f is in ground then tla rta for i the height of is no more than that of for any f in ground thus the height of any type atom in rta is finite there are only a finite number of type constructors in thus rta is of finite size it follows by examining the algorithm that type atoms in the first argument of the nodes in the evaluation tree for etype are from rta which is finite therefore the algorithm terminates partial correctness the partial correctness of the algorithm is established by showing etype true iff empty let be a set of conjunctive type def expressions define empty c the following two lemmas form the core of our proof of the partial correctness of the algorithm lemma let be a set of conjunctive type expressions e a type expression c a conjunctive type expression a sequence expression and a conjunctive sequence expression a b c d if if if if empty c empty e empty empty then then then then etype conj c true and etype e true and etype true and etype true proof the proof is done by induction on the size of the complement of with respect to the set of all possible conjunctive type expressions in which type atoms are from rta where is a type expression basis the complement is empty contains all possible conjunctive type expressions in which type atoms are from rta we have c and hence etype conj c true by equation therefore a holds b follows from a and equation c follows from b equation and lemma in the appendix and d follows from c and equation induction by lemma in the appendix empty c implies empty bcf for any f c f thus c empty bcf the complement of c is smaller than the complement of by the induction hypothesis we have eseq bcf c true by equation etype conj c true therefore a holds b follows from a and equation c follows 
from b equation and lemma in the appendix and d follows from c and equation this completes the proof of the lemma lemma establishes the completeness of etype etype conj eseq and eseq conj while the following lemma establishes their soundness lemma let be a set of conjunctive type expressions e a type expression c a conjunctive type expression a sequence expression and a conjunctive sequence expression a b c d empty c empty e empty empty if if if if etype conj c true and etype e true and etype true and etype true proof it suffices to prove a since b c and d follow from a as in lemma the proof is done by induction on dp c the depth of the evaluation tree for etype conj c basis dp c etype conj c true implies either i pos c neg c or ii c in case i empty c is true and empty c consider case ii by the definition of and we have etype conj c true implies empty c induction dp c assume etype conj c true and c by lemma there is f c f such that bcf we have c bcf dp bcf c dp c by the induction hypothesis we have etuple bcf c false for otherwise c bcf by equation etype conj c false which contradicts etype conj c true so empty c if etype conj c true this completes the induction and the proof of the lemma the following theorem is a corollary of lemmas and theorem for any type expression e etype e true iff empty e proof by equation etype e etype e by lemma b and lemma b we have etype e true iff empty e the result follows since true complexity we now address the issue of complexity of the algorithm we only consider the time complexity of the algorithm the time spent on evaluating etype for a given type expression can be measured in terms of the number of nodes in the evaluation tree for etype the algorithm cycles through etype etype conj eseq and eseq conj thus children of a node of the form etype e can only be of the form etype conj c and so on let be the number of elements in a given set the largest possible table in the evaluation of etype contains all the conjunctive type expressions of which type atoms are from rta therefore the table can contain at most conjunctive type expressions so the height of the tree is bounded by o we now show that the branching factor of the tree is also bounded by o by equation the number of children of etype e is bounded by two to the power of the number of type atoms in e which is bounded by because e can only contain type atoms from rta by equation the number of children of etype conj c is bounded by the largest number of children of a node eseq is bounded by two to the power of the number of sequences in where bcf for each neg c is o arity f and thus the number of sequences in is o arity f and hence the number of children of eseq is o since arity f is a constant by equation the number of children of eseq conj is bounded by maxf arity f therefore the branching factor of the tree is bounded by o the above discussion leads to the following conclusion proposition the time complexity of the algorithm is o the fact that the algorithm is exponential in time is expected because the complexity coincides with the complexity of deciding the emptiness of any tree automaton constructed from the type expression and the type definitions a deterministic tree automaton recognising will consist of states as observed in the proof of lemma it is that the decision of the emptiness of the language of a deterministic tree automaton takes time polynomial in the number of the states of the tree automaton therefore the complexity of the algorithm is the best we can expect from an algorithm for 
deciding the emptiness of regular types that contain set operators conclusion we have presented an algorithm for deciding the emptiness of prescriptive regular types type expressions are constructed from type constructors and set operators type definitions prescribe the meaning of type expressions the algorithm uses tabulation to ensure termination though the tabulation is inspired by dart and zobel the decision problem we consider in this paper is more complex as type expressions may contain set operators for that reason the algorithm can also be used for inclusion and equivalence problems of regular types the way we use tabulation leads to a correct algorithm for regular types while the algorithm has been proved incorrect for regular types in general to the best of our knowledge our algorithm is the only correct algorithm for prescriptive regular types in addition to correctness our algorithm generalises the work of dart and zobel in that type expressions can contain set operators and type definitions can be parameterised parameterised type definitions are more natural than monomorphic type definitions while set operators makes type expressions concise the combination of these two features allows more natural type declarations for instance the type of the logic program append can be declared or inferred as append list list list the algorithm is exponential in time this coincides with deciding the emptiness of the language recognised by a tree automaton constructed from the type expression and the type definitions however the algorithm avoids the construction of the tree automaton which can not be constructed a priori when type definitions are parameterised another related field is set constraint solving however set constraint solving methods are intended to infer descriptive types rather than for testing the emptiness of a prescriptive type therefore they are useful in different settings from the gorithm presented in this paper in addition algorithms proposed for solving set constraints are not applicable to the emptiness problem we considered in this paper take for example the constructor rule in which states that emptiness of f em is equivalent to the emptiness of ei for some i however empty list is not equivalent to empty the latter is true while the former is false since list nil the constructor rule doesn t apply because it deals with function symbols only but doesn t take the type definitions into account references aiken kozen vardi and wimmers the complexity of set constraints in proceedings of computer science logic conference pages aiken and lakshman directional type checking of logic programs in b le charlier editor proceedings of the first international static analysis symposium pages aiken and wimmers solving systems of set constraints in proceedings of the seventh ieee symposium on logic in computer science pages the ieee computer society press aiken and wimmers type inclusion constraints and type inference in proceedings of the conference on functional programming languages and computer architecture pages copenhagen denmark june beierle type inferencing for polymorphic logic programs in sterling editor proceedings of the twelfth international conference on logic programming pages the mit press cardelli and wegner on understanding types data abstraction and polymorphism acm computing surveys codish and lagoon type dependencies for logic programs using aciunification in proceedings of the israeli symposium on theory of computing and systems pages ieee press june comon 
dauchet gilleron lugiez tison and tommasi tree automata techniques and applications draft dart and zobel efficient type checking of typed logic programs journal of logic programming dart and zobel a regular type language for logic programs in frank pfenning editor types in logic programming pages the mit press devienne talbot and tison set constraints with membership expressions in jaffar editor proceedings of the joint conference and symposium on logic programming pages the mit press fruhwirth shapiro vardi and yardeni logic programs as types for logic programs in proceedings of sixth annual ieee symposium on logic in computer science pages the ieee computer society press gallagher and de waal fast and precise regular approximations of logic programs in bruynooghe editor proceedings of the eleventh international conference on logic programming pages the mit press and steinby tree automata and steinby tree languages in rozenberg and salomma editors handbook of formal languages pages hanus horn clause programs with polymorphic types semantics and resolution theoretical computer science heintze and jaffar a finite presentation theorem for approximating logic programs in proceedings of the seventh annual acm symposium on principles of programming languages pages the acm press heintze and jaffar a decision procedure for a class of set constraints technical report university february later version of a paper in proc ieee symposium on lics heintze and jaffar semantic types for logic programs in frank pfenning editor types in logic programming pages the mit press heintze and jaffar set constraints and analysis in alan borning editor principles and practice of constraint programming volume of lecture notes in computer science springer may ppcp second international workshop orcas island seattle usa jacobs type declarations as subtype constraints in logic programming sigplan notices lu type analysis of logic programs in the presence of type definitions in proceedings of the acm sigplan symposium on partial evaluation and program manipulation pages the acm press lu a polymorphic type analysis in logic programs by abstract interpretation journal of logic programming lu and cleary on algorithm for testing regular type inclusion technical report department of computer science the university of waikato october http mishra towards a theory of types in prolog in proceedings of the ieee international symposium on logic programming pages the ieee computer society press mycroft and o keefe a polymorphic type system for prolog artificial intelligence frank pfenning editor types in logic programming the mit press cambridge massachusetts reddy types for logic programs in debray and hermenegildo editors logic programming proceedings of the north american conference pages the mit press soloman type definitions with parameters in conference record of the fifth acm symposium on principles of programming languages pages tiuryn type inference problems a survey in roven editor proceedings of the fifteenth international symposium on mathematical foundations of computer science pages yardeni fruehwirth and shapiro polymorphically typed logic programs in furukawa editor logic programming proceedings of the eighth international conference pages the mit press yardeni and shapiro a type system for logic programs journal of logic programming zobel derivation of polymorphic types for prolog programs in lassez editor logic programming proceedings of the fourth international conference pages the mit press appendix lemma let c be 
a conjunctive type expression empty c iff c f empty c c proof let t be a sequence of terms and f a function symbol by the definition of f t c iff f c f and t c c t c c iff t c c thus empty c iff empty c c for each f c f lemma let be a conjunctive sequence expression then empty iff kempty proof let k tn and with n i we have we have j j j t j iff j j iff iff empty lemma m is a regular term language for any type expression proof the proof is done by constructing a regular term grammar for m we first consider the case m t let r hrta m mi with f ground rta m r is a regular term grammar it now suffices to prove that t m iff m sufficiency assume m the proof is done by induction on derivation steps in m basis m t must be a constant and m t is in which implies m t is in ground by the definition of t m induction suppose m f mk then ni t f tk and mi t with ni n by the induction hypothesis ti mi and hence t m by the definition of necessity assume t m the proof is done by the height of t denoted as height t height t implies that t is a constant t m implies that m t is in ground and hence m t is in therefore m let height t then t f tk t m implies that m f mk ground and ti mi by the definition of we have m f mk by the definition of rta we have mi rta m by the induction hypothesis mi ti therefore m f mk f tk now consider the case m t we complete the proof by induction on the height of height m then m doesn t contain set operator we have already proved that m is a regular term language now suppose height m if m doesn t contain set operator then the lemma has already been proved if the principal type constructor is one of set operators then the result follows immediately as regular term languages are closed under union intersection and complement operators it now suffices to prove the case m c ml with c let n c xl where each xj is a different new type constructor of arity let xl xl and xj xj j l n is a regular term language on xl because n doesn t contain set operators by the induction hypothesis mj is a regular term language by the definition of we have m n xl ml which is a regular term language s is the set of terms each of which is obtained from a term in s by replacing each occurrence of yj with a possibly different term from syj this completes the induction and the proof the proof also indicates that a tree automaton that recognises m has m states and that a deterministic tree automaton that recognises m has o m states
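As a concrete companion to the type language of Section 2, the sketch below encodes the simplified type rules of Example 2 (nat, even, odd and the parameterised list(alpha)) as a small Python table and implements a membership test that follows the meaning function directly: a ground term f(t1, ..., tk) belongs to a type iff some instantiated production for that type has head symbol f and each ti belongs to the corresponding argument type. The tuple encoding of terms and type expressions, and every function name, are representation choices made here for illustration; set operators are deliberately not modelled.

```python
from typing import Dict, List, Tuple, Union

# A type expression is either a formal parameter (a bare string) or a tuple
# (constructor, arg_1, ..., arg_k).  Ground terms use the same tuple shape.
TypeExpr = Union[str, tuple]

# constructor -> (formal parameters, [(function symbol, argument type expressions)])
RULES: Dict[str, Tuple[Tuple[str, ...], List[Tuple[str, List[TypeExpr]]]]] = {
    'nat':  ((),     [('0', []), ('s', [('nat',)])]),
    'even': ((),     [('0', []), ('s', [('odd',)])]),
    'odd':  ((),     [('s', [('even',)])]),
    'list': (('a',), [('nil', []), ('cons', ['a', ('list', 'a')])]),
}

def instantiate(expr: TypeExpr, binding: Dict[str, TypeExpr]) -> TypeExpr:
    """Replace formal parameters by the actual argument types."""
    if isinstance(expr, str):
        return binding.get(expr, expr)
    ctor, *args = expr
    return (ctor, *[instantiate(a, binding) for a in args])

def alternatives(expr: TypeExpr):
    """Instances of the production rules whose head matches `expr`."""
    ctor, *actuals = expr
    formals, bodies = RULES[ctor]
    binding = dict(zip(formals, actuals))
    for fsym, arg_types in bodies:
        yield fsym, [instantiate(t, binding) for t in arg_types]

def member(term: tuple, expr: TypeExpr) -> bool:
    """Is the ground term (fsym, t1, ..., tk) in the denotation of `expr`?"""
    fsym, *args = term
    return any(fsym == g and len(args) == len(arg_types)
               and all(member(a, e) for a, e in zip(args, arg_types))
               for g, arg_types in alternatives(expr))

two = ('s', ('s', ('0',)))                         # s(s(0))
print(member(two, ('nat',)), member(two, ('even',)), member(two, ('odd',)))  # True True False
print(member(('cons', ('0',), ('nil',)), ('list', ('nat',))))                # True
```

Because ground terms are finite, the recursion in member always terminates, which is why no table is needed for membership, unlike for emptiness below.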
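Building on the encoding above (it assumes RULES and alternatives are still in scope), the next sketch shows the tabulation idea behind the emptiness test in the degenerate case without set operators: a type expression that is encountered again while its own emptiness is being decided is answered "empty", which is exactly what terminates the null -> f(null) example of Section 3. This is only the skeleton; the reduction through disjunctive normal forms and sequence expressions that handles intersection, union and complement is not reproduced here.

```python
def empty(expr, assumed=frozenset()):
    """Emptiness test with tabulation (set operators omitted).

    `assumed` plays the role of the table: a type expression met again while
    its own emptiness is being tested is treated as empty, which breaks the
    loop.  A type is empty iff every production expr -> f(e1, ..., ek) has at
    least one empty argument type ei."""
    if expr in assumed:
        return True
    assumed = assumed | {expr}
    return all(any(empty(e, assumed) for e in arg_types)
               for _fsym, arg_types in alternatives(expr))

RULES['null'] = ((), [('f', [('null',)])])   # no base case, so null denotes the empty set
print(empty(('nat',)), empty(('list', ('nat',))), empty(('null',)))  # False False True
```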
6
a study of the allan variance for processes jun haotian xu guerrier roberto molinari yuming zhang allan variance av is a widely used quantity in areas focusing on error measurement as well as in the general analysis of variance for autocorrelated processes in domains such as engineering and more specifically metrology the form of this quantity is widely used to detect noise patterns and indications of stability within signals however the properties of this quantity are not known for commonly occurring processes whose covariance structure is and in these cases an erroneous interpretation of the av could lead to misleading conclusions this paper generalizes the theoretical form of the av to some processes while at the same time being valid also for weakly stationary processes some simulation examples show how this new form can help to understand the processes for which the av is able to distinguish these from the stationary cases and hence allow for a better interpretation of this quantity in applied cases index sensor calibration longitudinal studies haar wavelet variance heteroscedasticity i ntroduction the allan variance av is a widely used quantity in areas going from engineering to physics where there is an interest in studying the stochastic stability of error measurements from various instruments such as among others clocks and oscillators its usefulness resides in the fact that it provides an extremely informative summary on the variance of time series or more generally of autocorrelated processes especially when these are and with infinite variance indeed underlined how the av is a better measure of uncertainty compared to standard methods moving average variance for processes such as random walks and fractional average arfima models while being considerably useful also for stationary processes for these processes the av has a well known form which can help detect the kind of process for example from the plot of the av of an observed signal the behaviour and forms of the av for stationary and some processes was studied in where the av is used to detect and understand the process underlying a signal issued from different voltage measurements however there are many other applications for which the av is of interest xu is a phd student geneva school of economics and management university of geneva switzerland guerrier is assistant professor department of statistics pennsylvania state university pa usa molinari is visiting assistant professor department of statistics applied probability university of california santa barbara ca usa zhang is a graduate student department of statistics pennsylvania state university pa usa such as the detection of noise terms characterising inertial sensors see and many others see for an overview however although the av is extremely useful in the above settings it is not known how it behaves when in the presence of other types of processes and whether it is able to distinguish between them in this paper we intend to investigate the form of the av for a particular class of processes which includes all those processes that have a constant mean but have a structure such as for example the generalized autoregressive conditional heteroscedasticity garch models see while processes with specific forms of mean or are not considered since they can either be dealt with through statistical regression techniques or simply can not be detected by the av in particular we focus on those processes which are characterized by a dependence structure by blocks since they are 
common in settings such as longitudinal studies or sensor calibration for navigation engineering in the latter cases the av is often approximated by that of other known stationary processes such as for example the process whose av is often approximated by that of a firstorder autoregressive process see for example moreover it is not clear whether the av can actually help to distinguish between these processes and those processes for which its form is currently known the latter aspect is of particular relevance since it could lead to an erroneous interpretation of the observed process for example assuming stationarity when this is not the case and reaching false conclusions in order to deal with the above mentioned processes this paper intends to study the theoretical form of the av when the covariance structure is the consequent advantage of this study is that by considering the varying covariance structure in the av definition it extends the applicability of those approaches that make use of the av and raises awareness on its limitations and inappropriate interpretation in distinguishing and identifying these processes from stationary ones with this in mind section ii briefly defines the av and describes its theoretical form for those processes which have been considered up to now section iii introduces the new theoretical form of both the overlapping and av for processes whose covariance structure is and shows how the form of the av for stationary processes is a special case of this new form in the same section three case studies are presented which highlight the importance of these findings in order to better interpret processes through the av finally section iv concludes ii overview of the a llan variance h cov xt mean to estimate different kind of processes see for example however there are many commonly encountered processes whose av is not known and it is unclear to what point these can actually be distinguished from stationary processes the next section delivers a more general form of the av which includes these processes and studies if this quantity can actually be helpful in detecting them which depends solely on h the distance between observations with being the process variance we can consequently define the autocorrelation function as iii a llan variance for c ean n on tationary p rocesses to introduce the av let us first define xt t as a weakly stationary discrete time and regularly spaced stochastic process with a constant mean e xt and an autocovariance function defined as follows h h we consider the av computed at dyadic scales starting from local averages of the process which can be denoted as n n n where n x n x t therefore determines the number of consecutive observations considered for the average if the process has constant mean this implies n that also has the same mean and based on these averages and following the av moav t x n n e avarn xt as underlined in the previous sections the av is particularly useful for measuring uncertainty in processes especially when these have infinite variance nevertheless there are other forms of for which the properties of the av are unknown and these consist in those processes xt with a constant mean independent of time but a covariance structure this implies that the covariance function between observations at distance h is also a function of time t and can therefore be denoted as h t this type of process is very common in different areas going from engineering see to economics see to study the theoretical form of the av for this 
class of n processes let us first define xt as being the following vector of n consecutive observations starting at t n n xt t xt where m t and whose corresponding estimator is given by n xt avar t x n n n n where denotes the sample equivalent of based on a realization of the process xt another version of the av is the av noav whose estimator however is not statistically as efficient as that of the moav see app b for more details keeping in mind the above definitions of the moav delivered a general theoretical form of this quantity when applied to weakly stationary processes which is given by avarn xt n n x i n i i i based on the above equation the exact form of the av for different stationary processes such as the general class of autoregressive moving average arma models can be derived moreover provided the theoretical av for nonstationary processes such as the random walk and arfima models for which the av as mentioned earlier represents a better measure of uncertainty compared to other methods using the known theoretical forms of the av it is therefore possible to detect and distinguish different processes based on the pattern of their av due to this this quantity or similar quantities such as the haar wavelet variance can be used as a which contains the observations used to build the average in using the above vector for t n t we can then define the matrix n n var xt and for t t we define the matrix n n n cov xt these matrices represent the covariance matrices of the obser n vations contained in each consecutive average indeed represents the covariance matrix of the observations within the n average which is used in definitions and while n represents the between these two sets of observations a visual representation of these quantities is given in app a in this section we will only consider the moav while the form of the noav is given in app b based on the above matrices we can also define different quantities according to the matrix of reference and the lags between observations more specifically let us first consider the case in which we are interested in lags h such that h because of the overlapping nature of the av the observations at these lags can belong to the sets of n n observations within both the matrix and the matrix n and for these sets of observations within the matrix we can define the following quantity e h t x x cov n h cov if however the observations at the considered lags are among n the set of observations only within the matrix we define the quantity below t h xx e h cov m h finally when considering lags h such that n h the set of observations at these lags can only be considered n within the matrix and for this final case we define the quantity e h t x x cov m h the above definitions can be seen as generalized definitions of the autocovariance which consists in the average autocovariance for a given lag because of this it must be underlined that these definitions are not at all equivalent to the covariance function h t but correspond to an average of this function over all times having specified this we can now provide the following lemma l emma the moav is given by e e h avarn n h h h m h e h the proof of this lemma is given in app considering this expression an aspect that must be underlined is that the definitions of the functions e h and h given earlier simplify to the autocovariance function h when dealing with a weakly stationary process in the latter case the form of the av consequently reduces to the expression in for which a more detailed discussion is 
given in app as a final note to this result it should also be highlighted that in some of the considered cases the estimators of the moav defined in and of the noav see app b do not necessarily have the same expectation having underlined these points and with the general definition of the av given in lemma we can now investigate its properties when assuming the process of interest is within the class of processes treated in this paper the next sections report some simulation studies regarding some of these processes in attempt to understand also whether the av is a useful quantity to detect them and distinguish them from weakly stationary processes in all cases the simulated process is of length t except for the process where t and is simulated times the estimated av is represented in plots along with the theoretical stationary and forms of the av in order to understand its behaviour under these different assumptions white noise the first process we study is the white noise by which we intend without loss of generality all those processes whose variance changes with time the evolution of the variance in time can either be completely random or can follow a specific fig logarithm of the moav of the white noise process for scales estimated moav lines theoretical moav black line with dots and theoretical stationary av based on the average variance red line with triangles parametric model such as for example a garch process the goal of studying the av for these types of processes is to understand whether it is able to detect such a structure in a time series and if it can be distinguished from a stationary white noise process for this purpose the true process considered in the simulation study is generated from the following model xt n where t with t t the theoretical stationary form is based on the average of the variances used to simulate the processes in this example fig represents the estimated avs along with the theoretical forms stationary and in this case it can be seen how both theoretical forms correspond and the estimated avs closely follow these quantities this example confirms that the av is therefore unable to distinguish between a stationary white noise process and a white noise process whose secondorder behaviour is the process is a commonly known process in the engineering domain specifically for inertial sensor calibration for navigation the characteristic of this process is that it consists in different concatenated sequences blocks where within each block the realization of a random variable is repeated constant more formally let bi i b represent the set of time indices belong iid to the ith block within the time series and let ci n we can then define this process as xt ci if t bi one realization of the process is illustrated in the top panel fig where the length of block bi is for all i b and b since the theoretical form of the av for this process is not known exactly it is often approximated by the av of a autoregressive process although this approximation can be useful it is nevertheless still an approximation and using the form given in lemma we can now obtain a theoretical form for the av of this process which is represented in the bottom panel of fig fig top realization of the process with and the length of block bi b and b bottom logarithm of the moav of the process for scales estimated moav lines theoretical moav black line with dots and theoretical stationary moav of an approximating the biasinstability moav red line with triangles fig top realization of the 
autoregressive process with and the length of block bi b and b bottom logarithm of the moav of the firstorder autoregressive process for scales estimated moav lines theoretical moav black line with dots and theoretical stationary moav assuming no block structure red line with triangles process indeed the latter plot shows that the estimated avs closely follow the theoretical form given earlier the red line represents the av of a stationary process which is supposed to approximate the true av of the latter is the result of the averaging of the theoretical av for a stationary process estimated via on each of the simulated processes it is clear how although close over some scales this approximation is not good enough when considering the logarithmic representation of the av therefore knowing the exact form of the av for this process would allow to better interpret the signals characterised by autoregressive processes as a final example we consider a process similarly to the process within this paper we define a process as a process whose parameters are fixed but is made by concatenated time periods blocks where observations within each block are generated independently from those in the other blocks an example is given by the settings of longitudinal studies in which each subject can be measured over time and although the subjects are independent from each other these measurements can be explained by an autocorrelated process within each subject to define this i process formally let xt denote the following i xt i with parameter vector t where and iid n if again we let bi denote the ith block then the process can be defined as i xt xt i if t bi j where xt is independent of xt i j by defining and for the simulation study the top panel of fig shows a realization of this process while the bottom panel of fig illustrates the results of the simulations for this particular process as for it can be observed how the stationary form of the av that does not consider the block structure is not close to the estimated avs while the form provided in this paper adequately represents this process and can therefore allow to distinguish between a stationary autoregressive process and a one iv c onclusions within this paper we wanted to underline an issue concerning the av which had not yet been studied indeed the behaviour of the av in commonly occurring settings where the covariance structure of the processes is was unknown and in many cases was either ignored or dealt with through approximations the consequence of the latter approaches would probably consist in erroneous interpretations and conclusions drawn from an av analysis for this reason this paper studied the form of the av for this class of processes thereby generalizing its form also for weakly stationary processes based on this several examples were provided in which the properties of the av were studied highlighting its ability to detect these processes and to eventually distinguish them from stationary ones making researchers and practitioners more aware of issues related to the interpretation and use of this quantity in more general and common settings n trix ce ma an i var co r eferences bollerslev generalized autoregressive conditional heteroskedasticity journal of econometrics hou niu analysis and modeling of inertial sensors using allan variance ieee transactions on instrumentation and measurement gallegati semmler wavelet applications in economics and finance vol springer guerrier skaloud stebler estimation for composite stochastic processes 
journal of the american statistical association percival a wavelet perspective on the allan variance ieee transactions on ultrasonics ferroelectrics and frequency control percival guttorp processes the allan variance and wavelets wavelets in geophysics unsal demirbas estimation of deterministic and stochastic imu error parameters in position location and navigation symposium plans pp ieee zhang allan variance of time series models for measurement data metrologia a ppendix a g raphical illustration of moav to graphically illustrate the quantities defined in section iii fig represents the true covariance matrix for a given process and highlights how the av is related to this matrix by overlapping square matrices along the diagonal each of which is composed of the quantities defined in eq and eq n fig graphical illustration of matrices this estimator is less efficient than the moav mainly because it is based on fewer averages and therefore on a smaller sample size to define the theoretical form of the noav for the non n stationary processes of interest we first define the vector xj of n consecutive observations starting at j n n xj x xjn using the above for k m we define the matrices n n n and as follows n n n n var var n n n and cov as in appendix a the above matrices are graphically represented in fig where as opposed to the moav these matrices do not overlap along the diagonal of the covariance n n n matrix of the process we then let and n n n denote the averages of the matrices and respectively n n x x n n n i j n a ppendix b t heoretical form of the noav for n on tationary p rocesses n for the moav t where m b the corresponding estimator for this quantity is given by m x n n n xt avar the av noav is defined as m x n n e avarn xt n and n n x x n i j n n x x n n i j we further define n and n as follows m m i x h n x n n n and n m l emma we define the noav as avarn e h e n h e h ce m h e h proof of lemma the proof of lemma is direct from the above definitions indeed we have n n n n e var var n n cov trix an i var co ma n n n n n n fig graphical illustration of matrices and noav for the then using eq we obtain pm n n e avarn pm n n n n n k e e h n h based on the earlier defined matrices as for the moav we can also define different quantities according to the matrix of reference and the lags between observations more specifically let us first consider the case in which we are interested in lags h such that h the observations at these lags can belong to the sets of observations within the matrix n n n and and for these sets of observations n n within matrices and we can define the following quantity xx e h cov x x n h if however the observations at the considered lags are among n the set of observations only within the matrix we define the quantity below m h h xx cov x x mh finally when considering lags h such that n h the set of observations at these lags can only be considered n within the matrix and for this final case we define the quantity e h m h e h which concludes the proof a ppendix c n proof of lemma in order to prove lemma let n n n and denote the averages of the matrices and respectively n n x x n n n i j n n n x x n n i j further we define n and n as follows n t x n n t t n t x n t t m x x cov x x m h based on these definitions we have n n n n as for the moav case the above definitions can be seen as e var var generalized definitions of the autocovariance which consists in n n the average autocovariance for a given lag using the above cov notations and definitions we can provide 
the following result for the expectation of the squared difference of consecutive overlapping averages expressed through the averaged covariance quantities defined above then using eq and averaging over t we obtain the expression for the moav given in the lemma which concludes the proof
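To make the estimator discussed above concrete, here is a minimal sketch of the maximum-overlapping Allan variance (MOAV) estimator computed at dyadic scales, of the kind used in the simulation studies of this section. The function names, interface, and the dyadic-scale helper are illustrative choices of this sketch, not code from the paper.

```python
import numpy as np

def moav(x, n):
    """Maximum-overlap Allan variance estimate of a series x at scale n:
    half the mean of squared differences of consecutive length-n block
    means taken at every admissible starting point."""
    x = np.asarray(x, dtype=float)
    T = x.size
    if 2 * n > T:
        raise ValueError("scale n is too large for the series length")
    # length-n moving averages via a cumulative sum
    csum = np.concatenate(([0.0], np.cumsum(x)))
    means = (csum[n:] - csum[:-n]) / n        # T - n + 1 overlapping averages
    diffs = means[n:] - means[:-n]            # differences of averages n apart
    return 0.5 * np.mean(diffs ** 2)

def moav_dyadic(x, max_scale_power=None):
    """MOAV at dyadic scales n = 2, 4, 8, ... up to half the series length."""
    T = len(x)
    if max_scale_power is None:
        max_scale_power = int(np.floor(np.log2(T / 2)))
    scales = [2 ** j for j in range(1, max_scale_power + 1) if 2 * 2 ** j <= T]
    return scales, [moav(x, n) for n in scales]
```

The estimated values can then be plotted on a log-log scale and compared with the theoretical stationary and non-stationary forms, as in the figures described above; for a stationary white noise, for instance, the curve decays with slope close to minus one.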
10
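The block-structured simulation examples described in the Allan-variance section above can be reproduced along the following lines. This is a rough sketch under assumed parameter values (number of blocks, block length, AR coefficient); it reuses the moav_dyadic helper sketched earlier, and the generator names are mine rather than the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_constant(n_blocks, block_len, sigma=1.0):
    """Bias-instability-like process: within each block a single
    N(0, sigma^2) draw is repeated as a constant."""
    c = rng.normal(0.0, sigma, size=n_blocks)
    return np.repeat(c, block_len)

def blockwise_ar1(n_blocks, block_len, phi=0.9, sigma=1.0):
    """AR(1) restarted independently in every block, as in the
    longitudinal-study example."""
    blocks = []
    for _ in range(n_blocks):
        e = rng.normal(0.0, sigma, size=block_len)
        x = np.empty(block_len)
        x[0] = e[0]
        for t in range(1, block_len):
            x[t] = phi * x[t - 1] + e[t]
        blocks.append(x)
    return np.concatenate(blocks)

x = block_constant(n_blocks=100, block_len=100)   # T = 10000 observations
scales, av = moav_dyadic(x)                        # helper sketched above
# compare log(av) against the Allan variance of a matched stationary process
```

Comparing the resulting curve with the Allan variance of a stationary process of matched variance illustrates the approximation error discussed for the bias-instability and blockwise-autoregressive examples.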
accepted by ieee transactions on cybernetics learning subspace using domain features and independence maximization jun ke yan lu kou and david zhang fellow ieee adaptation algorithms are useful when the distributions of the training and the test data are different in this paper we focus on the problem of instrumental variation and drift in the field of sensors and measurement which can be viewed as discrete and continuous distributional change in the feature space we propose maximum independence domain adaptation mida and mida smida to address this problem domain features are first defined to describe the background information of a sample such as the device label and acquisition time then mida learns a subspace which has maximum independence with the domain features so as to reduce the discrepancy in distributions a feature augmentation strategy is also designed to project samples according to their backgrounds so as to improve the adaptation the proposed algorithms are flexible and fast their effectiveness is verified by experiments on synthetic datasets and four realworld ones on sensors measurement and computer vision they can greatly enhance the practicability of sensor systems as well as extend the application scope of existing domain adaptation algorithms by uniformly handling different kinds of distributional change index reduction domain adaptation drift correction independence criterion machine olfaction transfer learning i ntroduction i n many machine learning problems the labeled training data are from a source domain and the test ones are from a target domain samples of the two domains are collected under different conditions thus have different distributions labeling samples in the target domain to develop new prediction models is often and therefore domain adaptation or transfer learning is needed to improve the performance in the target domain by leveraging unlabeled and maybe a few labeled target samples this topic is receiving increasing attention in recent years due to its broad applications such as computer vision and text classification it is also important in the field of sensors and measurement because of the variations in the the work is partially supported by the grf fund from the hksar government the central fund from hong kong polytechnic university the nsfc fund shenzhen fundamental research fund and key laboratory of network oriented intelligent computation shenzhen china yan is with the department of electronic engineering graduate school at shenzhen tsinghua university shenzhen china yankethu kou is with the department of computing the hong kong polytechnic university kowloon hong kong cslkou zhang is with the shenzhen graduate school harbin institute of technology shenzhen china and also with the department of computing biometrics research centre the hong kong polytechnic university kowloon hong kong csdzhang fabrication of sensors and devices the responses to the same signal source may not be identical for different instruments which is known as instrumental variation furthermore the sensing characteristics of the sensors the operating condition or even the signal source itself can change over time which leads to complex drift as a result the prediction model trained with the samples from the initial device in an earlier time period source domain is not suitable for new devices or in a latter time target domains a typical application plagued by this problem is machine olfaction which uses electronic noses and pattern recognition algorithms to predict the 
type and concentration of odors the applications of machine olfaction range from agriculture and food to environmental monitoring robotics biometrics and disease analysis however owing to the nature of chemical sensors many are prone to instrumental variation and drift mentioned above which greatly hamper their usage in applications traditional methods dealing with these two kinds of drift drift correction methods hereinafter require a set of transfer samples which are predefined gas samples needed to be collected with each device and in each time period they are often used to learn regression models to map the features in the target domain to the source domain nevertheless collecting transfer samples repeatedly is a demanding job especially for nonprofessional users in such cases domain adaptation techniques with unlabeled target samples are desirable an intuitive idea is to reduce the discrepancy in the feature level to learn feature representation for example pan et al proposed transfer component analysis tca which finds a latent feature space that minimizes the distributional difference of two domains in the sense of maximum mean discrepancy more related methods will be introduced in section when applied to drift correction however existing domain adaptation algorithms are faced with two difficulties first they are designed to handle discrete source and target domains in drift however samples come in a stream so the change in data distribution is often continuous one solution is to split data into several batches but it will lose the temporal order information second because of the variation in the sensitivity of chemical sensors the same signal in different conditions may indicate different concepts in other words the conditional probability p y may change for samples with different backgrounds where background means when and with which device a sample was collected methods like accepted by ieee transactions on cybernetics tca project all samples to a common subspace hence the samples with similar appearance but different concepts can not be distinguished in this paper we present a simple yet effective algorithm called maximum independence domain adaptation mida the algorithm first defines domain features for each sample to describe its background then it finds a latent feature space in which the samples and their domain features are maximally independent in the sense of independence criterion hsic thus the discrete and continuous change in distribution can be handled uniformly in order to project samples according to their backgrounds feature augmentation is performed by concatenating the original feature vector with the domain features we also propose mida smida to exploit the label information with hsic mida and smida are both very flexible they can be applied in situations with single or multiple source or target domains thanks to the use of domain features in fact the notion domain has been extended to background which is more informative although they are designed for unsupervised domain adaptation problems no labeled sample in target domains the proposed methods naturally allow both unlabeled and labeled samples in any domains thus can be applied in both unlabeled and labeled samples in target domains and supervised only labeled samples in target domains problems as well the label information can be either discrete or classification or continuous regression to illustrate the effect of our algorithms we first evaluate them on several synthetic datasets then drift correction 
experiments are performed on two datasets and one spectroscopy dataset note that spectrometers suffer the same instrumental variation problem as finally a domain adaptation experiment is conducted on a object recognition benchmark results confirm the effectiveness of the proposed algorithms the rest of the paper is organized as follows related work on unsupervised domain adaptation and hsic is briefly reviewed in section ii section iii describes domain features mida and smida in detail the experimental configurations and results are presented in section iv along with some discussions section v concludes the paper information between all samples and their binary domain labels which can be viewed as a primitive version of the domain features used in this paper they also minimized the negated mutual information between the target samples and their cluster labels to reduce the expected classification error the transfer subspace learning ltsl algorithm presented in is a reconstruction guided knowledge transfer method it aligns source and target data by representing each target sample with some local combination of source samples in the projected subspace the label and geometry information can be retained by embedding different subspace learning methods into ltsl another class of methods first project the source and the target data into separate subspaces and then build connections between them fernando et al utilized a transformation matrix to map the source subspace to the target one where a subspace was represented by eigenvectors of pca the geodesic flow kernel gfk method measures the geometric distance between two different domains in a grassmann manifold by constructing a geodesic flow an infinite number of subspaces are combined along the flow in order to model a smooth change from the source to the target domain liu et al adapted gfk to correct timevarying drift of a sample stream is first split into batches according to the acquisition time the first and the latest batches domains are then connected through every intermediate batch using gfk another improvement of gfk is domain adaptation by shifting covariance dasc observing that modeling one domain as a subspace is not sufficient to represent the difference of distributions dasc characterizes domains as covariance matrices and interpolates them along the geodesic to bridge the domains independence criterion hsic hsic is used as a convenient method to measure the dependence between two sample sets x and y let kx and ky be two kernel functions associated with rkhss f and g respectively pxy is the joint distribution hsic is defined as the square of the norm of the operator cxy hsic pxy f g kcxy kx x ky y y kx x ky y y ii r elated w ork unsupervised domain adaptation two good surveys on domain adaptation can be found in and in this section we focus on typical methods that extract features in order to reduce the discrepancy while preserving useful information researchers have developed many strategies some algorithms project all samples to a common latent space transfer component analysis tca tries to learn transfer components across domains in a reproducing kernel hilbert space rkhs using maximum mean discrepancy it is further extended to tca sstca to encode label information and preserve local geometry of the manifold shi et al measured domain difference by the mutual kx x ky y y here is the expectation over independent pairs x y and y drawn from pxy it can be proved that with characteristic kernels kx and ky hsic pxy f g is zero if and 
only if x and y are independent a large hsic suggests strong dependence with respect to the choice of kernels hsic has a biased empirical estimate suppose z x y xn yn kx ky are the kernel matrices of x and y respectively then hsic z f g n tr kx hky h where h i is the centering matrix n due to its simplicity and power hsic has been adopted for feature extraction and feature selection accepted by ieee transactions on cybernetics researchers typically use it to maximize the dependence between the features and the label however to our knowledge it has not been utilized in domain adaptation to reduce the dependence between the extracted features and the domain features iii p roposed m ethod domain feature we aim to reduce the dependence between the extracted features and the background information a sample s background information should naturally exist thus can be easily obtained have different distributions in training and test samples correlate with the distribution of the original features the domain label which domain a sample belongs in common domain adaptation problems is an example of such information according to these characteristics the information clearly interferes the testing performance of a prediction model thus minimizing the aforementioned dependence is desirable first a group of new features need to be designed to describe the background information the features are called domain features from the perspective of drift correction there are two main types of background information the device label with which device the sample was collected and the acquisition time when the sample was collected we can actually encode more information such as the place of collection the operation condition and so on which will be useful in other domain adaptation problems formally if we only consider the instrumental variation the following coding scheme can be used suppose there are ndev devices which result in ndev different but related domains the domain feature vector is thus d rndev where dp if the sample is from the pth device and otherwise if the drift is also considered the acquisition time can be further added if a sample was collected from the pth device at time t then d where q dq t q otherwise according to the kernel matrix kd of the domain features needs to be computed for hsic we apply the linear kernel suppose d dn rmd md is the dimension of a domain feature vector then kd dt note that in traditional domain adaptation problems with several discrete domains the coding scheme can be applied to construct domain features because the problems are similar to instrumental variation b feature augmentation feature augmentation is used in this paper to learn subspaces in the author proposed a feature augmentation strategy for domain adaptation by replicating the original features however this strategy requires that data lie in discrete domains and can not deal with timevarying drift we propose a more general and efficient feature augmentation strategy concatenating the original features and the domain features x d the role of this strategy can be demonstrated through a linear dimensionality reduction example suppose a projection matrix w r has been learned for the augmented feature vector h is the dimension of the subspace w has two wx parts w wx wd rmd the embedwd ding of can be expressed as w t wxt x wdt d rh which means that a bias wdt d i has been added to each dimension i of the embedding from another perspective the feature augmentation strategy maps the samples to an augmented space 
with higher dimension before projecting them to a subspace it will be easier to find a projection direction in the augmented space to align the samples well in the subspace take machine olfaction for example there are situations when the conditional probability p y changes along with the background for instance the sensitivity of chemical sensors often decays over time a signal that indicates low concentration in an earlier time actually suggests high concentration in a later time in such cases feature augmentation is important because it allows samples with similar appearance but different concepts to be treated differently by the bias the strategy also helps to align the domains better in each projected dimension its effect will be illustrated on several synthetic datasets in section and further analyzed in the complementary materials maximum independence domain adaptation mida in this section we introduce the formulation of mida in detail suppose x is the matrix of n samples the training and the test samples are pooled together more importantly we do not have to explicitly differentiate which domain a sample is from the feature vectors have been augmented but we use the notations x and m instead of and m md for brevity a linear or nonlinear mapping function can be used to map x to a new space based on the kernel trick we need not know the exact form of but the inner product of x can be represented by the kernel matrix kx x t x then a projection matrix is applied to project x to a subspace with dimension h leading to the projected samples z t x similar to other kernel dimensionality reduction algorithms the key idea is to express each projection direction as a linear combination of all samples in the space namely x w w is the projection matrix to be actually learned thus the projected samples are z w t x t x w t kx intuitively if the projected features are independent of the domain features then we can not distinguish the background accepted by ieee transactions on cybernetics of a sample by its projected features suggesting that the interdomain discrepancy is diminished in the subspace therefore after omitting the scaling factor in we get the expression to be minimized tr kz hkd h tr kx w w t kx hkd h where kz is the kernel matrix of z in domain adaptation the goal is not only minimizing the difference of distributions but also preserving important properties of data such as the variance it can be achieved by maximizing the trace of the covariance matrix of the project samples the covariance matrix is cov z cov w t kx w t kx hkx w where h i n is the same as that in an orthonormal constraint is further added on w the learning problem then becomes max w tr w t kx hkd hkx w tr w t kx hkx w w t w i where is a using the lagrangian multiplier method we can find that w is the eigenvectors of kx h kx corresponding to the h largest eigenvalues note that a conventional constraint is requiring to be orthonormal as in which will lead to a generalized eigenvector problem however we find that this strategy is inferior to the proposed one in both adaptation accuracy and training speed in practice so it is not used when computing kx a proper kernel function needs to be selected common kernel functions include linear k x y xt y polynomial k x y y d gaussian radial basis function rbf k x y exp and so on different kernels indicate different assumptions on the type of dependence in using hsic according to the polynomial and rbf kernels map the original features to a higher or infinite dimensional space 
thus are able to detect more types of dependence however choosing a suitable kernel width parameter is also important for these more powerful kernels the maximum mean discrepancy mmd criterion is used in tca to measure the difference of two distributions song et al showed that when hsic and mmd are both applied to measure the dependence between features and labels in a classification problem they are identical up to a constant factor if the label kernel matrix in hsic is properly designed however tca is feasible only when there are two discrete domains on the other hand mida can deal with a variety of situations including multiple domains and continuous distributional change the stationary subspace analysis ssa algorithm is able to identify temporally stationary components in multivariate time series however ssa only ensures that the mean and covariance of the components are stationary while they may not be suitable for preserving important properties in data concept drift adaptation algorithms are able to correct continuous drift however most of them rely on newly arrived labeled data to update the prediction models while mida works unsupervisedly mida smida mida aligns samples with different backgrounds without considering the label information however if the labels of some samples are known they can be incorporated into the subspace learning process which may be beneficial to prediction therefore we extend mida to mida smida since we do not explicitly differentiate the domain labels of the samples both unlabeled and labeled samples can exist in any domain similar to hsic is adopted to maximize the dependence between the projected features and the labels the biggest advantage of this strategy is that all types of labels can be exploited such as the discrete labels in classification and the continuous ones in regression the label matrix y is defined as follows for classification problems the coding scheme can be used y yi j if xi is labeled and belongs to the jth class otherwise for regression problems the target values can be centered first then y yi equals to the target value of xi if it is labeled otherwise the linear kernel function is chosen for the label kernel matrix ky y t y the objective of smida is max w tr w t kx h h kx w w t w i where is a its solution is the eigenvectors of kx h h kx corresponding to the h largest eigenvalues the outline of mida and smida is summarized in algorithm the statements in brackets correspond to those specialized for smida algorithm mida or smida input the matrix of all samples x and their background information the labels of some samples the kernel function for x h and output the projected samples z construct the domain features according to the background information section augment the original features with domain features compute the kernel matrices kx kd and ky obtain w namely the eigenvectors of kx h kx or kx h h kx corresponding to the h largest eigenvalues z w t kx besides variance and label dependence another useful property of data is the geometry structure which can be preserved by manifold regularization mr mr can be conveniently incorporated into smida in our experiments adding mr generally increases the accuracy slightly with the cost of three more consequently it is not adopted in this paper accepted by ieee transactions on cybernetics iv e xperiments in this section we first conduct experiments on some synthetic datasets to verify the effect of the proposed methods then drift correction experiments are performed on two enose 
datasets and a spectroscopy dataset to show the universality of the proposed methods we further evaluate them on a visual object recognition dataset comparison is made between them and recent unsupervised domain adaptation algorithms that learn features a synthetic dataset in fig tca and mida are compared on a dataset with two discrete domains the domain labels were used to construct the domain features in mida according to the coding scheme introduced in section the similar definition was used in synthetic datasets and for both methods the linear kernel was used on the original features and the was set to in order to quantitatively assess the effect of domain adaptation logistic regression models were trained on the labeled source data and tested on the target data the accuracies are displayed in the caption showing that the order of performance is mida tca original feature tca aligns the two domains only on the first projected dimension however the two classes have large overlap on that dimension because the direction for alignment is different from that for discrimination incorporating the label information of the source domain sstca did no help on the contrary mida can align the two domains well in both projected dimensions in which the domainspecific bias on the second dimension brought by feature augmentation played a key role a explanation is included in the supplementary materials thus good accuracy can be obtained by using the two dimensions for classification in fig ssa and mida are compared on a dataset with continuous distributional change which resembles timevarying drift in machine olfaction samples in both classes drift to the upper right the chronological order of the samples was used to construct the domain features in mida d for the first sample d for the second sample etc the parameter setting of mida was the same with that in fig whereas the number of stationary components in ssa was set to the classification accuracies were obtained by training a logistic regression model on the first halves of the data in both classes and testing them on the last halves ssa succeeds in finding a direction that is free from drift however the two classes can not be well separated in that direction in plot c the randomly scattered colors suggest that the drift is totally removed in the subspace mida first mapped the data into a space with the third dimension being time then projected them to a plane orthogonal to the direction of drift in the space no label information was used in the last two experiments if keeping the label dependence in the subspace is a priority smida can be adopted instead of mida in the synthetic dataset in fig the best direction to align the two domains also mixes the two classes which results in the output of mida in plot b the labels in the source domain were used when learning the subspace from plot c we can observe that the classes are separated in fact class separation can still be found in the third dimension of the space learned by mida however for the purpose of dimensionality reduction we generally hope to keep the important information in the first few dimensions nonlinear kernels are often applied in machine learning algorithms when data is not linearly separable besides they are also useful in domain adaptation when domains are not linearly alignable as shown in fig in plot a the changes in distributions are different for the two classes hence it is difficult to find a linear projection direction to align the two domains even with the domainspecific 
biases of mida actually rotation matrices are needed since the target labels are not available the rotation matrices can not be obtained accurately however a nonlinear kernel can be used to map the original features to a space with higher dimensions in which the domains may be linearly alignable we applied an rbf kernel with width although the domains are not perfectly aligned in plot c the classification model trained in the source domain can be better adapted to the target domain a comparison on different kernel and kernel parameters on two synthetic datasets is included in the supplementary materials b gas sensor array drift dataset the gas sensor array drift collected by vergara et al is dedicated to research in drift correction a total of samples were collected by an with gas sensors over a course of months there are six different kinds of gases at different concentrations they were split into batches by the authors according to their acquisition time table a in the supplementary material details the dataset we aim to classify the type of gases despite their concentrations similar to we took the samples in batch as labeled training samples whereas those in batches are unlabeled test ones this evaluation strategy resembles the situation in applications in the dataset each sample is represented by features extracted from the sensors response curves each feature was first normalized to zero mean and unit variance within each batch the timevarying drift of the preprocessed features across batches can be visually inspected in fig it is obvious that samples in different batches have different distributions next the labeled samples in batch were adopted as the source domain and the unlabeled ones in batch b b as the target domain the proposed algorithms together with several recent ones were used to learn features based on these samples then a logistic regression model was trained on the source domain and tested on each target one for multiclass classification the strategy was utilized as displayed in table i the compared methods include kernel pca kpca transfer component analysis tca tca sstca subspace alignment sa http accepted by ieee transactions on cybernetics pos source data neg source data pos target data neg target data x a b c fig comparison of tca and mida in a synthetic dataset plots a c show data in the original space and projected spaces of tca and mida respectively the classification accuracies are only using the first projected dimension and x pos old data neg old data pos new data neg new data a b c fig comparison of ssa and mida in a synthetic dataset plots a c show data in the original space projected spaces of ssa and mida respectively the chronological order of a sample is indicated by color the classification accuracies are only using the first projected dimension and pos source data neg source data pos target data neg target data a b c x fig comparison of mida and smida in a synthetic dataset plots a c show data in the original space and projected spaces of mida and smida respectively the classification accuracies are and pos source data neg source data pos target data neg target data x a x b z c z fig comparison of different kernels in a synthetic dataset plots a c show data in the original space and projected spaces of mida with linear and rbf kernels respectively the classification accuracies are and accepted by ieee transactions on cybernetics average classification accuracy batch batch batch batch batch fig scatter of ethanol dots and acetone plus signs samples in 
batches in the gas sensor array drift dataset samples are projected to a subspace using pca different colors indicate different batches geodesic flow kernel gfk manifold regularization with combination gfk informationtheoretical learning itl structural correspondence learning scl and marginalized stacked denoising autoencoder msda for all methods the were tuned for the best accuracy in kpca tca sstca and the proposed mida and smida the polynomial kernel with degree was used kpca learned a subspace based on the union of source and target data in tca sstca mida and smida eigenvalue decomposition needs to be done on kernel matrices in order to reduce the computational burden we randomly chose at most nt samples in each target domain when using these methods with nt being twice the number of the samples in the source domain gfk used pca to generate the subspaces in both source and target domains the subspace dimension of gfk was determined according to the subspace disagreement measure in the results of are copied from in scl the pivot features were binarized before training pivot predictors using logistic regression we also compared several variants of our methods in table i the notation discrete means that two discrete domains source and target were used in mida and smida which is similar to other compared methods the domain feature vector of a sample was thus t if it was from the source domain and t if it was from the target however this strategy can not make use of the samples in intermediate batches an intuitive assumption is that the distributions of adjacent batches should be similar when adapting the information from batch to b taking samples from batches to b into consideration may improve the generalization ability of the learned subspace concretely nt samples were randomly selected from batches to b instead of batch b alone for each sample the domain feature was defined as its batch index which can be viewed as a proxy of its acquisition time mida and smida then maximized the independence between kpca tca sstca sa itl mida continuous smida continuous projected dimensions h fig performance comparison on the gas sensor array drift dataset with respect to the subspace dimension the learned subspace and the batch indices the results are labeled as continuous in table i besides the accuracies of continuous smida without feature augmentation no are also shown from table i we can find that as the batch index increases the accuracies of all methods generally degrade which confirms the influence of the drift continuous smida achieves the best average domain adaptation accuracy the continuous versions of mida and smida outperform the discrete versions proving that the proposed methods can effectively exploit the chronological information of the samples they also surpass which uses the samples in intermediate batches to build connections between the source and the target batches feature augmentation is important in this dataset since removing it in continuous smida causes a drop of four percentage points in average accuracy in fig the average classification accuracies with varying subspace dimension are shown mida and smida are better than other methods when more than features are extracted breath analysis dataset as a noninvasive approach disease screening and monitoring with is attracting more and more attention the concentration of some biomarkers in breath has been proved to be related to certain diseases which makes it possible to analyze a person s health state with an conveniently for 
example the concentration of acetone in diabetics breath is often higher than that in healthy people however the instrumental variation and drift of hinder the popularization of this technology in applications unsupervised domain adaptation algorithms can be applied to solve this problem we have collected a breath analysis dataset in years using two of the same model in this paper samples of five diseases were selected for experiments including diabetes chronical kidney disease ckd cardiopathy lung cancer and breast cancer they have been proved to be related to certain breath biomarkers we performed accepted by ieee transactions on cybernetics table i c lassification accuracy on the gas sensor array drift dataset b old values indicate the best results batch average kpca tca sstca sa gfk itl scl msda mida discrete smida discrete mida continuous smida no smida continuous a sensor response volt five classification tasks to distinguish samples with one disease from the healthy samples each sample was represented by the steady state responses of nine gas sensors in the when a gas sensor is used to sense a gas sample its response will reach a steady state in a few minutes the steady state response has a close relationship with the concentration of the measured gas therefore the feature vector contains most information needed for disease screening to show the instrumental variation and drift in the dataset we draw the steady state responses of two sensors of the ckd samples in fig each data point indicates a breath sample in plot a the sensitivity of the sensor in both devices gradually decayed as time elapsed in plot b the aging effect was so significant that we had to replace the sensors in the two devices with new ones on about day in this case a signal at v will suggest low concentration on day but high concentration on day in addition the responses in different devices are different plot b after day the numbers of samples in the six classes healthy and the five diseases mentioned above are and respectively we chose the first samples collected with device in each class as labeled training samples among the other samples samples were randomly selected in each class for validation the rest for testing the were tuned on the validation sets logistic regression was adopted as the classifier with as the accuracy criterion results are compared in table ii in kpca tca sstca mida and smida the rbf kernel was used because methods other than stationary subspace analysis ssa mida and smida are not capable of handling the chronological information we simply regarded each device as a discrete domain and learned features with them the same strategy was used in discrete mida and smida in continuous mida and smida the sensor response volt original feature acquisition time day device device b acquisition time day fig illustration of the instrumental variation and drift in the breath analysis dataset plots a and b show the steady state responses of the ckd samples of sensors and respectively domain features were defined according to where t was the exact acquisition time converted to years and the number of devices ndev ssa naturally considers the chronological information by treating the sample stream as a multivariate time series and identifying temporally stationary components however ssa can not deal with time series with multiple sources such as the case in this dataset thus the samples were arranged in chronological order despite their device labels from table ii we can find that the improvement made by 
ssa is little possibly because the stationary criterion is not suitable for preserving important properties in data for example the noise in data can also be stationary mida and smida achieved obviously better results than other methods they can address both instrumental variation and accepted by ieee transactions on cybernetics table ii c lassification accuracy on the breath analysis dataset b old values indicate the best results task average original feature kpca tca sstca sa gfk itl ssa scl msda mida discrete smida discrete mida continuous smida no smida continuous drift with the bias brought by feature augmentation they can compensate for the change in conditional probability in this dataset smida is better than mida because the label information of the first samples in each class was better kept corn dataset similar to data collected with spectrometers are signals indicating the concentration of the analytes instrumental variation is also a problem for them in this section we test our methods on the corn it is a spectroscopy dataset collected with three spectrometers designated as and the moisture oil protein and starch contents of corn samples were measured by each device with ranges of the measured values as and respectively each sample is represented by a spectrum with features this dataset resembles traditional domain adaptation datasets because there is no drift three discrete domains can be defined based on the three devices we adopt as the source domain and as the target ones in each domain samples were assigned as the test set the rest as the training set for tuning we applied a crossvalidation on the training sets of the three domains after the best were determined for each algorithm a regression model was trained on the training set from the source domain and applied on the test set from the target domains the regression algorithm was ridge regression with the regularization parameter table iii displays the root mean square error rmse of the four prediction tasks and their average on the two target domains we also plot the overall average rmse of http the two domains with respect to the subspace dimension h in fig itl was not investigated because it is only applicable in classification problems in kpca tca sstca mida and smida the rbf kernel was used for the semisupervised methods sstca and smida the target values were normalized to zero mean and unit variance before subspace learning the domain features were defined according to the device indices using the coding scheme we can find that when no domain adaptation was done the prediction error is large all domain adaptation algorithms managed to significantly reduce the error kpca also has good performance which is probably because the source and the target domains have similar principal directions which also contain the most discriminative information therefore source regression models can fit the target samples well in this dataset different domains have identical data composition as a result corresponding data can be aligned by subspaces alignment which explains the small error of sa however this condition may not hold in other datasets mida and smida obtained the lowest average errors in both target domains aiming at exploring the prediction accuracy when there is no instrument variation we further trained regression models on the training set of the two target domains and tested on the same domain the results are listed as train on target in table iii it can be found that smida outperforms these results this could be 
attributed to three reasons the discrepancy in this dataset is relatively easy to correct the use of rbf kernel in smida improves the accuracy smida learned the subspace on the basis of both training and test samples although the test samples were unlabeled they can provide some information about the distribution of the samples to make the learned subspace generalize better which can be viewed as the merit of learning to testify this assumption we conducted another experiment with multiple target domains the training samples from the source domain and the test ones from both target domains were leveraged together for subspace learning in mida and smida the average rmse for the two target domains are and for mida and and for smida compared with the results in table iii with single target domain the results have been further improved showing that incorporating more unlabeled samples from target domains can be beneficial visual object recognition dataset in gong et al evaluated domain adaptation algorithms on four visual object recognition datasets namely amazon a c dslr d and webcam w ten common classes were selected from them with to samples per class per domain and images in total each image was encoded with an histogram using surf features the normalized histograms were to have zero mean and unit variance in each dimension following the experimental setting provided in the sample code from the authors of experiments were conducted in random trials for each pair of domains for each unsupervised trail for a c w or for d labeled samples per class accepted by ieee transactions on cybernetics table iii r egression rmse on the corn dataset b old values indicate the best results as target domain as target domain moisture oil moisture oil protein starch average original feature kpca tca sstca sa gfk scl msda mida smida train on target kpca tca sstca sa mida smida average regression rmse protein starch average projected dimensions h fig performance comparison on the corn dataset with respect to the subspace dimension were randomly chosen from the source domain as the training set other samples were used unsupervisedly for domain adaptation while all unlabeled samples in the target domain made up the test set in trails three labeled samples per class in the target domain were also assumed to be labeled averaged accuracies on each pair of domains as well as standard errors are listed in tables iv and for gfk transfer subspace learning ltsl domain adaptation by shifting covariance dasc and a recent method called integration of global and local metrics for domain adaptation iglda we copied the best results reported in the original papers for other methods tested the were tuned for the best accuracy logistic regression was adopted as the classifier the polynomial kernel with degree was used in kpca tca sstca mida and smida the domain features were defined according to the domain labels using the onehot coding scheme mida and smida achieve the best average accuracies in both unsupervised and visual object recognition experiments we observe that tca and sstca have comparable performance with mida and smida which may be explained by the fact that the hsic criterion used in mida and mmd used in tca are identical under certain conditions when there are one source and one target domain besides the feature augmentation strategy in mida is not crucial in this dataset because there is no change in conditional probability on the other hand tca and sstca can only handle one source and one target domains sstca 
uses the manifold regularization strategy to preserve local geometry information hence introduces three more than smida moreover computing the data adjacency graph in sstca and the matrix inversion operation in tca and sstca make them slower than mida and smida we compared their speed on the domain adaptation experiment c a they were run on a server with intel xeon ghz cpu and gb ram no parallel computing was used the codes of the algorithms were written in matlab on average the running times of each trial of mida smida tca and sstca were s s s and s respectively therefore mida and smida are more practical to use than tca and sstca besides they were initially designed for drift correction this dataset is used to show their universality c onclusion in this paper we introduced maximum independence domain adaptation mida to learn features the main idea of mida is to reduce the discrepancy by maximizing the independence between the learned features and the domain features of the samples the domain features describe the background information of each sample such as the domain label in traditional domain adaptation problems in the field of sensors and measurement the device label and acquisition time of the each collected sample can be expressed by the domain features so that unsupervised drift correction can be achieved by using mida the feature augmentation strategy proposed in this paper adds domainspecific biases to the learned features which helps mida to align domains accepted by ieee transactions on cybernetics table iv u nsupervised domain adaptation accuracy on the visual object recognition dataset b old values indicate the best results x y means that x is the source domain and y is the target one ori average kpca tca sstca itl sa gfk ltsl dasc scl msda iglda mida smida table v s emi supervised domain adaptation accuracy on the visual object recognition dataset b old values indicate the best results average ori kpca tca sstca itl sa gfk ltsl scl msda mida smida mida and smida are flexible algorithms with the design of the domain features and the use of the hsic criterion they can be applied in all kinds of domain adaptation problems including discrete or continuous distributional change multiple domains classification or regression etc they are also easy to implement and fast requiring to solve only one eigenvalue decomposition problem future directions may include further extending the definition of the domain features for other applications r eferences pan and yang a survey on transfer learning ieee trans knowl data vol no pp patel gopalan li and chellappa visual domain adaptation a survey of recent advances signal processing magazine ieee vol no pp cui li xu shan chen and li flowing on riemannian manifold domain adaptation by shifting covariance ieee trans vol no pp bian tao and rui human action recognition systems man and cybernetics part b cybernetics ieee transactions on vol no pp pan tsang kwok and yang domain adaptation via transfer component analysis neural networks ieee transactions on vol no pp seah ong and tsang combating negative transfer from predictive distribution differences ieee trans vol no pp gardner and bartlett a brief history of electronic noses sens actuators b vol no pp barsan and weimar electronic nose current status and future trends chem vol no pp marjovi and marques optimal swarm formation for odor plume finding ieee transactions on cybernetics vol no pp zhang tian kadri xiao li pan and zhou sensor calibration transfer among electronic nose instruments for 
monitoring volatile organic chemicals in indoor air quality sens actuators b vol no pp yan zhang wu wei and lu design of a breath analysis system for diabetes screening and blood glucose level prediction ieee trans biomed vol no pp marco and signal and data processing for machine olfaction and chemical sensing a review ieee sens vol no pp di carlo and falasconi drift correction methods for gas chemical sensors in artificial olfaction systems techniques and challenges intech ch pp yan and zhang improving the transfer ability of prediction accepted by ieee transactions on cybernetics models for electronic noses sens actuators b vol pp calibration transfer and drift compensation of via coupled task learning sens actuators b vol pp shi and sha learning of discriminative clusters for unsupervised domain adaptation in proceedings of the intl conf on machine learning icml fernando habrard sebban and tuytelaars unsupervised visual domain adaptation using subspace alignment in proceedings of the ieee international conference on computer vision pp gong grauman and sha learning kernels for unsupervised domain adaptation with applications to visual object recognition international journal of computer vision vol no pp shao kit and fu generalized transfer subspace learning through constraint international journal of computer vision vol no pp blitzer dredze and pereira biographies bollywood boomboxes and blenders domain adaptation for sentiment classification in acl vol conference proceedings pp chen xu weinberger and sha marginalized denoising autoencoders for domain adaptation in international conference on machine learning conference proceedings jiang huang huang and yen integration of global and local metrics for domain adaptation learning via dimensionality reduction ieee transactions on cybernetics gretton bousquet smola and schlkopf measuring statistical dependence with norms in algorithmic learning theory springer pp feudale woody tan myles brown and transfer of multivariate calibration models a review chemometr intell vol no pp gong shi sha and grauman geodesic flow kernel for unsupervised domain adaptation in computer vision and pattern recognition cvpr ieee conference on ieee pp liu li ye ge and x du drift compensation for electronic nose by domain adaption ieee sens vol no pp song smola gretton bedo and borgwardt feature selection via dependence maximization mach learn vol no pp song gretton borgwardt and smola colored maximum variance unfolding in advances in neural information processing systems pp barshan ghodsi azimifar and jahromi supervised principal component analysis visualization classification and regression on subspaces and submanifolds pattern vol no pp daum iii frustratingly easy domain adaptation in proc ann meeting of the assoc for computational linguistics schlkopf smola and mller nonlinear component analysis as a kernel eigenvalue problem neural computation vol no pp scholkopft and mullert fisher discriminant analysis with kernels neural networks for signal processing vol ix pp von meinecke and finding stationary subspaces in multivariate time series physical review letters vol no gama bifet pechenizkiy and bouchachia a survey on concept drift adaptation acm computing surveys csur vol no belkin niyogi and sindhwani manifold regularization a geometric framework for learning from labeled and unlabeled examples mach learn vol pp vergara vembu ayhan ryan homer and huerta chemical gas sensor drift compensation using classifier ensembles sens actuators b vol pp
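To make the ideas behind the MIDA/SMIDA experiments above more concrete, the following is a minimal Python sketch (not the authors' MATLAB code) of two recurring ingredients: one-hot domain features for discrete domains such as device indices, and the biased empirical HSIC estimate tr(K_x H K_d H)/(n-1)^2 between a feature kernel and a domain-feature kernel; MIDA/SMIDA seek a projection under which this dependence is small. All function and variable names here are illustrative assumptions, and the actual MIDA objective and the eigenvalue problem that solves it are not reproduced.

```python
import numpy as np

def one_hot_domain_features(domain_labels):
    """One-hot coding of discrete domain labels (e.g. device indices),
    as used for the domain features in the experiments above."""
    labels = np.asarray(domain_labels)
    classes = np.unique(labels)
    return (labels[:, None] == classes[None, :]).astype(float)

def hsic(K_x, K_d):
    """Biased empirical HSIC estimate tr(K_x H K_d H) / (n - 1)^2,
    where H = I - (1/n) 1 1^T is the centering matrix."""
    n = K_x.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K_x @ H @ K_d @ H) / (n - 1) ** 2

# Toy usage: samples from two devices; linear kernels on both sides.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))                      # learned features
D = one_hot_domain_features([0] * 10 + [1] * 10)  # domain features
print(hsic(X @ X.T, D @ D.T))  # MIDA learns a projection that makes this small
```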
ieee transactions on pattern analysis and machine intelligence submitted matrix learning with nonconvex regularizers sep quanming yao member ieee james kwok fellow ieee taifeng wang member ieee and liu fellow ieee modeling has many important applications in computer vision and machine learning while the matrix rank is often approximated by the convex nuclear norm the use of nonconvex regularizers has demonstrated better empirical performance however the resulting optimization problem is much more challenging recent requires an expensive full svd in each iteration in this paper we show that for many nonconvex regularizers the singular values obtained from the proximal operator can be automatically threshold this allows the proximal operator to be efficiently approximated by the power method we then develop a fast proximal algorithm and its accelerated variant with inexact proximal step a convergence rate of o where t is the number of iterations can be guaranteed furthermore we show the proposed algorithm can be parallelized and the resultant algorithm achieves nearly linear speedup the number of threads extensive experiments are performed on matrix completion and robust principal component analysis significant speedup over the is observed index matrix learning nonconvex regularization proximal algorithm parallel algorithm matrix completion robust principle component analysis f i ntroduction l ow matrix learning is a central issue in many machine learning and computer vision problems for example matrix completion which is one of the most successful approaches in collaborative filtering assumes that the target rating matrix is besides collaborative filtering matrix completion has also been used on tasks such as video and image processing another important use of matrix learning is robust principal component analysis rpca which assumes that the target matrix is and also corrupted by sparse noise rpca has been popularly used in computer vision applications such as shadow removal background modeling and robust photometric stereo besides matrix learning has also been used in face recognition and subspace clustering however minimization of the matrix rank is to alleviate this problem a common approach is to use a convex surrogate such as the nuclear norm which is the sum of singular values of the matrix it is known that the nuclear norm is the tightest convex lower bound of the rank though the nuclear norm is the resultant optimization problem can be solved efficiently using modern tools such as the proximal algorithm algorithm and active subspace selection method despite the success of the nuclear norm recently there have been numerous attempts to use nonconvex surrogates that better approximate the rank function the key idea is yao and kwok are with the department of computer science and engineering hong kong university of science and technology clear water bay hong kong qyaoaa jamesk wang and liu are with microsoft research asia beijing china taifengw tyliu that the larger and thus more informative singular values should be less penalized example nonconvex regularizers include the penalty penalty lsp truncated nuclear norm tnn smoothly clipped absolute deviation scad and minimax concave penalty mcp they have been applied on various computer vision tasks such as image denoising and background modeling empirically these nonconvex regularizers achieve better recovery performance than the convex nuclear norm regularizer recently theoretical results have also been established however the resultant 
nonconvex optimization problem is much more challenging most existing optimization algorithms that work with the nuclear norm can not be applied a general approach that can still be used is the procedure which decomposes the nonconvex regularizer into a difference of convex functions however a sequence of relaxed optimization problems have to be solved and can be computationally expensive a more efficient approach is the recently proposed iteratively nuclear norm irnn algorithm it is based on the observation that existing nonconvex regularizers are concave with each irnn iteration only involves computing the supergradient of the regularizer and a singular value decomposition svd however performing svd on a m n matrix takes o time assuming m n and can be expensive on large matrices recently the proximal algorithm has been used for nonconvex matrix learning however it requires the full svd to solve the proximal operator which can be expensive in this paper we observe that for the nonconvex regularizers the singular values obtained from the corresponding proximal operator can be automatically ieee transactions on pattern analysis and machine intelligence submitted thresholded one then only needs to find the leading singular in order to generate the next iterate moreover instead of computing the proximal operator on a large matrix one only needs to use the matrix projected onto its leading subspace the matrix size is significantly reduced and the proximal operator can be made much more efficient besides by using the power method a good approximation of this subspace can be efficiently obtained while the proposed procedure can be readily used with the standard proximal algorithm its convergence properties are not directly applicable as the proximal step here is only approximately solved in the sequel we will show that inexactness on the proximal step can be controlled and a o convergence rate can still be guaranteed moreover the algorithm can be further speeded up using acceleration effectiveness of the proposed algorithms is demonstrated on two popular matrix learning applications namely matrix completion and robust principal component analysis rpca for matrix completion we show that additional speedup is possible by exploring the problem s sparse plus structure whereas for rpca we extend the proposed algorithm so that it can handle the two parameter blocks involved in the rpca formulation with the popularity of multicore platforms we parallelize the proposed algorithms so as to handle much larger data sets we will show that they can achieve almost linear speedup the number of threads experiments are performed on both synthetic and realworld data sets results show that the proposed nonconvex matrix learning algorithms can be several orders faster than the and outperform other approaches including factorization and the use of nuclear norm regularization preliminary results of this paper have been reported in in this full version we speed up the algorithm with acceleration and demonstrate how it can be applied to two important instances of matrix learning problems namely matrix completion and rpca besides we show how the proposed algorithms can be parallelized more extensive empirical evaluations are also performed on both the sequential and parallel versions of the algorithms notation in the sequel vectors are denoted by lowercase boldface matrices by uppercase boldface and the transpose by the superscript for a square matrix x tr x is its trace for a rectangle matrix x p kxkf tr x x is its 
frobenius norm and i x where x is the ith leading singular value of x is the nuclear norm given x xi rm diag x constructs a m m diagonal matrix whose ith diagonal element is xi i denotes the identity matrix for a differentiable function f we use for its gradient for a nonsmooth function we use for its subdifferential x s f y f x s y x where f is a smooth loss r is a nonsmooth regularizer and is a regularization parameter we make the following assumptions on f f is not necessarily convex but is differentiable with continuous gradient kf kf without loss of generality we assume that f is bounded below inf f x and limkxkf f x in recent years the proximal algorithm has been popularly used for solving at iteration t it produces prox r xt xt where is the stepsize and prox r z arg min kx x x is the proximal operator the proximal step in can also be rewritten as arg miny tr xt y xt ky xt y when f and r are convex the proximal algorithm converges to the optimal solution at a rate of o where t is the number of iterations this can be further accelerated to the rate of o by replacing xt in with a proper linear combination of xt and recently the accelerated proximal algorithm has been extended to problems where f or r may be nonconvex the is the nonmonotone accelerated proximal gradient nmapg algorithm algorithm each iteration may perform two proximal steps steps and acceleration is performed in step the objective is then checked to determine whether is accepted step as the problem is nonconvex its convergence rate is still open however empirically it is much faster algorithm nonmonotone apg nmapg input choose and initialize xa f and for t t do yt xt xat xt t a prox r yt yt background if f bt yt then else prox r xt xt if f f otherwise end if p bt f where end for return xt proximal algorithm in this paper we consider matrix learning problems of the form min f x f x x x nonconvex regularizers for the proximal algorithm to be successful the proximal operator has to be efficient the following shows that the ieee transactions on pattern analysis and machine intelligence submitted proximal operator of the nuclear norm k has a closedform solution proposition x u v where is the svd of x and a max aij while the convex nuclear norm makes optimization easier it may not be a good enough approximation of the matrix rank as mentioned in section a number of nonconvex surrogates have been recently proposed in this paper we make the following assumption on the regularizer r in which is satisfied by all nonconvex regularizers in table r is possibly and nonconvex and of the pm form r x x where is concave and for and table s for some popular nonconvex regularizers for the tnn regularizer n is the number of leading singular values that are not penalized for scad and for the others x min x lsp log x x tnn scad mcp x if i otherwise x x x x if x if x otherwise if x otherwise recently the iteratively reweighted nuclear norm irnn algorithm has been proposed to handle this nonconvex matrix optimization problem in each iteration it solves a subproblem in which the original nonconvex regularizer is approximated by a weighted version of the nuclear pm norm kxkw wi x and wm the subproblem has a solution but svd is needed which takes o time other solvers that are designed for specific nonconvex regularizers include for for tnn and for mcp all these including irnn perform svd in each iteration which takes o time and are slow while the proximal algorithm has mostly been used on convex problems recently it is also applied to nonconvex 
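As a minimal illustration of the approximate proximal step described above, the sketch below (Python rather than the authors' MATLAB; the warm start R, the number of power iterations, and all names are illustrative assumptions) uses a few power iterations to obtain an orthonormal Q that approximately spans the leading left subspace of Z, and then applies the proximal operator to the much smaller matrix Q^T Z. Soft-thresholding of the singular values is shown for the nuclear norm, whose closed-form proximal operator was given earlier; for the nonconvex regularizers in the table, that line would be replaced by the per-singular-value shrinkage and threshold from the corresponding corollary.

```python
import numpy as np

def power_method(Z, R, n_power_iter=3):
    """Approximate an orthonormal basis Q of the leading left subspace of Z
    by power (subspace) iteration, warm-started from R (k columns)."""
    Y = Z @ R
    for _ in range(n_power_iter):
        Q, _ = np.linalg.qr(Y)       # only the Q factor is needed
        Y = Z @ (Z.T @ Q)
    Q, _ = np.linalg.qr(Y)
    return Q

def approx_prox(Z, R, lam, n_power_iter=3):
    """Approximate prox of lam * (nuclear norm) at Z: prox(Z) ~ Q prox(Q^T Z)."""
    Q = power_method(Z, R, n_power_iter)
    U, s, Vt = np.linalg.svd(Q.T @ Z, full_matrices=False)  # small k x n SVD
    s = np.maximum(s - lam, 0.0)     # automatic thresholding of singular values
    keep = s > 0
    return ((Q @ U[:, keep]) * s[keep]) @ Vt[keep, :]

# Toy usage: a noisy low-rank Z; R plays the role of the warm start V_{t-1}.
rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 80)) \
    + 0.01 * rng.normal(size=(100, 80))
R = rng.normal(size=(80, 10))
X_next = approx_prox(Z, R, lam=5.0)
print(np.linalg.matrix_rank(X_next))  # only a few singular values survive
```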
problems the generalized proximal gradient gpg algorithm is the first proximal algorithm which can handle all the above nonconvex regularizers in particular its proximal operator can be computed as follows proposition generalized singular value thresholding gsvt for any r satisfying assumption z udiag v where is the svd of z and with arg min yi z yi yi in problem is solved by iteration however solutions indeed exist for regularizers in table nevertheless proposition still involves svd which takes o time p roposed a lgorithm in this section we show how the proximal algorithm can be made much faster by using approximate gsvt automatic thresholding of singular values the following proposition shows that in becomes zero when z is smaller than a threshold proof can be found in appendix proposition there exists a threshold such that when z together with proposition solving the proximal operator only needs the leading singular of z for the nonconvex regularizers in table simple solutions of can be obtained by examining the optimality conditions of proof can be found in appendix corollary the values for the following regularizers are lsp min tnn max z scad mcp if and otherwise this can also be used with the nuclear norm it can be shown that and max a however since our focus is on nonconvex regularizers the case for nuclear norm will not be further pursued in the sequel approximate gsvt proposition computes the proximal operator by exact svd in this section we show that one can use approximate svd which is more efficient reducing the size of svd assume that z has singular values larger than then we only need to a svd on z with k let the svd of z be the following proposition shows that z can be obtained from the proximal operator on a smaller the proof can be found in appendix proposition assume that q where k is orthogonal and span span q then z q q z obtaining an approximate gsvt to obtain such a q we use the power method algorithm which has been recently used to approximate the svt in nuclear norm minimization as in we set the number of power iterations to can be used via matrix r in algorithm this is particularly useful because of the iterative nature of proximal algorithm obtaining an approximate q using algorithm takes o mnk time as in the propack we noticed a similar result in after the conference version of this paper has been accepted however only considers the case where r is the nuclear norm regularizer ieee transactions on pattern analysis and machine intelligence submitted algorithm can also be used to obtain q in o mnk time however it finds the q exactly and can not benefit from hence though it has the same time complexity as power method empirically it is much less efficient xt for the proximal gradient algorithm in an approximate proximal step solution is generated in step and we try to ensure algorithm powermethod z r note that this is less stringent than where the condition in lemma if holds we accept otherwise we improve by using to the next iterate the following proposition shows convergence of algorithm the proof can be found in appendix input z r r r and the number of power iterations j zr for j j do qj qr yj qr decomposition returning only the q matrix z z qj end for return qj the approximate gsvt procedure is shown in algorithm step uses the power method to efficiently obtain an orthogonal matrix q that approximates span step performs a small svd though this svd is still exact q z is much smaller than z k n vs m n and svd q z takes only o nk time in step the singular values s 
are thresholded using corollary steps obtains an approximate z using proposition the time complexity for gsvt is reduced from o to o mnk f f x proposition if k where is the number of singular values in a larger than then prox r a algorithm inexact proximal step inexactps x r input x and r for a x x r for p do approxgsvt a if f f x then break end if end for return algorithm approximate gsvt approxgsvt z r input z and r for q powermethod z r u v svd q z a number of s that are in corollary ua a leading columns of u va a leading columns of v for i a do obtain from end for return components of qua diag and va and inexact proximal step in this section the proximal step will be inexact and so it can utilize the approximate gsvt in algorithm inexact proximal step has been considered in however r in is assumed to be convex in attouch et al considered nonconvex r but they require a difficult and expensive condition to control inexactness an example is provided in appendix b let a x x the following shows that the objective f is always decreased as after an exact proximal step lemma f prox r a f x a kprox motivated by this lemma we propose to control the proximal step s inexactness by algorithm note that x the complete procedure the complete procedure for solving is shown in algorithm and will be called fancl fast nonconvex lowrank similar to we perform using the column spaces of the previous iterates vt and for further speedup we employ a continuation strategy at step as in specifically is initialized to a large value and then decreases gradually algorithm fancl fast nonconvex algorithm input choose and initialize rn as random gaussian matrices and for t t do t rt qr vt warm start inexactps xt rt end for return xt assume that evaluations of f and take o mn time which is valid for many applications such as matrix completion and rpca let rt be the rank of xt at the tth iteration and kt rt in algorithm step takes o time and step takes o mnpkt time as rt has kt columns the iteration time complexity is thus o mnpkt in the experiment we set p which is enough to guarantee empirically the iteration time complexity of algorithm thus reduces to o mnkt in contrast exact gsvt takes o time and is much slower as kt besides the space complexity of algorithm is o mn ieee transactions on pattern analysis and machine intelligence submitted proposition r can be decomposed as where and are convex o mnkt besides its space complexity is o mn which is the same as algorithm there are several major differences between algorithm and nmapg first the proximal step of algorithm is only inexact to make the algorithm more robust we do not allow nonmonotonous update f can not be larger than f xt moreover we use a simpler acceleration scheme step in which only xt and are involved on matrix completion problems this allows using the sparse plus structure to greatly reduce the iteration complexity section finally we do not require extra comparison of the objective at step this further reduces the iteration complexity based on this decomposition we introduce the definition of critical point algorithm accelerated fancl algorithm convergence analysis the inexact proximal algorithm is first considered in which assumes r to be convex the nonconvex extension is considered in however as discussed in section they use an expensive condition to control inexactness of the proximal step thus their analysis can not be applied here it is known that in assumption can be decomposed as a difference of convex functions the following proposition shows 
that r also admits such a decomposition the proof is in appendix definition if x x x then x is a critical point of f the following proposition shows that algorithm generates a bounded sequence the proof is in appendix proposition the sequence xt generated algorithm is bounded and has at least one limit point from let g r xt xt prox r xt xt which is known as the proximal mapping of f at xt if g r xt xt is a critical point of this motivates the use of kg r xt to measure convergence in however kg r xt can not be used here as r is nonconvex and the proximal step is inexact as proposition guarantees the existence of limit points we use xt instead to measure convergence if the proximal step is exact kg r xt xt the following corollary shows convergence of algorithm its proof can be found in appendix corollary t xt f f t the following theorem shows that any limit point is also a critical point the proof is in appendix theorem assume that algorithm returns x only when x prox r x the input is returned as output only if it is a limit point let xtj be a subsequence of xt generated by algorithm such that limtj xtj then is a critical point of acceleration in convex optimization acceleration has been commonly used to speed up convergence of proximal algorithms recently it has also been extended to nonconvex optimization a algorithm is the nmapg algorithm in this section we integrate nmapg with fancl the whole procedure is shown in algorithm the accelerated iterate is obtained in step if the resultant inexact proximal step solution can achieve a sufficient decrease step as in this iterate is accepted step otherwise we choose the inexact proximal step solution obtained with the nonaccelerated iterate xt step note that step is the same as step of algorithm thus the iteration time complexity of algorithm is at most twice that of algorithm and still input choose and initialize rn as random gaussian matrices and for t t do yt xt xt rt qr vt warm start inexactps yt rt if f f xt yt then else inexactps xt rt end if p end for return xt the following proposition shows that algorithm generates a bounded sequence proof can be found in appendix proposition the sequence xt generated algorithm is bounded and has at least one limit point from in corollary xt is used to measure progress before and after the proximal step in algorithm the proximal step may use the accelerated iterate yt or the iterate xt hence we use ct where ct yt if step is performed and ct xt otherwise similar to corollary the following shows a o convergence rate proof can be found in appendix corollary for algorithm t f f min t on nonconvex optimization problems the optimal convergence rate for methods is o thus the convergence rate of algorithm corollary can not improve that of algorithm corollary however in practice acceleration can still significantly reduce the number of iterations on nonconvex problems on the other hand as algorithm may need a second proximal step step its iteration time complexity can be higher than that of algorithm however this is much compensated by the speedup in convergence as will be demonstrated in section empirically algorithm is much faster ieee transactions on pattern analysis and machine intelligence submitted the following theorem shows that any limit point of the iterates from algorithm is also a critical point proof can be found in appendix theorem let xtj be a subsequence of xt generated by algorithm such that limtj xtj with the assumption in theorem is a critical point of a pplications in this section we consider 
two important instances of problem namely matrix completion and robust principal component analysis rpca as the accelerated fancl algorithm algorithm is usually faster than its nonaccelerated variant we will only consider the accelerated variant here for matrix completion section we will show that algorithm can be made even faster and require much less memory by using the sparse plus structure of the problem in section we show how algorithm can be extended to deal with the two parameter blocks in rpca matrix completion matrix completion attempts to recover a matrix o by observing only some of its elements let the observed positions be indicated by such that if oij is observed and otherwise matrix completion can be formulated as an optimization problem in with f x x o x where a ij aij if and otherwise in the following we show that the time and space complexities of algorithm can be further reduced utilizing the problem structure first consider step which checks the objectives computing f xt relies only on the observed positions in and the singular values of xt hence instead of explicitly constructing xt we maintain the svd ut vt of xt and a sparse matrix xt computing f xt then takes o rt time computing f takes o kt time as rt has rank kt next since yt is a linear combination of xt and in step we can use the above form and compute yt in o m n time thus step then takes o kt m n time steps and perform inexact proximal step for the first proximal step step yt defined in step can be rewritten as xt where when it calls inexactps step of algorithm has a yt yt o xt yt o the first two terms involve matrices while the last term involves a sparse matrix this special sparse plus structure can speed up matrix multiplications specifically for any v av can be obtained as av ut vt v v o yt similarly for any u u a can be obtained as u a u ut vt u u o yt both and take o m n kt k k time instead of o mnk as rt in step of algorithm has kt columns each call to approximate gsvt takes o kt time instead of o mnkt finally step in algorithm also takes o m n kt time as a result step of algorithm takes a total of o m n kt time step is slightly cheaper as no is involved and its time complexity is o m n rt kt rt summarizing the iteration time complexity of algorithm is o m n kt usually kt n and mn thus is much cheaper than the o mnkt complexity of standard section the space complexity is also reduced we only need to store the factorizations of xt and and the sparse matrices xt and these take a total of o kt space instead of o mn in section these techniques can also be used on algorithm it can be easily shown that its iteration time complexity is o m n rt kt rt and its space complexity is o m n rt as no is involved comparison with existing algorithms table compares the convergence rates iteration time complexities and space complexities of various matrix completion algorithms that will be empirically compared in section overall the proposed algorithms algorithms and enjoy fast convergence cheap iteration complexity and low memory cost while algorithms and have the same convergence rate we will see in section that algorithm which uses acceleration is significantly faster robust principal component analysis rpca given a noisy data matrix o rpca assumes that o can be approximated by the sum of a matrix x plus some sparse noise s its optimization problem is min f x s f x s x s x s where f x s r is a regularizer and g is a regularizer here we allow both r and g to be nonconvex and nonsmooth thus can be seen as a nonconvex 
extension of rpca which uses the nuclear norm regularizer for r and for g some examples of nonconvex r are shown in table and examples of nonconvex g include the norm and ieee transactions on pattern analysis and machine intelligence submitted table comparison of the iteration time complexities convergence rates and space complexity of various matrix completion solvers here kt rt and integer ta are constants for the active subspace selection method active ts is the number of inner iterations required regularizer convex nuclear norm factorization nonconvex method apg active lmafit irnn gpg fancl convergence rate o o t o o t o o while involves two blocks of parameters x and s they are not coupled together thus we can use the separable property of proximal operator x s x s for many popular regularizers computing s takes only o mn time for p example when g s i j s ij sign sij max where sign x is the sign of x however directly computing x requires o time and is expensive to alleviate this problem algorithm can be easily extended to algorithm the iteration time complexity which is dominated by the inexact proximal steps in steps and is reduced to o mnkt algorithm algorithm for rpca input choose and initialize rn as random gaussian matrices and for t t do ytx yts xt st xt st rt qr vt warm start inexactps ytx rt prox g yts f ytx yts yts yts if f f xt st then else inexactps xt rt prox g st f xt st end if p end for return xt and st iteration time complexity o mnrt o kt ts o kt m n o rt m n o o o o rt m n rt kt o kt m n space complexity o mn o m n kt o m n kt o m n rt o m n rt o mn o mn o m n rt o m n kt theorem let xtj stj be a subsequence of xt st generated by algorithm such that limtj xtj and limtj stj with the assumption in theorem is a critical point of parallel fa ncl for m atrix c ompletion in this section we show how the proposed algorithms can be parallelized we will only consider the matrix completion problem in extension to other problems such as rpca in section can be similarly performed moreover for simplicity of discussion we focus on the simpler fancl algorithm algorithm its accelerated variant algorithm can be similarly parallelized and is shown in appendix a parallel algorithms for matrix completion have been proposed in however they are based on stochastic gradient descent and matrix factorization and can not be directly used here proposed algorithm convergence results in section can be easily extended to this rpca problem proofs of the following can be found in appendices and operations on a matrix x are often of the form i multiplications u x and xv for some u v in and ii operation evaluation of f x in a popular scheme in parallel linear algebra is block distribution assume that there are q threads for parallelization block distribution partitions the rows and columns of x into q parts leading to a total of q blocks figure shows how computations of xv u x and operation can be easily parallelized in algorithm the most important variables are the factorized form ut vt of xt and the sparse matrices xt o using block distribution they are thus partitioned as in figure the resultant parallelized version of fancl is shown in algorithm steps that can be parallelized are marked with b two new subroutines are introduced namely step which replaces qr factorization and step which is the parallelized version of algorithm they will be discussed in more detail in the following sections note that algorithm is equivalent to algorithm except that it is parallelized thus the convergence results in 
section still hold proposition the sequence xt st generated from algorithm is bounded and has at least one limit point corollary let ct ytx yts if steps and are performed and ct xt st otherwise then f t k ct fmin c t identifying the span step in step of algorithm qr factorization is used to find the span of matrix vt this can be parallelized with the householder transformation and gaussian elimination which however is typically very complex the following ieee transactions on pattern analysis and machine intelligence submitted a u x b xv c operation fig parallelization of different matrix operations here the number of threads q is equal to each dotted path denotes operation of a thread moreover though step uses svd it only takes o k time algorithm parallel algorithm to identify the span of a a a b o fig partitioning of variables and o and three threads are used q input matrix a b b a a u v svd b construct w as in proposition v vdiag w b q av return q q q qp algorithm fancl in parallel input choose and initialize rn as random gaussian matrices and partition and o start q threads for parallelization for t t do t b rt vt b at xt xt o for p do b rt at rt b ap f b at f xt b af xt if ap at af then break end if end for b end for return xt proposition proposes a simpler method to identify the span of a matrix proof can be found in appendix proposition given a matrix a let the svd of a a be w wi where wi if and otherwise then av diag w is orthogonal and contains span a the resultant parallel algorithm is shown in algorithm its time complexity is o nq q k k algorithm calls algorithm with input vt and thus takes o nq q time where kt rt we do not parallelize steps as only k k matrices are involved and k is small approximate gsvt step the key steps in approximate gsvt algorithm are the power method and svd the power method can be parallelized straightforwardly as in algorithm in which we also replace the qr subroutine with algorithm algorithm parallel power method powermethodpl z r input matrix z r b zr for j j do b qj yj b z z qj end for return qj as for svd multiple qr factorizations are usually needed for parallelization which are complex as discussed in section the following proposition performs it in a simpler manner proof can be found in appendix proposition given a matrix b let p be orthogonal and equals span b and the svd of p b be then the svd of b is pu the resultant parallelized procedure for approximate gsvt is shown in algorithm at step a small svd is performed by a single thread on the k k matrix p b at step of algorithm is returned from algorithm and we keep in its factorized form besides when algorithm is called z at and has the sparse plus structure mentioned earlier hence and can be used to speed up matrix as rt has kt columns in algorithm as no acceleration is used is equal to in these two equations ieee transactions on pattern analysis and machine intelligence submitted in step takes o kqt q kt time n steps take o q q kt kt time and the rest takes o kt time the total time complexity for algorithm is o kqt q kt q kt kt algorithm approximate gsvt in parallel approxgsvtpl z r input partitioned matrix z and r b q z r b b z q b b p b b a p b u v svd a u v a b u pu a number of s that are in corollary b ua a leading columns of u b va a leading columns of v for i a do obtain from end for return the components of qua diag and va and checking of objectives steps as shown in figures c computation of xt o in f can be directly parallelized and takes o time as r only relies on only one thread is 
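The QR-free span identification used in the parallel algorithm above admits a very short sketch: only the small k x k Gram matrix A^T A needs a (single-threaded) eigendecomposition, while the products A^T A and A V can be block-distributed across threads as in the figure. The following is an illustrative Python sketch under that proposition; the tolerance and names are assumptions.

```python
import numpy as np

def span_basis(A, rel_tol=1e-10):
    """Orthonormal basis of span(A) without a QR factorization:
    eigendecompose the small Gram matrix A^T A = V S^2 V^T and
    return A V S^{-1} for the nonzero singular values."""
    G = A.T @ A                        # k x k, cheap to form / reduce in parallel
    sigma2, V = np.linalg.eigh(G)      # eigenvalues are squared singular values
    keep = sigma2 > rel_tol * sigma2.max()
    return A @ (V[:, keep] / np.sqrt(sigma2[keep]))

# Sanity check on a rank-deficient matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 6))
A[:, 5] = A[:, 0] + A[:, 1]            # rank drops to 5
Q = span_basis(A)
print(Q.shape, np.allclose(Q.T @ Q, np.eye(Q.shape[1])))  # (1000, 5) True
```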
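Similarly, the sparse plus low-rank structure that the matrix completion variants above rely on can be captured by a small helper: the iterate is kept as U V^T plus a sparse term, so matrix-vector products cost O((m+n)k + nnz) and the observed-entry loss never forms the dense m x n matrix. Again this is an illustrative Python sketch with assumed names, not the authors' implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix

class SparsePlusLowRank:
    """A = U V^T + S with sparse S, never materialized densely."""

    def __init__(self, U, V, S):
        self.U, self.V, self.S = U, V, S    # U: m x k, V: n x k, S: sparse m x n

    def matvec(self, v):                    # A v in O((m + n) k + nnz(S)) time
        return self.U @ (self.V.T @ v) + self.S @ v

    def rmatvec(self, u):                   # A^T u, e.g. inside the power method
        return self.V @ (self.U.T @ u) + self.S.T @ u

    def loss_on_observed(self, rows, cols, vals):
        """0.5 * || P_Omega(A) - vals ||^2 using only the observed entries."""
        a_obs = np.einsum('ij,ij->i', self.U[rows], self.V[cols]) \
                + np.asarray(self.S[rows, cols]).ravel()
        return 0.5 * np.sum((a_obs - vals) ** 2)

# Toy usage.
rng = np.random.default_rng(0)
m, n, k = 50, 40, 3
U, V = rng.normal(size=(m, k)), rng.normal(size=(n, k))
S = csr_matrix((rng.normal(size=30),
                (rng.integers(0, m, 30), rng.integers(0, n, 30))), shape=(m, n))
A = SparsePlusLowRank(U, V, S)
v = rng.normal(size=n)
assert np.allclose(A.matvec(v), (U @ V.T + S.toarray()) @ v)
```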
needed to evaluate r xt thus computing f xt takes o rqt time similarly computing f takes o kqt time as xt tr p xt xt xt the lowrank factorized forms of and xt can be utilized based on figures a and b it can be performed in o q kt time thus the time complexity for steps in algorithm is o kqt q kt the iteration time complexity for algorithm is thus o m n kt q kt compared with the speedup the number of threads q is almost linear e xperiments in this section we perform experiments on matrix completion rpca and the parallelized variant of algorithm appendix a experiments are performed on a windows server system with intel xeon cpu cores and memory all the algorithms in sections and are implemented in matlab for section we use the for matrix operations and the standard thread for programming matrix completion we compare a number of matrix completion solvers including models based on i the commonly used convex nuclear norm regularizer ii factorization models which decompose the observed https http matrix o into a product of matrices u and its optimization problem can be written as minu v uv o and iii nonconvex regularizers including the with in table set to lsp with and tnn with the nuclear norm minimization algorithms to be compared include accelerated proximal gradient apg algorithm with the partial svd by propack an inexact and acceleration proximal algorithm the sparse plus structure of the matrix iterate is utilized to speed up computation section and active subspace selection denoted active which subspaces from the active set in each iteration the nuclear norm optimization problem is then reduced to a smaller problem defined only on this active set we do not compare with the algorithm and stochastic gradient descent as they have been shown to be less efficient for the factorization models where the rank is tuned by the validation set we compare with the two algorithms matrix fitting lmafit algorithm and economical matrix pursuit which pursues a basis in each iteration we do not compare with the procedure since it has been shown to be inferior to irnn for models with nonconvex regularizers we compare with the following solvers iterative reweighted nuclear norm irnn generalized proximal gradient gpg algorithm with the underlying problem solved using the solutions in and the proposed fancl algorithm algorithm and its accelerated variant algorithm we set j and p all the algorithms are stopped when the difference in objective values between consecutive iterations becomes smaller than synthetic data the observed m m matrix is generated as o uv g where the elements of u v with k are sampled from the standard normal distribution n and elements of g sampled from n a total of log m random elements in o are observed half of them are used for training and the rest as validation set for parameter tuning testing is performed on the unobserved elements for performance evaluation we use i the normalized mean squared error nmse x uv kf uv kf where x is the recovered matrix and denotes the unobserved positions ii rank of x and iii training cpu time we vary m in the range each experiment is repeated five times ieee transactions on pattern analysis and machine intelligence submitted table matrix completion performance on the synthetic data here nmse is scaled by cpu time is in seconds and the number in brackets is the data sparsity the best results according to the pairwise with confidence are highlighted m m m nmse rank time nmse rank time nmse rank time nuclear norm apg active fixed rank lmafit capped irnn 
gpg fancl lsp irnn gpg fancl tnn irnn gpg fancl results are shown in table as can be seen nonconvex regularization lsp and tnn leads to much lower nmse s than convex nuclear norm regularization and factorization moreover the nuclear norm and output much higher ranks in terms of speed among the nonconvex solvers fancl is fast and fanclacc is the fastest the larger the matrix the higher is the speedup of fancl and over gpg and irnn movielens experiment is performed on the popular movielens data set table which contain ratings of different users on movies we follow the setup in and use of the observed ratings for training for validation and the rest for testing for performance evaluation we use q the root mean squared a error on the test set rmse x o where x is the recovered matrix the experiment is repeated five times table recommendation data sets used in the experiments users movies ratings movielens netflix yahoo b lsp fig objective vs cpu time for the and lsp on the plot of tnn is similar and thus not shown results are shown in table again nonconvex regularizers lead to the lowest rmse s moreover fanclacc is also the fastest among nonconvex solvers even faster than the gpg in particular fancl and its accelerated variant are the only solvers for nonconvex regularization that can be run on the and data sets figure compares the objectives vs cpu time for the nonconvex regularization solvers on as can be seen fancl and decrease the objective and rmse much faster than the others figure shows the testing rmses on again is the fastest use of the observed ratings for training for validation and the rest for testing each experiment is repeated five times results are shown in table apg gpg and irnn can not be run as the data set is large has similar running time as lmafit but inferior performance and thus is not compared again the nonconvex regularizers converge faster yield lower rmse s and solutions of much lower ranks figure shows the objectives and rmse vs time and is the netflix and yahoo next we perform experiments on two very large recommendation data sets netflix and yahoo table we randomly on these two data sets easily overfits as the rank increases hence the validation set selects a smaller rank relative to that obtained by the nuclear norm and stops earlier however as can be seen its rmse is much worse ieee transactions on pattern analysis and machine intelligence submitted table matrix completion results on the movielens data sets time is in seconds the best results according to the pairwise with confidence are highlighted rmse rank time rmse rank time rmse rank time nuclear apg norm active fixed lmafit rank capped lsp tnn irnn gpg fancl irnn gpg fancl irnn gpg fancl table results on the netflix and yahoo data sets cpu time is in minutes the best results according to the pairwise with confidence are highlighted netflix yahoo rmse rank time rmse rank time fixed rank lmafit fancl lsp fancl tnn fancl a netflix fig rmse vs cpu time on the data sets robust principal component analysis synthetic data in this section we first perform experiments on a synthetic data set the observed m m matrix is generated as o where elements of u v with k are sampled from n and elements of g are sampled from n matrix is sparse with of its elements randomly set to or with equal probabilities the whole data set is then randomly split into training and test sets of equal size the standard regularizer is used as the sparsity regularizer g in and different lowrank regularizers are used as hyperparameters and in 
are tuned using the training set for performance evaluation we use i nmse k x s uv kf where x and s are the recovered and sparse components respectively ii b yahoo fig rmse vs cpu time on the netflix and yahoo data sets accuracy on locating the sparse support of percentage of entries that and sij are nonzero or zero together iii the recovered rank and iv cpu time we vary m in each experiment is repeated five times note that irnn and active subspace selection can not be ieee transactions on pattern analysis and machine intelligence submitted table rpca performance on synthetic data nmse is scaled by and cpu time is in seconds the best results according to the pairwise with confidence are highlighted m m m nmse rank time nmse rank time nmse rank time nuclear norm apg gpg lsp gpg tnn gpg table psnr in db and cpu time in seconds on the video background removal experiment the psnrs for all the input videos are bootstrap campus escalator hall psnr time psnr time psnr time psnr time nuclear norm apg gpg lsp gpg tnn gpg used here their objectives are of the form smooth function plus regularizer but rpca also has a nonsmooth regularizer similarly is only for matrix completion moreover fancl which has been shown to be slower than will not be compared results are shown in table the accuracies on locating the sparse support are always for all methods and thus are not shown moreover while both convex and nonconvex regularizers can perfectly recover the matrix rank and sparse locations the nonconvex regularizers have lower nmse s as in matrix completion is much faster the larger the matrix the higher the speedup psnr mn kx where x is the recovered video and o is the results are shown in table as can be seen the nonconvex regularizers lead to better psnr s than the convex nuclear norm moreover is much faster than gpg figure shows psnr vs cpu time on the bootstrap and campus data sets again converges to higher psnr much faster results on hall and escalator are similar background removal on videos in this section we use rpca for background removal in videos four benchmark videos in are used table and example frames are shown in figure as in the image background is considered while the foreground moving objects contribute to the sparse component table videos used in the experiment bootstrap campus escalator pixels frame total frames a bootstrap b campus c escalator a bootstrap hall d hall fig example image frames in the videos given a video with n image frames each frame is first reshaped as a column vector where m and then all the frames are stacked together to form a m n matrix the pixel values are normalized to and gaussian noise from n is added the experiment is repeated five times for performance evaluation we use the commonly used peak ratio b campus fig psnr vs cpu time on the bootstrap and campus videos parallel matrix completion in this section we experiment with the proposed parallel algorithm in section on the netflix and yahoo data sets table we do not compare with ieee transactions on pattern analysis and machine intelligence submitted algorithms as they have inferior performance section the machine has cores and we use one thread for each core as suggested in we randomly shuffle all the matrix columns and rows before partitioning we use the lsp penalty with and fix the total number of iterations to the hyperparameters are the same as in section experiments are repeated five times convergence of the objective for a typical run is shown in figure as we have multiple threads running on a single 
cpu we report the clock time instead of cpu time as can be seen the accelerated algorithms are much faster than the ones and parallelization provides further speedup a netflix b yahoo fig objective value vs clock time for the versions of fancl on the netflix and yahoo data sets figure shows the speedup with different numbers of threads as can be seen the parallelized variants scale well with the number of threads in particular scaling is better on yahoo the observed entries in its partitioned data submatrices are distributed more evenly which improves performance of parallel algorithms another observation is that the speedup can be larger than one as discussed in in performing multiplications with a large sparse matrix a significant amount of time is spent on indexing its nonzero elements when the matrix is partitioned each submatrix becomes smaller and easier to be indexed thus the memory cache also becomes more effective c onclusion in this paper we considered the challenging problem of nonconvex matrix optimization the key observations are that for the popular regularizers the singular values obtained from the proximal operator can fig speedup vs the number of threads for parallel fancl the red dashed line indicates linear speedup be automatically thresholded and the proximal operator can be computed on a smaller matrix this allows the proximal operator to be efficiently approximated by the power method we extended the proximal algorithm in this nonconvex optimization setting with acceleration and inexact proximal step we further parallelized the proposed algorithm which scales well the number of threads extensive experiments on matrix completion and rpca show that the proposed algorithm is much faster than the it also demonstrates that nonconvex lowrank regularizers outperform the standard convex nuclear norm regularizer in the parallel setting typically the observed entries are distributed in the partitioned matrices and so workloads in the different threads are not well balanced one future direction is to allow asynchronized updates of the parallel algorithm this can help to reduce the waiting time for threads with light workloads and makes more efficient use of the cpu moreover while parallel algorithms on multicore machines are easier to implement and do not have communication issues they are less scalable than distributed algorithms to allow further scaleup to massive data sets we will consider extending the proposed algorithms to a distributed computing environment r eferences and recht exact matrix completion via convex optimization foundations of computational mathematics vol no pp ji liu shen and xu robust video denoising using low rank matrix completion in proceedings of the conference on computer vision and pattern recognition pp hu zhang ye li and x he fast and accurate matrix completion via truncated nuclear norm regularization ieee transactions on pattern analysis and machine intelligence vol no pp liu musialski wonka and ye tensor completion for estimating missing values in visual data ieee transactions on pattern analysis and machine intelligence vol no pp lu tang yan and lin nonconvex nonsmooth low rank minimization via iteratively reweighted nuclear norm ieee transactions on image processing vol no pp gu xie meng zuo feng and zhang weighted nuclear norm minimization and its applications to low level vision international journal of computer vision vol no pp li ma and wright robust principal component analysis journal of the acm vol no ieee transactions on pattern 
analysis and machine intelligence submitted q sun xiang and ye robust principal component analysis via capped norms in proceedings of the international conference on knowledge discovery and data mining pp oh tai bazin kim and kweon partial sum minimization of singular values in robust pca algorithm and applications ieee transactions on pattern analysis and machine intelligence vol no pp wu ganesh shi matsushita wang and ma robust photometric stereo via matrix completion and recovery in proceedings of the asian conference on computer vision pp yang luo qian tai zhang and xu nuclear norm based matrix regression with applications to face recognition with occlusion and illumination changes ieee transactions on pattern analysis and machine intelligence vol no pp liu lin yan j sun yu and ma robust recovery of subspace structures by representation ieee transactions on pattern analysis and machine intelligence vol no pp ji and ye an accelerated gradient method for trace norm minimization in proceedings of the international conference on machine learning pp mazumder hastie and tibshirani spectral regularization algorithms for learning large incomplete matrices journal of machine learning research vol pp yao and kwok accelerated inexact for fast matrix completion in proceedings of the international joint conference on artificial intelligence pp zhang schuurmans and yu accelerated training for regularization a boosting approach in advances in neural information processing systems pp hsieh and olsen nuclear norm minimization via active subspace selection in proceedings of the international conference on machine learning pp zhang analysis of convex relaxation for sparse regularization journal of machine learning research vol pp wakin and boyd enhancing sparsity by reweighted minimization journal of fourier analysis and applications vol no pp j fan and li variable selection via nonconcave penalized likelihood and its oracle properties journal of the american statistical association vol no pp zhang nearly unbiased variable selection under minimax concave penalty annals of statistics vol no pp gui han and gu towards faster rates and oracle property for matrix estimation in proceedings of the international conference on machine learning pp yuille and rangarajan the procedure neural computation vol no pp gong zhang lu huang and ye a general iterative shrinkage and thresholding algorithm for regularized optimization problems in proceedings of the international conference on machine learning pp li and lin accelerated proximal gradient methods for nonconvex programming in advances in neural information processing systems pp lu zhu xu yan and lin generalized singular value thresholding in proceedings of the aaai conference on artificial intelligence pp halko martinsson and tropp finding structure with randomness probabilistic algorithms for constructing approximate matrix decompositions siam review vol no pp yao kwok and zhong fast matrix learning with nonconvex regularization in proceedings of the international conference on data mining pp parikh and boyd proximal algorithms foundations and trends in optimization vol no pp beck and teboulle a fast iterative algorithm for linear inverse problems siam journal on imaging sciences vol no pp ghadimi and lan accelerated gradient methods for nonconvex nonlinear and stochastic programming mathematical programming vol no pp cai and shen a singular value thresholding algorithm for matrix completion siam journal on optimization vol no pp oh matsushita tai and kweon fast 
randomized singular value thresholding for nuclear norm minimization in proceedings of the conference on computer vision and pattern recognition pp toh and yun an accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems pacific journal of optimization vol no lin chen and ma the augmented lagrange multiplier method for exact recovery of corrupted matrices school of eecs peking university tech larsen lanczos bidiagonalization with partial reorthogonalization department of computer science aarhus university daimi attouch bolte and svaiter convergence of descent methods for and tame problems proximal algorithms splitting and regularized methods mathematical programming vol no pp schmidt roux and bach convergence rates of inexact methods for convex optimization in advances in neural information processing systems pp generalized differentiability duality and optimization for problems dealing with differences of convex functions convexity and duality in optimization pp nesterov introductory lectures on convex optimization a basic course springer wen yin and zhang solving a factorization model for matrix completion by a nonlinear successive algorithm mathematical programming computation vol no pp wang lai lu fan davulcu and ye orthogonal matrix pursuit for low rank matrix completion siam journal on scientific computing vol no pp gemulla nijkamp haas and sismanis matrix factorization with distributed stochastic gradient descent in proceedings of the international conference on knowledge discovery and data mining pp yu hsieh si and dhillon scalable coordinate descent approaches to parallel matrix factorization for recommender systems in proceedings of the international conference on data mining pp recht and parallel stochastic gradient algorithms for matrix completion mathematical programming computation vol no pp demmel heath and van der vorst parallel numerical linear algebra acta numerica vol pp avron kale sindhwani and kasiviswanathan efficient and practical stochastic subgradient descent for nuclear norm regularization in proceedings of the international conference on machine learning pp zhuang chin juan and lin a fast parallel sgd for matrix factorization in shared memory systems in proceedings of the acm conference on recommender systems pp goumas kourtis anastopoulos karakasis and koziris understanding the performance of sparse matrixvector multiplication in proceedings of the euromicro conference on parallel distributed and processing pp bertsekas and tsitsiklis parallel and distributed computation numerical methods athena scientific bertsekas nonlinear programming athena scientific rao separation theorems for singular values of matrices and their applications in multivariate analysis journal of multivariate analysis vol no pp arbenz solving large scale eigenvalue problems department of mathematics eth lecture notes online available http lewis and sendov nonsmooth analysis of singular values analysis vol no pp ieee transactions on pattern analysis and machine intelligence submitted a ppendix a parallel fa acc algorithm shows the parallel version of acceleration is performed at step the first inexact proximal step is performed at steps step checks whether the accelerated iterate is accepted if the condition fails a second inexact proximal step is performed at steps note that the algorithm is equivalent to algorithm and thus the convergence analysis in section still holds algorithm in parallel input choose and initialize rn as random gaussian matrices 
and partition and o start q threads for parallelization for t t do t b yt xt xt b rt vt b aat yt yt o for p do b rt aat rt b ap f b at f xt b af xt if ap at af then break end if end for b if f f xt yt then b else b at xt xt o for p do b rt at rt b bp f b bt f xt b bf xt if bp bt bf then break end if end for b end if p end for return xt example using proposition we can decompose r as where n x log let the svd of be assume that has k singular values larger than let uk resp vk be the matrix containing the first k columns of u resp v then uk vk b where b c u k c cvk and c let c ci with ci then udiag c v thus a full svd on is needed which is expensive and impractical for large matrices a ppendix c p roofs proposition for simplicity of notations we write z as first we introduce the definition of for a concave function and two lemmas definition for a concave function f its is given by g lemma i inf y g ii assume that yj yi then supgj yj gj inf gi yi gi lemma i max where gi ii if then increases with proof part i let gi from the optimality condition of consider the two possibilities a b in other words the optimal solution is achieved at the boundary and we have and combining these two cases the relationship between and can be expressed as max part ii assume that then becomes let becomes larger as according to we have two possibilities for its corresponding a ppendix b t he c hecking c ondition in the condition in to accept an approximate is where and are two convex functions such that ka for some constant b using the lsp regularizer as an then supgj gj inf gi gi from j i lemma together with the fact that there exists a which is not smaller than to make hold then inf gj gj supgi gi from j i lemma however such a solution may not exist when ieee transactions on pattern analysis and machine intelligence submitted thus while there can be multiple solutions to ensure the first case must exist we take the largest solution of all possible candidates thus if gets larger also becomes larger proof of proposition from lemma we have and if if otherwise max we can see that y once however if becomes smaller will reach before reaches zero this comes from two facts first becomes smaller as gets smaller lemma but inf gi gi i will not become smaller lemma second we have inf y g an illustration of the relationships among and gi is shown in the following figure thus there exists such that once and becomes let arg minqi qi as is also quadratic and can not be we have no solution if qi otherwise note that when there is no solution to as can arbitrarily close to since the possibility for arg minyi h yi is covered by arg thus and have covered all possibilities of using them we have if then h h i min h yi min in order to get we need min which leads to p thus if min then corollary in this section we show how to derive the threshold for the penalty derivations for the other penalties can be obtained similarly penalty proof note that problem considers each singular value separately for simplicity of notations let denote z for the ith singular value let h yi yi min yi thus finally combining the above two cases we can clude once min then thus min proposition proof first we introduce the following theorem where pi qi qi note that is quadratic there are only three possibilities for arg pi if min pi if if pi pi thus if then we have arg yi arg min h yi yi arg minyi yi if then i min h yi min min theorem separation theorem let a and b with b b i then b a a for i min r n let the svd of z be z can then be rewritten as z where contains 
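To make the singular value shrinkage in the example above concrete, the following minimal numpy sketch computes the proximal step for the plain convex nuclear norm by soft-thresholding the singular values and keeping only the leading ones; the adaptive, nonconvex penalties analysed in this appendix replace the soft-thresholding line with their own per-singular-value shrinkage, so this shows the structure of the computation rather than the exact operator used in the paper.

    import numpy as np

    def svt(Z, lam):
        # prox of lam * ||X||_* at Z: soft-threshold the singular values and
        # keep only the k leading ones that survive the shrinkage
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s = np.maximum(s - lam, 0.0)
        k = int(np.count_nonzero(s))
        return U[:, :k] @ (s[:k, None] * Vt[:k, :])

    X = svt(np.random.default_rng(0).standard_normal((6, 5)), 1.0)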
the leading columns of u and the remaining columns similarly resp contains the leading eigenvalues resp columns of resp v hence q z max q z i ieee transactions on pattern analysis and machine intelligence submitted let q ui and vi u i zvi z p then kbp b p uk uk kf uk uk kf where is a constant proof at the pth iteration of algorithm inside algorithm step since z a and r we have where ui resp vi is the ith column of u resp v q z u qq zvi i i qr where is due to span span q from theorem by substituting q b and a z we have q z z combining with we obtain that is the optimal solution of thus the svd of q z is q with the corresponding left and right singular vectors contained in q and respectively again by theorem we have q z z besides using q z q i then for inside algorithm step we have span span a q thus span span a a q span a a qj span qr bp kbp b p uk uk kf uk uk kf q uk uk kf q z q q q q q b uk uk kf where thus p kbp b p uk uk kf uk uk kf where follows from that q resp is orthogonal to resp shows that there are only singular values in q z larger than thus q and we get finally q z q q z where comes from span span q comes from that svd of z is and z only has singular values larger than note that c in lemma together with and we have then qr where comes from the fact that q is returned after j iterations of algorithm and from the definition of at step in algorithm thus the first singular values are from the term q hence q z max q proposition proof first we introduce the following lemmas lemma in algorithm let the svd of z be and contain the first k columns of we have kqj q kcc j kf k kf where z z and c qr zr lemma in algorithm let the svd of a be uk vk and bp qr proof of proposition for bp in we have bp uk from proposition where uk comes from svd of a as k span span uk then from proposition we have uk prox r u k a prox r a thus prox r a proposition proof first we introduce lemma pm lemma let x f x if f is convex is also convex on x for in assumption it can be rewritten as where for some constant and obviously both and are convex define m m x x x x and x x from lemma both and are convex thus r can also be written as a difference of convex functions r x x x ieee transactions on pattern analysis and machine intelligence submitted proposition proof from step of algorithm which ensures we have f f xt xt proposition proof consider the two cases step in algorithm is performed then f f xt yt step is performed then summing this from t to t we have t x xt f f xt f f xt xt f inf as f is bounded from below assumption f inf f is a positive constant let t we have x xt lim f x f f xt x x yt xt corollary proof combining and we have min xt t t xt t xt t f inf f t lim kxkf as xtj is a subsequence of xt with limit point lim xtj inexactps lim xtj lim rtj tj lim xt x both and are infinite as in the above two cases xt is bounded when either of and is infinite combining the above xt generated from algorithm is bounded and has at least one limit point which implies tj is infinite but is finite for tj note that f xtj f xtj due to and from assumption then the sequence f xtj is bounded again from assumption we have then xtj is also bounded which has at least one limit point from we have lim xtj lim xtj inf f x lemma if x prox r x x then x is a critical point of tj f x which indicates that maxtj kxtj kf together with the sequence xt is bounded which has at least one limit point proof first we introduce lemma tj is finite but is infinite for tj we have from x kxtj xtj from assumption we also have theorem tj tj where f inf f 
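The lemma above bounds the quality of the subspace Q returned by the inner power-method routine with QR re-orthogonalization. A minimal numpy sketch of such a routine is given below; the starting matrix R (random or warm-started from the previous outer iteration, as in the pseudocode of Appendix A), the number of inner iterations and the function names are illustrative assumptions, not the paper's exact implementation.

    import numpy as np

    def approx_svd(Z, R, n_iter=3):
        # power iterations with QR re-orthogonalization approximate the dominant
        # left singular subspace of Z; the SVD of the small projected matrix
        # Q^T Z then yields an approximate low-rank SVD of Z
        Q, _ = np.linalg.qr(Z @ R)
        for _ in range(n_iter):
            Q, _ = np.linalg.qr(Z @ (Z.T @ Q))
        Ub, s, Vt = np.linalg.svd(Q.T @ Z, full_matrices=False)
        return Q @ Ub, s, Vt

    rng = np.random.default_rng(0)
    U, s, Vt = approx_svd(rng.standard_normal((100, 80)), rng.standard_normal((80, 10)))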
is a constant consider the three cases which implies that kxt kt together with xt is a bounded sequence with at least one limit point partition the iterations t into two sets and such that t if step is performed and t if step is performed sum and from t to t as f is bounded from below assumption x yt xt from assumption we also have kxkf for xtj combining and we have lim inexactps lim xtj lim rtj tj tj tj inexactps lim rtj corollary proof let min from we have f f xt x x yt xt tj thus prox r holds by the tion from lemma is a critical point of t x ct ieee transactions on pattern analysis and machine intelligence submitted thus min ct t ct t ct t t combining and we have lim xtj inexactps lim ytj lim rtj tj tj tj inexactps lim rtj tj f inf f t thus by the assumption we also have from lemma is also a critical point of prox r where the last inequity comes from theorem proof partition the iterations into two sets and such that t if step is performed and t if step is performed consider the three cases is finite but is infinite let xtj be a subsequence of xt where t and limtj xtj from we have lim xtj lim xtj tj tj besides lim xtj inexactps tj steps and are performed then f f xt st ytx yts steps and are performed then tj proposition proof consider the two cases tj combining and we have lim xtj inexactps lim xtj lim rtj tj tj tj inexactps lim rtj both and are infinite from the above two cases we can see that the limit point is also a critical point of when either or is infinite thus limit points of xt are also critical points of lim xtj lim rtj tj f f xt st prox r from lemma is also a critical point of is infinite but is finite let xtj be a subsequence of xt where t and limtj xtj from we have x yt x xt st summing and from t to t we have f f xt st as f is bounded from below assumption we have which indicates where f inf f we consider the three cases lim xtj ytj tj from we have lim ytj lim xtj tj besides lim xtj inexactps is finite but is infinite for tj x kxtj xtj tj x kstj stj tj tj st kf ytx yts tj xt partition t into two sets as and where t if steps are performed otherwise t and steps are performed let thus by the assumption we also have lim ytj lim rtj tj tj again from assumption we have lim kxkf or kskf f x s ieee transactions on pattern analysis and machine intelligence submitted thus maxtj k xtj stj kf and the sequence xtj stj is bounded with at least one limit point is infinite but is finite for tj from and we have f xtj stj f xtj stj as f is bounded from below assumption the sequence f xtj stj must be bounded besides again from assumption we have which indicates the sequence xtj stj must be bounded with at least one limit point partition into two sets as and where t if steps are performed otherwise t and steps are performed we consider three cases here is finite but is infinite let xtj stj be a subsequence of xt st where t and lim xtj stj tj from we have x both and are infinite as in the above two cases xt st is bounded with at least one limit point once or is infinite xt x st thus the sequence xt st generated from algorithm is bounded and has at least one limit point these indicate lim xtj xtj lim stj stj tj corollary tj proof let min first we have ytx from we have yts lim xtj lim xtj tj x xt st t x k ct tj together with and we have min k ct thus t f holds by the assumption then the proximal operator is always exact for using we have t k ct t x prox r k ct lim stj lim prox g stj tj f inf f t definition if x and s satisfy f x s x x f x s s s f xtj stj f combining with and is a critical point 
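The case analysis above distinguishes iterations where the accelerated iterate passes the descent check from iterations that fall back to a plain proximal step. A generic sketch of that control flow is shown below, assuming the objective F and the (possibly inexact) proximal map prox are supplied by the caller; the momentum weight, the acceptance constant delta and the inexactness control of the actual algorithm are simplified here.

    import numpy as np

    def accelerated_prox(F, prox, X0, n_iter=100, delta=1e-4):
        # accelerated proximal iteration with a descent check and a
        # non-accelerated fallback step when the check fails
        X_prev, X = X0.copy(), X0.copy()
        for t in range(1, n_iter + 1):
            Y = X + ((t - 1.0) / (t + 2.0)) * (X - X_prev)   # extrapolation
            X_acc = prox(Y)                                  # first proximal step
            if F(X_acc) <= F(X) - delta * np.linalg.norm(X_acc - Y) ** 2:
                X_new = X_acc                                # accept accelerated iterate
            else:
                X_new = prox(X)                              # fallback proximal step
            X_prev, X = X, X_new
        return X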
of by using lemma is infinite but is finite let xtj stj be a subsequence of xt st where t and lim xtj stj tj from we have x kxtj ytxj tj then x s is a critical point of f x lemma if x and s satisfy x prox r f x s s prox g s f x s then x s is a critical point of f theorem proof let g be the difference of convex decomposition of g as two blocks of variables are involved its critical points are defined as follows tj prox g where the last inequality comes from combing and we have lim xtj inexactps lim xtj lim rtj tj tj tj inexactps lim rtj tj kstj ytsj tj and then lim xtj ytxj lim stj ytsj tj tj ieee transactions on pattern analysis and machine intelligence submitted thus lim xtj lim ytxj tj tj combing and we have lim xtj inexactps lim ytxj lim rtj tj tj tj inexactps lim rtj tj f holds by the assumption then the proximal operator is always exact for using prox r lim stj lim prox g ytsj t t j j prox g f ytxj ytsj f combining with and is a critical point of by using lemma both and are infinite as above either or is infinite a limit point is a critical point of thus the limit points of the sequence xt st are also critical points of proposition proof as the svd of a a is the svd of a can be written as v where u is an orthogonal matrix containing the span of a from the construction of w we have av diag w v v diag w diag w consider the two cases proof let the svd of b be then p b p note that p p pp i span p span thus proposition where the second equality comes from a is of full column rank then diag w u u which contains the span of a assume that a has k columns and its rank is k then diag w udiag diag where contains the first columns of u as a is only of rank diag w again covers the span of a the proposition then follows thus the svd of p b is p as a result we have v finally from u p we have pu pp where the second equality again comes from
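The last proposition rests on the observation that, when P has orthonormal columns, a thin SVD of P B is obtained by multiplying the left singular vectors of B by P, so a spectral proximal operator can be evaluated on the small factor B and then mapped back. A quick numerical check of this fact, using plain soft-thresholding as a stand-in for the shrinkage:

    import numpy as np

    def svt(Z, lam):
        # illustrative spectral prox: soft-threshold the singular values
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        return U @ (np.maximum(s - lam, 0.0)[:, None] * Vt)

    rng = np.random.default_rng(0)
    B = rng.standard_normal((5, 4))
    P, _ = np.linalg.qr(rng.standard_normal((8, 5)))        # orthonormal columns
    print(np.allclose(svt(P @ B, 0.5), P @ svt(B, 0.5)))    # True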
2
when face recognition meets deep learning an extended for face recognition sep xiang xu pengfei dou ha a le ioannis computational biomedicine lab university of houston calhoun rd houston tx usa abstract most of the face recognition works focus on specific modules or demonstrate a research idea this paper presents a face recognition system that is robust to pose variations as large as by leveraging deep learning technology the architecture and the interface of are described and each module is introduced in detail extensive experiments are conducted on the and demonstrating that outperforms existing face recognition systems such as facenet and a commercial software cots by at least on the dataset and on the dataset on average in face identification tasks also achieves performance of on the dataset by comparing the accuracy score from template matching it fills a gap by providing a face recognition system that has compatible results with face recognition systems using deep learning techniques keywords face recognition face recognition deep learning pipeline msc preprint submitted to si on biometrics in the wild image and vision computing september figure depiction of existing pose problem from selected samples distribution of yaw angles are from to in t constrained dataset and in b the wild dataset introduction face recognition is an application in which the computer either classifies human identity according to the face face identification or verifies whether two images belong to the same subject face verification a common face recognition system has two steps enrollment and matching specifically in the enrollment stage features are obtained from a facial image or a set of images to obtain a signature or a template for each subject the enrollment usually has three steps i face detection ii face alignment and iii signature generation in the matching stage these signatures are compared to obtain a distance for the identification or verification problem recently face recognition technology has significantly advanced by the deployment of deep learning technology especially using convolutional neural networks cnn pure face recognition systems have achieved human performance or even better deepface proposed by taigman et al first reported performance on the labeled faces in the wild lfw standard benchmark that was better than human efforts facenet proposed by schroff et al used triplet loss to train a deep neural corresponding author email address ikakadia ioannis kakadiaris network using million labeled faces and obtained a performance of verification accuracy on the same dataset the success of deep learning techniques in face recognition indeed relies on the following four aspects i a large amount of data either from public datasets such as webface and or private datasets ii advanced network architecture such as vgg and resnet iii discriminative learning approaches such as triplet loss center loss range loss sphereface and iv regularization methods such as noisy softmax however face recognition is still not a solved problem in conditions some datasets such as lfw use face detector which is not designed to work in the whole pose distribution from to in an unconstrained scenario especially using surveillance camera there is a plethora of images with large variations in head pose expression illumination and occlusions to overcome these challenges a face model can be applied to assist a face recognition a facial model is intrinsically invariant to pose and illumination to use a face model a model 
should is fitted on the facial images and a projection matrix is estimated with the help of a projection matrix and fitted model it is easy to rotate the face and align the input images from any arbitrary large pose positions to the frontal position for the feature extraction and signature matching in the last few years researchers focused on the face recognition from pure image view and have developed numerous loss function approaches to learn the discriminative features from the different poses a limited number of face recognition systems have been developed using the model to help align images kakadiaris et al proposed a pose and illumination invariant system which frontalized the face image using annotated face model afm hu et al proposed a unified morphable model which has additional pca subspace for perturbation to address the problem mentioned above this paper presents a face recognition system called which significantly improves face recognition performance using the afm and deep learning technology especially in large pose scenarios there is enormous demand for face recognition systems because frontal face recognition can be considered as a solved problem consists of several independent modules face detection landmark detection model reconstruction pose estimation lifting texture signature generation and matching despite face detection methods all other methods are developed in the computational biomedicine lab it provides sufficient tools and interfaces to use different designed in the system the core code is written in efficient which provides bindings to python the system leverages several libraries such as opencv glog gflags pugixml json for modern and caffe in after detecting the face and landmarks from image a model is constructed from a image or several images by estimating the projection matrix the correspondence between the model and image can be computed then a model is used to help frontalize the face the features and occlusion encodings are extracted to represent the face for matching we use cosine similarity to compute the similarity between two signature vectors in summary this paper extends xu et al and make the following contributions a brief survey of recent face recognition pipeline and each module are summarized a face recognition system using deep learning is developed the intrinsic value of a model is explored to frontalize the face and the features are extracted for representation we demonstrate results that a face recognition system exhibits a performance that is comparable to a only fr system our face recognition results outperform the facenet and cots by at least on the dataset and on the dataset on average in addition we demonstrate that can generate template signatures from multiple images and achieve performance of on the dataset the rest of the paper is organized as follows modern face recognition systems are reviewed in sec in sec the architecture of and its functionalities are discussed in sec each module separately is introduced in detail detailed evaluations on the indoor and datasets are reported in sec related work we divide the current existing work into two categories in sec we discuss detailed recent related work for each module in the common face recognition pipeline from an academic view system level papers about the implementation are discussed in sec modules face detection face detection is the first step as well as the most studied topic in the face recognition domain zefeiriou et al presented a comprehensive survey on this topic they 
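Since matching in the system reduces to a cosine similarity between signature vectors, a minimal sketch is included below; the optional masking by the two occlusion encodings is an illustrative assumption and does not reproduce the exact per-patch weighting used by the system.

    import numpy as np

    def match_score(sig_a, sig_b, occ_a=None, occ_b=None):
        # cosine similarity between two signatures, optionally restricted to
        # the entries both occlusion encodings mark as visible
        if occ_a is not None and occ_b is not None:
            visible = (np.asarray(occ_a) > 0) & (np.asarray(occ_b) > 0)
            sig_a, sig_b = np.asarray(sig_a)[visible], np.asarray(sig_b)[visible]
        denom = np.linalg.norm(sig_a) * np.linalg.norm(sig_b)
        return float(np.dot(sig_a, sig_b) / denom) if denom > 0 else 0.0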
divided the approaches into two categories rigid methods and methods in addition to the methods summarized in the approaches of object detection under the regions with a convolutional neural network framework have been well developed some techniques can be directly integrated to face detection li et al used a mean face model and divided the face into ten parts they joined face proposals into a single model the approach proposed by hu and ramanan explored context and resolution of images to the residual networks resnet which was demonstrated to detect a face as small as three pixels despite the face detectors above using proposal and classification technique detectors have also been developed ssd and yolo classify a fixed grid of boxes and learn regression functions to map to the objects simultaneously lin et al address the issue that the performance of detectors are not as strong as detectors because of unbalanced positive and negative samples with focal loss they also trained object detector very recently ssh has been proposed by najibi et al using loss for both classification and regression in the network face alignment face alignment refers to aligning the face image to a specific position usually researchers include landmark detection in this topic jin and tan summarized the categories of popular approaches for this task cascaded regression was a major trend in this topic and classification frameworks tend to be popular recently zhu et al searched for similar shapes from exemplars and regressed the shapes by using sift features and updating the probability of shapes an ensemble of random ferns are used to learn the local binary discriminative features xu and kakadiaris proposed to jointly learn head pose estimation and face alignment tasks in a single framework jfa using global and local cnn features some researchers treat the face alignment task as a classification problem kepler joined cnn features from different layers and captured the response map to localize the landmarks wu et al proposed the godp algorithm to localize landmarks under a fully convolutional network fcn framework by exploring information some recent works use generative adversarial networks gan to frontalize the face huang et al used gan for photorealistic frontal synthesis images but kept identity and details of texture yin et al incorporated a model with gan to frontalize faces for large poses in the wild drgan was proposed by tran et al to generate the frontalized face from face images under different poses they also demonstrated the usage of gan in face recognition signature generation an emerging topic in face recognition research is generating a discriminative representation for a subject when training with millions of face images using deep learning technology many feature descriptors have been proposed recently parkhi et al proposed the descriptor within architectures triplet loss was proposed by schroff et al to train a deep neural network using million labeled faces from google masi et al developed face recognition for unconstrained environments by the resnet and on rendering images in addition to frontalizing the face they also rendered face images to and masi et al addressed the question of whether we need to collect millions of faces for training a face recognition system they argued that we can use synthesized images instead of real images to train the model and still obtain comparable results despite triplet loss many other loss functions have been proposed recently center loss was added by wen et al 
alongside cross entropy loss to obtain more discriminative features for deep face recognition range loss was designed by zhang et al to train deep neural networks with a long tail distribution loss was used in sphereface and demonstrated efficiency in learning the discriminative features marginal loss was proposed to name category core detection alignment representation matching openbr x x x x x x x deepface x x x facenet x x x x x x openface torch x x x x x x x x x x x x x modern active x table comparison of recent existing face recognition pipelines we employ the same definition of modern and active made by klontz et al means that this information is not provided in the paper enhance the discriminative ability by maximizing the distances from large scale training data system opencv and openbr are some well known computer vision and pattern recognition libraries however the eigenface algorithm in opencv is openbr has not been updated since both libraries only support nearly frontal face recognition since the face detector can only detect the frontal face openface is an implementation of facenet by amos et al using python and torch which provides four demos for usage openface applied dlib face detector and landmark detector to do the which is better than openbr there is another official tensorflow implementation of facenet in which the authors use mtcnn to detect and align face which boosts performance speed and detection accuracy to the best of our knowledge there is a limited amount of system papers most papers focus on different or the research of face representations the comparison of recent existing face recognition systems is presented in tab including the research on face representation system design is a face recognition system designed for face recognition moreover this system is suitable for research and can fast images provide baselines plot the results and support further development system requirements is written in clean and efficient which is developed on a linux platform ubuntu system it requires gcc or above for compilation it leverages a list of libraries and tools such as cmake boost opencv gflags glog puxixml json and caffe most of the dependencies are available in the ubuntu repository except caffe therefore to install dependencies it only requires installing caffe manually architecture overview figure illustrates the architecture of which explicitly illustrates modules and functionality the blue blocks are external shared libraries the other three components belong to our system as a base of the software green blocks provide the basic functions the algorithm modules are constructed as apis the applications and guis are the top of the software and are built by combining these apis the users can directly call these applications and obtain the results the advantages of this architecture are that it is simple and with full development of libraries the system can use and other features easily data structures in the basic element is file on the disk all operations or algorithms are based on the files the basic data structure is data which is a hash table with pairs of keys and values both keys and values are in string type unlike openbr to avoid saving giant data in the memory we only keep the file path in the memory gui applications detection file system external core caffe utility matching logging dataset utility io glog caffe gflag modules internal qt opencv apps figure depiction of s architecture in addition to external libraries it includes some other base libraries to 
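The data structure described above is essentially a flat string-to-string map that stores file paths instead of pixel buffers; a small Python sketch of one such record follows, with field names and paths made up for illustration.

    # one enrollment record: keys and values are strings, and only paths are
    # kept in memory; pixels and signatures stay on disk until a module needs them
    record = {
        "path": "gallery/subject01.jpg",
        "tag": "gallery",
        "signature": "output/signatures/subject01.sig",
    }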
process files use cuda manage the data files etc based on these basic libraries high level apis were implemented by calling function from each module based on s sdk it is easier to write various applications for different purposes also we created the guis to demonstrate our configuration we have two approaches to run the first one is defining the configuration file json format which points out the datasets input files output directories involving modules and their model locations and evaluation attribute dataset contains the information of input dataset including the name and path attribute input contains the list of galleries and probes attribute output defines the output directories attribute pipelines defines the modules used in the pipeline the pip command line application only accepts the argument of the configuration file which will parse the configuration file load the models and run defined modules the advantages of this approach are simplicity and flexibility unlike the openbr framework it does not require a detailed understanding of the option or input long arguments in the command line the users only need to change some values in the attributes dataset and input set dataset directory and file to enroll and program pip will generate the output they defined in this configuration file command line interface to make full use of sdk of we created some corresponding applications to run each module all applications accept the file list text or csv file by default which includes tag at the top line a folder or a single image the io system will load the data in the memory and process the data according to the data list the arguments specify the location of the input and where the output should be saved s enrollment is executed and generates signatures to the output directory the path of the signature is recorded in the data by calling the api from io system the list of data will be written to the file default is in format face recognition figure depicts the overview of enrollment in the which contains face detection face alignment face reconstruction pose estimation texture lifting and signature generation face detection a serious problem in openbr openface and even the commercial face recognition software cots is the face detection rate openbr only supports opencv frontal face detector openface also supports dlib face detector however in recent years many face detection algorithms have been developed with deep learning technology to support images to detect the face in poses some modern detectors such as headhunter and ddfd and face detector are supported in our system mathias et al trained headhunter by using templates ddfd face detector is proposed by farfade et al by alexnet and using suppression to support different face detectors for downstream modules we perform the bounding box regression on detected bounding box to reduce the variations of the bounding generate bounding box face detection gallery lift textures from original image according to correspondence reconstruct face model from a single image reconstruction euclidean distance texture lifting refinement landmark detection pose estimation representation localize landmarks based on the response map compute projection matrix residual network to learn features probe cosine similarity feature occlusion encoding images enrollment matching figure depiction of the whole pipeline follow the arrow in the middle of the rounded rectangles represent different modules dashed arrows represent the workflow the enrollment encompasses the 
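A hypothetical configuration file with the four attributes described above (dataset, input, output, pipelines) could be produced as follows; every name and path in this sketch is a placeholder, not a value shipped with the system.

    import json

    config = {
        "dataset":   {"name": "example_dataset", "path": "/data/example_dataset"},
        "input":     {"gallery": ["gallery.csv"], "probe": ["probe.csv"]},
        "output":    {"signatures": "out/signatures", "results": "out/results"},
        "pipelines": ["detection", "landmarks", "reconstruction", "lifting", "signature"],
    }
    with open("config.json", "w") as f:
        json.dump(config, f, indent=2)
    # the configuration file is then passed as the single argument of the
    # pip command-line application described above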
modules listed a face is first detected and then transferred to localize landmarks a model is constructed directly from a image with a bounding box with landmarks and a model a projection matrix can be estimated the frontalized image and occlusion map are generated according to the model and projection matrix the pose robust features are extracted from these images along with occlusion encoding the matching step computes features from visible parts and outputs a similarity score box the first advantage of this approach is that we do not need to or the models for downstream modules after switching the face detector the second advantage is that this approach provides a more robust bounding box for the landmark localization module landmark localization to detect face landmarks we use godp proposed by wu et al which is demonstrated to be robust to pose variations godp landmark detector relies on confidence maps generated by a fully convolutional network a confidence map is generated for each landmark to indicate the possibility of a landmark appearing at a specific location in the original image the prediction is made by simply selecting the location that has the maximum response in the confidence map this strategy helps to suppress false alarms generated by background regions and improves the robustness of the algorithm under large head pose variations compared to other landmark detectors the novel architecture of godp merges the information of the deep and shallow layers based on a new loss function increases the resolution and discrimination of the confidence maps and achieves results on multiple challenging face alignment databases reconstruction of facial shape to reconstruct the facial shape of the input image we integrate into our pipeline the algorithm proposed by dou et al it uses a subspace model to represent a afm as a parameter vector and employs cnn to estimate the optimal parameter values from a single image to train the deep neural network a large set of synthetic and data has been created using the rendering of randomly generated afms to improve the robustness to illumination variation the deep neural network is on real facial images and on the synthetic data compared with existing work it is more efficient due to its architecture which requires a single operation to predict the model parameters moreover it only relies on face detection to localize the facial region of interest on the image as a result compared with approaches it is more robust to the pose variation that can degrade landmark detection accuracy pose estimation given landmarks obtained from landmark detection and landmarks obtained from a model the transformation matrix p can be estimated by solving a problem as follows min p p in our implementation we use the algorithm also known as dls to solve this equation texture lifting facial texture lifting is a technique first proposed by kakadiaris et al which lifts the pixel values from the original images to a uv map given the projection matrix p afm model m and original image i it first generates the geometry image g each pixel of which captures the information of an existing or interpolated vertex on the afm surface with g a set of coordinates referring to the pixels on an original facial image is computed in this way the facial appearance is lifted and represented into a new texture image t a model m and technique are used to estimate the occlusion status for each pixel this process generates an occlusion mask z this module has the following two advantages it 
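The pose estimation step above minimizes, over the projection matrix P, the squared distance between the detected 2D landmarks and the projected 3D model landmarks, roughly min over P of the sum of || x_i - P X_i ||^2. The sketch below fits an affine 2x4 projection by ordinary least squares on homogeneous 3D coordinates; the actual implementation uses a DLS-type solver, so this only illustrates the shape of the problem.

    import numpy as np

    def estimate_projection(X3d, x2d):
        # least-squares affine projection mapping 3D landmarks to 2D landmarks
        n = X3d.shape[0]
        Xh = np.hstack([X3d, np.ones((n, 1))])          # n x 4 homogeneous points
        B, *_ = np.linalg.lstsq(Xh, x2d, rcond=None)    # solves Xh @ B ~ x2d
        return B.T                                      # 2 x 4 projection matrix

    rng = np.random.default_rng(0)
    X3d = rng.standard_normal((68, 3))
    P_true = rng.standard_normal((2, 4))
    x2d = np.hstack([X3d, np.ones((68, 1))]) @ P_true.T
    print(np.allclose(estimate_projection(X3d, x2d), P_true))   # True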
generates the frontal normalized face images which is convenient for feature extraction and comparison second it generates occlusion masks which identify the parts of the face images that are occluded providing the evidence to exclude the face regions signatures to improve the performance of face recognition in matching facial images we integrate into our pipeline the algorithm proposed by dou et al for extracting face signature prfs a face representation with discriminative local facial features and explicit pose and encoding the facial texture t and the mask z are first divided into multiple local patches then on each local patch discriminative features are extracted and selfocclusion encoding is computed the ensemble of local features each enhanced by the encoding forms the face signature we use two types of local features namely the dfd feature proposed by lei et al and a deep feature we trained by following wen et al using center loss to train the dfd feature we use a small subset of the database that consists of frontal facial images of subjects we divide the facial texture into patches and train a dfd feature extractor for each local patch separately to train the deep feature the casia webface dataset is used as training data we divide the facial texture into patches and train a deep neural network for each local patch separately in this paper we call the face signature with the dfd feature prfs and the face signature with the deep feature dprfs experiments in this section we provide a systematical and numerical analysis on two challenging datasets in both constrained and scenarios first the datasets used to verify are introduced then a fair comparison of with vgg face descriptor and a commercial face recognition software cots on these two challenging datasets is conducted for the image matching in the end the template matching experiments on dataset was performed dataset images subjects environment constrained poses illuminations usage face recognition various various unconstrained face recognition table comparison of datasets and both are challenging due to pose variations illumination and resolution datasets was created in a controlled lab environment which allows facerelated research on pose and illumination issues in addition to images it also provides the corresponding model of subjects an interesting fact of this dataset is that pose follows the uniform distribution on three dimensions pitch yaw and roll for each subject a total of images from different views and data are collected at the same time then a model is registered from the data from different poses to generate a specific face model in addition to three illuminations the resolutions are downsampled to and from the original size is another challenging dataset which consists of images in the wild this dataset was proposed by iarpa and is managed by nist this dataset merges images and frames together and provides evaluations on the template level a template contains one or several of a subject according to the protocol it splits galleries and probes into folders in our experiment we modify this protocol to use it for face identification the details will be introduced in sec a summary of these two datasets is presented in tab our system provides dataset utility to parse and load the data from these two datasets yaw pitch table comparison of percentage of different systems on the methods are ordered as cots facenet and the index of poses are ordered from the left to right and from the top to bottom pose is pitch and yaw 
pose is pitch and yaw the frontal face is gallery while the other poses are probes in all cases our system achieves the best performance compared with the baselines to perform a fair comparison with current face recognition systems we choose and cots as baselines the descriptor was developed by parkhi et al the original release contains a caffe model and a matlab example we their model implemented their embedding method on images and fused the features in in our implementation we tried different combinations of descriptor and matching methods we found that embedding features with cosine similarity metrics works the best for the in our experiment we use to represent the embedding features with matching using a cosine similarity metric as in the baseline module provides api to obtain the features the facenet algorithm was proposed by schroff et al we use a personal implemented facenet from github trained using webface and they first use mtcnn to align face and extract dimensions features they provide models that achieves accuracy on the lfw dataset the accuracy is a little bit lower than the original paper but still can be https sidered cots is a commercial software developed for scalable face recognition it provides sdk and applications which can be used directly in our experiments we used version to compare with our system this version is considered to be a significant boost compared with previous versions in our experiment we report the performance using both prfs and dprfs features the summary of software configuration is reported in tab we compute the identity accuracy from successfully enrolled signatures system features dims metric embedding cosine cots facenet cosine prfs cosine dprfs cosine table comparison of systems configuration used in our experiments face recognition in this experiment we chose a configuration from the dataset named this is a subset in which all images are to the size in the neutral illumination this subset was chosen to demonstrate that our system is robust to different poses therefore we use this configuration to exclude the other variations such as illumination expressions etc but only keep the pose variations we treated the frontal face images pose as gallery and images from the other poses poses as probes independently both the gallery and the probe contain images each of which belongs to a subject the face identification experiment was performed using pairs of sigsets table depicts the comparison of accuracy among poses except pose which is used for gallery which indicates that is robust to the different poses compared with other systems we observed the and cots algorithms can not generalize all pose distributions facenet works better than and cots on the extreme poses one possible answer is that this model is trained from the most available datasets using and webface which provide more extreme pose cases however in cases such as pose and pose in tab the performance of only face recognition pipelines still has significant room for improvement on the other hand with the help of the model our system keeps the consistent and symmetric performance among the different poses even in the cases with yaw or our system can tolerate the pose variations and achieves around identity accuracy with dprfs features and around identity accuracy with prfs features on average face recognition however in a case a face recognition system does not suffer only from pose variations in this experiment we want to explore whether our system is can also be used in an environment we 
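For reference, the rank-1 identification accuracy reported in these experiments can be computed from gallery and probe signatures as in the short sketch below (cosine similarity, one image per gallery subject); this is a generic evaluation helper, not code from the system.

    import numpy as np

    def rank1_accuracy(gallery, probe, gallery_ids, probe_ids):
        # rank-1 identification: a probe is counted as correct when its most
        # similar gallery signature (cosine similarity) has the same subject id
        G = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        Q = probe / np.linalg.norm(probe, axis=1, keepdims=True)
        best = np.argmax(Q @ G.T, axis=1)
        return float(np.mean(np.asarray(gallery_ids)[best] == np.asarray(probe_ids)))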
designed a different protocol for face identification experiments based on the original splits unlike the original templatelevel comparison we conducted an image pairs comparison first we removed some samples in the splits to make comparison pairs then we cropped the face according to the annotations image thumbnails with resolution below were while those with resolution larger than were herein we do not compare with facenet since there are overlapping samples between the training set and dataset method avg cots table comparison of percentage of different systems on splits of achieves the best performance with dprfs features on each split table depicts the identification rate with different methods on dataset our system with dprfs reports better performance compared with and cots also our system results are consistent on splits which indicates that our system is robust why do prfs features in our system not perform well on the dataset one possible answer is that prfs features are trained on the frgc dataset which has notably fewer variations of pose illumination and resolution problems the current prfs features can not generalize on these images with large variances the corresponding solution is retraining the prfs feature model on the dataset third cots performs well on this challenging dataset since it is designed for the real scenario by comparing the experiment in sec we are left with the question why does our system perform only slightly better than baselines we argue that in scenarios there are complicated combinations of pose variations illumination expression and occlusions a robust face recognition system should take all cases into consideration in addition cots dropped hard samples and enrolled fewer signatures than ours which would boost the performance to some extent method openbr wang et al dcnn pam table comparison of percentage of different systems on splits of achieves the best performance with dprfs features on each split we extended to enroll the several images for a subject to generate a template the template is an average of signatures computed by generating a unified model from several images here we use the results from to do the comparison table lists the average identification accuracy for each method achieved method avg dprfs table detailed percentage of different systems on splits of the best performance the detailed comparison of identification accuracy with is summarized in tab for splits in the dataset memory usage and running time we conducted the analysis of in terms of both memory and time cafferelated implementation runs on gpu gtx titan x cots makes full use of eight cpus table summarizes the system for different systems some modules in our implementation or external libraries run on cpu such as face detection pose estimation and prfs feature extraction therefore the time used by prfs features takes s more than dprfs features due to loading several large models dprfs requires more memory the user can define the suitable feature extractors according to their needs since we optimized memory for dprfs it shares the memory block in the gpu the memory cost is reduced to the same level as prfs we use dprfs by default system gpu memory gb time s full cots no prfs partial dprfs partial table comparison of system partial in gpu column denotes part of the code does not support gpu acceleration time means the average enrollment time for a single image conclusion in this paper a face recognition system that is robust to pose variations as large as using deep learning 
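Since a template is described above as an average of the per-image signatures of a subject, a minimal sketch of template enrollment is given below; the construction of a unified 3D model from several images is not reproduced here.

    import numpy as np

    def template_signature(signatures):
        # average the per-image signatures of one subject and re-normalize
        t = np.mean(np.asarray(signatures, dtype=float), axis=0)
        return t / np.linalg.norm(t)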
technology has been presented an overview of the architecture interface and each module in are introduced i detailed extensive experiments are conducted on and to demonstrate that is robust to the pose variations and it outperforms existing face recognition systems such as vgg face descriptor facenet and a commercial face recognition software by at least on dataset and on dataset in average and the system achieves the performance of in template matching on dataset acknowledgment this material is based upon work supported by the department of homeland security under grant award number this grant is awarded to the borders trade and immigration bti institute a dhs center of excellence led by the university of houston and includes support for the project image and video person identification in an operational environment awarded to the university of houston the views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies either expressed or implied of the department of homeland security references references le i kakadiaris a dataset for better understanding face recognition across pose and illumination variation in proc ieee international conference on computer vision workshops venice italy klare klein taborsky blanton cheney allen grother mah burge jain pushing the frontiers of unconstrained face detection and recognition iarpa janus benchmark a in proc ieee conference on computer vision and pattern recognition boston massachusetts pp taigman yang ranzato wolf deepface closing the gap to performance in face verification in proc ieee conference on computer vision and pattern recognition columbus ohio pp huang mattar berg labeled faces in the wild a database for studying face recognition in unconstrained environments in proc workshop on faces in images detection alignment and recognition marseille france schroff kalenichenko philbin facenet a unified embedding for face recognition and clustering in proc ieee conference on computer vision and pattern recognition boston massachusetts pp yi lei liao li learning face representation from scratch arxiv preprint guo zhang hu x he gao a dataset and benchmark for face recognition in proc european conference on computer vision amsterdam netherlands pp parkhi vedaldi zisserman deep face recognition in proc british machine vision conference swansea uk pp he zhang ren j sun identity mappings in deep residual networks in proc european conference on computer vision amsterdam the netherlands pp wen zhang li qiao a discriminative feature learning approach for deep face recognition in proc european conference on computer vision amsterdam netherlands pp zhang fang wen li qiao range loss for deep face recognition with arxiv preprint liu wen yu li raj song sphereface deep hypersphere embedding for face recognition in proc ieee conference on computer vision and pattern recognition honolulu hawaii pp chen deng j du noisy softmax improving the generalization ability of dcnn via postponing the early softmax saturation in proc ieee conference on computer vision and pattern recognition honolulu hawaii pp i kakadiaris toderici evangelopoulos passalis chu zhao shah theoharis face recognition with normalization computer visiona and image understanding hu yan chan deng christmas kittler robertson face recognition using a unified morphable model in proc european conference on computer vision amsterdam netherlands ding tao a comprehensive survey on face recognition acm transactions on 
intelligent systems and technology opencv http glog https gflags https pugixml https json for modern https jia shelhamer donahue karayev j long girshick guadarrama darrell caffe convolutional architecture for fast feature embedding in proc international conference on multimedia orlando florida usa pp xu le dou wu i kakadiaris evaluation of pose invariant face recognition system in proc international joint conference on biometrics denver colorado zafeiriou zhang zhang a survey on face detection in the wild past present and future computer vision and image understanding girshick donahue darrell malik rich feature hierarchies for accurate object detection and semantic segmentation in proc ieee conference on computer vision and pattern recognition columbus oh pp jiang face detection with the faster in proc ieee international conference on automatic face gesture recognition washington dc pp li b sun wu wang face detection with integration of a convnet and a model in proc european conference on computer vision amsterdam netherlands pp hu ramanan finding tiny faces in proc ieee conference on computer vision and pattern recognition honolulu hawaii pp he zhang ren j sun deep residual learning for image recognition in proc computer vision and pattern recognition las vegas nv pp liu anguelov erhan szegedy reed fu berg ssd single shot multibox detector in proc european conference on computer vision amsterdam netherlands pp redmon farhadi better faster stronger in proc ieee conference on computer vision and pattern recognition honolulu hawaii pp lin goyal girshick he dollar focal loss for dense object detection arxiv preprint najibi samangouei chellappa davis ssh single stage headless face detector arxiv preprint jin tan face alignment a survey computer vision and image understanding zhu li loy tang face alignment by shape searching in proc ieee conference on computer vision and pattern recognition boston ma pp xu shah i kakadiaris face alignment via an ensemble of random ferns in proc ieee international conference on identity security and behavior analysis sendai japan xu i kakadiaris joint head pose estimation and face alignment framework using global and local cnn features in proc ieee conference on automatic face gesture recognition washington dc pp kumar alavi chellappa kepler keypoint and pose estimation of unconstrained faces by learning efficient regressors in proc ieee conference on automatic face gesture recognition washington dc pp wu shah i kakadiaris godp globally optimized dual pathway system for facial landmark localization image and vision computing under review huang zhang li he beyond face rotation global and local perception gan for photorealistic and identity preserving frontal view synthesis arxiv preprint yin yu sohn liu chandraker towards face frontalization in the wild arxiv preprint tran yin liu disentangled representation learning gan for face recognition in proc ieee conference on computer vision and pattern recognition honolulu hawaii pp masi rawls medioni natarajan face recognition in the wild in proc ieee conference on computer vision and pattern recognition las vegas nv pp masi trn hassner leksut medioni do we really need to collect millions of faces for effective face recognition in proc european conference on computer vision amsterdam the netherlands pp deng zhou zafeiriou marginal loss for deep face recognition in proc ieee conference on computer vision and pattern recognition honolulu hawaii pp klontz klare klum jain burge open source biometric recognition in proc 
ieee conference on biometrics theory applications and systems washington dc y sun liang wang tang face recognition with very deep neural networks arxiv preprint amos ludwiczuk mahadev openface a face recognition library with mobile applications tech cmu school of computer science pittsburgh pa zhang zhang li qiao joint face detection and alignment using multitask cascaded convolutional networks ieee signal processing letters king a machine learning toolkit journal of machine learning research farfade saberian li face detection using deep convolutional neural networks in proc acm on international conference on multimedia retrieval shanghai china pp mathias benenson pedersoli gool face detection without bells and whistles in proc european conference on computer vision zurich switzerland pp krizhevsky sutskever hinton imagenet classification with deep convolutional neural networks in proc neural information processing systems lake tahoe nv pp dou shah i kakadiaris face reconstruction with deep neural networks in proc ieee conference on computer vision and pattern recognition honolulu hawaii pp dou zhang wu shah i kakadiaris face signature for face recognition in proc international conference on biometrics theory applications and systems arlington va pp lei pietikainen li learning discriminant face descriptor ieee transactions on pattern analysis and machine intelligence wang otto jain face search at scale ieee transactions on pattern analysis and machine intelligence chen zheng patel chellappa unconstrained face verification using deep cnn features in proc winter conference on applications of computer vision wacv lake placid ny usa
1
nov on driss a and mohammed el b department of mathematics laboratory of analysis algebra and decision support faculty of sciences mohammed v university in rabat rabat morocco driss bennis abstract recentely anderson and dumitrescu s has attracted the interest of several authors in this paper we introduce the notions of sfinitely presented modules and then of rings which are of finitely presented modules and coherent rings respectively among other results we give an of the classical chase s characterization of coherent rings we end the paper with a brief discussion on other of finitely presented modules and coherent rings we prove that these last can be characterized in terms of localization key words presented modules rings mathematics subject classification introduction throughout this paper all rings are commutative with identity in particular r denotes such a ring and all modules are unitary s will be a multiplicative subset of we use i a for an ideal i and an element a r to denote the quotient ideal x r xa i according to an r module m is called if there exists a finitely generated submodule n of m such that sm n for some s also from an bennis and el hajoui m is called if each submodule of m is in particular r is said to be an ring if it is as an that is every ideal of r is it is clear that every noetherian ring is the notions of modules and of rings were introduced by anderson and dumitrescu motivated by the works done in and they succeeded to generalize several results on noetherian rings including the classical cohen s result and hilbert basis theorem under an additional condition since then the has attracted the interest of several authors see for instance recentely motivated by the work of anderson and dumitrescu of some classical notions have been introduced see for instance in this paper we are inerested in of finitely presented modules and coherent rings actually there are two possibilities which could be considered as of finitely presented modules which lead to two of coherent rings we prove that the of coherent rings defined by one of them has a characterization similar to the classical one given by chase for coherent rings theorem this is why we adopt this notion as the suitable of finitely presented modules however it seems not evident to characterize this notion in terms of localization we prove that indeed it is the other which is briefly studied at the end of the paper has a characterization in terms of localization the organization of the paper is as follows in section we introduce and study an of finitely presented modules we call it an presented module see definition then we study the behavior of in short exact sequences see theorem we end section with some change of rings results see proposition and corollary section is devoted to the of coherent rings which are called rings see definition our main result represents the of chase s result theorem see theorem also an of coherent modules is introduced see definition and proposition we end the paper with a short section which presents the other sversion of see definitions and we prove that these notions can be characterized in terms of localization see proposition and theorem we end the paper with results which relate with the notion of see propositions and and corollary on presented modules in this section we introduce and investigate an of the classical finitely presented modules other version is discuted in section definition an m is called presented if there exists an exact sequence of k f m where k is and f is a 
finitely generated free clearly every finitely presented module is presented however the converse does not hold in general for that it suffices to note that when r is a nonnoetherian ring then there is an ideal i which is not finitely generated then the is presented but it is not finitely presented also it is evident that every presented module is finitely generated to give an example of a finitely generated module which is not presented it suffices to consider an ideal i which is not and then use proposition given hereinafter one could remark that in definition we assume that the free module f is finitely generated rather than in fact because of the following result both of notions coincide for free modules proposition every free is finitely generated proof let m l rei be an free where ei is a basis of m and i is an index set then there exist a finitely generated n and an s s such that sm n m then n rmn for some mn m n is an integer for every k n there exists a finite subset jk of i n s p jk then the finitely generated ej let j such that mk l rej contains n we show that m m deny there exists an p ej for some such that m but n m and so this is impossible since ei is a basis remark similarly to the proof of proposition above one can prove that any module can not be decomposed into an infinite direct sum of modules this shows that any projective module is countably bennis and el hajoui generated by kaplansky theorem then naturaly one would ask of the existence of projective module which is not finitely generated for this consider q the boolean ring r ki where ki is the field of two elements for every i consider the projective ideal m l ki and the element e see example then s e is a multiplicative subset of since em is a finitely generated m is the desired example of projective module which is not finitely generated however determining rings over which every projective module is finitely generated could be of interest it is worth noting that rings over which every projective module is a direct sum of finitely generated modules satisfy this condition these rings were investigated in next result shows that as in the classical case lemma an presented module does not depend on one specific short exact sequence of the form given in definition proposition an m is presented if and only of m is finitely f generated and for every surjective homomorphism of f m where f is a finitely generated free ker f is proof obvious since m is presented there exists an exact sequence of k f m where k is and f is finitely generated and free then by schanuel s lemma k f ker f f then ker f is the following result represnts the behavior of in short exact sequences it is a generalization of theorem for modules with at most note that one can give an of the classical see page however here we prefer to focus on the notion of presented modules and a discussion on the suitable of the could be the subject of a further work f g theorem let m m m be an exact sequence of rmodules the following assertions hold if m and m are then m is in particular every finite direct sum of modules is if m and m are presented then m is presented in particular every finite direct sum of presented modules is sfinitely presented on if m is then m is in particular a direct summand of an module is if m is and m is presented then m is presented if m is presented and m is then m is proof since m is there exist a finitely generated submodule n of n p m and an s s such that sm n let n rei for some ei m and n since g is surjective there exists an mi m 
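Restated in standard notation, the two notions used throughout this section read as follows; this is a reformulation of the definitions given above, together with the Schanuel argument used in the proposition.

    An $R$-module $M$ is $S$-finite if there exist a finitely generated submodule
    $N \subseteq M$ and $s \in S$ such that $sM \subseteq N$; it is $S$-finitely
    presented if there is an exact sequence
    \[
       0 \longrightarrow K \longrightarrow F \longrightarrow M \longrightarrow 0
    \]
    with $K$ $S$-finite and $F$ a finitely generated free $R$-module.
    Independence of the chosen presentation follows from Schanuel's lemma: for a
    second surjection $f \colon F' \twoheadrightarrow M$ with $F'$ finitely
    generated free,
    \[
       K \oplus F' \;\cong\; \ker(f) \oplus F ,
    \]
    so $\ker(f)$ is $S$-finite whenever $K$ is.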
such that g mi ei for every i n let x m so sx n g n then g sx g n n and n n n n p p p p so g sx ei g mi g mi then g sx mi thus sx n p mi ker g imf which is so there exist a finitely generated submodule n of imf and an s such that imf n then sx n and so sm is a submodule of n n p n p rmi rmi which is a finitely generated submodule of m therefore m is since m and m are presented there exist two shorts exacts sequences k f m and k f m with k and k are and f and f are finitely generated free then by horseshoe lemma we get the following diagram o o m o o f f o f o o o k o by the first assertion k is therefore m is presented obvious bennis and el hajoui since m is presented there exists a short exact sequence of k f m where k is and f is a finitely generated free consider the following pullback diagram o o m o o m ko ko by d is therefore m is presented since m is presented there exists a short exact sequence k f m where k is and f is a finitely generated free consider the following pullback diagram o m o o o ko ko since f is free d m f and so d is since m and f are therefore m is as a simple consequence we get the following result which extends corollary corollary let and be two presented submodules of an rmodule then is presented if only if is on proof use the short exact sequence of we end this section with the following change of rings results the following result extends theorem proposition let a and b be rings let a b be a ring homomorphism making b a finitely generated and let v be a multiplicative subset of a such that v every which is v presented as an it is v presented as a proof let m be a which is v presented as an then m is a finitely generated then m is a finitely generated thus there is an exact sequence of k b n m where n is an integer this sequence is also an exact sequence of since m is an v presented and b n is a finitely generated since b is a finitely generated k is an v and so k is a v therefore m is a v presented the following result extends theorem proposition let i be an ideal of r and let m be an assume that i s so that t s i s s is a multiplicative subset of then m is an if and only if m is a t if m is an presented then m is a t presented the converse holds when i is an ideal of proof easy use the canonical ring surjection r and proposition conversely if m is a t presented then there is an exact sequence of and then of k n m where n is an integer and k is a t by the first assertion k is also an and since i is an ideal of r n is an presented therefore by theorem m is an presented bennis and el hajoui rings before giving the definition of rings we give following the calssical case the definition of modules definition an m is said to be if it is finitely generated and every finitely generated submodule of m is presented clearly every coherent module is however using proposition below one can show that for an ideal i of r which is not finitely generated the is but it is not coherent the reason of why we consider finitely generated submodules rather than submodules is explained in assertion of remark the following result studies the behavior of of modules in short exact sequences it generalizes theorem f g proposition let p n m be an exact sequence of rmodules the following assertions hold if p is and n is then m is if m and p are then so is n in particular every finite direct sum of modules is if n is and p is finitely generated then p is proof it is clear that m is finitely generated let m be a finitely generated submodule of m then f p g m so there exist two shorts exacts 
sequences of k rn p and k rm m where n and m on are two positive integers then by horseshoe lemma we get the following diagram o g m o o rn o o rm o o k o o since g m is a finitely generated submodule of the module n g m is presented then using theorem k is and so k is therefore m is presented clearly n is finitely generated let n be a finitely generated submodule of f g n consider the exact sequence ker n g n then g n is a finitely generated submodule of the module m then g n is presented then ker is finitely generated by theorem and since p is ker is presented therefore by of theorem n is presented evident since a submodule of p can be seen as a submodule of n now we set the definiton of rings definition a ring r is called if it is as an that is if every finitely generated ideal of r is presented remark note that every ring is indeed this follows from the fact that when r is every finitely generated free is see the discussion before lemma next in example we give an example of an ring which is not clearly every coherent ring is the converse is not true in general as an example of an ring which is not coherent we consider the trivial extension a z n and the multiplicative set bennis and el hajoui v n n n since m is not finitely generated t is not coherent now for every ideal i of a i is finitely generated in fact i where j a z n a b i since j is an ideal of z j az for some element a z then i a this shows that a is v and so v it is easy to show that if m is an presented then ms is a finitely presented rs thus if r is a ring rs is a coherent ring however it seems not evident to give a condition so that the converse holds as done for rings see proposition f in section we give another of coherent rings which can be characterized in terms of localization one would propose for an of coherent rings the following condition every ideal of r is presented however if r satisfies the condition then in particular every ideal of r is finitely generated so every ideal of r is finitely presented in particular r is coherent this means that the notion of rings with the condition can not be considered as an of the classical coherence nevertheless these rings could be of particular interest as a new class of rings between the class of coherent rings and the class of noetherian rings to give an example of a coherent ring which does not satisfy the condition q one could consider the boolean ring b ki where ki is the field of two elements for every i n and the multiplicative subset v e of b l where e b indeed the ideal b ki is v but not finitely generated also note that the following condition every ideal of r is finitely generated could be of interest indeed clearly one can show the following equivalences a a ring r satisfies the condition if and only if r is coherent and satisfies the condition b a ring r is coherent if and only if r is and satisfies the condition c a ring r is noetherian if and only if r is and satisfies the condition to give an example of an ring which is not we use the following result on proposition let r s n y n y ri be a direct product of rings ri n n and si be a cartesian product of multiplicative sets si of ri then r is coherent if and only if ri is si for every i n proof the result is proved using standard arguments example consider the ring a given in remark let b be a coherent ring which has a multiplicative set w such that bv is not noetherian then a b is v w by proposition but it is not v w by proposition f now we give our main result it is the of the classical chase s result 
theorem we mimic the proof of theorem so we use the following lemma lemma lemma let r be a ring let i un be a finitely generated ideal of r n n and let a set j i ra let f be f a free module on generators and let k f j be an exact sequence with f xi ui i r and f a then there exists n l g an exact sequence k f k i a where f rxi theorem the following assertions are equivalent r is every presented is every finitely generated of a free is presented i a is an ideal of r for every finitely generated ideal i of r and a a is an ideal of r for every a r and the intersection of two finitely generated ideals of r is an ideal of bennis and el hajoui proof the proof is similar to that of theorem see also theorem however for the sake of completeness we give its proof here follows from proposition obvious let n be a finitely generated submodule of a free f hence there exists a finitely generated free submodule f of f containing n then by f is therefore n is presented trivial let i be a finitely generated ideal of then i is presented consider j i ra where a then j is finitely generated and so it is presented thus there exists an exact sequence k j where k is by lemma there exists a surjective homomorphism g k i a which shows that i a is this is proved by induction on n the number of generators of a finitely generated ideal i of for n use assertion and the exact sequence i r i for n use assertion and lemma since r is proposition applied on the exact sequence a r ar shows that the ideal a is now let i and j be two finitely generated ideals of then i j is finitely generated and so sfinitely presented then applying theorem on the short the exact sequence i j i j i j we get that i j is this is proved by induction on the number of generators of a finitely generated ideal i of r using the two short exact sequences used in it is worth noting that in chase s paper coherent rings were characterized using the notion of flat modules then naturaly one can ask of an of flatness that characterizes rings similarly to the classical case we leave it as an interesting open question we end this section with some change of rings results the following results extends theorem proposition let i be an ideal of assume that i s so that t s i s s is a multiplicative subset of then an m is t if only if it is an in particular the following assertions hold if r is an ring then is a t ring if is a t ring and i is an then r is an ring on proof use proposition next result generalizes theorem it studies the transfer of under localizations lemma let f a b be a ring homomorphism such that b is a flat amodule and let v be a multiplicative set of a if an m is v a v presented then m b is an f v f v presented proof follows using the fact that flatness preserves injectivity proposition if r is then rt is an st ring for every multiplicative set t of proof let j be a finitely generated ideal of rt then there is a finitely generated ideal i of r such that j it since r is i is presented then using lemma the ideal j i rt of rt is st presented as desired other of finiteness in this short section we present another of and we prove that this notion can be characterized in terms of localization the following definition gives another of finitely presented modules definition an r module m is called presented if there exists a finitely presented submodule n of m such that sm n m for some s remark clearly every finitely presented module is presented however the converse does not hold in general for that it suffices to consider a coherent ring which has an 
module which is not finitely generated an example of a such ring is given in remark the inclusions in definition complicate the study of the behavior of of presented modules in short exact sequences as done in theorem this is why we think that presented modules will be mostly used by commutative rings theorists rather than researchers interested in notions of homological algebra this is the reason behind the use of the letter c in presented bennis and el hajoui it seems that there is not any relation between the two notions of presented and presented modules nevertheless we can deduce that in a ring defined below every presented ideal is presented it is that if for an m ms is a finitely presented rs then there is a finitely presented n such that ms ns nevertheless what doest not make things work with respect to localization for presented modules is the fact that the module n which satisfies ms ns is not necessarily a submodule of m for presented modules we give the following result proposition if an m is presented then ms is a finitely presented rs a finitely generated m is presented if and only if there is a finitely presented submodule n of m such that ms ns proof obvious clear since m is finitely and ms ns there is an s s such that sm n as desired now we define the other of the classical coherence of rings definition a ring r is called if every ideal of r is sfinitely presented clearly every coherent ring is the converse is not true in general the ring given in example can be used as an example of a ring which is not coherent also it is evident that every ring is as done in example we use the following result to give an example of a ring which is not proposition let r s n y n y ri be a direct product of rings ri n n and si be a cartesian product of multiplicative sets si of ri then r is if and only if ri is for every i n proof the result is proved using standard arguments on example consider a ring a which is not coherent let b be a coherent ring which has a multiplicative set w such that bv is not noetherian then is by proposition but it is not v by proposition f the follwoing result characterizes rings can be characterized in terms of localization theorem the following assertions are equivalent r is every finitely generated ideal of r is presented for every finitely generated ideal i of r there is a finitely presented ideal j i such that is js in particular rs is a coherent ring proof straightforward let i be an ideal of then there exist an s s and a finitely generated ideal j of r such that si j i by assertion there is a finitely presented ideal k j such that ks js then there is a t s such that tj therefore tsi k i as desired we end the paper with a result which relates rings with the notion of in the notion of is used to characterize rings assume that r is an integral domain let sats i denotes the of an ideal i of r that is sats i irs in proposition b it is proved that if sats i is then i is and sats i i s for some s this fact was used to prove that a ring r is if and only if rs is noetherian and for every finitely generated ideal of r sats i i s for some s s see proposition f the following result shows that the implication of proposition b is in fact an equivalence in more general context consider n m an inclusion of let f m ms be the canonical homomorphism denote by f n rs the rs of ms generated by f n we set sats m n f f n rs and n m s m m sm n proposition let n be an of an m sats m n is sfinite if and only if n is and sats m n n m s for some s bennis and el hajoui proof set k 
sats m n since k is there exist an s s and a finitely generated j such that sk j thus sn sk j we can write j rxn for some xn j for each xi there n q exists a ti s such that ti xi n we set t ti then tsn tsk tj n then n is on the other hand since sk tj n k k n m s conversely let x n m s then sx n so x k as desired since n is there exist a t s and a finitely generated j such that tn j n on the other hand since k n s for some s s sk n consequently tsk tn j n therefore k is the following result is proved similarly to the proof of proposition however to guarantee the preservation of finitely presented modules when multiplying by elements of s we assume that s does not contain any of proposition assume that every element of s is regular let n be an rsubmodule of an m sats m n is presented if and only if n is presented and sats m n n m s for some s corollary assume that every element of s is regular the following assertions are equivalent for every finitely generated ideal i of r sats i is presented r is and for every finitely generated ideal i of r sats i i s for some s acknowledgement a part of this work was presented by the second author at the scientific day of algebra ja graaf held in faculty of sciences of rabat may the authors would like to thank professor zine el abidine abdelali for his helpful comments during the preparation of this paper references anderson and dumitrescu rings comm algebra anderson kwak and zafrullah agreeable domains comm algebra on u chase direct products of modules trans amer math soc costa parameterizing families of rings comm algebra glaz commutative coherent rings lecture notes in berlin hamed and hizem modules satisfying the property and comm algebra hamed and hizem rings of the forms a x and a x comm algebra hamann houston and johnson properties of uppers to zero in r x com alg kaplansky projective modules ann math kim kim and lim on mori domains algebra lim and oh properties on amalgamated algebras along an ideal j pure appl algebra lim and oh properties of composite ring extensions comm algebra mcgovern puninski and rothmaler when every projective module is a direct sum of finitely generated modules algebra zhongkui on rings arch math brno
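For readability, here is a brief recap, in LaTeX, of the central notions used in the article above. This is an assumed reconstruction from the surrounding text and from Anderson and Dumitrescu's standard terminology, not a verbatim quotation; the numbering and exact hypotheses may differ from the original.

Let $R$ be a commutative ring and $S \subseteq R$ a multiplicative subset.
(i) An $R$-module $M$ is $S$-finite if $sM \subseteq N$ for some $s \in S$ and some finitely generated submodule $N \subseteq M$.
(ii) $M$ is $S$-finitely presented if there is an exact sequence $0 \to K \to F \to M \to 0$ with $F$ a finitely generated free module and $K$ an $S$-finite module.
(iii) $R$ is an $S$-coherent ring if every finitely generated ideal of $R$ is $S$-finitely presented.
(iv) The Chase-type characterization proved above then includes, among other equivalent conditions: $R$ is $S$-coherent if and only if $(I : a)$ is an $S$-finite ideal of $R$ for every finitely generated ideal $I$ and every $a \in R$, if and only if $(0 : a)$ is $S$-finite for every $a \in R$ and $I \cap J$ is $S$-finite for all finitely generated ideals $I, J$ of $R$.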
0
fast construction of efficient composite likelihood sep equations zhendong huang and davide ferrari school of mathematics and statistics the university of melbourne abstract growth in both size and complexity of modern data challenges the applicability of traditional inference composite likelihood cl methods address the difficulties related to model selection and computational intractability of the full likelihood by combining a number of likelihood objects into a single objective function used for inference this paper introduces a procedure to combine partial likelihood objects from a large set of feasible candidates and simultaneously carry out parameter estimation the new method constructs estimating equations balancing statistical efficiency and computing cost by minimizing an approximate distance from the full likelihood score subject to a penalty representing the available computing resources this results in truncated cl equations containing only the most informative partial likelihood score terms an asymptotic theory within a framework where both sample size and data dimension grow is developed and properties are illustrated through numerical examples keywords composite likelihood estimation likelihood truncation corresponding author davide ferrari school of mathematics and statistics the university of bourne parkville vic australia dferrari introduction since the idea of likelihood was fully developed by fisher inference has played a role of paramount importance in statistics the complexity of modern data however poses nontrivial challenges to traditional likelihood methods one issue is related to model selection since the full likelihood function can be difficult or impossible to specify in complex multivariate problems another difficulty concerns computing and the necessity to obtain inferences quickly these challenges have motivated the development of composite likelihood cl methods which avoid intractable full likelihoods by compounding a set of likelihood objects besag pioneered cl inference in the context of spatial data lindsay developed cl inference in its generality due to its flexible framework and established theory the cl framework has become a popular tool in many areas of applied statistics see varin et al for an overview of cl inference and common applications consider n independent observations on the d random vector x xd t with pdf in the parametric family f x x x rp where denotes the true parameter in this paper we are mainly concerned with large data sets where both the data dimension d and the sample size n are large given observap tions x x n on x we write efn g g x i for the empirical mean of p the function g where fn x i x i x is the empirical cdf and use e g to denote its expected value the operator denotes differentiation with respect to in the cl setting the maximum likelihood score um l log f and the associated estimating equations efn um l are intractable due to difficulties in computing or specifying the full density f suppose however that one can obtain m tractable pdfs fm sm for sm of x where each sj has dimension much smaller than for example could represent a single element of x like a variable pair like or a conditional like typically the total number of m grows quickly with d for instance taking all variable pairs in x results in m d d candidate the specific choice for the set of pdfs fj j m is sometimes referred to as cl design lindsay et and is typically specified by the practitioner for simplicity here the cl design is treated as given and we 
assume fm as it is often the case in applications b we focus on the maximum composite likelihood estimator mcle w defined as the solution to the cl estimating equations efn u w efn wm um where uj log fj is the jth partial score score associated with the jth subset sj of x here w rm is a given vector of coefficients to be determined which we refer to as composition rule in addition to computational advantages compared to mle and flexible modeling the mcle enjoys properties analogous to those of the maximum likelihood estimator mle since the partial scores commonly define unbiased estimating equations euj at for all j m the cl b score u w in is also unbiased a property leading to consistency of w unfortunately the mcle does not have the same properties as the mle since the asymptotic b variance of w is generally different from the inverse of fisher information l with the two coinciding only in special families of models the choice of the composition rule w is crucial in determining both efficiency and computb ing cost associated with w established theory of unbiased estimating equations prescribes b to find w so to minimize the asymptotic variance of w heyde chapter given by the inverse of the p p godambe information matrix g w e w var u w e w although theoretically appealing this is a notoriously difficult task due the instability of common estimators of the term var u w in g w lindsay et on the other hand the common practice of retaining all terms in by choosing fixed wj for all j wj j is undesirable from both computational and statistical efficiency perspectives especially when the partial scores uj exhibit pronounced correlation cox and reid discuss the detrimental effect caused by the presence of b many correlated scores on the variance of w when n is small compared to m in pairwise likelihood estimation in the most serious case where the correlation between scores is overwhelming keeping all the terms in may lead to lack of consistency for the implied b mcle w motivated by the above considerations we introduce a new method called sparse composite likelihood estimation and selection scle consisting of two main steps a truncation step and an estimation step in the the composition rule w is obtained by minimizing an approximate distance between the unknown full likelihood score um l and the cl score u w subject to a constraint on this step may be viewed as maximizing statistical accuracy for given afforded computing alternatively it may be interpreted as minimizing the computing cost for given level of statistical efficiency due to the geometry of the the resulting composition rule say w b contains a number of elements see lemma while the most useful terms for improving mcle s statistical accuracy are retained the noisy contributing little or no improvement are dropped in the we solve the estimating equations with w w b and find b w the final estimator b compared to traditional cl estimation the main advantage of our approach is to reduce the computational burden while retaining relatively high efficiency in large data sets the reduced number of terms in the estimating equations translates into fast computing and enhanced stability for the final estimator at a relatively small cost in terms of statistical efficiency the remainder of this paper is organized as follows in section we describe the main methodology for simultaneous likelihood truncation and parameter estimation in section we study the properties of the truncated composition rule and for the implied estimator within a framework 
where both the sample size n and the data dimension d are allowed to diverge section illustrates the properties of our methodology in the context of estimation of location and scale for multivariate normal models in section we study the between computational and statistical efficiency in finite samples through numerical simulations section concludes with final remarks technical lemmas used in our main results are deferred to the appendix main methodology throughout the paper we consider unbiased partial scores uj j m satisfying euj for all j m when and assume that is the unique solution for all the equations in the approach described in this section is applicable to problems with arbitrary sample size n and data dimension d but we are mainly concerned with the situation where the data dimension d and number of available objects m is large compared to the sample size although we focus on partial scores for concreteness our methodology and the properties in section remain essentially unchanged if uj is any arbitrary unbiased equation for instance when is a location parameter a more appropriate choice in the presence of outliers may be the partial score uj sj where z if z k z z if k and z k if z k with k another suitable choice in the same setting is the estimating equation of ferrari and yang defined by uj logq fj sj where logq z log u if q and logq z z q if q in the rest of the paper we use u to denote the p m matrix with column vectors um and define the m m matrix s u t u with jk th entry s j k uj t uk we write ua for the of u with columns sponding to a m while denotes the containing the remaining columns accordingly we define the matrix sa u ta ua and use wa to denote the of w with elements wj j a while represents the vector containing all the elements in w not in wa sparse and efficient estimating equations our main objective is to solve the cl estimating equations efn u w defined in with respect to using coefficients w obtained by minimizing the ideal criterion m x wj uj w e um l m x where k denotes the euclidean norm is a given constant and the s are constants not depending on the data for clarity of exposition we set for all j in the remainder of the paper the optimal solution is interpreted as one that maximizes the statistical accuracy of the implied cl estimator subject to a given level of computing alternatively may be viewed as to minimize the complexity of the cl equations subject to given efficiency compared to mle the tuning constant balances the between statistical efficiency and computational burden the first term in w aims to obtain efficient estimating equations by finding a cl score close to the ml score when and the composition rule is optimal in the sense that the score function u is closest to the mle score um l although this choice gives estimators with good statistical efficiency it offers no control for the cl score complexity since all the partial likelihood scores are included in the final p estimating equation the second term m in is a penalty discouraging overly complex estimating equations in section we show that typically this form of penalty implies a number of elements in exactly zero for any for relatively large many elements in are exactly zero thus simplifying considerably the cl estimating equations efn u when a very large fraction of such elements is zero we say that and the cl equations efn u are sparse sparsity is a key advantage of our approach to reduce the computational burden when achievable without loosing much statistical efficiency on the other 
hand if is too large one risks to miss the information in some useful data subsets which may otherwise improve statistical accuracy empirical criterion and estimation obvious difficulties related to direct minimization of the ideal criterion w are the presence of the intractable likelihood score um l and the expectation depending on the unknown parameter to address these issues first note that up to a negligible term not depending on w criterion can be written as m x e wj uj m x m x ml t wj e u uj if we have e um l uj t e uj uj t to see this recall that partial scores are unbiased and differentiate both sides of euj under appropriate regularity conditions this result is used to eliminate the explicit dependency on the score um l finally replacing expectations in by empirical averages leads to the following empirical objective m x b w efn wj uj m x m x t wj efn uj uj under appropriate regularity conditions the empirical criterion estimates consistently the population criterion up to an irrelevant constant not depending on w with the caveat that must be close to these considerations motivate the following estimation strategy b compute the truncated given a preliminary consistent estimator composition rule w by solving b w w argmin q update the parameter estimator by the iteration h b b b bw efn w efn u theorem shows that the convex minimization problem in the has unique solution particularly let eb m is the subset of partial scores such that n o t b b efn uj rj w where rj is the defined by rj w uj u w and and write for b then the solution of the is the set m o n n b sign w b eb w diag efn seb w eb efn seb b sign w is the vector where seb uebt ueb and ueb is a matrix with column vectors uj j e sign function with jth element taking values and if wj wj and wj respectively and diag a denotes the diagonal of the square matrix a more insight on the meaning of may be useful differentiating in wj and then expanding around under conditions and in section gives n o b t rj b w e uj t um l u w op efn uj combining and highlights that the jth partial likelihood score uj is selected when it is sufficiently correlated with the residual difference um l u w hence our criterion retains only those uj s which are maximally useful to explain the gap between the full likelihood score um l and the cl score u w while it drops the remaining scores when we have eb m meaning that the corresponding composition rule w does not contain zero elements from for it is required that the empirical b is which is violated when n covariance matrix for all partial scores efn s b may be nearly singular due to the presence of largely even for n m however efn s correlated partial scores on the other hand setting always gives a b and guarantees existence of w matrix efn seb eb the proposed approach requires an initial consistent estimator which is often easy to obtain when the partial scores are unbiased one simple option entails solving efn u w with w t if m is large one may choose w by the stochastic cl strategy of dillon and lebanon where the elements of w may be set as either or randomly according to some scheme although the initial estimator could be quite inefficient the update improves upon this situation moreover the estimator and coefficients w can be refined by iterating the with and the a few times computational aspects lars implementation and selection of the empirical composition rule w in can not be computed using apb w to address this issue we propose an proaches due to of q implementation based on the regression lars 
algorithm of efron et al originally developed for sparse parameter estimation in the context of linear regression modb our implementation of lars minimizes q b w by including one score els for given b at the time in the composite likelihood score u b w in each step the score with the uj b u b w is included largest correlation with the currently available residual difference uj followed by an adjustment step on the numerical examples in section suggest that our implementation of the lars algorithm for cl selection is very fast in at most m p steps it returns a path of estimated composition rules w w where here is the value of the tuning constant in at which the jth partial score enters the cl estimating equation selection of is of practical importance since it balances the between statistical and computational efficiency for a given budget on afforded computing say we include one partial score at the time for example using the lars approach above and stop when b max for some where we reach b tr efn i b tr efn s here efn efn denotes the empirical covariance matrix for the selected partial scores indexed by the set j w j the criterion can be viewed as the proportion of score variability explained by currently selected partial scores in practice we choose close to such as or if the computing budget is reached we set b in analogy with principal component analysis the selected combination of scores accounts for the largest variability in the collection of empirical scores properties this section investigates the asymptotic behavior of the sparse composition rule w and the corresponding scle defined in within a setting where m the number of candidate partial likelihoods is allowed to grow with the sample size we use ekum l to denote the trace of fisher information based on the full likelihood here may be interpreted as the maximum knowledge about if the full likelihood score um l were available although can grow with m reflecting the rather natural notion the one can learn more about the true model as the overall data size increases it is not allowed to grow as fast as n o log n this is a rather common situation in cl estimation occuring for instance when the scores are substantially correlated or they are independent but with heterogeneous and increasing variances see examples in section sparsity and optimality of the composition rule in this section we give conditions ensuring uniqueness of the empirical composition rule w and weak convergence to its population counterpart to this end we work within the neighborhood of k for some and assume the following regularity conditions on s there exist positive constants such that e s j k and v ar s j k for all j k each element es j k is continuous with uniformly bounded first and second order derivatives on our analysis begins by deriving the condition kktc kuhn for the population objective w defined in the kktc characterizes the amount of sparsity and the computational complexity associated with the selected estimating equations depending on the value of the tuning constant let c w diag s w where s u u t is as defined in section lemma kktc under condition the minimizer of w defined in satisfies c j j m where if j if j and if j c j is the jth element of vector c proof let dj j j and note that the tayor expansion of around j is tr ij where t and ij e uj uj t is the p p fisher information matrix for the jth likelihood component and tr ij by condition we have dj otherwise if dj choosing such that sign if j dj and implies but this is a we need to show j 
diction since minimizes if j assume j and take such that sign sign ec j and j then tr i j which implies but this is contradicted by being the minimizer of hence ec j for all j an argument analogous to that used in the proof of lemma leads to the kktc for b w specifically for w bw b w the minimizer of the empirical loss q we have efn c j bj j m where bj if w j bj if w j and bj if w j lemma has important implications in our current setting since it relates to the size of the covariance between the jth score uj and the residual difference um l u w at particularly if such a covariance is sufficiently small e c j uj u uj e uj u um l then the correspondent coefficient is j thus the tuning parameter controls the level of sparsity of the composite score u by forcing the weights of those score components with small c j to be exactly zero for uniqueness of and w a simple condition is that the partial scores can not replace each other we require that the scores are in general position specifically we say that the score components um are in general position if any affine subspace l rm of dimension l m contains at most l elements of excluding antipodal pairs of points the partial scores uj x j are continuous and in general position with probability for all theorem under conditions the solution of the w defined in is unique and is given by for any moreover w contains at most np m elements proof let eb j m to be the index set of elements of w where where is as defined after lemma first note that the composite likelihood score bw b tw b defined in due u u is unique for all solutions w which minimize q b w uniqueness of u bw to strict convexity of q implies that b and the corresponding index set eb are unique by lemma b we first note that the square matrix next to show uniqueness of w j for all j e b has full rank otherwise some row the matrix can be written as a linear b t u b efn ueb e b b p aj efn uj b t u b b t u b b efn uk combination of other rows in the set e e e p b aj efn uj b then lemma implies also the event efn uk aj for the same set of coefficients aj s which has probability equals to since each uj is continuous b b b t has full rank meaning that the size of eb satisfies b and random thus e ueb u e b t implies strict convexity of b b b full rank of e u b u np for fixed wj j e e b w b where w b is the of w containing elements indexed by b hence w q eb e e is unique the arguments in theorem go through essentially unchanged for the population composition rule by showing the full rank of e ue t ue using condition and lemma where e is the index set of elements in this implies also uniqueness of next we turn to convergence of the empirical composition rule w to thus showing w is a suitable replacement for the intractable criterion w that the objective q since criterion b w wt efn s b b t w q efn diag s is used as an approximation of the population criterion w defined in clearly the b and es affects the accuracy of such an approximation let distance between efn s b and kefn s es be the supreme variation between matrices efn s es where is the matrix induced for matrix a as n the rate at which goes to depends mainly on the number of partial scores m and the behavior of the random elements in s which can vary considerably in different models for example when the elements of s are one needs only log m o cai et in more general cases o suffices to ensure op vershynin next we investigate how m and should increase compared to when as n to ensure a suitable behavior for w to obtain weak convergence of w to we 
introduce the additional requirement that the covariance matrix of the partial scores es does not shrink to zero too fast there exists a sequence cn such that cn and xt es x cn for any x rm as n condition is analogous to the compatibility condition in estimation for regression and van de geer where it ensures a good behavior of the observed design matrix of regressors differently from the sparse regression setting where condition is applied to the set of true nonzero regression coefficients here no sparsity assumption on the composition rule w is imposed p theorem under conditions if op then kw as n b w proof from lemma efn ku op note that b w b w efn ku w t efn s n o t t b w es w w efn s es w and the second term of the last equality is op by lemma thus w p t es w op which implies kw by condition corollary let be a sequence such that as n under conditions if op we have p sup efn ku w u as n bw b op the result follows by noting proof from lemma efn ku u bw b that for any the difference efn ku w u efn ku u b w is op according to conditions and theorem w t efn s s corollary states that the composite likelihood score u w is a reasonable approximation to u particularly even for close to zero the composite score u w still b uses a fraction of components at the same time u w is near the optimal score u where is the composition rule yielding the closest cl score u to the maximum likelihood score um l moreover the implied godambe information g w e w var w u w is expected to be close to g w with w however while the mcle based on or other choice of wj j may be unavailable or computationally intractable due to common difficulties in estimating var lindsay et varin et our truncated composition rule w implies a more stable estimation of g w by requiring only a fraction of scores asymptotic behavior of the scle in this section we show consistency and give the asymptotic distribution for the scle bw defined in the one advantage of estimation is that consistency and asymptotic normality are treated separately the estimator inherits b under standard rethe properties leading to consistency from the preliminary estimator quirements on s for normality additional conditions on the scores are needed let h be the matrix obtained by stacking all the let maxj k h j k j k be the maximum variation between the emb euj be pirical and the optimal hessian matrices let maxj kefn uj the supreme variation between empirical scores and their expected value around in the rest of this section we use cov u and to denote the population variability and sensitivity p p matrices respectively both depending implicitly on we further assume there exist positive constants and such that e h j k and v ar h j k for all j k each element eh j k j k of the matrix eh is continuous with uniformly bounded first and second derivatives on theorem suppose there exist n such that is with all eigenvalues bounded away from for all n n under conditions if op op and op then as n we have p i and ii d np i where cov u and denote p p population variability and sensitivity matrices proof without loss of generality we only prove the case p since p is fixed the proof can be easily generalized to the case p without additional conditions let k bw be the empirical sensitivity matrix then can be written as bw b efn u k with being a consistent preliminary estimator note that eu and bw bw b kefn u eu kefn u efn u b efn u kefn u eu kefn u the first term on the right hand side of is op by lemma the second term is op b efn uj which converges b efn u maxj uj since kefn 
u to by theorem s assumptions and lemma the last term of is also op by the p bw law of large numbers this shows that efn u moreover from lemma b k op since k has all eigenvalues bounded away from for large n we have kk p p p bw bw b efn u b efn u k since we have k which shows part i of the theorem bw b efn u to show normality in ii k and obtain bw b k b efn u k e k efn u w k k e u efn u w efn u k ew e where k and is some value between and the second equality b for the first term in we follows from the expansion of efn u w b at d have efn u np i since the central limit theorem applies to u by lemma by lemma o so the first term in is p op the second term in efn u w efn u efn u t w p is of smaller order compared to first term efn u efn u t since kw k by theorem for the last term in we have b k e max efn k j e max efn j kw kw and the last expression in is op by lemma theorem s assumption that op implies that the last term in is of smaller order compared to the b k op according to lemma slutsky s theorem first term finally since kk implies the desired result consistency and asymptotic normality for the estimator follow mainly from w converging in probability to the target composition rule since each score is unbiased and asymptotically normal their linear combination is also normally tributed the overall convergence rate is given by k which is of order between n and nm the actual order depends on the underlying correlation between partial scores um while the optimal rate nm is achieved when the scores are perfectly independent combining highly correlated scores into the final estimating equation will give rates closer to examples for special families of models in this section we illustrate the scle through estimation of location and scale estimation for special multivariate normal models estimation of common location for heterogeneous variates let x nm where the m m covariance matrix has elements j k and diagonal elements j k computing the mle of requires and in b efn x t x when n m b is singular and the mle practice is replaced by the mle of is not available in practice whilst cl estimation is still feasible the jth partial score is uj xj and the cl estimating equation based on the sample x x n is efn u w m n x wj x i xj j leading to the profiled mcle b w m x wj x j m x wk which is a weighted average of marginal sample means x j pn i xj j in this example one can work out directly the optimal composition rule and no estimation is required particularly it is useful to inspect the special case where x has independent components for all j k this corresponds to the model where estimators from m independent studies are combined to improve accuracy under independence we have the explicit solution j i j m which highlights that overly noisy data subsets with variance are dropped and thus do not influence the final estimator the number of elements in is pm i note that when we have uniform weights t and the corresponding b mcle is the usual optimal solution although the implied estimator w has minimum variance it offers no control for the overall computational cost since all m are selected on the other hand choosing judiciously may lead to low computational burden with negligible loss for the resulting estimator for instance assuming j for a straightforward calculation shows e u u x since the number of the scores pm x j o j i j c the first term the mean squared difference between u and the optimal score u is bounded by up to a vanishing term thus if o the composite score u converges to the optimal 
composite score u particularly if decreases at a sufficiently slow rate the truncated score u can still contain a relatively small number of terms b is approximately equal the optimal estimator w b while the correspondent estimator w in terms of statistical accuracy if the elements of x are correlated for j k the partial scores contain overlapping information on in this case tossing away some highly correlated partial scores improves computing while maintaining satisfactory statistical efficiency for the final estimator figure shows the solution path of and the asymptotic relative efficiency of b compared to mle for different values of when m is large the corresponding scle w m the asymptotic relative efficiency drops gradually until a few scores are left this example illustrates that a relatively high efficiency can be achieved by our truncated cl equations when a few partial scores already contains the majority of information about in such cases the final scle with a sparse composition rule is expected to achieve a good between computational cost and statistical efficiency location estimation in exchangeable normal variates in our second example we consider exchangeable variables with x nm with im the marginal scores uj xj are identically distributed and exchangeable with equal correlation differently from example the m number of log are log are log log log log log w j w j w j are m number of m number of log log figure top row solution paths for the minimizer of criterion q w in for different values of with corresponding number of bottom row asymptotic b compared to mle the vertical dashed lines on relative efficiency are of the scle w b selected by criterion with results correspond to the the bottom represent common location model x nm jth diagonal element of equal to j and jk th element of equal to jk solution to criterion has equal elements j i j m m b pm x j regardless of the value of so the optimal parameter estimator is w the first eigenvalue of es is m whilst the remaining m eigenvalues are all equal to suggesting that the first score contains a relatively large information on b compared to the other scores when m is much larger than n we have var w m mn the between statistical and computational efficiency may be measured by the ratio of estimator s variance with m compared to that with m this ratio is t m m which increases quickly for smaller m and much slower for larger m t t and t if thus although all the elements in are nonzero a few partial scores contain already the majority of the information on this suggests that in practice taking a sufficiently large value for so that the sparse empirical solution w contains only a few of zero elements bw already ensures a relatively high statistical efficiency for the corresponding mlce exponentially decaying covariances let x nd where the jkth element of is exp j k the quantity d j k may be regarded as the distance between spatial locations j and evaluating the ml score in this example is computationally expensive when d is large since it requires computing the inverse of a task involving o operations on the other hand the cl score is obtained by inverting covariance matrices thus requiring at most b o operations given observations x x n on x the mcle w solves the equation x wjk j k n x i i ujk xj xk i i i i n x xj xk xk d j k wjk jk j k i i n x x xj xk d j k wjk j k x where ujk corresponds to the score of a bivariate normal distribution for the pair xj xk figure shows the analytical solution path of the minimizer of criterion for b 
compared to different values of and the asymptotic relative efficiency of the scle w mle we consider a number of pairs ranging from m to m for various choices of when the scle has relatively high asymptotic efficiency interestingly efficiency remains steady around until only a few are left this suggests again that a very small proportion of components contains already the majority of the information about in such cases the scle reduces dramatically the computing burden while retaining satisfactory efficiency for the final estimator numerical examples in this section we study the performance of the scle in terms by assessing its mean squared error and computing cost when the data dimension d increases as a b preliminary estimator we use the mcle w with w t which is perhaps the most common choice for w in cl applications varin et example we generate samples of size from x nm l we specify the following covariance structures im is diagonal with kth diagonal elements k has w j log log are are log log log log log log w j w j are d m number of d m number of d m number of log figure top row solution paths for the minimizer of criterion q w defined in for different values of with corresponding number of reported botb compared to mle the tom row asymptotic relative efficiency are of scle w b selected by criterion with vertical dashed lines on the bottom row correspond to results correspond to the model x nd with j k th element of equal p to exp unit diagonal elements with the first elements of x uncorrelated with any other element while the other elements in x have pairwise correlations j k d has unit diagonal elements and a block diagonal structure with independent blocks of six elements each and correlation of figure left shows the relative mean squared error of the scle compared to that of the mle for a moderate data dimension d m the points in the trajectories correspond to inclusion of a new component according to the the algorithm described in section the scle achieves more than efficiency compared to mle for all the covariance structures considered always before all the candidate partial likelihoods are included the advantage of scle becomes evident when the scores exhibit relatively strong correlation for example for where are independent between blocks the maximum efficiency is achieved when only a few representative partial scores are selected from each block figure right shows the ratio between mean squared error of the scle compared and that of the mle for a relatively large data dimension d m compared to the sample size n although here the mle is used as a theoretical benchmark in practice such an estimator is not available as m is larger than the sample size interestingly when the sample size n is fixed including all the eventually leads to substantial loss of efficiency in this examples selecting too many not only wastes computing resources but also implies estimators with larger errors on the other hand a proper choice of the tuning constant corresponding to about selected can balance computational and statistical efficiency example in our second numerical example we consider covariance estimation for the model x nd with j k exp j k here the covariance between components xj and xk in the random vector x decreases rapidly as the distance j k between msemle msescle msemle msescle number of partial scores included number of partial scores included figure estimate of the mean square error of the mle msemle divided by that of the scle msescle for the model x nm each trajectory is based on 
samples of size n each point in the trajectories correspond to inclusion of a new component based on the algorithm described in section left different specifications for detailed in section with m right covariance with m ranging from to components of x increases figure shows estimates for the mean square bw error of the scle compared to that of the mcle with uniform composition rule b w with w t for and each point in the trajectories correspond to inclusion of a new component using the algorithm described in section the scle is already more efficient than the uniform mcle when a handful of partial scores are selected for example if and m selecting ten already ensures times the accuracy of the uniform mcle since the uniform mcle uses all the m pairs of the scle obtains more accurate results at a much lower computing cost mseunif msescle mseunif msescle number of partial scores included number of partial scores included figure estimate of the mean square error of the mcle with w t mseunif divided by that for the scle msescle each point in the trajectories corresponds to the inclusion of a new component based on the algorithm described in section results are based on monte carlo samples of size n from the model x nd with j k exp j k trajectories correspond to left and right for different numbers of m ranging from to conclusion and final remarks in recent years inference for complex and large data sets has become one of the most active research areas in statistics in this context cl inference has played an important role in applications as a remedy to the drawbacks of traditional likelihood approaches despite the popularity of cl methods how to address the between computational parsimony and statistical efficiency in cl inference from a methodological perspective remains a largely unanswered question motivated by this gap in the literature we introduced a new likelihood selection methodology which is able truncate quickly overly complex cl equations potentially encompassing many terms while attaining relatively low mean squared error for the implied estimator this is achieved by selecting cl estimating equations satisfying a on the cl complexity while minimizing an approximate from the score inference based on statistical objective functions with on the parameter is not new in the statistical literature see giraud for a exposition on this topic note however that differently from existing approaches the main goal here is to reduce the computational complexity of the overall cl estimating equations regardless of the model parameter which is viewed as fixed in size accordingly our involves only the composition rule w but not the model parameter in the future developing approaches for simultaneous penalization on and w may be useful to deal with situations where both the data dimension and the size of the parameter space increase two main perks of the proposed approach make it an effective alternative to traditional cl estimation from practitioner s perspective the first advantage is that the scle methodology constructs cl equations and returns inferences very quickly theorem shows that for any the empirical composition rule w retains at most np m elements this is an important feature of our method which reduces sometimes dramatically the bw amount of computing needed to obtain the implied mcle and its standard error lemma highlights that the elements in w correspond to partial scores maximally correlated with the residual difference r w um l u w this means that our approach constructs estimators 
with relatively high efficiency by dropping only those uj s contributing the least in the cl equations for approximating um l the second desirable feature of our method concerns model selection and the ability to reduce the complexity of large data sets in essence the truncation step described in is a dimensionreduction step starting from observations on a possibly large the vector x our method generates a collection of subsets sj j where j w j while individually the selected data subsets in are of size much smaller than d collectively they contain most of the information on for a given level of computing represented by from a theoretical perspective little work has been devoted to study the properties of cl estimators when the number of m diverges cox and reid discuss estimators based cl equations with m d terms by taking all pairwise and marginal scores for the vector x they take and more rigid composition rules compared to ours with wjk for all pairs j k and wjj d for all marginals j k where a is a tuning constant used to increase efficiency to our knowledge the current paper is the first studying the behavior of more flexible composition rules and implied cl estimating equations in the setting where both m and n grow theorem and corollary provide us with guidance on when the selected score u w is a meaningful approximation to the unknown ml score in the sense of the objective a first requirement is that the total information on available if the full likelihood were actually known kum l is not overwhelming compared to the sample size if x nm we require tr this condition is very mild when relatively few elements x contain a strong signal on whilst the remaining elements are noisy and with heterogeneous variances in section we illustrate this by taking diagonal with increasing diagonal elements a second requirement is that the tuning constant dominates asymptotically where represents the convergence rate of the empirical covariance of scores efn s for instance if the elements of s are subgaussian we have op log m meaning that should be asymptotically larger than p log m finally we show that statistical optimality and computationally parsimony can within the same selection procedure when is judiciously selected if at the rate described in theorem the truncated composition rule w with scores approximates the optimal composition rule consisting of m nonzero terms accordingly corollary suggests that the implied truncated cl score function u w approximates the optimal score u w uniformly on a neighborhood of extending this type of result and developing further theoretical insight on the interplay between the type of penalty and the mcle accuracy beyond the current setting would represent another exciting future research direction for example findings would be particularly valuable in spatial statistics where often the number of components is overwhelming and poses serious challenges to traditional cl methods appendix in this section we show technical lemmas required by the main results in section lemma kw and are decreasing in proof denote the first term of criterion w defined in without the penalty term by w suppose and let be the minimizers of w w respectively then and subtracting the last two inequalities gives since we have an analogous argument shows that kw is decreasing lemma under conditions if op then o and kw op proof for note that ekum l u or hence o bw b for w we have q bw b tw b since eum l t uj b w efn s diag efn s q euj t uj we have b tw b es t w diag efn s diag efn s eum l t u 
w b es t w b es t k kw with diag efn s maxj k s kw and eum l t u w op hence kw op lemma let as n under conditions if op we bw b have efn u u p as n where is the preliminary consistent estimator used to compute w in the proof note that op implies op therefore we have kw bw b gives op by lemma moreover q q b t w b efn u bw s w ef w bt s n b b t t w subtracting efn u u from both sides gives b t w b t w efn ku efn c h it h it b ec w b ec efn c efn c ec t w ec t h it b efn c ec w h it b b diag efn s es efn s e s w where the inequality is implied by lemma the last expression is op since op and kw op by lemma and the matrix maximum norm is bounded by matrix lemma if op and op then under conditions b k op kk p b k kw proof this is a direct result since kk b according to lemma p assumption and kw b by theorem lemma under conditions eku o proof note that ekum l ekum l ekum l expanding ekum l u gives eku l t u q q ekum l eku eku gives eku lemma assume conditions for every we have n o p x n e u w i w nj i where ui w pm as n i wj log fj xj is the composite likelihood score corresponding to the ith observation proof without loss of generality assume p recall that eu for every and constants a b such that n o p x n e u w i w nj i o p n e ui i b j e n where the inequality follows by applying s and chebyshev s inequalities by the assumption at the beginning of section that o log n lemma implies o log n hence converges to as n which proves the desired result references besag spatial interaction and the statistical analysis of lattice systems journal of the royal statistical society series b methodological pages and van de geer statistics for data methods theory and applications springer science business media cai zhang zhou et al optimal rates of convergence for covariance matrix estimation the annals of statistics cox and reid a note on pseudolikelihood constructed from marginal densities biometrika dillon and lebanon stochastic composite likelihood journal of machine learning research oct efron hastie johnstone tibshirani et al least angle regression the annals of statistics ferrari and yang maximum estimation the annals of statistics fisher on the mathematical foundations of theoretical statistics philosophical transactions of the royal society of london series a containing papers of a mathematical or physical character pages giraud introduction to statistics volume crc press heyde and its application a general approach to optimal parameter estimation springer science business media kuhn nonlinear programming a historical view in traces and emergence of nonlinear programming pages springer lindsay composite likelihood methods contemporary mathematics lindsay yi and j sun issues and strategies in the selection of composite likelihoods statistica sinica varin reid and firth an overview of composite likelihood methods statistica sinica vershynin how close is the sample covariance matrix to the actual covariance matrix journal of theoretical probability
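As a minimal sketch of the truncation idea discussed above (before the appendix), the fitted composition rule can be hard-thresholded so that only the sub-likelihood scores contributing the most are retained. The names truncate_composition_rule, sub_scores, w_hat and lam below are illustrative and not taken from the paper; the snippet illustrates the idea rather than the paper's exact estimator.

import numpy as np

def truncate_composition_rule(w_hat, lam):
    # Hard-truncate a fitted composition rule: zero out the weights of the
    # sub-likelihood scores u_j that contribute the least, i.e. |w_j| <= lam.
    w_trunc = np.where(np.abs(w_hat) > lam, w_hat, 0.0)
    selected = np.flatnonzero(w_trunc)  # indices of the retained u_j's
    return w_trunc, selected

def truncated_cl_score(theta, sub_scores, w_trunc):
    # Truncated composite-likelihood score u(theta; w) = sum_j w_j * u_j(theta),
    # where sub_scores(theta) is assumed to return the m partial scores at theta.
    u = np.asarray(sub_scores(theta))
    return u.T @ w_trunc  # only the retained sub-likelihood scores contribute

Larger values of lam drop more components, trading statistical efficiency for computational parsimony, which is the trade-off quantified by the theory summarized above.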
10
noname manuscript no will be inserted by the editor identifying hazardousness of using classification methods a comparative study may varun kumar ojha parmartha dutta atal chaudhuri received date accepted date abstract in this work we formulated a problem related to sewerpipeline gas detection using the approaches the primary goal of this work was to identify the hazardousness of to offer safe and access to workers so that the human fatalities which occurs due to the toxic exposure of sewer gas components can be avoided the dataset acquired through laboratory tests experiments and various were organized to design a predictive model that was able to hazardous and situation of to design such prediction model several classification algorithms were used and their performances were evaluated and compared both empirically and statistically over the collected dataset in addition the performances of several ensemble methods were analyzed to understand the extent of improvement offered by these methods the result of this comprehensive study showed that the algorithm performed better than many other algorithms such as perceptron radial basis function network support vector machine reduced pruning tree etc similarly it was observed that ensemble approach enhanced the performance of base predictors ojha technical university of ostrava ostrava czech republic and dept of computer science engineering jadavpur university kolkata india dutta dept of computer system sciences university india a chaudhuri dept of computer science engineering jadavpur university kolkata india neural computing and applications doi varun kumar ojha et al keywords sewer gas detection neural network classification ks test introduction this is in the view of providing a solution to a problem using technology where the human fatalities need to be avoided hence the technology should be as simple as possible in this work we addressed a complex realworld problem related to gas detection where safety detection in terms of environment was required to allow maintenance and cleaning of the pipeline the sewer gas detection is a highly complex problem because of the presence of several toxic gases in a mixture form and a single gas detector may not offer reliable solution therefore we studied the complexity of this problem in terms of gas mixture the primary goal was to offer a simple solution with a high accuracy so that it was easy to categorize the hazardous situation in straightforward way such as hazardous or to meet this simplicity we formulated gas detection problem as a classification problem contains a mixture of several toxic gases such as hydrogen sulphide s ammonia methane carbon dioxide nitrogen oxides nox usually this mixture is generated due to the biodegradation of the waste and the sewage into the such toxic is fatal for those who come to the of these gases following this an alarming number of human fatalities are reported each year by the newspapers and the other agencies the authorities those are responsible for maintaining and cleaning of the sewer pipeline provides various electronic portable gas detectors available in the market to the employed persons so that they can determine the safeness of the before physically get involve into the maintenance work however the available electronic portable gas detectors are not providing satisfactory results it is evident from the recent comments from the judiciary to these authorities in a judgment to a civil appeal number of the supreme court of india stated the state and its can 
not absolve themselves of the responsibility to put in place effective mechanism for ensuring safety of the workers employed for maintaining and cleaning the sewage system similarly in another judgment the supreme court of india stated entering sewer lines without safety gears should be made a crime even in emergency situations this motivated us to carry out our research in this domain and to come out with a simple solution so that without having the minimum knowledge of the technicalities of gas composition and safety limits a person is able to understand the environment of a sewer system before entering to ensure the simplicity in model we collected and preprocessed data to realize sewer as a binary class classification problem however in this work apart from the objective of constructing a prediction model we set a secondary objective which was to analyze the performances of the classifiers identifying hazardousness of both empirically and statistically to meet these objectives we used base predictors from four different categories such as neural network based classifiers tree based classifiers instance based classifiers and rule based classifiers the algorithms were applied over the collected dataset and the performance of the algorithms were collected in terms of the accuracy the collected results were then used for analyzing the performance superiority of the one algorithm over another or the one category of algorithms over another we observed that the performance of the algorithms were independent of the category they belong to for example the performance of instance based neighbor logistic model tree and support vector machine came from three different categories but they had a very competitive performance however we must consider the theorem that suggests that some algorithms perform better on some problem and some on another therefore to find out which predictor performs best in this case we used base predictors and nine ensemble methods rest of the article is organized as follows a background study is provided in section which leads to setting ground for describing our contribution to the gas detection in section we provide a detailed description of the data collection and preprocessing mechanisms which constitute the core and significant part for formulating gas detection problem as a binary classification problem section deals with the brief descriptions of the and methods used for constructing the prediction model the design of comprehensive experiment set for the evaluation of the classifiers is reported in section whereas section describes empirical and statistical evaluation of the classifiers discussions and conclusion are reported in sections and respectively methodology in this section we put together the background study the data collection mechanisms and the classification methods definitions the background study describes the significance of the gas detection problem and the data collection mechanism describes the formulation of as a classification problem background study literature review was conducted in the perspective of enose and to cover a broad area of research in the field of gas detection and modeling using intelligent computing although not much work specifically on sewer was reported in the past few notable contributions were observed li et al reported a noticeable research work on the development and design of an electronic nose and gas detection system where a neural network varun kumar ojha et al nn mixed gas nox and co measurement system was developed 
on the other hand sirvastava et al proposed a design of intelligent enose system using backpropagation bp and approach llobet et al presented a pattern recognition approach based on the wallet transformation for gas mixture analysis using single sensor liu et al addressed a algorithm to recognize patterns of mixed gases a mixture of three component gases using infrared gas sensor lee et al illustrated uses of micro gas sensor array gsa combined with nn for recognizing combustible leakage gases ambard et al have demonstrated use of nn for gas discrimination using a gsa for the gases co and in authors have illustrated a technique for developing a gas sensory system for sensing gases in a dynamic environment pan et al have shown several applications of wongchoosuka et al have proposed an detection system based on carbon gas sensors for detecting methanol zhang et al developed a genetic algorithm for detecting mixed gas in mines won et al proposed a system for estimation of hazardous gas release rate using optical sensor and technique the following salient points came out of the above mentioned articles mainly bp and approaches were studied so far for detecting mostly the systems reported in the past were developed for the of only two or three gases and the sensors of the gases used were less to the other gases in mixtures during sensing is an important factor in gas detection system which was least reported in literature as yet however ojha et al offered a few methods such as annealing where factor has been addressed to some extent however these works were primarily related to regression modeling the impact of humidity and temperature on sensors remained ignored so far the gas detection system or was viewed only in the framework of regression problems and not classification problem classification based approach led us to determine the hazardous and nonhazardous situation of a in addition the collection organization and the preprocessing of the collected data enabled us to address the issue firmly the issue occurs because of the sensitivity of one towards multiple gases so was our case where a gsa was designed using five each was typically meant for detecting its respective target gas hence when the gsa was used for collecting data for a mixture of gases the crosssensitivity in the sensed values collected data became inevitable therefore rather than considering pure results of the respective gases we registered the results as a part of data since a identifying hazardousness of ally intelligent model learned from the data and also maintained the crosssensitivity patterns registered in terms of data values itself a learned model accurately predicts an unknown gas mixture equipment and data collection mechanism eeprom before explaining the details of data collection and equipment we need to explain the basic design and the purpose of our work which is to offer an intelligent gas detection system an electronic portable gas detector that will be a result of embedding into an electronic system the data flow into our developed intelligent system is shown in fig which describes the entire process of the intelligent system design which is divided into three phases the data acquisition unit which consists of gas chamber gsa and data block an intelligent unit classifier unit which receives data from dataacquisition unit and classifying the acquired data patterns the output unit which prompts the result in terms of colored light and buzzer hence our objective here was limited to only train a classifier 
using the collected data we describe the data collection process as follows fig block diagram of intelligent system design real time data flow process at first we collected the data samples from the literature and laboratories test of the collected gas mixture samples from second we designed our own metal oxide semiconductor mos gas sensors array gsa that was used for verifying the literature and laboratory data and for generating the data samples for the purpose experiments our designed gsa consists of five for sensing five different gases they include hydrogen sulphide s ammonia methane carbon dioxide and nitrogen oxides nox typically mos sensors are electrical sensors where responses are change in circuit resistance proportional to gas concentration a resistance type sensor responds to change in resistance due to change in the concentration of gases the change in resistance is given as where is change in mos sensor resistance and is base resistance or the sensing resistance at a specifics gas concentration in clean air the of the sensors mics mq mq mq and mq is ppm ppm ppm ppm and ppm respectively here ppm is the unit for measuring concentration of gas into air which is defined as follows ppm is equal to volume of a gas into volume of air varun kumar ojha et al a typical arrangement of a gas sensor array is shown in fig the circuitry shown in fig left was developed in our laboratory here the fabricated and installed sensors were mics mq mq mq and mq for gases co s and respectively the gas sensors used were sensitive to not only their target gases but they were sensitive also to other gases in the hence crosssensitivity effect over mos sensors was confirmed it was moreover confirmed that the sensor responses were noisy and accordingly the pattern of such noise were considered and recorded as an instance into our dataset hence a use of raw values of sensor response for hazardousness prediction may be misleading in operating environment therefore a training electronic portable gas detector may be used to predict sewer hazardousness accurately so was the effort in this work to provide a classifier data collection had vital role in training of a classifier data samples were collected as per the following steps at first several manhole samples collected from the kolkata india municipal area were tested in laboratory to identify the presence of several toxic gases such as nitrogen dioxide carbon monoxide co hydrogen sulphide s ammonia methane and carbon dioxide secondly gas sensors were identified for each of the respective gases as a result we came out with the procurement of gas sensor mics mq mq mq and mq for co s and respectively we collected data sheets form the companies for the respective sensors in the third step a laboratory was setup for the verification and collection of the sensor response of the respective gas sensors in certain range of their concentration specifically the concentration range in ppm laid down in sensor manuals of sensors mics mq mq mq and mq are and of the gases co s and respectively in addition the lab was setup see fig right where gas cylinders were connected to a gas concentration measuring unit called mass flow controller mfc which was further connected to a gas chamber where each gas was allowed to pass in a specific concentration over an array of gas sensor more specifically the behavior of each of the gas sensors was recorded fig gas sensor array gsa identifying hazardousness of the following steps were used for preparing data sample for the classifiers 
training first hazardous safety limits of the component gases of manhole gas mixture were collected secondly three different levels i above safetylimit ii at and iii below for each manhole gas were recognized thirdly gases were mixed in different combination to prepare several mixture sample that were used to pass over gsa table indicates few examples of such mixture of gases in different combinations for example when we mix five gases each of which has three different recognized concentration levels we get different combinations in addition we considered the role of humidity and temperature to influence the sensor s behavior accordingly the data values were recorded hence our collected dataset contained seven input features and an output class each sample was labeled with for safe sample if the responses of all five sensors were under the maximum safety limit or for unsafe sample if the responses of any among the five sensors were above the maximum safety limit the safety limits of the manhole gases are as follows safety limit of is between ppm and ppm co is in between ppm and ppm s is in between ppm and ppm is in between ppm and ppm and is in between ppm and ppm table illustrates a fraction of the collected data samples classification based approach we categorized the classifiers in the four different groups of classifiers each category of classifiers contains three classifiers table samples of in different concentration concentration of gases in ppm humidity temperature co nh ch class status safe safe unsafe safe unsafe safe unsafe unsafe unsafe varun kumar ojha et al network based classifiers perceptron mlp is a computational model that imitates human brain and learn from environment data in our work we used threelayered mlp where layers are input layer hidden layer and output layer radial basis function network rbf is a special class of mlp where inputs are mapped onto a hidden layer that consists of radial basis function which does the mapping of input to a hidden layer support vector machine svm is a supervised learning computational model that maps input to a high dimension feature space using kernel trick hence separable patterns in input space are linearly classified on a high dimensional feature space tree based classifiers reduced pruning tree rep is a tree based classifier method where a treelike structure is designed for predicting target class based on the input variables more specifically the leaves of tree offers decision of the class based on the conjunction of the input feature represented by the branches of the tree rep tree is a decision tree where the tree size is reduced by pruning inefficient branches naive bayes tree nbt is a special class of decision tree where the leaf nodes of decision tree that offer decision on the class is replaced by a naive bayes classifier which decides the class label based on the features and learned threshold table samples of calibrated sensor responses based on the knowledge gathered from literature lab tests and scaling process sensors response humidity temperature inco innh inch class identifying hazardousness of logistic model trees lmt is similar to nbt that does the transformation of leaves of a decision tree into a logistic regression node a logistic regression maps independent variables to categorical dependent variables using a logistic function hence lmt is a simple idea where nodes of a tree are replaced by logistic regression model rule based classifiers decision table dt is a simple representation of data into a table based 
system where the decision is made based on the features matching or searched into a decision table on a successful search the majority class label is returned otherwise the majority class label of the entire dataset is returned as a decision for an unlabeled data part is a rule based classification method based on partial decision tree that generates a list of rules used subsequently for making prediction of unknown data instance the rules are generated based on the partial decision tree which splits dataset into subsets until the entire dataset gets exhausted to form nodes and leaf nodes of the tree majority predictor zero r is the simplest possible form of classification method it is based on the majority of class label into a dataset in simple words it always predicts the majority class instance based classifiers learning ibk provides the concept description which is the primary output of an ibk algorithm it is a function that maps an instance to a category class label the concept description function is updated based on training procedure that involves two functions similarity and classification the similarity function computes the similarity between the training instances and the instances and returns a then the classification function provides class label to the instances based on the results of similarity function accordingly the concept description is updated k star is an learner that uses an similarity matching function for test instances to the learned instances locally weighted learning lwl in a locally weighted learning the prediction models are allowed to create at local points in a dataset or the specific point of interest rather than creating model for entire dataset hence a linear regression or naive bayes classifier or any other classifier may be used to create local models in this case we use decision stamp which is a single level decision tree model for prediction varun kumar ojha et al ensemble methods in this work we tried to exploit different method of making ensemble for an ensemble to perform well we need to take into account two things which are accuracy of predictors and diversity among the predictors for example bagging maintains diversity by bootstrapping dataset addboost combines several weak predictors random subspace maintains diversity by splitting feature space random committee maintains diversity by creating predictors using different random seeds and rotation forest maintains diversity by splitting and extracting feature subspace using principal component analysis similarly in and voting scheme we combine several predictors to maintain diversity here we describe the ensemble methods as follows bagging in bagging several copies of same predictor is created each copy of the predictor learns a different replicate of learning set created from the complete training set using bootstrapping finally the predictor s decision is combined using plurality voting method adaptive boosting adaboost is an ensemble technique that combines several weak predictors and inaccurate rules to create an accurate predictor random subspace random sub in random subspace ensemble method feature space is divided into several feature subset hence predictors are constructed for each feature subset finally the decision of each constructed predictors are combined using voting method random committee random com in a random committee ensemble several predictors are constructed over similar dataset but they use different random seeds to maintain diversity in the ensemble rotation forest 
rotation frst in this approach training set for the predictors are created by splitting feature set into k subsets and principal component analysis is applied to extract all the principle components hence diversity among the predictors are maintained by k axis rotation to form new feature set for training ensemble selection ensemble sel in the ensemble selection approach the ensemble starts with an empty bag and the predictors chosen from a library of trained predictors maximizing the performance of ensemble are added to the bag one by one to compute the decision of ensemble by using voting method voting scheme vote the voting scheme combines probability distribution of several chosen or predictors available in a bag for making ensemble using majority voting combination method identifying hazardousness of multi the ensemble approach uses a bag of predictors and selects the output class by selecting a predictor from the bag of predictors based on performance of the predictors weighted predictor ensemble wpe in this scheme of ensemble the weight of predictors were determined subsequently the ensemble output of k many predictors were computed as follows c y arg max k x wj i pj where c is the number of classes here it is two i pj is a function that returns value one for the predicted class experimental framework and results our aim in the experiment design was to obtain a highly accurate model for predicting hazardousness of the environment in a sewer pipeline the sewerpipeline environment was represented by the collected dataset the second objective of the experiment design was to obtain results for analyzing the classifiers predictors accordingly the results of the classifiers were collected table represents the parameter setting of the chosen classifiers for the evaluation of the classifiers we repeated our experiments times finally the results were compared based on empirical and statistical smirnov test evaluation we used weka and matlab tools for the purpose of our experiments we organized the experimental results into three parts as reflected in table the first part in the table describes the category wise performance of classifier hence the performance of the category of classifiers was evaluated we represented the performance of the classifiers as per their training and test accuracy an accuracy close to indicates classification accuracy accordingly the standard deviation std of training and test accuracies were reported for understanding the consistency of the classifiers performance in table the performance of the classifiers were arranged as follows the category is arranged in the ascending order of their average accuracy over cv test set better performing classifier to the less performing classifier the dataset was portioned into equal sets and each time sets were used for training and one set for testing this process was repeated times and each time a unique test set was used in the second part we organized the results according to rank of the classifiers performance over test set it may please be noted that for each classifier we collected instances of cv training and test results hence the results in table reflect averaged training and test accuracy of the classifiers however ranking the classifiers based only on the average results does not say much about the quality of the classifier hence in the varun kumar ojha et al table parameter setting of different classifiers category classifiers classifiers classifiers classifiers ensemble classifiers ensemble classifiers classifiers 
parameters mlp learning rate momentum factor iteration nodes in hidden layer kernel gaussian basis function kernel radial basis function minimum no of instance per leaf split proportion leaf node nave bayes classifier node logistic function number of instance per node for splitting similarity function linear nearest neighbor search neighbor size similarity function entropy distance measure similarity function linear nearest neighbor search weight function linear classifier decision stamp evaluation metric accuracy search method best first confidence threshold for pruning ensemble size classifier rep tree ensemble size classifier decision stamp ensemble size classifier rep tree ensemble size classifier random tree ensemble size classifier random tree ensemble size classifier rep tree ensemble size classifiers and ensemble size classifiers and ensemble size classifiers and rbf svm rep nbt lmt ibk k star lwl dt part zero r bagging adaboost random sel random com rotation frst ensemble sel vote multi scheme wpe third part of the results we used pairwise comparison of the classifiers using ks test which ascertains whether the supremacy of one classifier over the other is statistically significant or not a comprehensive matrix of the pairwise ks test results are presented in table the ks test is a statistical test that determines the difference between the cumulative frequency distribution cfd of two samples in other words it indicates whether the empirical cfd of one sample is equal larger or smaller than the other it tells whether two dataset a and b are statistically similar dissimilar where a being statistically dominated by b or dissimilar a b where a being statistically dominant over b in our experiments the ks test was evaluated with significance level with confidence discussions since the developed electronic portable gas detector shall be used by naive persons who are engaged in maintaining we are looking for binary answer hence our objective is to search for classification accuracy and identifying hazardousness of table experimental results of classifiers over fold cross validation error category classifiers classifiers classifiers classifiers ensemble classifiers classifiers training avg accuracy std test avg accuracy std svm mlp rbf lmt rep tree nb tree ibk k star lwl part decision table zero r multi rotation frst random com bagging wpe ensemble sel vote random sub table ranking algorithms according to their performance on test set fold cv rank category classifiers training test multi ibk kstar rotation frst random com bagging wpe lmt ensemble sel svm reptree vote part nbtree mlp random sub dt rbf lwl zeror adaboost varun kumar ojha et al rbf reptree svm zeror vote adaboost bagging ensemble sel random com wpe part random sub nbtree rotation frst mlp lmt lwl ibk dt ibk kstar lmt lwl mlp nbtree part rbf reptree svm zeror vote adaboost bagging ensemble sel random com random sub rotation frst wpe kstar classifiers dt table ranking algorithms according to their performance on test set fold cv the model weights with the highest accuracy so that such a combination may be implemented into electronic portable gas detector form moreover it is also a difficult task to be certain with the accuracy of an implemented electronic portable gas detector because the toxic exposure of a gas is also proportional to the time and not only its safety limit however with a monitoring and requisite maintenance involved the accuracy of detector may be relaxed and hence we resorted to choose accuracy as the 
accuracy for our developed detector so the classifier s performance was compared with a threshold setting of accuracy first let us discuss on the obtained results for the classifiers belonging to category the classifier svm performs better than its counterparts mlp and rbf both in terms of high accuracy test accuracy and high consistency std on test accuracy on the other hand the performance of mlp was reported next to svm with high consistency the performance of the rbf was found to be inconsistent and poorer in comparison to its counterparts in the tree based category the performance of lmt and reptree was comparable to whereas nbtree has shown poor performance compared to its counterparts identifying hazardousness of in category the performance of ibk and k star was comparative with a high accuracy and high consistency lwl performed poor with a very low accuracy when it came to the category of rule based classifier part has outperformed others in its category but the consistency was not as high as the consistency of the other well performing classifiers ibk svm mlp etc the classifier zeror consistently performed poor in comparison to all other classifiers in the ensemble category and the random com rotation frst bagging wpe and ensemble sel performed with high accuracies over and consistency however the performance of the ensembles random forest vote and adaboost were not as satisfactory as compared to the other ensembles one of the reason behind poor performance of random sub was the usage of subset of the features therefore the feature selection may not help in case of this dataset because of the high correlation maintained by each of the features with the output feature similarly voting used probability measures to combine the predictors and addboost combined weak predictors whereas the entirely better performing ensemble exploited the best predictors hence they performed better in this scenario considering the assumption of accuracy being a good predictor for implementation as gas detector we can figure out from table that the classifiers belong to category exception of the classifier lwl had performed better than the classifiers of other categories however the instance based classifier ibk is not suitable for the implementation as electronic gas detector since it required a large memory for its computation for saving all the instances of the training set ibk prediction is computed based on all training samples hence it takes long time to compute the output which is unacceptable in real time the next category whose performance was found close to ibk were the classifiers of category tree based classifier two classifies lmt and rep tree qualified the accuracy threshold on the contrary two classifiers from each and had performed lower than accuracy however svm performed significantly well with a very high accuracy similarly classifier part from category had an accuracy of however since the svm produced less number of parameters than the tree based predictor and it robustly accommodates the noisy attributes it was recommended from these experiments that svm is a proper choice for the implementation of the proposed gas detector conclusion in this work we explored a real world problem in the context of classification where we simplified the approach by offering binary decision to the problem we explored the problem related to the detection of hazardousness of a sewer pipeline environment this is very crucial problem since it is related to the safety of the persons who have to work under 
the toxic environment of varun kumar ojha et al the usually a environment contains mixture of toxic gases hence we collected samples from sewer pipelines from different locations then we examined those samples to identify data samples for our experiments we prepared a large dataset by collecting gas sensor responses from laboratory tests literature and scaled the collected gas sensor responses to form a dataset where samples were labeled and hazardous samples were labeled finally we applied different classifiers over the identified dataset and their empirical and statistical performance were evaluated we discovered that for this problem the instance based classifier performed best followed by the performance of tree based classifiers however we found that the performance of the classifiers were dependent on the ability and mechanism of the classifiers themselves and not on the information regarding which category they belong to acknowledgements this work was supported by the iprocom marie curie initial training network funded through the people programme marie curie actions of the european unions seventh framework programme references whorton the insidious foe gas western journal of medicine vol no pp lewis sax s dangerous properties of industrial materials ed wiley gromicko sewer gases in the home http hindu deaths in the drains http accessed on dec ndtv he died on diwali inside a sewage pipe http accessed on dec anand dying in the gutters tehelka magazine vol no dec achttp cessed on dec hindu provide safety gear to sewer workers who enter manholes says court http accessed on dec sewer deaths http accessed on dec supreme court orders states to abolish manual scavenging http accessed on dec wolpert and macready no free lunch theorems for optimization ieee transactions on evolutionary computation vol no pp li a mixed gas sensor system based on thin film saw sensor array and neural network in proceedings of the twelfth southern biomedical engineering conference pp srivastava srivastava and shukla on the design issue of intelligent electronic nose system in proceedings of ieee international conference on industrial technology vol ieee pp in search of a good computational paradigm in proceedings of ieee international conference on industrial technology vol ieee pp identifying hazardousness of llobet ionescu brezmes vilanova correig barsan and gardner multicomponent gas mixture analysis using a single tin oxide sensor and dynamic pattern recognition ieee sensors journal vol no pp lee ban lee and lee micro gas sensor array with neural network for recognizing combustible leakage gases ieee sensors journal vol no pp ambard guo martinez and bermak a spiking neural network for gas discrimination using a tin oxide sensor array in ieee international symposium on electronic design test and applications ieee pp baha and dibi a novel neural technique for smart gas sensors operating in a dynamic environment sensors vol no pp pan li and liu application of electronic nose in gas mixture quantitative detection in ieee international conference on network infrastructure and digital content ieee pp wongchoosuk wisitsoraat tuantranont and kerdcharoen portable electronic nose based on carbon gas sensors and its application for detection of methanol contamination in whiskeys sensors and actuators b chemical vol no pp zhang li and tang genetic algorithms data fusion and its application in mine detection in chinese control and decision conference ccdc ieee pp so koo shin and yoon the estimation of hazardous gas 
release rate using optical sensor and neural network computer aided chemical engineering vol pp ojha dutta and saha performance analysis of neuro genetic algorithm applied on detecting proportion of components in manhole gas mixture international journal of artificial intelligence applications vol no pp ojha and dutta performance analysis of neuro swarm optimization algorithm applied on detecting proportion of components in manhole gas mixture artificial intelligence research vol no pp ojha dutta chaudhuri and saha convergence analysis of backpropagation algorithm for designing an intelligent system for sensing manhole gases in hybrid soft computing approaches springer india pp dutta and ojha conjugate gradient trained neural network for intelligent sensing of manhole gases to avoid human fatality in advances in secure computing internet services and applications igi global pp ojha dutta chaudhuri and saha understating continuous ant colony optimization for neural network training a case study on intelligent sensing of manhole gas components international journal of hybrid intelligent systems vol no pp a concurrent neurosimulated annealing algorithm a case study on intelligent sensing of manhole gases international journal of hybrid intelligent systems vol no pp ghosh roy singh saha ojha and dutta sensor array for manhole gas analysis in international symposium on physics and technology of sensors ispts ieee pp ghosh saha roychaudhuri ojha and dutta portable sensor array system for intelligent recognizer of manhole gas in sixth international conference on sensing technology icst ieee pp cantalini valentini armentano lozzi kenny and santucci sensitivity to and analysis to ethanol and humidity of carbon nanotubes thin film prepared by pecvd sensors and actuators b chemical vol no pp mitzner sternhagen and galipeau development of a micromachined hazardous gas sensor array sensors and actuators b chemical vol no pp varun kumar ojha et al liu zhang zhang and cheng cross sensitivity reduction of gas sensors using genetic algorithm neural network in optical methods for industrial processes farquharson vol proceedings of spie donham exposure limits related to air quality and risk assessment iowa concentrated animal feeding operations air quality study weaver carbon monoxide poisoning new england journal of medicine vol no pp simonton human health effects from exposure to concentrations of hydrogen sulfide occupational health safety shilpa new insight into panic attacks carbon dioxide is the culprit journal of young investigators http fahey and hegglin twenty questions and answers about the ozone layer update scientific assessment of ozone depletion pp weigend b huberman and rumelhart predicting the future a connectionist approach international journal of neural systems vol no pp lowe and broomhead multivariable functional interpolation and adaptive networks complex system vol pp cortes and vapnik networks machine learning vol no pp olshen j stone et classification and regression trees wadsworth international group vol no quinlan programs for machine learning elsevier esposito malerba semeraro and tamma the effects of pruning methods on the predictive accuracy of induced decision trees applied stochastic models in business and industry vol no pp mohamed salleh and omar a comparative study of reduced error pruning method in decision tree algorithms in ieee international conference on control system computing and engineering iccsce ieee pp walker and duncan estimation of the probability of an event 
as a function of several independent variables biometrika vol no pp cox the regression analysis of binary sequences journal of the royal statistical society series b methodological pp landwehr hall and frank logistic model trees machine learning vol no pp kohavi the power of decision tables in machine learning springer pp frank and witten generating accurate rule sets without global optimization aha kibler and albert learning algorithms machine learning vol no pp cleary trigg et k an learner using an entropic distance measure in proceedings of the international conference on machine learning vol pp frank hall and pfahringer locally weighted naive bayes in proceedings of the nineteenth conference on uncertainty in artificial intelligence morgan kaufmann publishers pp atkeson moore and schaal locally weighted learning artificial intelligence review vol no pp polikar ensemble based systems in decision making ieee circuits and systems magazine vol no pp breiman bagging predictors machine learning vol no pp freund and schapire a generalization of learning and an application to boosting journal of computer and system sciences vol no pp identifying hazardousness of ho the random subspace method for constructing decision forests ieee transactions on pattern analysis and machine intelligence vol no pp rodriguez kuncheva and alonso rotation forest a new classifier ensemble method ieee transactions on pattern analysis and machine intelligence vol no pp caruana crew and ksikes ensemble selection from libraries of models in proceedings of the international conference on machine learning acm kuncheva combining pattern classifiers methods and algorithms john wiley sons weka data mining software in java accessed online available http matlab statistics and machine learning toolbox accessed online available http
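Referring back to the methodology of this study (binary labelling by safety limits, 10-fold cross-validation of the base classifiers, and pairwise Kolmogorov-Smirnov comparison of their test accuracies), the following is a minimal end-to-end sketch. The original experiments used Weka and MATLAB; the scikit-learn models below are stand-ins chosen only to mirror the classifier categories, the safety limits are illustrative placeholders rather than the published values, and the significance level alpha is an assumed setting.

import numpy as np
from itertools import combinations
from scipy.stats import ks_2samp
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Upper safety limits in ppm: placeholder values, not the limits quoted in the paper.
MAX_SAFE_PPM = {"NH3": 25.0, "CO": 50.0, "H2S": 10.0, "CH4": 1000.0, "CO2": 5000.0}

def label_sample(readings_ppm):
    # Binary labelling rule: 1 ("unsafe") if any sensor reading exceeds its
    # maximum safety limit, 0 ("safe") otherwise; humidity and temperature
    # are kept as input features but do not enter the rule.
    return int(any(readings_ppm[g] > lim for g, lim in MAX_SAFE_PPM.items()))

def cv_accuracies(X, y, n_splits=10, seed=0):
    # Per-fold cross-validated test accuracies for a few representative
    # base classifiers (stand-ins for the Weka models used in the study).
    models = {
        "MLP": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=seed),
        "SVM (RBF kernel)": SVC(kernel="rbf"),
        "Decision tree": DecisionTreeClassifier(random_state=seed),
        "k-NN": KNeighborsClassifier(n_neighbors=3),
    }
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    return {name: cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=cv)
            for name, clf in models.items()}

def pairwise_ks(accuracy_runs, alpha=0.05):
    # Pairwise two-sample Kolmogorov-Smirnov comparison of the per-fold accuracies.
    verdicts = {}
    for a, b in combinations(accuracy_runs, 2):
        stat, p = ks_2samp(accuracy_runs[a], accuracy_runs[b])
        verdicts[(a, b)] = "similar" if p > alpha else "different"
    return verdicts

With the sensor readings assembled into a feature matrix X (gas concentrations plus humidity and temperature) and labels y produced by label_sample, cv_accuracies gives the per-fold test accuracies and pairwise_ks reproduces, in outline, the statistical comparison reported in the result tables.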
9
the generic model of computation nachum dershowitz school of computer science tel aviv university tel aviv israel over the past two decades yuri gurevich and his colleagues have formulated axiomatic foundations for the notion of algorithm be it classical interactive or parallel and formalized them in the new generic framework of abstract state machines this approach has recently been extended to suggest a formalization of the notion of effective computation over arbitrary countable domains the central notions are summarized herein background abstract state machines asms invented by yuri gurevich constitute a most general model of computation one that can operate on any desired level of abstraction of data structures and native operations all ordinary models of computation are instances of this one generic paradigm here we give an overview of the foundational considerations underlying the model cobbled together primarily from programs of the sequential variety in this formalism are built from three components there are generalized assignments f sn t where f is any function symbol in the vocabulary of the program and the si and t are arbitrary terms in that vocabulary statements may be prefaced by a conditional test if c then p or if c then p else q where c is a propositional combination of equalities between terms program statements may be composed in parallel following the keyword do short for do in parallel an asm program describes a single transition step its statements are executed repeatedly as a unit until no assignments have their conditions enabled additional constructs beyond these are needed for interaction and parallelism which are not dealt with here as a simple example consider the program shown as algorithm describing a version of selection sort where f f n contain values to be sorted f being a unary function symbol initially n is the quantity of values to be sorted i is set to and j to the brackets indicate statements that are executed in parallel the program proceeds by repeatedly modifying the values of i and j as well as of locations in f referring to terms f i and f j when all conditions fail that is when j n and i n the values in f have been sorted the relation the program halts as there is nothing left to do declarations and initializations for program constants and variables are not shown this sorting program is not partial to any particular representation of the natural numbers which are being used to index whether an implementation uses natural language or decimal numbers for a video lecture of gurevich s on this subject see http kashefi krivine van raamsdonk eds dcm eptcs pp c dershowitz this work is licensed under the creative commons attribution license generic model of computation algorithm an program for sorting i i if j n then if i n then do j i f i f j if f i f j then do f j f i else do j j algorithm an program for bisection search if sgn f a b sgn f a then a a b if then do if sgn f a b sgn f b then b a b or binary strings is immaterial as long as addition behaves as expected and equality and disequality too furthermore the program will work regardless of the domain from which the values of f are drawn be they integers reals strings or what not so long as means are provided for evaluating the inequality relation another simple asm program is shown in algorithm this is a standard bisection search for the root of a function as described in algorithm the point is that this abstract formulation is as the author of wrote applicable to any continuous function over 
the ones that can not be programmed what is remarkable about asms is that this very simple model of computation suffices to precisely capture the behavior of the whole class of ordinary algorithms over any domain the reason is that by virtue of the abstract state machine asm representation theorem of theorem below any algorithm that satisfies three very natural sequential postulates can be emulated by an asm those postulates articulated in section formalize the following intuitions i an algorithm is a system ii given the algorithm state information determines future transitions and can be captured by a logical structure and iii state transitions are governed by the values of a finite and set of terms the significance of the sequential postulates lies in their comprehensiveness they formalize which features exactly characterize a classical algorithm in its most abstract and generic manifestation programs of all models of effective sequential computation satisfy the postulates as do idealized algorithms for computing with real numbers algorithm or for geometric constructions with compass and straightedge see for examples of the latter abstract state machines are a computational model that is not wedded to any particular data representation in the way say that turing machines manipulate strings using a small set of tape operations the representation theorem restated in section establishes that asms can express and precisely emulate any and all algorithms satisfying the premises captured by the postulates for any such algorithm there is an asm program that describes precisely the same function state after state as does the algorithm in this sense asms subsume all other computational models it may be informative to note the similarity between the form of an asm namely a single repeated loop of a set of generalized assignments nested within conditionals with the folk theorem to the effect dershowitz that any flowchart program can be converted to a single loop composed of conditionals sequencing and assignments with the aid of some auxiliary variables see parallel composition gives asms the ability to perform multiple actions sans extra variables and to capture all that transpires in a single step of any algorithm this versatility of asms is what makes them so ideal for both specification and prototyping indeed asms have been used to model all manner of programming applications systems and languages each on the precise intended level of abstraction see and the asm website http for numerous exemplars asms provide a complete means of describing algorithms whether or not they can be implemented effectively on account of their abstractness one can express generic algorithms like our bisection search for arbitrary continuous functions or like gaussian elimination even when the field over which it is applied is left unspecified asml an executable specification language based on the asm framework has been used in industry in particular for the behavioral specification of interfaces see for example church s thesis asserts that the recursive functions are the only numeric functions that can be effectively computed similarly turing s thesis stakes the claim that any function on strings that can be mechanically computed can be computed in particular by a turing machine more generally one additional natural hypothesis regarding the describability of initial states of algorithms as explained in section characterizes the effectiveness of any model of computation operating over any countable data domain theorem 
on account of the ability of asms to precisely capture single steps of any algorithm one can infer absolute bounds on the complexity of algorithms under arbitrary effective models of computation as will be seen theorem at the end of section sequential algorithms the sequential postulates of regarding algorithmic behavior are based on the following key observations a state should contain all the relevant information apart from the algorithm itself needed to determine the next steps for example the instantaneous description of a turing machine computation is just what is needed to pick up a machine s computation from where it has been left off see similarly the continuation of a lisp program contains all the state information needed to resume its computation structures suffice to model all salient features of states compare pp the values of programming variables in and of themselves are meaningless to an algorithm which is implementation independent rather it is relationships between values that matter to the algorithm it follows that an algorithm should work equally well in isomorphic worlds compare an algorithm can relations between values stored in a state via terms in its vocabulary and equalities and disequalities between their values algorithms are expressed by means of finite texts making reference to only finitely many terms and relations among them see for example the three postulates given below from modified slightly as in assert that a classical algorithm is a system operating over structures in a way that is invariant under isomorphisms an algorithm is a prescription for updating states that is for changing some of the interpretations given to symbols by states the essential idea is that there is a fixed finite set of terms generic model of computation that refer possibly indirectly to locations within a state and which suffice to determine how the state changes during any transition sequential time to begin with algorithms are deterministic systems postulate i sequential time an algorithm determines the following a nonempty s of states and a nonempty subset s of initial states a partial transition function s s terminal states s are those states x for which no transition x is defined having the transition depend only on the state means that states must store all the information needed to determine subsequent behavior prior history is unavailable to the algorithm unless stored in the current state are deterministic classical algorithms in fact never leave room for choices nor do they involve any sort of interaction with the environment to determine the next step to incorporate nondeterministic choice probabilistic choice or interaction with the environment one would need to modify the above notion of transition this postulate is meant to exclude formalisms such as in which the result of a the continuation of a depend on the limit of an infinite sequence of preceding finite or infinitesimal steps likewise processes in which states evolve continuously as in analog processes like the position of a bouncing ball rather than discretely are eschewed though it may appear at first glance that a recursive function does not fit under the rubric of a system in fact the definition of a traditional recursive function comes together with a computation rule for evaluating it as rogers writes we obtain the computation uniquely by working from the inside out and from left to right abstract state algorithm states are comprehensive they incorporate all the relevant data including any program 
counter that when coupled with the program completely determine the future of a computation states may be regarded as structures with finitely many functions relations and constants to simplify matters relations will be treated as functions and constants as nullary functions so each state consists of a domain base set universe carrier and interpretations for its symbols all relevant information about a state is given explicitly in the state by means of its interpretation of the symbols appearing in the vocabulary of the structure the specific details of the implementation of the data types used by the algorithm can not matter in this sense states are abstract this crucial consideration leads to the second postulate postulate ii abstract state the states s of an algorithm are structures over a finite vocabulary f such that the following hold if x is a state of the algorithm then any structure y that is isomorphic to x is also a state and y is initial or terminal if x is initial or terminal respectively transitions preserve the domain that is dom x dom x for every state x or class the distinction is irrelevant for our purposes dershowitz transitions respect isomorphisms so if x y is an isomorphism of states x y then also x y state structures are endowed with boolean truth values and standard boolean operations and vocabularies include symbols for these as a structure a state interprets each of the function symbols in its vocabulary for every symbol f in the vocabulary of a state x and values ak in its domain some domain value b is assigned to the location f ak for which we write f b in this way x assigns a value t x in dom x to ground terms vocabularies are finite since an algorithm must be describable in finite terms so can only refer explicitly to finitely many operations hence an algorithm can not for instance involve all of knuth s arrow operations etc instead one could employ a ternary operation x y x y this postulate is justified by the vast experience of mathematicians and scientists who have faithfully and transparently presented every kind of static mathematical or scientific reality as a logical structure in restricting structures to be we are limiting the syntax to be this precludes states with infinitary operations like the supremum of infinitely many objects which would not make sense from an algorithmic point of view this does not however limit the semantics of algorithms to notions the domain of states may have sequences or sets or other objects in which case the state would also need to provide operations for dealing with those objects closure under isomorphism ensures that the algorithm can operate on the chosen level of abstraction the states internal representation of data is invisible and immaterial to the program this means that the behavior of an algorithm in contradistinction with its implementation as a c can not for example depend on the memory address of some variable if an algorithm does depend on such matters then its full description must also include specifics of memory allocation it is possible to liberalize this postulate somewhat to allow the domain to grow or shrink or for the vocabulary to be infinite or extensible but such enhancements do not materially change the notion of algorithm an extension to structures with partial operations is given in see section effective transitions the actions taken by a transition are describable in terms of updates of the form f b meaning that b is the new interpretation to be given by the next state to the function symbol 
f for values to program such an update one can use an assignment f t such that x and t x b we view a state x as a collection of the graphs of its operations each point of which is a pair also denoted f b thus we can define the update set x as the changed points x x when x is a terminal state and x is undefined we indicate that by setting x the point is that encapsulates the relation of an algorithm by providing all the information necessary to update the interpretation given by the current state but to produce x for a particular state x the algorithm needs to evaluate some terms with the help of the information stored in x the next postulate will ensure that has a finite representation and its updates can be determined and performed by means of only a finite amount of work simply stated there is a fixed finite set of ground terms that determines the stepwise behavior of an algorithm postulate iii effective transitions for every algorithm there is a finite set t of ground critical terms over the state vocabulary such that states that agree on the values of the terms in t also share the same update sets that is x y for any two states x y such that t x t y for all t t in particular if one of x and y is terminal so is the other or bounded exploration generic model of computation the intuition is that an algorithm must base its actions on the values contained at locations in the current state unless all states undergo the same updates unconditionally an algorithm must explore one or more values at some accessible locations in the current state before determining how to proceed the only means that an algorithm has with which to reference locations is via terms since the values themselves are abstract entities if every referenced location has the same value in two states then the behavior of the algorithm must be the same for both of those states this its fixed finite set of critical programs of infinite size like an infinite table lookup or which are a careful analysis of the notion of algorithm in and an examination of the intent of the founders of the field of computability in demonstrate that the sequential postulates are in fact true of all ordinary sequential algorithms the only kind envisioned by the pioneers of the field in other words all classical algorithms satisfy postulates i ii and iii in this sense the traditional notion of algorithm is precisely captured by these axioms definition classical algorithm an object satisfying postulates i ii and iii shall be called a classical algorithm equivalent algorithms it makes sense to say that two algorithms have the same behavior or are behaviorally equivalent if they operate over the same states and have the same transition function two algorithms are syntactically equivalent if their states are the same up to renaming of symbols in their vocabularies and if transitions are the same after renaming for a discussion of algorithm equivalence see abstract state machines abstract state machines asms are an description language for the classical algorithms we have been characterizing programs the semantics of the asm statements assignment parallel composition and conditionals are as expected and are formalized below the program as such defines a single step which is repeated forever or until there is no next state for convenience we show only a simple form of asms bear in mind however that much richer languages for asms are given in and are used in practice programs are expressed in terms of some vocabulary by convention asm programs always include 
symbols for the boolean values true and false undef for a default undefined value standard boolean operations and equality the vocabulary of the sorting program for instance contains f f n i j in addition to the standard symbols suppose that its states have integers and the three standard values for their domain the nullary symbols and n are fixed programming constants and serve as bounds of the nullary symbols i and j are programming variables and are used as array indices all its states interpret the symbols as well as the standard symbols as usual unlike i j and f these are static their interpretation will never be changed by the program initial states have n i j some integer values for f f n plus undef for all other points dershowitz states x such that update set x j n i j n i i i j i j n f i f j f i f j f j f i j j j n f i f j j j table update sets for sorting program of this program always terminates successfully with j n i and with the first n elements of f in nondecreasing order there are no hidden variables in asms if some steps of an algorithm are intended to be executed in sequence say then the asm will need to keep explicit track of where in the sequence it is up to semantics unlike algorithms which are observed to either change the value of a location in the current state or not an asm might update a location in a trivial way giving it the same value it already has also an asm might designate two conflicting updates for the same location what is called a clash in which case the standard asm semantics are to cause the run to fail just as programs might abort an alternative semantics is to imagine a nondeterministic choice between the competing values both were considered in here we prefer to ignore both nondeterminism and implicit failure and tacitly presume that an asm never involves clashes albeit this is an undecidable property to take the various possibilities into account a proposed update set p x cf for an asm p may be defined in the following manner sn x f x sn x t x x pn x do x x if x c x if c then p else q x otherwise q p x if x c x if c then p otherwise here x c means of course that boolean condition c holds true in x when the condition c of a conditional statement does not evaluate to true the statement does not contribute any updates when x for asm p its execution halts with success in terminal state x since no confusion will arise we are dropping the subscript otherwise the updates are applied to x to yield the next state by replacing the values of all locations in x that are referred to in x so if the latter contains only trivial updates p will loop forever for terminal states x the update set x is to signify that there is no next state for x x is the set of updates in x the update sets for the sorting program algorithm are shown in table with the subscript in x omitted for example if state x is such that n i generic model of computation j f and f then per row x f f j for this x x x and the next state x x has i as before j f and f after one more step per row in which f is unchanged the algorithm reaches a terminal state x x with j n i then by row x and x the representation theorem abstract state machines clearly satisfy the three sequential postulates asms define a function they operate over abstract states and they depend critically on the values of a finite set of terms appearing in the program and on the unchanging values of parts of the state not modified by the program for example the critical terms for our sorting asm are all the terms appearing in it except 
for the sides of assignments which contribute their proper subterms instead these are j n j n i n f i f j i j and their subterms only the values of these affect the computation thus any asm describes a classical algorithm over structures with the same vocabulary similarity type the converse is of greater significance theorem representation theorem every classical algorithm in the sense of definition has a behaviorally equivalent asm with the exact same states and function the proof of this representation theorem constructs an asm that contains conditions involving equalities and disequalities between critical terms closure under isomorphisms is an essential ingredient for making it possible to express any algorithm in the language of terms a typical asm models partial functions like division or tangent by using the special value undef denoting that the argument is outside the function s domain of definition and arranging that most operations be strict so a term involving an undefined subterm is likewise undefined the state of such an asm would return true when asked to evaluate an expression undef and it can therefore be programmed to work properly despite the partiality of division in the analysis and representation theorem have been refined for algorithms employing truly partial operations operations that cause an algorithm to hang when an operation is attempted outside its domain of definition rather than return undef the point is that there is a behaviorally equivalent asm that never attempts to access locations in the state that are not also accessed by the given algorithm such partial operations are required in the next section effective algorithms the thesis thesis asserts that standard models capture effective computation specifically all effectively computable numeric partial functions are partial recursive all partial string functions can be computed by a turing machine we say that an algorithm computes a partial function f dk d if there are input states i with particular locations for input values such that running the algorithm results in the correct output values of f specifically the domain of each input state is there are k terms such that their values in input states cover all tuples in dk other than that input states all agree on the values of all other terms dershowitz for all input values the corresponding input state leads via a sequence of transitions to a terminal state in which the value of a designated term t in the vocabulary of the algorithm is f whenever the latter is defined and leads to an infinite computation whenever it is not to capture what it is that makes a sequential algorithm mechanically computable we need for input states to be finitely representable accordingly we insist that they harbor no information beyond the means to reach domain values plus anything that can be derived therefrom we say that function symbols c construct domain d in state x if x assigns each value in d to exactly one term over c so restricting x to c gives a free herbrand algebra for example the domain of the sorting algorithm consisting of integers and booleans can be constructed from true false undef and a successor function call it c that takes integers n to the predecessor of their negation and negative integers to their absolute value n postulate iii ensures that the transition function is describable by a finite text the text of asm for an algorithm to be effective its states must also be finitely describable definition effectiveness a state is effective if it includes 
constructors for its domain plus operations that are almost everywhere the same meaning that all but locations these can hold input values have the same default value such as undef a classical algorithm is effective if its initial states are moreover effective algorithms can be bootstrapped a state is effective also if its vocabulary can be enriched to c g so that c constructs its domain while every total or partial operation in g is computed by an effective algorithm over those constructors a model of computation that is a set of algorithms with shared domain s is effective if all its algorithms are via the same constructors this effectiveness postulate excludes algorithms with ineffective oracles such as the halting function having only free constructors at the foundation precludes the hiding of potentially uncomputable information by means of equalities between distinct representations of the same domain element this is the approach to effectiveness advocated in extended to include partial functions in states as in for each n our sorting algorithm is effective in this sense since addition of the natural numbers and comparisons of integers operations that reside in its initial states can be programmed from the constructors true false undef c in particular for natural numbers and turing machines for strings form effective models furthermore it is shown in that three prima facie different definitions of effectiveness over arbitrary domains as proposed in respectively comprise exactly the same functions strengthening the conviction that the essence of the underlying notion of computability has in fact been captured theorem thesis for every effective model there is a representation of its domain values as strings such that its algorithms are each simulated by some turing machine call an effective computational model maximal if adding any function to those that it computes results in a set of functions that can not be simulated by any effective model remarkably or perhaps not there is exactly one such model theorem effectiveness theorem the set of partial recursive functions and likewise the set of string functions is the unique maximal effective model up to isomorphism over any countable domain generic model of computation we have recently extended the proof of the thesis and demonstrated the validity of the widely believed extended thesis theorem extended thesis every effective algorithm can be polynomially simulated by a turing machine conclusion we have dealt herein with the classical type of algorithms that is to say with the meaning only bounded parallelism deterministic no interaction with the outside world case abstract state machines can faithfully emulate any algorithm in this class as we have seen in theorem furthermore we have characterized the distinction between effective algorithms and their more abstract siblings in theorem there are various declarative styles of programming for which the relation is implicit rather than explicit as it is for our notion of algorithm for such programs to be algorithms in the sense of definition they would have to be equipped with a specific execution mechanism like the one for recursion mentioned above for prolog for example the mechanism of unification and the mode of search would need to be specified the paradigm can be extended to handle more modern notions when desired an algorithm can make an explicit distinction between successful and failing terminal states by storing particular values in specific locations of the final state 
alternatively one may declare failure when there is a conflict between two or more enabled assignments see there is no difficulty in allowing for nondeterminism that is for a multivalued transition function if the semantics are such that a choice is made between clashing assignment statements then transitions are indeed nondeterministic see more general forms of nondeterminism can be obtained by adding a choice command of some sort to the language see nothing needs to be added to the syntax of asms to apply to cases for the environment provides input incrementally one need only imagine that the environment is allowed to modify the values of some specified set of locations in the state between machine steps see in the analysis of algorithms was extended to the case when an algorithm interacts with the outside environment during a step and execution waits until all queries of the environment have been responded to in all forms of interaction are handled in the analysis was extended to massively parallel algorithms distributed algorithms are handled in the fact that asms can emulate algorithms facilitates reasoning about the complexity of algorithms as for theorem above parallel asms have been used for studying the complexity of algorithms over unordered structures see quantum algorithms have been modeled by asms in current research includes an extension of the framework for hybrid systems combining discrete sequential steps and analog evolving over time behaviors dershowitz acknowledgements i thank yuri gurevich and nikolaj for their perspicacious suggestions the referees for their questions and evgenia falkovich for her help references mike barnett wolfram schulte the abcs of specification asml behavior and components informatica slovenia pp available at http theabcsofspecification viewed june andreas blass nachum dershowitz yuri gurevich when are two algorithms the same bulletin of symbolic logic pp available at http viewed mar andreas blass nachum dershowitz yuri gurevich exact exploration and hanging algorithms in proceedings of the eacsl annual conferences on computer science logic brno czech republic lecture notes in computer science springer berlin germany pp available at http pdf viewed may longer version at http viewed may andreas blass yuri gurevich ordinary interactive algorithms part acm transactions on computational logic pp available at http viewed may andreas blass yuri gurevich ordinary interactive algorithms part ii acm transactions on computational logic article available at http viewed may andreas blass yuri gurevich ordinary interactive algorithms part iii acm transactions on computational logic article available at http viewed may andreas blass yuri gurevich abstract state machines capture parallel algorithms correction and extension acm transactions on computation logic article available at http viewed andreas blass yuri gurevich dean rosenzweig benjamin rossman interactive algorithms part i axiomatization logical methods in computer science paper available at http viewed june andreas blass yuri gurevich dean rosenzweig benjamin rossman interactive algorithms part ii abstract state machines and the characterization theorem logical methods in computer science paper available at http viewed july andreas blass yuri gurevich saharon shelah on polynomial time computation over unordered structures journal of symbolic logic pp available at http viewed july udi boker nachum dershowitz the thesis over arbitrary domains in arnon avron nachum dershowitz alexander rabinovich 
editors pillars of computer science essays dedicated to boris boaz trakhtenbrot on the occasion of his birthday lecture notes in computer science generic model of computation springer pp available at http viewed udi boker nachum dershowitz three paths to effectiveness in andreas blass nachum dershowitz wolfgang reisig editors fields of logic and computation essays dedicated to yuri gurevich on the occasion of his birthday lecture notes in computer science springer berlin germany pp available at http viewed egon the origins and the development of the asm method for high level system design and analysis journal of universal computer science pp available at http viewed june egon dean rosenzweig a mathematical definition of full prolog science of computer programming pp available at ftp viewed july olivier bournez nachum dershowitz foundations of analog algorithms in proceedings of the third international workshop on physics and computation p c nile river egypt pp available at http viewed may olivier bournez nachum dershowitz evgenia falkovich towards an axiomatization of simple analog algorithms in manindra agrawal barry cooper angsheng li editors proceedings of the annual conference on theory and applications of models of computation tamc beijing china lecture notes in computer science springer verlag pp available at http available at http pdf viewed july nachum dershowitz evgenia falkovich a formalization and proof of the extended thesis in proceedings of the seventh international workshop on developments in computational models dcm july zurich switzerland electronic proceedings in theoretical computer science available at http viewed july nachum dershowitz yuri gurevich a natural axiomatization of computability and proof of church s thesis bulletin of symbolic logic pp available at http viewed apr robin gandy church s thesis and principles for mechanisms in the kleene symposium studies in logic and the foundations of mathematics pp andreas glausch wolfgang reisig an of a class of distributed algorithms in abrial uwe editors rigorous methods for software construction and analysis lecture notes in computer science springer berlin pp available at http viewed mark gold limiting recursion j symbolic logic pp saul gorn algorithms bisection routine communications of the acm erich antje nowack quantum computing and abstract state machines in proceedings of the international conference on abstract state machines advances in theory and practice asm taormina italy berlin pp available at http viewed july yuri gurevich evolving algebras lipari guide in egon editor specification and validation methods oxford university press pp available at http viewed apr dershowitz yuri gurevich sequential abstract state machines capture sequential algorithms acm transactions on computational logic pp available at http viewed apr yuri gurevich benjamin rossman wolfram schulte semantic essence of asml theoretical computer science pp available at http viewed june yuri gurevich wolfram schulte margus veanes toward industrial strength abstract state machines technical report microsoft research available at http viewed yuri gurevich tatiana yavorskaya on bounded exploration and bounded nondeterminism technical report microsoft research available at http viewed apr david harel on folk theorems communications of the acm pp stephen kleene mathematical logic wiley new york stephen kleene reflections on church s thesis notre dame journal of formal logic pp emil post absolutely unsolvable problems and relatively undecidable 
propositions account of an anticipation in davis editor solvability provability definability the collected works of emil post boston ma pp unpublished paper hilary putnam trial and error predicates and the solution to a problem of mostowski j symbolic logic pp wolfgang reisig on gurevich s theorem on sequential algorithms acta informatica pp available at http viewed wolfgang reisig the computable kernel of abstract state machines theoretical computer science pp draft available at http viewed hartley rogers theory of recursive functions and effective computability new york marc spielmann abstract state machines verification problems and complexity thesis rwth aachen aachen germany available at http viewed july alan turing on computable numbers with an application to the entscheidungsproblem proceedings of the london mathematical society pp corrections in vol pp reprinted in davis ed the undecidable raven press hewlett ny available at http
6
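The update-set semantics described above lends itself to a compact executable illustration. The sketch below models a state as a Python dict, an update set as a set of (location, value) pairs applied in parallel, and a step function with guarded parallel assignments in the style of the sorting ASM. Because the original program text and its update-set table are only partially recoverable from this copy, the guards and the selection-sort-like program are a plausible reconstruction rather than the author's exact ASM, and the names sorting_step, apply_updates and run are ours.

```python
# Minimal sketch of ASM update-set semantics (an assumed reconstruction, not the
# paper's exact program).  A state maps locations to values; a step proposes a
# set of updates which are applied simultaneously; None marks a terminal state
# with no next state.

def sorting_step(x):
    """Proposed update set Delta(x) for a selection-sort-like ASM."""
    i, j, n, f = x["i"], x["j"], x["n"], x["f"]
    if j <= n and f[i] > f[j]:
        # swap the out-of-order pair and advance the inner index, in parallel
        return {(("f", i), f[j]), (("f", j), f[i]), (("j", ()), j + 1)}
    if j <= n and f[i] <= f[j]:
        return {(("j", ()), j + 1)}
    if j > n and i + 1 < n:
        # outer index moves on; the inner scan restarts just past it
        return {(("i", ()), i + 1), (("j", ()), i + 2)}
    return None  # terminal: no next state

def apply_updates(x, delta):
    """Next state obtained by applying every update of delta to state x at once."""
    y = {"i": x["i"], "j": x["j"], "n": x["n"], "f": dict(x["f"])}
    for (symbol, arg), value in delta:
        if symbol == "f":
            y["f"][arg] = value
        else:
            y[symbol] = value
    return y

def run(x):
    while (delta := sorting_step(x)) is not None:
        x = apply_updates(x, delta)
    return x

final = run({"n": 3, "i": 1, "j": 2, "f": {1: 3, 2: 1, 3: 2}})
print(final["i"], final["j"], final["f"])   # 2 4 {1: 1, 2: 2, 3: 3}
```

Keeping updates as a set makes the parallel, order-independent character of an ASM step explicit: clashing assignments would show up as two pairs with the same location, which is exactly the situation the clash discussion above is concerned with.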
commonsense l ocated n ear relation extraction nov frank bill and kenny zhu frankxu yuchenlin kzhu department of computer science and engineering shanghai jiao tong university shanghai china introduction artificial intelligent systems can benefit from incorporating commonsense knowledge as background such as ice is cold h as p roperty chewing is a of eating h as s ubevent chair and table are typically found near each other l ocated n ear etc this kind of commonsense facts have been utilized in many downstream tasks such as textual entailment and visual recognition tasks the commonsense knowledge is often represented as relation triples in commonsense knowledge bases such as conceptnet by mit one of the largest commonsense knowledge graph available today however this kind of commonsense knowledge bases are usually manually curated or by community efforts and thus do not scale well this paper aims at automatically extracting the commonsense l ocated n ear relation between physical objects from textual corpora which is defined as two objects typically found near each other in real life we focus on l ocated n ear relation for these reasons i l ocated n ear facts are helpful prior knowledge for object detection in complex image scenes figure illustrates two motivating examples ii such commonsense knowledge can potentially benefit general reasoning in reading comprehension question answering as well as many other ai tasks iii existing knowledge bases have very few facts for this relation conceptnet has only triples of l ocated n ear figure l ocated n ear relation facts assist the detection of vague objects in a dimly lit room with settings shown in the left if a bright laptop is present on a table one may guess that a lamp a photo frame or books maybe nearby similarly in the right if a set of knife fork and plate is on the table one may believe there could be a glass beside based on the commonsense even though these objects are hardly visible due to low light we propose two novel tasks in extracting l ocated n ear relation from textual corpora one is a binary relation classification problem which judges whether or not a sentence is describing two objects physically close by the other task is to produce a ranked list of l ocated n ear facts with the given classified results of large number of sentences we believe both two tasks can help the community further automatically complete and populate existing commonsense knowledge bases the first two authors contribute equally conference on neural information processing systems nips long beach ca usa additionally we also create two benchmark datasets for evaluating l ocated n ear relation extraction systems on the two tasks one is sentences each describing a scene of two physical objects and with a label indicating if the two objects are in the scene the other consists of pairs of objects with scores indicating confidences that a certain pair of objects are commonly located near in the real life we propose several methods to solve the tasks including and neural architecture the proposed neural architecture compares favorably with the current method for relation classification problem from our relatively smaller proposed datasets we extract in total new l ocated n ear triples that are not in conceptnet l ocated n ear relation classification given a sentence s mentioning a pair of physical objects ei ej we call s ei ej an instance in this section we aim to determine whether ei and ej are located near each other in a physical scene described in the 
sentence for example suppose ei is dog ej is cat and s the king puts his dog and cat on the as it is true that the two objects are located near in this sentence a successful classification model is expected to label this instance as true while if my dog is older than her then the answer to the instance ei ej is false for is just talking about a general comparison in the following subsections we present two different kinds of baseline methods for this binary classification task methods and neural architectures methods our first baseline is an svm classifier based on following features we claim that such semantic and syntactic features are widely utilized among existing relation classification models note that we put special focus on adverbs and prepositions based on the assumption that these lexical units describing directions and positions in physical world will help identify l ocated n ear relations proposed features bag of words bw the set of words that ever appeared in the sentence bag of path words bpw the set of words that appeared on the shortest dependency path between objects ei and ej in the dependency tree of the sentence s plus the words in the two subtrees rooted at ei and ej in the parse tree bag of adverbs and prepositions bap the existence of adverbs and prepositions in the sentence as binary features global features gf the length of the sentence the number of nouns verbs adverbs adjectives determiners prepositions and punctuations in the whole sentence shortest dependency path features sdp from the dependency parse tree of the sentence and the shortest path between the two objects ei and ej semantic similarity features ss the cosine similarity between the glove word embeddings of the two object words obtaining such features for every instances we then feed processed data into a svm classifier we evaluate linear and rbf kernels with different parameter settings and the rbf kernel with c performs the best overall neural architectures long short term memory based recurrent neural architectures lstms are widely used in relation classification we observe that the existence of l ocated n ear relation in an instance s depends on two major information sources one is from the semantic and syntactical features of sentence s and the other is from the object pair by this intuition we design our model with two parts shown in figure the left part is for encoding the syntactical and semantic information of the sentence s while the right part is encoding the semantic similarity between the word embeddings of and output confidence lstm dense layer token vector representation position normalized sequence original sentence dt lead s lead dt into pr jj the king token word vectors of and position led the dog into his nice garden dog garden figure the proposed model sentence normalization using the original word sequence as of a sentence s as input has two problems i the irrelevant words in the sentence can take noise into model ii the large vocabulary of original words induce too many parameters which may cause for example given two sentences the king led the dog into his nice and a criminal led the dog into a poor the object pair is dog garden in both sentences the two words lead and into are essential for determining whether the object pair is located near but they are not given more bias than other words also the semantic differences between irrelevant words such as king and criminal beautiful and poor are not useful to the relation between the dog and garden and thus tends to act as noise 
level objects lemma dependency role pos tag examples open lead into open s open o into o dt pr cc jj table examples of four types of tokens during sentence normalization s represents the subject of given verb or preposition and o represents the object considering above problems we propose utilizing pos tags instead to capture more syntactical information and reduce the vocabulary size however solely doing this loses too much semantic dependency between the words thus we propose a normalized sentence representation method merging the three most important and relevant kinds of information about each instance lemma pos tags and dependency role we first replace the two nouns in the object pair as and keep the lemmatized form of the original words for all the verbs adverbs and prepositions which are highly relevant to describing physical scenes then we replace the subjects and direct objects of the verbs and prepositions nsubj dobj for verbs and case for prepositions in dependency parse tree with special tokens indicating their dependency roles for the remaining words we simply use their pos tags to replace the originals the four kinds of tokens are illustrated in table table is a real example of our normalized sentence representation where the object pair of interest is dog garden the dt king open s opened open the dt door open o and cc led lead the dt dog into into his pr table sentence normalization example we utilize stanford corenlp tool https nice jj garden model training as shown in figure the bottom of the figure shows the original sentence which is transformed to normalized sequence described above apart from the normalized tokens of the original sequence to capture more structural information we also encode the distance from each token to and such word position embeddings features are proposed by with the intuition that information needed to determine the relation between two target nouns normally comes from words which are close to the target nouns then we leverage lstm to encode the whole sequence of the tokens of normalized representation plus position embedding in the meantime two pretrained glove word embeddings of the original two physical object words are fed into a hidden dense layer finally we concatenate both outputs and then use sigmoid activation function to obtain the final prediction we choose to use the standard binary as our loss function and rmsprop is used as optimizer following we add dropout in lstm as well as embedding layer and utilize batch normalization for overfitting problem due to relatively small dataset l ocated n ear relation extraction figure shows the overall workflow of our automatic framework to mine locatednear relations from raw text we first construct a vocabulary of physical objects and generate all candidate instances for each sentence in the corpus if a pair of physical objects ei and ej appear as nouns in a sentence s then we apply our l ocated n ear relation classifier on this instance the relation classifier yields a probabilistic score s indicating the confidence of the existence of l ocated n ear relation finally all scores of s ei ej instances from the corpus are grouped by the object pairs and aggregated where each object pair is associated with a final score such mined physical pairs with scores can easily be integrated into existing commonsense knowledge base more specifically for each object pair ei ej we find all the m sentences in our corpus mentioning both objects we classify the m instances with the relation classifier and get 
confidences for each instance feed them into a function f to obtain the final score of the object pair there are five variants of the scoring functions m m x x m conf sk ei ej conf sk ei ej m m x m x conf sk ei ej m conf sk ei ej object pairs object classifier corpus classification confidence locatednear relation scores figure computing the l ocated n ear scores of object pairs datasets our proposed vocabulary of physical objects is constructed by the intersection of all entities that belong to physical object class in wikidata and all conceptnet concepts we then manually filtered out some words that have the meaning of an abstract concept which results in physical objects in total afterwards we utilize a cleaned subset of the project gutenberg corpus which contains english books written by authors an assumption here is that sentences in fictions are more acc p r random majority svm svm svm svm acc p r svm svm drnn svm table performance of baselines on classification task with ablation means without certain feature likely to describe real life scenes we sample and investigate the density of l ocated n ear relations in gutenberg with other widely used corpora namely wikipedia used by mintz et al and new york times corpus created by riedel et al and used by lin et al hoffmann et al surdeanu et al in the english wikipedia dump out of all sentences which mentions at least two physical objects turn out to be positive in the new york times corpus the percentage of positive sentences is only in contrast that percentage in the gutenberg corpus is much higher than the other two corpora making it a good choice for l ocated n ear relation extraction from this corpus we identify pairs that in more than sentences among these pairs we randomly select object pairs and sentences with respect to each pair for annotators to label their commonsense l ocated n ear each instance is labeled by at least three annotators who are college students and proficient with english the final truth label of a sentence is decided by a majority vote from the four annotators the cohen s kappa among the three annotators is which suggests substantial agreement we randomly choose instances as the training set and as the test set for evaluating the first relation classification task for the second task we further ask the annotators to label whether each pair of objects are likely to locate near each other in the real world majority votes determine the final truth labels the agreement here is both datasets are made publicly evaluation l ocated n ear relation classification we evaluate the proposed methods against the general domain relation classification model drnn the results are shown in table for svm we do feature ablation on each of the feature types section for model we experiment on variants of input sequence of original sentence uses the original words as the input tokens while uses just the pos tag sequence as the input tokens uses the tokens of sequence after sentence normalization from the results we find that the svm model without the global features performs best which indicates that features benefit more in shortest dependency paths than on the whole sentence we find that drnn performs best on precision but not significantly higher than the experiment also shows that enjoys the highest recall score in terms of the overall performance is the best one one possible reason is that our proposed the normalization representation reduces input sequences token vocabulary size while preserving important syntactical and 
semantic information while also reduces the vocabulary size it loses too much information another reason is that l ocated n ear relation are described in sentence mostly with the decorating them which are the descendants of object word in the dependency tree other than words merely along the shortest dependency path thus drnn can not capture the information from the words belonging to the descendants of the two object words in the tree while this https besides we added two naive baselines random baseline classifies the instances into two classes with equal probability majority baseline considers all the instances to be positive f map p p p p table ranking performances of the scoring methods information is captured by for the rest of the experiments we will use as the classifier of our choice l ocated n ear relation extraction once we have classified the sentences using we can extract l ocated n ear relation using the four scoring functions in section we first present the quantitative results we use each of the scoring functions to rank the commonsense l ocated n ear object pairs described in section table shows the ranking results using mean average precision map and precision at k as metric accumulative scores and generally do better door room ship sea fire wood fire smoke book table boy girl house garden house fire door hall fruit tree cup tea arm leg horse saddle door street table chair table top object pairs returned by best performing scoring function qualitatively we show object pairs with some of the highest scores in table setting a threshold of for which is the minimum score for all true object pairs in the l ocated n ear object pairs data set pairs we obtain a total of l ocated n ear relations with a precision of by human inspection related work classifying relations between entities in a certain sentence plays a key role in nlp applications and thus has been a hot research topic recently methods and neural network techniques are most common xu et al introduce lstm model to classify relations incooperating several different kinds of information of a sentence improved by xu et al which performed best on task and is one of our baseline methods the most related work to ours is the extraction of visual commonsense knowledge by yatskar et al this work learns the textual representation of seven types of visual relations using textual caption for the image in dataset another important related work is from li et al which enriches several popular relations in conceptnet with little textual information from real large corpora however l ocated n ear relation was not studied in this work while this relation is extremely scarce in conceptnet and has its own distinctiveness conclusion we presented a novel study on enriching l ocated n ear relationship from textual corpora based on our two benchmark datasets we proposed several methods to solve the relation classification problem we showed that existing methods do not work as well on this task and discovered that model does not have significant edge over simpler model whereas our sentence normalization turns out to be useful future directions include better utilizing distant supervision incorporating knowledge graph embedding techniques applying the l ocated n ear knowledge into downstream applications of computer vision and natural language processing references bowman angeli potts and manning a large annotated corpus for learning natural language inference arxiv preprint bunescu and mooney a shortest path dependency kernel for relation 
extraction in cooijmans ballas laurent and courville recurrent batch normalization arxiv preprint dagan dolan magnini and roth recognizing textual entailment rational evaluation and approaches natural language engineering ebrahimi and dou chain based rnn for relation classification in pages hendrickx kim kozareva nakov pennacchiotti romano and szpakowicz task classification of semantic relations between pairs of nominals in proceedings of the international workshop on semantic evaluation semeval acl uppsala university uppsala sweden july pages hinton srivastava and swersky neural networks for machine learning lecture overview of gradient descent lecture coursera hochreiter and schmidhuber long memory neural computation hoffmann zhang ling zettlemoyer and weld weak supervision for information extraction of overlapping relations in proceedings of the annual meeting of the association for computational linguistics human language technologiesvolume pages association for computational linguistics ioffe and szegedy batch normalization accelerating deep network training by reducing internal covariate shift arxiv preprint lahiri complexity of word collocation networks a preliminary structural analysis in proceedings of the student research workshop at eacl pages april li taheri tu and gimpel commonsense knowledge base completion in proceedings of the annual meeting of the association for computational linguistics acl berlin germany august association for computational linguistics lin maire belongie hays perona ramanan and zitnick microsoft coco common objects in context in european conference on computer vision pages springer lin shen liu luan and sun neural relation extraction with selective attention over instances in acl mintz bills snow and jurafsky distant supervision for relation extraction without labeled data in proceedings of the joint conference of the annual meeting of the acl and the international joint conference on natural language processing of the afnlp volume pages association for computational linguistics pennington socher and manning glove global vectors for word representation in empirical methods in natural language processing emnlp pages url http ren wu he qu voss ji abdelzaher and han cotype joint extraction of typed entities and relations with knowledge bases in www riedel yao and mccallum modeling relations and their mentions without labeled text machine learning and knowledge discovery in databases pages socher pennington huang ng and manning recursive autoencoders for predicting sentiment distributions in proceedings of the conference on empirical methods in natural language processing pages association for computational linguistics speer and havasi representing general relational knowledge in conceptnet in lrec pages surdeanu tibshirani nallapati and manning learning for relation extraction in proceedings of the joint conference on empirical methods in natural language processing and computational natural language learning pages association for computational linguistics xu mou li chen peng and jin classifying relations via long short term memory networks along shortest dependency paths in emnlp pages xu jia mou li chen lu and jin improved relation classification by deep recurrent neural networks with data augmentation in coling xu jia mou li chen lu and jin improved relation classification by deep recurrent neural networks with data augmentation arxiv preprint yatskar ordonez and farhadi stating the obvious extracting visual common sense knowledge in proceedings of 
pages zaremba sutskever and vinyals recurrent neural network regularization arxiv preprint zeng liu lai zhou zhao et al relation classification via convolutional deep neural network in coling pages zhou su zhang and zhang exploring various knowledge in relation extraction in acl zhu fathi and reasoning about object affordances in a knowledge base representation in european conference on computer vision pages springer
2
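For the sentence-normalization step described in the extraction paper above (the object pair replaced by E1/E2, lemmas kept for verbs, adverbs and prepositions, their subjects and objects replaced by role tokens, and POS tags elsewhere), a small sketch follows. It uses spaCy instead of the Stanford CoreNLP pipeline the paper uses, so dependency labels follow spaCy's scheme (e.g. pobj rather than UD case), the model en_core_web_sm is assumed to be installed, and the function name normalize is ours.

```python
# Simplified sketch of the normalized sentence representation, using spaCy as a
# stand-in parser (the paper uses Stanford CoreNLP); label names differ accordingly.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def normalize(sentence, e1, e2):
    """Replace the object pair with E1/E2, keep lemmas of verbs/adverbs/prepositions,
    mark their subjects/objects with role tokens, and fall back to POS tags."""
    out = []
    for tok in nlp(sentence):
        low = tok.text.lower()
        if low == e1:
            out.append("E1")
        elif low == e2:
            out.append("E2")
        elif tok.pos_ in {"VERB", "ADV", "ADP"}:
            out.append(tok.lemma_)                 # keep the relevant lexical content
        elif tok.dep_ in {"nsubj", "nsubjpass"}:
            out.append(tok.head.lemma_ + "_s")     # subject of its governing verb
        elif tok.dep_ in {"dobj", "pobj"}:
            out.append(tok.head.lemma_ + "_o")     # object of a verb or preposition
        else:
            out.append(tok.tag_)                   # plain POS tag otherwise
    return out

print(normalize("The king opened the door and led the dog into his nice garden.",
                "dog", "garden"))
# expected to resemble: ['DT', 'open_s', 'open', 'DT', 'open_o', 'CC', 'lead',
#                        'DT', 'E1', 'into', 'PRP$', 'JJ', 'E2', '.']
```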
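The extraction stage of that same pipeline then groups the per-sentence classifier confidences by object pair and aggregates them into a single relation score before ranking. The paper's five scoring variants are not recoverable from this copy, so the aggregators below (max, mean, sum, and a thresholded count) are illustrative stand-ins; aggregate, count_above and the 0.5 cutoff are our own names and choices.

```python
# Sketch of confidence aggregation per object pair; the concrete scoring
# functions here are illustrative, not the paper's exact five variants.
from collections import defaultdict
from statistics import mean

def aggregate(instances, score_fn):
    """instances: iterable of ((e_i, e_j), confidence) pairs from the relation
    classifier; returns object pairs ranked by the aggregated score."""
    by_pair = defaultdict(list)
    for pair, conf in instances:
        by_pair[pair].append(conf)
    return sorted(((pair, score_fn(confs)) for pair, confs in by_pair.items()),
                  key=lambda item: item[1], reverse=True)

def count_above(confs, cutoff=0.5):
    """Number of supporting sentences whose confidence exceeds a cutoff."""
    return sum(c > cutoff for c in confs)

instances = [(("cup", "tea"), 0.9), (("cup", "tea"), 0.7), (("dog", "garden"), 0.4)]
for score_fn in (max, mean, sum, count_above):
    print(score_fn.__name__,
          [(pair, round(score, 2)) for pair, score in aggregate(instances, score_fn)])
# mean, for example: [(('cup', 'tea'), 0.8), (('dog', 'garden'), 0.4)]
```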
garch process driven by process mar mohammadi march abstract in this paper we study the simple driven generalized autoregressive conditionally heteroscedastic process the statistical properties of this process are characterized this process has the potential to approximate any driven cogarch processes we show that the state representation of such process can be described by a random recurrence equation with periodic random coefficients the almost sure absolute convergence of the state process is proved the periodically stationary solution of the state process is shown which cause the volatility to be periodically stationary under some suitable conditions also it is shown that the increments with constant length of such process is itself a periodically correlated pc process finally we apply some test to investigate the pc behavior of the increments with constant length of the simulated samples of proposed process keywords garch process process periodically correlated periodically stationary introduction many financial data and indices have heteroscedastic structure examples of this kind are stocks returns network traffic and natural data see popular model for these data are autoregressive conditionally heteroscedastic arch model proposed by engle and generalized arch garch bollerslev the garch type processes have become the most popular tools to model heteroscedasticity in discrete time faculty of mathematics and computer science amirkabir university of technology hafez avenue tehran iran mohammadi and rezakhah rezakhah department of mathematics and computer science allameh tabatabai university tehran iran modarresi in practice for various reasons such as data many time series are irregularly spaced and this has created a demand for models for the first time kluppelberg et al introduced a version of the garch cogarch process which preserves the essential features of the garch processes they replaced the noise of the garch process with the increments of some process the volatility of this process satisfies a stochastic differential equation they proved the stationarity property and also second order properties under some regularity conditions on the corresponding process brockwell et al generalized the driven cogarch process to the driven cogarch p q process for q p when its volatility is a arma carma process they showed that the state representation of the volatility can be expressed as a stochastic recurrence equation with random coefficients periodic behavior is common in many time series such as power market prices car accident claims for an insurance company and sales with seasonal interest the term periodically correlated pc was introduced by gladyshev but the same property was introduced by bennett who called them cyclostationary properties of pc processes are studies by hurd and miamee bibi and lescheb studied the class of bilinear processes with periodic coefficients of periodic arma and periodic garch models processes introduced by have stationary and independent increments and right continuous paths with left limits such processes have potential to be applied to financial data following stochastic volatility structure a generalization of process is process that has periodically stationary increments studied by maejima and sato we considered this process as the underlying process in carma and cogarch processes that can be applied when there is evident that the underlying process has pc increments the observations of such processes have significant dependency to the ones of previous 
periods so process are more prominent than processes in such cases in this paper we introduce a cogarch process driven by some simple process which we call process the simple process is defined as a compound poisson process with periodic intensity with period this process enables us to provide the statistical properties of the process moreover we find a random recurrence equation with periodic random coefficients for the state representation of such process by some regularity condition we show the absolute convergence of the state equation we also show that the volatility of the process is strictly periodically stationary the increments of the process with constant length h where is some integer is a pc process with period such process has the potential to provide an approximation for every driven cogarch process finally we investigate the theoretical results concerning pc structure of the increment process by simulation we show that the increments of the process with length h is pc with some period and the support of the squared coherence statistics consists of lines parallel to the main diagonal and having spacing of this paper is organized as follows in section we introduce the simple driven cogarch processes for this we present the simple process and obtain the characteristic function of it section is devoted to some sufficient conditions which make the volatility process strictly periodically stationary we obtain the mean covariance function of the state process and volatility process in section we also investigate second order properties of the squared increments of the cogarch process in this section in section we illustrate the results with simulations all proofs are contained in section simple driven cogarch processes in this section we study the preliminaries such as the additive processes and their characteristic functions and process in subsection we also describe the structure of simple process and characteristics it in subsection then we introduce the simple driven cogarch process in subsection preliminaries let f ft p be a filtered probability space where ft is the smallest rightcontinuous filtration such that contains all the sets of a process xt defined on the probability space f ft p is called an additive process if it is stochastically continuous it has independent increments and its sample paths are and have left limits in t further if xt has stationary increments it is a process the characteristic function of the additive process xt has a following representation theorems theorem let xt be an additive process on rd then xt has infinitely divisible distribution for t the law of xt is uniquely determined by its spot characteristic triplet e ei w xt w w i w w w z w rd ei w x i w x i dx rd where is inner product and euclidean vector norm the spot measure satisfies the integrability condition rd min dx for t remark by the spot characteristic triplet t can be defined by z t ds z t ds t b ds b rd where rd is on the rd the triplet t is called the local characteristic triplet of xt which satisfy the following conditions t rd is a deterministic function with finite variation t r is a symmetric continuous and matrix valued function which rt verifies dt t is a family of measures which verifies z tz min dx dt rd as an extension of process we present the definition of processes definition a subclass of additive processes xt is called process with period if for any s t d xt xs d where denotes the equality in distributions structure of simple process for describing the structure of 
the simple process we define the general structure of the intensities function of the poisson process with periodically stationary increments we also characterize this pure jump process by representation the characteristic function and introduce the corresponding measure definition poisson process with periodically stationary increment a process n t is a poisson process with periodically stationary increment where e n t t z t t u du and the intensity is a periodic function with some period so t t for t k definition simple compound poisson process let be a partition of positive real line also assume that aj pl tj j n and for some integer l n and let n t be a poisson process which has periodically stationary increments with period and intensity function t defined by then the simple compound poisson process st is defined as s t dt n t x zn p s where zn znj i is the arrival time of nth jump zn dj a and r j zn are independent and have distribution fj j l such that r z fj dz for j also dt t is a deterministic drift function with period say dt and one can easily verify that st has independent increment now we find characteristic function of the simple compound poisson process st by the following lemma lemma let n t be a poisson process with periodically stationary increment and mean t defined by then the process st defined by has the following characteristic function for t e eiwst w z w eiwz iwz i dz r where dt l z xx z x z fr dz z fr dz z t fj dz and dz l xx fr dz x fr dz t fj dz where m and t aj for some j proof see appendix remark by remark lemma and the spot characteristic triplet of process st t have the local characteristic triplet t which has the following form l z xx dds s i s fr dz z x s i s fr dz z s i t s fj dz and dz l xx s i s fr dz x s i s fr dz s i t s fj dz it follows from definition and remark that the family t of measures verify z tz dz ds r this implies that st is so it has decomposition and has quadratic variation process corollary by lemma the stochastic process st defined by is a process with period proof see appendix structure of simple driven cogarch process let st be a simple process with period defined by process gt with parameters r r and is a simple driven cogarch p q process p q q p defined by dgt vt dst or equivalently z tp gt vu dsu t in which the volatility process vt is defined by vt t where the state process yt is the unique solution of the stochastic differential equation dyt dt e d s s t t d denotes differentiation with respect to the initial value is and independent of the driving process st and a e periodic stationarity conditions in this section we provide some conditions to prove that the volatility process vt defined by is strictly periodically stationary with period as a result of main theorem we prove that the increments with constant length of process gt is itself a periodically correlated pc process which is the mian aim of this paper we also give a sufficient an necessary condition by which we can determine the volatility is in the following theorem in b a lr norm of the q q c is defined as kckr kcckr kckr sup theorem a let yt be the state process of the p q process with parameters b a and defined by suppose that st be a simple process defined by then for all s t yt js t ys ks t where js t ks t is a family of random q q js t and random vector ks t in q r in addition are independent and identically distributed b let i q be the eigenvalues of invertible matrix b which have strictly negative real parts also suppose that exists one r such that z log p z z 
r t t r where p is a matrix in which p bp is diagonal and q and is measure defined by then converges in distribution to a finite random vector u t for fixed t as m goes to infinity the distribution of the vector u t is the unique solution of the random equation d where u t u t jt u t kt is independent of jt kt d c let the conditions of b hold and u then yt and vt are strictly periodically stationary with period in the other hands for any sn and borel sets en of rd and borel sets jn of r and k n p ysn en p ysn en and p vsn jn p vsn jn proof see appendix in the following remark we describe the of the lyapunov exponent which leads to the absolutely convergence of the state process yt in theorem remark a the proof of theorem will be based on the use of the general theory of multivariate random recurrence equations as discussed by bougerol and picard brandt and vervaat in the one dimensional case the state vector yt defined by satisfies multivariate random recurrence equation b the condition which provides the stability of the model based on the existence of a vector norm such that jt and kt for all t satisfy the conditions e log e where log x log max x e is equivalent to the assertion that r the lyapunov exponent of the is strictly negative almost surely lim sup k c the conditions of theorem imply with the natural matrix norm r ap for some matrix a which corresponds to the following the natural vector norm r where p is a matrix in which p ap is diagonal c cq corollary if vt is a strictly periodically stationary process with period then increments with constant length of the process gt make a pc process in the other words for any t and h p and k n p p e gt e p p p p cov gt cov p where gt r t vs dss proof see appendix theorem let yt be the state process of the p q process gt with parameters b a and suppose that is a real constant and the following two conditions hold ebt e ebt then with probability one v t conversely if either fails or holds with and fails then there exists a simple process st and such that p the proof of the volatility process vt is similar to the proof of theorem in for process characterization of the state process the aim of this section is to study expected value and covariance function of the state process yt t and volatility process vt t first we prove that by some sufficient conditions the expected value and covariance yt exist then by presenting the first and second moments of the random vector u we find the expected value and covariance function of the state process furthermore a closed form for square increments of the cogarch process is characterized lemma let the assumptios of theorem hold if e for c then a if e then e yt and e u b if e then cov yt and cov u where st t is the simple process proof see appendix remark by theorem b a we find that e u is the solution of the following random equation i e e u e and b e u u is the solution of the following equation e vec e u u e e e u vec e where is the kronecker product of two matrices and for a matrix c vec c is the column vector in cq which is constructed by stacking the columns of matrix a in a vector the following lemmas establish the the mean and covariance function of the state process lemma suppose that yt t be the state process and the conditions of theorem and lemma hold then for t h there exists m n n and such that t m t h n t and t h and e yt e e u e h cov yt e e e e e e u e e e e u e e u e e e e e u e i e e e proof see appendix corollary let vt t be the volatility process then for t h expected value and 
covariance function of vt have the following forms e vt e yt cov vt cov yt a proof see appendix in financial time series the returns have negligible correlation while the squared returns are significantly correlated therefore we investigate the behavior of the properties of the increments of the cogarch process we assume that volatility process is strictly periodically stationary and p now we present the first and second orders of the increment process gt that in defined in corollary proposition let g be a zero mean simple driven cogarch process then for t and h p a p e gt p p gt cov b there exist m where t i l and t p a i l then z z p e vs s ds e vs s ds e e gt e zi t t x e z e vs s ds moreover there exist n n m and where t h j l and t h p a j j l then cov p p gt z e zj a p s e cov gt ds e zj z p s e cov gt ds t x z e a p s e cov gt ds proof see appendix remark for s if we assume that s r r z dz then p cov gt jt e it yt z cov cov yt cov dsb e where it rt vs dss and z e it yt b t z tz e is ys ds r e is ys z dz ds simulation in this section we simulate the simple process defined by this process is a compound poisson process with arrival rate t defined by then we verify the theoretical results concerning pc structure of the increments of the sscogarch p q process gt defined by by simulation for this we simulate the state process yt defined by at jump time points and time points using its random recurrence equation then we evaluate the discretized version of the volatility process vt defined by and corresponding the p q process gt finally we verify the pc structure of the increments of the process by following the method of for the simple process defined by with the underlying poisson process n t we consider as the time of the first jump and tn n the p time intervals between the n th and nth jumps then tj n n are the arrival times and therefore for j ftsn x p tn x s p n s x n s s where t defined by the arrival times are generated by the following algorithm generate the independent and identically distributed iid sequence from uniform then by as the first arrival time has distribution x x therefore d u where u denotes a uniform so by generating ln can be considered as a generated sample for the first arrival time if n is the n th evaluated arrival time then by tn has distribution x therefore d ln u so by generating un ln un is a generated sample for the nth arravial time thus applying the iid sample we can evaluate successively the nth arrival time by the for the details see so by having the periodic intensity function u in one can evaluate by available software consider some periodic drift function ht and as the successive jump size zn generate independently and has distribution fn if corresponding arrival time belongs to dj a j now evaluate the simple process st from as st ht n t l x x znj i now we consider the following steps for the simulation of the p q process defined by consider p and q as some integer such that q p choose real parameters and and such that the eigenvalues of the matrix b defined by have strictly negative real parts and conditions and are satisfied having evaluated arrival times by the above algorithm generate the state process by the following the recurrence equation after assuming some initial value for b b e e a e zn n this recurrence equation obtained by replacing s and t in the jump size zn can be simulated by for predefined distributions as the simple process st has no jump over n n n therefore for t it follows from that dyt byt dt for t n dyt yt dt so that d yt 
from this follows that for t n z t d yu hence yt eb by and the version of the process vt is as eb and using that the process st has one jump at time over it follows from that z p z p vu dsu vu dsu z p vu dsu p zn having evaluated values of the process and generate the process by and corresponding the process by finally using the values of and provided by previous step evaluate the sampled processes vih and gih for some h by the followings i suppose that ih for i n since the simple process st has no jump over it follows from and that for ih vih eb note that if ih then it follows from step that vih eb ii using that the process st has no jump over ih it follows from that z ih gih z p vu dsu z p vu dsu ih p vu dsu hence gih test for the pc structure of the increments process to detect the pc structure of a process hurd and miamee and dudek et al showed that their proposed spectral coherence can be used to test whether a process is pc their method is based on the fact that the support of the spectral coherence of a pc process with period is contained in the subset of parallel lines for j the squared coherence statistic for the series xn is computed as follows pm m pm pm p k where n is discrete fourier transform of xk for j n xk e and this statistic satisfies m under the null hypothesis that are complex gaussian with uncorrelated real and imaginary parts for each j squared coherence statistic has probability density p m for type i error the squared coherence is determined elog m the values of statistic m are computed for all r and s that pair by plotting the values of statistic that exceed the if there are some significant values of statistic that lie along the parallel equally spaced diagonal lines then xk is pc the graph of these significant values indicates the presence of the subset of parallel lines s r for j to ensure that periodic structure of the series xk is not a consequence of a periodic mean it is recommended to remove the periodic mean from this series first example let st be a simple process by the rate function t t for t furthermore l and the length of the successive partitions of each period intervals are moreover the distribution of jumps size on these subintervals are assumed to be n n n n and n where n denotes a normal distribution with mean and variance in this example we consider process with parameters of and thus the matrix b is and conditions and are satisfy for such the process we simulate for the duration of period intervals with the parameters specified above and then using step we sample from this process in equally space partition with distance one so we get discretized samples of this period intervals then we follow to verify that the increments of the sampled process are a pc process figure top the increments of the simulated process gi i n of size bottom left the sample autocorrelation plot of gt bottom right the significant values of the sample spectral coherence with in figure graph of the increments of the sampled process of size top with the sample autocorrelation graph of this process bottom left are presented the bottom right graph shows the sample coherent statistics values for a specified collection of pairs and m that exceed the threshold corresponding to the parallel lines for the sample spectral coherence confirm the increments of the sampled process are pc also in this graph the significant is at which verifies the first peak at and shows that there is a second order periodic structure with period table some different values of m for some values r s 
with and the some different values of sample coherence statistics for the test that the increments of the sampled process have period are presented in table as the corresponding threshold shows the test is significant on the corresponding parallel lines of figure appendix proof of lemma for any t there exist j l and m such that t thus using the definition definition and the fact that st has independence increments we have pn t pn t iw dt zn iwst eiwdt e eiw zn e e e pn t pn pn t iw t zn iw t zn eiwdt e eiw zn e e iwdt l yy iw e e pn t iw t e e pn j zn zn iw e e pn zn since for r l znr are independent and have distribution fr it follows from definition and conditional expected value for k m and r l that pn t pn t r iw zr zn iw n n n e e e e x e eiw pn r zn p n n n n x n e r e eiwzn n hr in iwz t t f dz e r r n n r iwz r e fr dz therefore r e e iwst iwdt e h iwz r e iw dt h r z r pl pl i fr dz t fj dz fr dz fr dz fr dz t fj dz h iwz r e pl fr dz i fr dz t fj dz proof of corollary it is sufficient to prove that for any s t and k n d st ss for any s t there exist m and j j l such that s and t aj l thus pn t pn t pn s iw st iw dt zn zn iw dt e e e e eiw s zn pn j pn t iw t zn iw s zn iw dt e e e iw dt e pn iw n s zn l y e e iw pn zn l y y iw pn e e zn jy iw e e pn t t zn e iw pn t t j zn by similar method in the proof of lemma we have h r iwz e t s fj dz r e eiw st eiw dt e p p m t t l dz m t l t fl dz t l t fl dz t t l dz t t fj dz since s l and t aj l it follows the same method used in the computation of the characteristic function of st ss that h r iwz e l fj dz r iw iw e e e t l t fl dz t t l dz t l t fl dz t t l dz t fj dz by definition and definition we have for partition ti tj and k n tj ti tj ti and dt for t thus iw st e e e iw proof of theorem proof a let s be the simple process defined by and zi be ith jump size furthermore denote the time that the first jump occurs p and tj j be th th the time intervals between the j and j jumps and tj for n n and for n n qn i zn eb rn zn it follows from that yt satisfies in yt js t ys ks t where js t eb t qn t qn s i zn s eb s ks t eb t rn t qn t rn t qn t qn s rn s in order to prove that the sequence is independently identically distributed let s t m such that m we define s s zn s t s t s zn t s zn t t s therefore js t ks t is function of random vector n t s zn t s using the fact that increments of the poisson process n t t are independent and the density function of random vector xn can be computed as follows p x x x x x x n n n n xn f lim xn t n s n as follows we can give the conditional density of f sn n s n t s t s sn since the increment process n t n s is a poisson process with mean t s such that t s t s for all k it follows from definition and conditional density that d js t ks t the independence of the sequence is clear since and are constructed only from the segment su s u t of the process if t there are n m such that s n and t n m n m by iterating we obtain h yt j t j j js ys k t i j t k j t j j ks it follows from and that js t j t j j js ks t k t j t k j t j ks therefore for all k d js t ks t b by iterating we obtain h jt yt i kt since follows immediately d m y are independent and identically distributed it yt kt x jt note that the kt infinite series t u jt is the partial sums of the kt x jt thus using the general theory of random recurrence equations see bougerol and picard brandt and vervaat and condition we prove the almost sure absolute convergence of the series let p be such that p bp is diagonal then we have for t r p r p p p r 
using and condition show e r e log zn r log r log zn t r log r and it follows from and e r t h n i x log zn r zn log r hence the strong law of large numbers yield k x r r lim sup k lim sup r from cauchy s root criterion follows that series is almost sure absolute convergence since the state process y has cadlag paths it follows that r is almost surely finite therefore t u r m y r u t m b r p t where um it follows from that converges in distribution to u t for fixed t that u t satisfies and is the unique solution is clear by the general theory of random recurrence equations c it suffices to show that for any sn and k d ysn ysn using the recursion equation and analysis is used in a we obtain above relation we give the proof for and the general case is similar therefore and the random vector is function from and with similar argument also shows that the random vector is function from j k using d d a and assumption u it follows that proof of corollary since for all s t t p the process dss is independent of fs it follows from and corollary that z p p e gt e vs e ss t z p d p e e e t p in order to prove that the covariance function of gt is periodic it suffices to show that p p p p e gt e let denote conditional expectation with respect to the since the increments of s on the interval t h t h p are independent of and the increment p process gt is measurable we have p p p p e gt e gt z z p p vs vu dsu e dss e t since vs vu dsu is function of ju ku u u jt kt yt and this vector has the same distribution with s ju ku u u it follows that p p p p cov gt cov proof of lemma a let be state process of semi levy driven cogarch process then t t where t exp n t x log r t r exp n t log n t r it follows from that for all t t r exp n t x log r t r r exp n t log r n t now define a cadlag process xt t by xt n t x log r t then xt is a negative simple pure jump semi levy procress it follows from definition and remark that e e e exp c n t x log r z tz exp r z c dz ds r using a similar analysis is used in proof of proposition it follows from that z t h i t r r e xt du it follows from and that r for all t thus e and e imply e yt and cov yt respectively in the proof of theorem a we have seen that implies that the sequence converges in distribution to a finite random vector which of the vector is the unique solution of the random equation d d where and is independent of it follows from and that u thus e and e imply e u and cov u respectively proof of lemma using and independence and we obtain e yt e e e e e u e d where the last equality follows from that and assumption of section c of the theorem for computing cov yt it is sufficient to obtain e it will therefore be followed from recursion equations which used in the proof of theorem that yt t t j h k x j k i the relation follows from independence the sequence for any k and s and also independence from this sequence for any k proof of corollary since for fixed t almost surely vt yt we have the expected value and covariance function volatility process from proof of proposition a we imitate the proof of theorem of brockwell chadraa and lindner since s is a martingale with zero mean we have it follows from ito isometry for square integrable martingales as integrators iv that z p p vs i t s i s d s s s e gt e and hence follows b it follows from partial integration that z p gt dgs g g z x p vs dss vs t t by similar analysis is used in a the compensation formula and we have z z x p e gt e vs e vs z dz ds t t r from remark the relation follows for proof of since the increments 
of s on the interval t t p are independent of and s has expectation it follows that z p vs dss t thus it follows from the compensation formula and that x p z z e e z dz ds r therefore p p p p p p cov gt e gt e gt e p and by remark and we have to calculate cov gt partial integration to get z z p p vs d s s s cov gt vs cov t rt to calculate the first term let it vs dss we know e it for all t therefore z p cov vs dss e e jt e it yt e it e kt t from partial integration and substituting byt dt vt d s s t it follows that z t z t e it e e vs dis e i t z t z tz e ys ds e vs z dz ds t zr t p p vs vs dss e vs vs dms where ms is a locally integrable martingale with mean zero as a result of assumption that r z dz for all s thus using the fact that rt e vs vs dss e it e it yt and that is ys ys almost surely for fixed s so we have z t z tz a e it yt a b e is ys ds a e is ys z dz ds p r the equality holds for any vector a hence z t z tz e it yt b e is ys ds e is ys z dz ds r to calculate the second term of the covariance it follows from and that z z dsb e cov vs d s s s cov yt z cov cov yt cov dsb references bennett statistics of regenerative digital transmission bell system technical journal bibi and lescheb on general periodic bilinear processes economics letters bollerslev generalized autoregressive conditional heteroskedasticity journal of econometrics bollerslev patton and wang daily house price indices construction modeling and longerrun predictions journal of applied econometrics bougerol and picard stationarity of garch processes and of some nonnegative time series journal of econometrics brandt the stochastic equation an yn bn with stationary coefficients advances in applied probability brockwell continuous time arma processes handbook of financial time series brockwell chadraa and lindner garch processes ann appl brockwell and davis time series theory and methods edition springer new york cinlar introduction to stochastic processes prentice hall englewood cliffs new jersey cont and tankov financial modelling with jump processes chapman and financial mathematics series dudek hurd wojtowicz parma models with applications in r applied condition monitoring vol cyclostationarity theory and springer switzerland engle autoregressive conditional heteroscedasticity with estimates of the variance of united kingdom inflation econometrica gladyshev periodically correlated random sequences soviet math hurd and miamee periodically correlated random sequences spectral theory and practice new york wiley jeon taylor density forecasting of wave energy using models and kernel density estimation international journal of forecasting kluppelberg lindner and maller a continuous time garch process driven by a levy process stationarity and second order behaviour krithikaivasan zeng deka and medhi based traffic forecasting and dynamic bandwidth provisioning for periodically measured nonstationary traffic transactions on networking maejima and sato processes journal of theoretical probability roger and williams diffusions markov processes and martingales volume ito calculus cambridge university press cambridge sato levy processes and infinitely divisible distributions cambridge university press cambridge vervaat on a stochastic difference equation and a representation of infinitely divisible random variables advances in applied probability
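As a rough illustration of the arrival-time construction described in the simulation section above (inverting the cumulative intensity with iid uniforms, drawing jump sizes from a law that depends on which subinterval of the period the arrival falls in, and adding a periodic deterministic drift), the following Python sketch simulates a simple compound Poisson process with periodically stationary increments. The intensity, drift and jump distributions used here are placeholder choices and are not the parameters of the paper's example; the root search for the next arrival time is deliberately crude.

```python
import numpy as np

DELTA = 1.0                                    # assumed period of the intensity and drift

def intensity(t):
    """Periodic rate lambda(t) = lambda(t + DELTA); placeholder choice."""
    return 2.0 + np.sin(2.0 * np.pi * t / DELTA)

def cumulative_intensity(t, n=4000):
    """Lambda(t) = integral_0^t lambda(u) du, approximated by a left Riemann sum."""
    if t <= 0.0:
        return 0.0
    u = np.linspace(0.0, t, n, endpoint=False)
    return float(np.sum(intensity(u)) * (t / n))

def next_arrival(t_prev, rng, step=1e-3):
    """Inversion method: Lambda(T_n) - Lambda(T_{n-1}) is standard exponential (-ln U)."""
    target = cumulative_intensity(t_prev) - np.log(rng.uniform())
    t = t_prev
    while cumulative_intensity(t) < target:    # crude grid search; refine if needed
        t += step
    return t

def jump_size(t, rng):
    """Jump law depends on the subinterval of the period containing t (l = 2 here)."""
    return rng.normal(0.0, 1.0) if (t % DELTA) < 0.5 * DELTA else rng.normal(0.0, 2.0)

def drift(t):
    """Deterministic drift with the same period; placeholder choice."""
    return 0.1 * np.sin(2.0 * np.pi * t / DELTA)

def simulate_simple_process(t_max, seed=0):
    """Return arrival times, jump sizes and the path t -> S(t) = drift(t) + sum of jumps."""
    rng = np.random.default_rng(seed)
    arrivals, jumps, t = [], [], 0.0
    while True:
        t = next_arrival(t, rng)
        if t > t_max:
            break
        arrivals.append(t)
        jumps.append(jump_size(t, rng))
    def S(s):
        return drift(s) + sum(z for a, z in zip(arrivals, jumps) if a <= s)
    return np.array(arrivals), np.array(jumps), S
```

The period is split here into only two subintervals with different jump laws, which merely mimics the partition used in the paper's example; the sampled path S can then be fed to a discretized volatility recursion of the kind described in the simulation steps above.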
10
adadnns adaptive ensemble of deep neural networks for scene text recognition chun zejun jianwei oct chunchao hongfa and lei department of computer science and technology university of science and technology beijing beijing china teg tencent ltd shenzhen china corresponding author xuchengyin abstract recognizing text in the wild is a really challenging task because of complex backgrounds various illuminations and diverse distortions even with deep neural networks convolutional neural networks and recurrent neural networks in the training procedure for scene text recognition the outputs of deep neural networks at different iterations are always demonstrated with diversity and complementarity for the target object text here a simple but effective deep learning method an adaptive ensemble of deep neural networks adadnns is proposed to simply select and adaptively combine classifier components at different iterations from the whole learning system furthermore the ensemble is formulated as a bayesian framework for classifier weighting and combination a variety of experiments on several typical acknowledged benchmarks icdar robust reading competition challenge and datasets verify the surprised improvement from the baseline dnns and the effectiveness of adadnns compared with the recent methods scene text is widely used as visual indicators for navigation and notification and text recognition from scene images and videos is one key factor for a variety of practical applications with reading in the wild ye and doermann yin et al tian et al such as assisting for visually impaired people goto and tanaka sanketi shen and coughlan translation shi and xu fragoso et al user navigation minetto et al driving assistance systems wu chen and yang and autonomous mobile robots et al scene text cropped word recognition methods can be generally grouped into word recognition and holistic word recognition typical segmentationbased approaches the word image into small segments combine adjacent segments into candidate characters classify them using convolutional neural networks cnns or gradient classifiers and find an approximately optimal word recognition result bissacco et al jaderberg vedaldi and zisserman because of complex backgrounds and diverse distortions character segmentation is another more challenging task thereby holistic word recognition approaches with deep neural networks are more impressive for text reading in the wild copyright c all rights reserved word spotting the direct holistic approach usually calculates a similarity measure between the candidate word image and a query word jaderberg et al gordo sequence matching the indirect holistic approach recognizes the whole word image by embedding hidden segmentation strategies for example shi et al constructed an training deep neural network for sequence recognition scene text recognition shi bai and yao however there are a variety of grand challenges for scene text recognition see samples in fig even with recent deep neural networks dnns where additional characters will be probably identified for text distortions and complex backgrounds some characters are wrongly recognized for changing illuminations and complex noises and characters are sometimes missed for low resolutions and diverse distortions figure some challenging examples from icdar robust reading competition challenge dataset of scene text images which are incorrectly recognized by the baseline dnns see related descriptions in experiments the captions show the recognized text left versus the 
ground truth right additional characters wrong characters and missing characters in target words stochastic gradient descent sgd bottou and its variants have become the defacto techniques for optimizing dnns where sgd always leads to local minima even though the popularity of sgd can be attributed to its ability to avoid spurious and local minima dauphin et al there are a plenty number of more than million possible local minima in dnns kawaguchi and local minima with flat basins are supposed to generalize better in the learning system keskar et al as a result although different local minima often have similar error rates the corresponding neural networks in dnns tend to make different mistakes this diversity and complementarity can be exploited via classifier ensemble huang et al there are two major ways for ensemble of deep neural networks on the one hand different learning systems with dnns are first trained independently and then the final system is a trivial ensemble of these different deep learning architectures via majority voting or averaging for example most high profile competitions in imagenet and kaggle are won by such ensemble techniques because of the huge computation complexity this ensemble becomes uneconomical and impossible for most researchers in the universities and even in the small companies on the other hand one learning system with dnns is first trained and then the final ensemble selects and combines neural network components in this only one system without incurring any additional training cost huang et al proposed such an ensemble technique called as snapshot ensembling where a specific optimization strategy is designed to train dnns and model snapshots neural network components in all cycles are combined for the final ensemble in the learning procedure huang et al however how to design the specific and effective optimization algorithms for dnns is also a challenge in this paper we propose a new and adaptive ensemble of deep neural networks adadnns in the most simplest way given trained neural networks of all iterations from a learned dnns system a subset of neural network components are simply selected and adaptively combined to perform the final predictions and the ensemble is formally formulated as a bayesian framework for classifier weighting and combination we argue that because of the diversity and complementarity in dnns with sgd adadnns via ensembling with diversity can improve robust performance of the final learning system on the same time because of the high accuracy of components in dnns adadnns via combination with accurate neural network components can improve precision performance of the final classification system a variety of experiments on several acknowledged benchmarks icdar robust reading competition challenge and datasets have shown that the simple but effective adadnns improves largely from the baseline dnns moreover our proposed approach has the the neural network component means the resulting dnn of each iteration in the whole training procedure here the dnns system can be trained with conventional optimization algorithms bottou curtis and nocedal or even with the specific algorithms snapshot ensembling huang et al top performance compared with the latest methods related work recognizing text in scene videos attracts more and more interests in the fields of document analysis and recognition computer vision and machine learning the existing methods for scene text cropped word recognition can be grouped into word recognition and holistic 
word recognition in general word recognition methods integrate character segmentation and character recognition with language priors using optimization techniques such as markov models weinman et al and crfs mishra alahari and jawahar shi et al in recent years the mainstream segmentationbased word recognition techniques usually the word image into small segments combine adjacent segments into candidate characters and classify them using cnns or gradient classifiers and find an approximately optimal word recognition result using beam search bissacco et al hidden markov models alsharif and pineau or dynamic programming jaderberg vedaldi and zisserman word spotting manmatha han and riseman a direct holistic word recognition approach is to identify specific words in scene images without character segmentation given a lexicon of words wang and belongie word spotting methods usually calculate a similarity measure between the candidate word image and a query word impressively some recent methods design a proper cnn architecture and train cnns directly on the holistic word images jaderberg et al jaderberg et al or use label embedding techniques to enrich relations between word images and text strings almazan et al gordo sequence matching an indirect holistic word recognition approach recognizes the whole word image by embedding hidden segmentation strategies shi et al constructed an train deep neural network for sequence recognition scene text recognition where a convolutional recurrent neural networks framework crnn is designed and utilized shi bai and yao in this paper a similar crnn architecture is used in adadnns for recognizing scene text sequently and holistically classifier ensemble can be mainly divided into two categories the first one aims at learning multiple classifiers at the feature level where multiple classifiers are trained and combined in the learning process boosting freund and schapire bagging breiman and rotation forest rodriguez kuncheva and alonso the second tries to combine classifiers at the output level where the results of multiple available classifiers are combined to solve the targeted problem multiple classifier systems classifier combination zhou yin et al adadnns in this paper follows the second one namely given multiple classifiers neural network components sequently learned in dnns adadnns is constructed by combining intelligently these component classifiers within a formulation framework adaptive ensemble of deep neural networks as we have known both sgd and batch optimization can lead to different local minima in dnns and neural network components are always with diversity and complementarity conventionally there are tens of thousands of iterations and also neural network components in the learning system of dnns considering the acceptable computation complexity in the testing procedure one thing is to quickly select a small subset of neural network components in different training iterations at the same time considering the high accuracy requirement another thing is to adaptively combine this subset of neural network components and construct a final classification system in the following the unified framework of adadnns is first formulated next the detail procedure of adadnns is then described there are two key issues for optimizing eq the first one is the calculation of w y hi x as mentioned above p x is the distribution of describing the correlation between decision y and hi x thus w y hi x can be derived from y hi x and the distance between y and hi x here w 
y hi x is assumed to be computed as w y hi x i y hi x u y v y hi x where both u and v are functions i y hi x returns when y hi x otherwise i y hi x for the scene text recognition task on the one hand with a given dictionary u y can be calculated as u y y dict dict unified framework to formulate the ensemble decision the individual classifier decisions can be combine by majority voting which sums the votes for each class and selects the class that receives most of the votes while the majority voting is the most popular combination rule a major limitation of majority voting is that only the decision of each model is taken into account without considering the distribution of decisions in particular all the possible models in the hypothesis space could be exploited by considering their individual decisions and the correlations with other hypotheses here we use a framework to combine classifiers given a sample x and a set h of independent classifiers the probability of label y can be estimated by a bayesian model bm as x p x p x p hi on the other hand the correlation between y and hi x can be assumed by the function v of cost levenshtein distance cld in the traditional levenshtein distance the cost of any two different characters is always however in spelling correction the cost of two characters with similar shape tends to have a smaller distance in this paper we statistics the frequencies of different character pairs at the same location from the label and the hypothesis on the validation set bootstrapped from the training set in experiments and calculate the cost of two different characters a and b as cost a b p note that if both y and hi x are from the given dictionary then they will have a competitive relationship with each other thus v y hi x can be calculated with hi where p x is the distribution of describing the correlation between decision y and hi x and p hi denotes the posterior probability of model hi the posterior p hi can be computed as p hi p p p hi hi p p hi where p hi is the prior probability of classifier hi and p isp the model likelihood on the training set here p hi and hi p p hi are assumed to be a constant in eq therefore bm assigns the optimal label y to y according to the following decision rule p y argmaxy pp x argmaxy phi p x p hi argmaxy p hi p x p argmaxy hi p x p p d argmaxy phi x p argmaxy hi w y hi x p where w y hi x is a function of y and hi x by multiplying the scaling factor w y hi x can have a different range in hi x dict hi x dict where f is a function of the cld between y and hi x by a heuristic approach the values of f can be empirically assigned at the multiple integral points and the values at other points can be calculated by the piecewise linear interpolation an example of f is shown in fig in general f has a small range in fig so the obtained weights from eq are convenient for linear combination of classifiers the second issue is about generating voting candidates more probable labels of the hypotheses obviously the ground truth doesn t always appear in the decisions made by it is necessary to find an effective way to generate good candidates from all the decisions to find a more probable label yi x from the existed initial label x of hypothesis hi generally speaking a good candidate means it has a small edit distance with most of the hypotheses following this idea we propose an algorithm to semantically generate voting candidates see algorithm v y hi x f y hi x f cld y hi x in our experiments a word dictionary from jaderberg et al is used as the 
given dictionary batch orderings will converge to different solutions those snapshots often have the similar error rates but make different mistakes this diversity can be exploited by ensembling in which multiple snapnots are average sampling and then combined with majority voting focusing on scene text recognition the crnn model shi bai and yao is used to generate base classifiers neural network components as our text recognizer crnn uses ctc graves et al as its output layer which estimates the sequence probability conditioned on the input image p where x is the input image and h represents a character sequence figure an example of f of describing the relationship between v in and the cost levenshtein distance in algorithm generating voting candidates input h hl the base classifier set the initial decisions made by ed the measurement function of the pairwise distance the upper bound of the distance between the candidate and the hypothesis output y the voting candidate set parameter h a subset of h i h j h ed h i h j procedure y for each h h for each y if maxh i ed y h x y y y end end in algorithm the searching process of h is an implicit computational way for p in our experiments a special simple case of algorithm is used where during the voting candidates generation process is initialized only by h the upper bound is set from to inf and p is assumed to be a constant adadnns algorithm within the above framework the procedure of adadnns for scene text recognition includes three major steps base classifiers generation classifier combination and ensemble pruning base classifiers generation ensembles work best if the base models have high accuracy and do not overlap in the set of examples they misclassify deep neural networks dnns are naturally used as a base classifier generator for ensembles on the one hand dnns have dramatically improved the in many domains such as speech recognition visual object recognition and object detection by being composed of multiple processing layers to learn representations of data with multiple levels of abstraction on the other hand during the training phase of one individual deep neural network two snapshots with different classifier combination for adadnns the core of adadnns is to calculate y by eq the calculation of f which is a function of distance between y and hi x here f is represented by the set of values at the multiple integral points these values are assigned with the highest recognition rate on the validation set the detail procedure of the adadnns ensemble is shown in algorithm algorithm adadnns classifier combination input h hl the base classifier set dict the given dictionary f a function of distance between y and hi x parameter y the voting candidates set generated by algorithm output y the label of prediction procedure initialize y by h and dict for y y calculate p x through eq end calculate y through eq adadnns pruning in classifier ensemble pruning can generally improve the ensemble performance here we use genetic algorithm ga to pruning the ensemble ga is a meta heuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms gas are commonly used to generate solutions for optimization and search problems by relying on bioinspired operators such as mutation crossover and selection in adadnns pruning firstly a population of binary weight vectors is randomly generated where means the classifier is remained secondly the population is iteratively evolve where the fitness of a vector w is 
measured on the v validation set v f w rw r stands for the recognition rate finally the ensemble is correspondingly pruned by the evolved best weight vector experiments to evaluate the effectiveness of the proposed adadnns method a variety of experiments for text cropped word recognition are conducted on acknowledged benchmark datasets we first focused on the most challenging task incidental scene text recognition icdar robust reading competition challenge trained our adadnns learning system on both the synthetic dataset from jaderberg et al and the training set of challenge and performed comparative experiments then we also conducted experiments of this learned adadnns on other text recognition tasks focused scene text recognition and borndigital text recognition icdar robust reading competition challenge and and checked the generalization of adadnns here the baseline dnns model crnn is same to the one in shi bai and yao the official metrics in icdar robust reading competition shahab shafait and dengel karatzas et al karatzas et al are used figure challenging samples of scene text from which are correctly recognized with upper by adadnns gems mgennisgal rgao railroad united kappa xmas zoom youtube york walk wprd when year and wisconsin experiments with incidental scene text recognition the icdar robust reading competition challenge database karatzas et al is a widely used and highly competitive benchmark database for scene text recognition within complex situations in the recent years the public dataset includes a training set of images and a test set of with more than annotated text regions cropped words because of complex backgrounds various illuminations and diverse distortions this incidental scene text recognition topic is a very challenging task in our experiments a variety of methods are conducted and compared the baseline dnns adadnns adadnns pruning the winning participation method in the official competition marked as bold words and the latest top submissions of the robust reading competition rrc website in marked as italic words table comparative results on icdar challenge dataset incidental scene text recognition where the comparative results are from the rrc website date method baidu idl hik ocr maps baseline dnns adadnns adadnns pruning upper upper as can be seen from table our proposed adadnns is much better than the baseline dnns for example for the measure of upper adadnns has a surprised improvement from to that is to say the adaptive ensemble of dnns in a simple but effective strategy can largely improved the performance from the original baseline dnns moreover compared with the latest top submissions baidu idl and hik ocr our method adadnns pruning has the best performance with upper we also perform experiments on the dataset veit et al a similar challenging but largescale incidental scene text dataset images in this dataset are from the ms coco dataset that contain text images with text regions robust reading challenge on is holding and will be released http in icdar so the comparative results of adadnns adadnns pruning and the baseline dnns are only on the validation set they are and respectively some scene text recognition samples for cocotext are shown in fig experiments with focused scene text recognition and text recognition in order to investigate the generalization of adadnns we directly use the trained adadnns system above for icdar challenge and perform experiments on icdar challenge cropped word recognition dataset the challenge dataset contains ground 
truths cropped word images in our experiments a variety of methods are conducted and compared the baseline dnns adadnns adadnns pruning the winning participation method in the official competition marked as bold words the top three results in published papers and the latest top submissions of the rrc website in marked as italic words table comparative results on icdar challenge dataset focused scene text recognition where the comparative results without publications are from the rrc website date method tencentailab tencent youtu hik ocr cnn jaderberg et al rare shi et al crnn shi bai and yao photoocr baseline dnns adadnns adadnns pruning upper upper similarly adadnns is much better than the baseline dnns the measure of upper increases from to surprisedly only trained for another task challenge the adadnns adadnns pruning has a competitive performance on a new dataset challenge dataset compared with the recent published methods crnn shi bai and yao and even with the latest submission results apart from the above experiments on text recognition from scene images icdar robust reading competition challenge and we also directly perform the learned adadnns on the images track challenge though images are not scene images they have similar challenging issues for text recognition complex backgrounds low resolution and various colors we also compare adadnns adadnns pruning with the baseline dnns the winning participation method in the official competition marked as bold words and the latest top submissions of the rrc website marked as italic words the similar conclusions are drawn firstly adadnns improves largely compared with the baseline dnns from to for upper secondly adadnns has a comparative performance with the latest submission results dahua ocr with in table comparative results on icdar challenge dataset text recognition where the comparative results are from the rrc website date method tecent youtu tecentailab dahua ocr photoocr baseline dnns adadnns adadnns pruning upper upper we fully believe that if adadnns adadnns pruning performs on icdar challenge and challenge datasets the performance will correspondingly be improved and obtain a more impressive results compared with the latest submission systems this is also a near issue for our future work conclusion and discussion a variety of dnns based methods have been proposed and are still being investigated in the literature for scene text recognition because of the grand challenges complex backgrounds various illuminations and diverse distortions in order to fully take advantage of the complementary diversity and the high accuracy of neural network components in dnns an adaptive ensemble of deep neural networks adadnns is proposed to simply select and adaptively combine neural networks in the whole training procedure comparative experiments of scene text cropped word recognition showed that adadnns achieves a remarkable increase in the final performance more than compared with the baseline dnns note that the dnns methods have dramatically improved the in object detection object recognition speech recognition and many other domains consequently a near future issue is to evaluate the efficacy of adadnns with dnns on object recognition and speech recognition for example experiments for object detection and recognition of adadnns with snapshot ensembling huang et al resnet he et al and densenet huang liu and weinberger can be performed and compared in the next step references almazan et al almazan gordo fornes and valveny word spotting and 
recognition with embedded attributes ieee trans pattern analysis and machine intelligence alsharif and pineau alsharif and pineau j text recognition with hybrid hmm maxout models in proceedings of international conference on learning representations iclr bissacco et al bissacco cummins netzer and neven photoocr reading text in uncontrolled conditions in proceedings of international conference on computer vision iccv bottou curtis and nocedal bottou curtis and nocedal j optimization methods for machine learning corr bottou bottou machine learning with stochastic gradient descent in proceedings of the international conference on computational statistics compstat breiman breiman bagging predictors machine learning dauphin et al dauphin pascanu cho ganguli and bengio y identifying and attacking the saddle point problem in optimization in advances in neural information processing systems annual conference on neural information processing systems nips fragoso et al fragoso gauglitz zamora kleban and turk translatar a mobile augmented reality translator in proceedings of ieee workshop on applications of computer vision wacv freund and schapire freund and schapire a generalization of learning and an application to boosting journal of computer and system sciences gordo gordo a supervised features for word image representation in proceedings of ieee international conference on computer vision and pattern recognition cvpr goto and tanaka goto and tanaka wearable camera system for the blind in proceedings of international conference on document analysis and recognition icdar graves et al graves gomez and schmidhuber j connectionist temporal classification labelling unsegmented sequence data with recurrent neural networks in machine learning proceedings of the international conference icml pittsburgh pennsylvania usa june he et al he zhang ren and sun j deep residual learning for image recognition in proceedings of ieee conference on computer vision and pattern recognition cvpr huang et al huang li pleiss liu hopcroft and weinberger q snapshot ensembles train get m for free in proceedings of international conference on learning representations iclr huang liu and weinberger huang liu and weinberger q densely connected convolutional networks in proceedings of ieee conference on computer vision and pattern recognition cvpr jaderberg et al jaderberg simonyan vedaldi and zisserman a synthetic data and artificial neural networks for natural scene text recognition corr jaderberg et al jaderberg simonyan vedaldi and zisserman a reading text in the wild with convolutional neural networks international journal of computer vision jaderberg vedaldi and zisserman jaderberg vedaldi and zisserman a deep features for text spotting in proceedings of the european conference on computer vision eccv karatzas et al karatzas shafait uchida iwamura i bigorda mestre mas mota and de las heras icdar robust reading competition in proceedings of international conference on document analysis and recognition icdar karatzas et al karatzas nicolaou ghosh bagdanov iwamura matas neumann chandrasekhar lu shafait uchida and valveny icdar competition on robust reading in proceedings of international conference on document analysis and recognition icdar kawaguchi kawaguchi deep learning without poor local minima in advances in neural information processing systems annual conference on neural information processing systems nips keskar et al keskar mudigere nocedal smelyanskiy and tang on largebatch training for deep learning generalization 
gap and sharp minima in proceedings of international conference on learning representations iclr et al michaud valin and proulx textual message read by a mobile robot in proceedings of the international conference on intelligent robots and systems iros volume manmatha han and riseman manmatha han and riseman word spotting a new approach to indexing handwriting in proceedings of conference on computer vision and pattern recognition cvpr minetto et al minetto thome cord leite and stolfi j snoopertrack text detection and tracking for outdoor videos in proceedings of the ieee international conference on image processing icip mishra alahari and jawahar mishra alahari and jawahar and cues for scene text recognition in proceedings of ieee conference on computer vision and pattern recognition cvpr rodriguez kuncheva and alonso rodriguez kuncheva and alonso j rotation forest a new classifier ensemble method ieee trans pattern analysis machine intelligence sanketi shen and coughlan sanketi shen and coughlan localizing blurry and text in natural images in proceedings of ieee workshop on applications of computer vision wacv shahab shafait and dengel shahab shafait and dengel a icdar robust reading competition challenge reading text in scene images in proceedings of international conference on document analysis and recognition icdar shi and xu shi and xu y a wearable translation robot in proceedings of the ieee international conference on robotics and automation icra shi bai and yao shi bai and yao an trainable neural network for sequence recognition and its application to scene text recognition ieee trans pattern analysis and machine intelligence published online shi et al shi wang xiao zhang gao and zhang z scene text recognition using partbased character detection in proceedings of ieee conference on computer vision and pattern recognition cvpr shi et al shi wang lyu yao and bai x robust scene text recognition with automatic rectification in proceedings of ieee conference on computer vision and pattern recognition cvpr tian et al tian yin su and hao a unified framework for tracking based text detection and recognition from web videos ieee trans pattern analysis and machine intelligence published online veit et al veit matera neumann matas and belongie j dataset and benchmark for text detection and recognition in natural images corr wang and belongie wang and belongie word spotting in the wild in proceedings of european conference on computer vision eccv weinman et al weinman butler knoll and feild j toward integrated scene text reading ieee trans pattern analysis and machine intelligence wu chen and yang wu chen and yang j detection of text on road signs from video ieee trans intelligent transportation systems ye and doermann ye and doermann text detection and recognition in imagery a survey ieee trans pattern analysis and machine intelligence yin et al yin huang yang and hao convex ensemble learning with sparsity and diversity information fusion yin et al yin zuo tian and liu text detection tracking and recognition in video a comprehensive survey ieee trans image processing zhou zhou ensemble methods foundations and algorithms boca raton fl chamman
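The classifier-combination step of AdaDNNs outlined above, where each voting candidate is scored against every snapshot hypothesis using a dictionary term and a decreasing function of an edit-distance-based cost and the highest-scoring candidate is kept, can be sketched roughly as follows. The unit-cost Levenshtein distance, the linear weighting f, the dictionary bonus and the way the indicator and dictionary terms are folded into one score are all illustrative stand-ins for the learned character-pair costs, the fitted piecewise-linear f and the exact weight formula of the paper, and the candidate set is seeded from the hypotheses themselves, mirroring the simple special case of the candidate-generation algorithm.

```python
def levenshtein(a: str, b: str) -> int:
    """Standard edit distance with unit costs; the paper uses character-pair costs."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def f_weight(dist: float, max_dist: float = 5.0) -> float:
    """Decreasing piecewise-linear weight of the distance (illustrative values)."""
    return max(0.0, 1.0 - dist / max_dist)

def u_dict(y: str, dictionary: set) -> float:
    """Small bonus for candidates that are dictionary words (illustrative value)."""
    return 1.2 if y in dictionary else 1.0

def combine(hypotheses, dictionary, max_dist=5.0):
    """Score every candidate against all snapshot hypotheses and keep the best one."""
    candidates = set(hypotheses)
    def score(y):
        return sum(u_dict(y, dictionary) * f_weight(levenshtein(y, h), max_dist)
                   for h in hypotheses)
    return max(candidates, key=score)

# toy usage: three snapshot outputs for one cropped word image
print(combine(["raiiroad", "railroad", "railroad"], {"railroad", "rail"}))
```

In this toy call the exact matches and the dictionary bonus outweigh the single-substitution hypothesis, so "railroad" is returned; ensemble pruning would then correspond to restricting the hypothesis list passed to combine to the subset selected by the genetic algorithm.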
1
yang et al bmc systems biology suppl http research open access microbial community pattern detection in human body habitats via ensemble clustering framework peng xiaoquan le kang from asia pacific bioinformatics network apbionet thirteenth international conference on bioinformatics sydney australia july august abstract background the human habitat is a host where microbial species evolve function and continue to evolve elucidating how microbial communities respond to human habitats is a fundamental and critical task as establishing baselines of human microbiome is essential in understanding its role in human disease and health recent studies on healthy human microbiome focus on particular body habitats assuming that microbiome develop similar structural patterns to perform similar ecosystem function under same environmental conditions however current studies usually overlook a complex and interconnected landscape of human microbiome and limit the ability in particular body habitats with learning models of specific criterion therefore these methods could not capture the underlying microbial patterns effectively results to obtain a comprehensive view we propose a novel ensemble clustering framework to mine the structure of microbial community pattern on metagenomic data particularly we first build a microbial similarity network via integrating metagenomic samples from three body habitats of healthy adults then a novel symmetric nonnegative matrix factorization nmf based ensemble model is proposed and applied onto the network to detect clustering pattern extensive experiments are conducted to evaluate the effectiveness of our model on deriving microbial community with respect to body habitat and host gender from clustering results we observed that body habitat exhibits a strong bound but microbial structural pattern meanwhile human microbiome reveals different degree of structural variations over body habitat and host gender conclusions in summary our ensemble clustering framework could efficiently explore integrated clustering results to accurately identify microbial communities and provide a comprehensive view for a set of microbial communities the clustering results indicate that structure of human microbiome is varied systematically across body habitats and host genders such trends depict an integrated biography of microbial communities which offer a new insight towards uncovering pathogenic model of human microbiome background metagenomic background the human body is a content that complex microbial communities are living inside and on this microbiome occupies body habitats and endows us with ecosystem functions such as nutrition pathogen resistance and correspondence ningkang computational biology group of single cell center shandong key laboratory of energy genetics and cas key laboratory of biofuels qingdao institute of bioenergy and bioprocess technology chinese academy of science qingdao china full list of author information is available at the end of the article immune system development to help maintain our health hence systematically defining the normal states of human microbiome is an important step towards understanding role of microbiota in pathogenesis however the majority of microbiomes have been poorly investigated to understand the principle of human microbiome prior research concentrated on particular body habitats for example turnbaugh et al investigated the gut microbiome in obese and lean twins to address how host environmental condition and diet influence the yang et 
al licensee biomed central ltd this is an open access article distributed under the terms of the creative commons attribution license http which permits unrestricted use distribution and reproduction in any medium provided the original work is properly cited the creative commons public domain dedication waiver http applies to the data made available in this article unless otherwise stated yang et al bmc systems biology suppl http microbial components grice et al targeted human skin microbiome to characterize its topological and personal variations within multiple sites bik et al s research indicated the distinctness of microbial structure on oral cavity and tongue however human microbial habitats are not isolated with one another instead they reveal community structure correlation across body habitats in this case ensemble of different habitat samples could bring global and insights into microbiome recent studies had aggregated microbial samples from different body habitats to perform a comprehensive study costello et al surveyed the microbiomes that were gathered from body habitats of nine adults mitreva carried out the extensive sampling on body habitats from individuals in order to establish a global insight of human microbiome they built a microbial similarity network where the nodes were consisted of metagenomic samples from multiple human body sites and the edges as phylogenetic similarity of samples were measured in terms of their shared evolutionary history clustering approaches had been applied on this similarity network to group samples that shared more similar phylogenetic structures with each other within the clusters than other ones from these clusters researchers could infer how microbial patterns were affected by body habitat host gender and environmental condition with time costello et al proposed a hierarchical clustering algorithm on a microbial community network and found out personal microbiota relatively stable within habitats over time turnbaugh et al identified two distinct functional modules on gut microbiome via principal components analysis pca and hierarchical clustering algorithm and experimental results disclosed that microbiome within same clusters carried out similar functions mitreva adopted a clustering algorithm and discovered the covariation and of microbiome between different habitats current limitations clustering approach aims to group metagenomic samples with similar phylogenetic patterns it can be achieved by various algorithms that differ significantly in terms of computational principles and measures by which each generated clustering results can be viewed as taking a different look through data as shown in table however most of prior studies employ one particular clustering approach by which the clustering outputs tend to be specific towards the criterion of the proposed approach for example clustering algorithm groups samples that are densely connected in similarity network however true microbial page of communities are not limited to densely connected structures samples with sparsely microbial structure widely exist in the lake graph clustering such as mcl and clustering explores the best partition of a network but these algorithms do not allow the overlaps between clusters therefore they are unable to discover shared microbe between two communities such as some species that could adapt in conditions like microbial mats and biofilms hierarchical clustering algorithm learns the hierarchical structure of a network which has been used in but 
hierarchical structure is determined by local optimization criterion as such there is no global objective function which might lead to small clusters with only part of similar samples clustering approach like em identifies the clusters that follow statistical condorcet criteria but statistical model for microbial community remains rarely known and therefore it is difficult to evaluate reliability of the results advantage of proposed ensemble clustering framework ideally a clustering algorithm should be able to exploit clustering patterns as comprehensive as possible however as we have mentioned above few algorithms are capable of taking into consideration all factors different clustering algorithms may produce different partitions of the network given multiple clustering results we need to explore their information and output more robust results that can exploit the complementary nature of these patterns ensemble clustering was proposed recently which has been successfully used to solve many community detection problems thus we use ensemble clustering framework to integrate the various kinds of clusters here we call them base clustering results and output more comprehensive results in this study we first construct a consensus matrix which measures similarity of samples based on of samples in base clustering results next we apply symmetric nonnegative matrix factorization nmf on the consensus matrix to derive clusters symmetric nmf provides a lower rank approximation of a nonnegative matrix which could be easily related to the clustering of the nonnegative data as mentioned in the factorization of the consensus matrix will generate a clustering assignment matrix that could capture the cluster structure inherent in the network unlike prior researches that applied single cluster algorithm on particular habitat microbiome our framework assembled clustering algorithms of different human microbiome in different body habitats we carried out our experiments to demonstrate its capability in capturing the microbial community experimental yang et al bmc systems biology suppl http page of table summary of four particular clustering approaches clustering approaches clustering characteristics limitations on microbial pattern clusters are defined as connected dense regions in the network true microbial community are not limited to densely connected structures sparsely microbial structure still exists graph clusters are generated via graph partitioning techniques based clustering partition based algorithms do not allow the overlaps between clusters therefore they are unable to discover shared microbe among clusters such as some species that could adapt in conditions like microbial mats and biofilms hierarchical clustering clusters are built based on an agglomerative clustering model that shows relations between the members and groups hierarchical structure is determined by local optimization criterion as such there is no global objective function which might lead to small clusters with only part of similar samples distributionbased clustering clusters are modelled using statistical distributions statistical models of microbial communities are still unknown and need to be further explored results showed that predicted clusters were capable of revealing the spatial and gender roles of human microbiota and eventually elaborated human microbiome biogeography which provided new insights about disease pathogenesis of human microbiome material and methods in this section we first briefly introduced the 
experimental data the similarity measurements of metagenomic samples and gpu based fast similarity matrix computing then we described the schema of ensemble clustering framework and its phases to structure microbial community experimental data in this work we used metagenomic samples from the project moving pictures of human microbiome to build the microbial matrix and similarity network refer to section similarity measurements of metagenomic samples for details a sample of metagenomic matrix and network were illustrated in figure and the similarity matrices of all datasets were shown in additional file table were performed to measure structural similarity of metagenomic samples efficiency of is shown in additional file figure metagenomic samples were annotated by two habitat gut skin oral cavity defined human body habitat the samples live in while gender male female defined the gender of host the samples inhabit combining the two each sample was partition into one of six they were male gut male skin male oral cavity female gut female skin female oral cavity table summarized the distribution of metagenomic samples on three body habitats and two host genders similarity measurements of metagenomic samples the scoring function of compared two microbial samples structure by calculating the maximum common component of their common phylogenetic tree figure an example of a similarity matrix and b its similarity network in the matrix each tile indicates a similarity value between samples by colour gradient from red high to green low in the network each node represents a sample and edges represent similarity values in the matrix yang et al bmc systems biology suppl http page of table microbial samples on six human body habitats gut skin oral male female total total considering the phylogenetic distance and abundance of each species formula the scoring function first evaluated the common abundance of each species on the leaf node which was considered as the smaller abundance value in two samples these abundance values were propagated to their ancestors iteratively and the accumulative common abundance values at the root node reflected the overall similarity between the two metagenomic samples which could be computed using similarity root defined in formula common abundance x similarity x if x is a leaf node common abundance x if x is an internal node then we constructed the similarity matrix based on the similarity among all sample pair figure a exploiting the architecture of the gpu formula could be invoked in parallel using a large number of threads to compute similarity between different pairs of metagenomic samples to compute the similarity matrix for n samples we spawned n n threads in the gpu such that each similarity value in the matrix was processed by an independent thread figure overview of the gpu based similarity matrix computing figure illustrated the gpu computing workflow to build the common phylogenetic tree we first loaded and initialize abundant specie data from the file system to main memory this data was then reloaded to the gpu for computing when all threads of the gpu kernel had been completed figure step the key step these values were returned back to ram to populate the similarity matrix which was then stored in the file system ensemble clustering framework in this subsection we proposed a novel ensemble clustering framework namely to perform microbial community pattern detection the framework consisted of two stages a generation phase where a consensus matrix was constructed 
based on base clustering results and an identification phase in which a symmetric clustering was used to detect reliable clusters from the consensus matrix the schema of our algorithm was presented in figure terminology after computing the similarity matrix of the metagenomic samples we used it to construct the microbial similarity network that was reformatted as a simple undirected graph g v e where v defined a vertex set which containeed n vertices and e an edge set a vertex v v represented a metagenomic sample and a weighted edge e e represented the polygenetic structure similarity of two samples figure b a cluster ci vci eic was a subnetwork of g such that vci v and eic was the set of edges induced by vci from a microbial community of g was a set of predicted microbial clusters defined as cm generation phase when the similarity network was ready a set of base clustering results were calculated by yang et al bmc systems biology suppl http page of figure the schema of algorithm applying four clustering algorithms base clustering algorithms on the similarity network with different initializations as shown in figure a the base clustering algorithms included em algorithm clustering hierarchical clustering and clustering as present in table and additional file section a consensus matrix w was introduced to measure the of samples in clusters of the base clustering results each wij indicated the number of base clustering results in which sample i and sample j were assigned to the same cluster divided by the total number of base clustering results therefore matrix w took into consideration all generated clusters and reflected the similarity between each pair of samples based on different clustering criterions the higher the value of wij the more likely sample i and sample j belonged to the same cluster identification phase when the consensus matrix was constructed we applied a symmetric clustering algorithm on this matrix to derive the clusters the flowchart of this algorithm was shown in figure b the main idea of this algorithm was outlined as follows the symmetric nmf defined in equation was suitable for network clustering based on similarity matrix w d here d t was a predefined cost function k and k was the predefined number of clusters h was a cluster indicator matrix in which each entry hi k denoted the membership of sample i belonging to cluster so we could easily infer the clustering assignment of sample i from the row of in this study we used kl divergence as the cost function which could be represented as d w dkl w n i wij wij log hht ij wij hht ij we chose as the cost function since it was free of noise parameter and had been widely used in nmf a sample may belong to more than one cluster but it seldom belonged to all clusters thus the cluster indicator matrix h should be sparse to achieve sparsity of yang et al bmc systems biology suppl http page of the solution of h a regularization for h was integrated neglecting constants and adding the regularization for h the modified formulation was as follows min n n wi j log hht i j hht i j n k z where the b controlled the sparsity of h and h was the cluster indicator matrix solution to ensemble clustering minimization of the cost function in equation with constraints formed a constrained nonlinear optimization problem similar to we adopted the multiplicative update rule to estimate h which was widely accepted as a useful algorithm in solving nonnegative matrix factorization problem by the multiplicative update rule we obtained the following 
update rules for hi z hi z wi j hj z hi l hj l hi z hi z hj z we iteratively updated h according to the updating rule until they satisfied a stopping criterion let hl be the cluster indicator matrix at iteration time l l the algorithm was stopped whenever r where r was a predefined tolerance parameter here we set r as the default value of tolerance parameter in addition the maximum of iteration time was limited to iterations if the stopping criteria r was unsatisfied in order to avoid local minimum for random initialization we repeated the algorithm times with random initial conditions and chose the results with lowest value of the cost function from cluster indicator matrix to microbial clusters similar to we obtained microbial clusters from cluster indicator matrix h by taking the threshold to assign a sample to a cluster when its weight for the cluster exceeded in this way we samplecluster membership matrix z where z if hi z and z if hi z here z mean sample i was assigned to detected cluster z and h mean the final output of after completing these steps we obtained the refined clusters ek that satisfied the following conditions ek ck vi cz if z where i n and z we summarized the whole algorithm in figure results in this section we focused on evaluating the effectiveness of algorithm before presenting the experimental results we first introduced our experiment design evaluation metrics and experimental settings in figure the algorithm of for microbial community pattern detection our study then we conducted experimental comparison between and base clustering approaches and comparison between constructed consensus network and original metagenomic similarity network finally from clustering results we investigated how human microbial community was influenced by body habitat and host gender evaluation metrics in this work we evaluated the effectiveness of clustering algorithms by observing how well detected clusters corresponded to the sampling information of habitats and genders six refer to subsection terminology for details since the true number of cluster patterns for habitat and gender was unknown and there were no literature references to clearly mention how to determine the number of cluster patterns in either body habitat or host gender we empirically defined reference clusters based on six assuming that metagenomic samples with identical were likely to have similar microbial structures we bring the metagenomic samples with identical into one reference cluster typically the quality of the predicted clusters could be evaluated by following three quantity measures pr metrics and which could measure how well the detected clusters corresponded to reference clusters among these three measures which was the harmonic mean of precision and recall aimed at assessing how well the detected clusters matched reference yang et al bmc systems biology suppl http page of ones at cluster level precision measured what fraction of the detected clusters were matched with reference ones and recall measured what fraction of reference clusters were matched to detected clusters metric took into account the overlap between detected and reference clusters focused on measuring whether samples within identical habitats were grouped together in the detected clusters the value of each measure varied from to and the higher value indicated better match for more details of pr metrics and please refer to additional file section and the parameter setting in the experiments is introduced in additional file section 
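As a concrete illustration of the identification phase described above (symmetric NMF on the consensus matrix followed by thresholding of the cluster indicator matrix H), the following C sketch runs a multiplicative update for a KL-divergence cost with an L1 sparsity weight beta, then assigns a sample to every cluster whose indicator value exceeds the threshold tau. This is an illustrative sketch, not the authors' implementation: the toy consensus matrix W, the values of N, K, beta and tau, the fixed iteration count, and the exact form of the update (the standard multiplicative rule for this objective, with constant factors absorbed into beta) are all assumptions.

    /* sketch of the identification phase: symmetric NMF on an N x N consensus
       matrix W with a KL-divergence cost and L1 sparsity weight beta on the
       cluster indicator matrix H, followed by thresholding H at tau */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define N 6   /* number of metagenomic samples (toy example) */
    #define K 2   /* predefined number of clusters */

    /* (H H^T)_ij with a small epsilon to avoid division by zero */
    static double hht(double H[N][K], int i, int j)
    {
        double s = 1e-12;
        for (int z = 0; z < K; z++) s += H[i][z] * H[j][z];
        return s;
    }

    int main(void)
    {
        /* toy consensus matrix: fraction of base clusterings in which
           samples i and j are assigned to the same cluster */
        double W[N][N] = {
            {1.00, 1.00, 0.75, 0.00, 0.00, 0.25},
            {1.00, 1.00, 0.75, 0.00, 0.25, 0.00},
            {0.75, 0.75, 1.00, 0.25, 0.00, 0.00},
            {0.00, 0.00, 0.25, 1.00, 1.00, 0.75},
            {0.00, 0.25, 0.00, 1.00, 1.00, 0.75},
            {0.25, 0.00, 0.00, 0.75, 0.75, 1.00}};
        double H[N][K], Hnew[N][K];
        double beta = 0.1;   /* sparsity weight */
        double tau  = 0.3;   /* membership threshold */

        srand(1);
        for (int i = 0; i < N; i++)
            for (int z = 0; z < K; z++)
                H[i][z] = 0.1 + (double)rand() / RAND_MAX;  /* random init */

        /* multiplicative updates; a real implementation would also monitor
           the relative change of H and stop below a tolerance */
        for (int it = 0; it < 500; it++) {
            for (int i = 0; i < N; i++)
                for (int z = 0; z < K; z++) {
                    double num = 0.0, den = beta;
                    for (int j = 0; j < N; j++) {
                        num += W[i][j] * H[j][z] / hht(H, i, j);
                        den += H[j][z];
                    }
                    Hnew[i][z] = H[i][z] * num / den;
                }
            memcpy(H, Hnew, sizeof H);
        }

        /* from cluster indicator matrix to microbial clusters: sample i is
           assigned to cluster z whenever H[i][z] exceeds the threshold tau */
        for (int i = 0; i < N; i++) {
            printf("sample %d:", i);
            for (int z = 0; z < K; z++)
                if (H[i][z] > tau) printf(" cluster %d", z);
            printf("\n");
        }
        return 0;
    }

In the paper the iteration is instead stopped once the relative change in H falls below the tolerance r (subject to a maximum number of iterations), the factorisation is repeated several times from random initial conditions, and the run with the lowest cost is kept; the fixed 500 iterations and single run here are purely for brevity.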
evaluation of clustering results generated by algorithm in this subsection to evaluate the performance of metaec algorithm we presented performance comparison of proposed algorithm with base clustering approaches and comparison of constructed consensus matrix with original microbial similarity matrix comparison against four base clustering approaches to evaluate the performance of ensemble clustering approach the accuracy of the clustering results derived from our proposed approach was compared with the ones derived from these base clustering algorithms figure illustrated the performance of different clustering algorithms in terms of three metrics pr and with respect to the reference clusters from figure we could observe that our approach had competitive performance compared with the base clustering algorithms as regard to all three measures among the base clustering algorithms with cluster number set to had better performance in terms of pr while and clustering with cluster number set to had better performance in terms of and hierarchical clustering with cluster number set to and had comparable performance with and clustering with cluster number set to in terms of but none of them could have superior performance than others as regard to all three measures however our approach obtained the best performance in terms of all the three measures this may be owing to the fast that approach could make use of clusters derived from different base clustering algorithms and extract more reliable results in addition we conducted sensitivity study of phylogenetic structure similarity on microbial network we ran algorithm with threshold value of metagenomic similarity in matrix tuning from to with as step size the results in additional file figure showed outperformed other clustering techniques in the wide range of edge threshold indicating that our figure performance comparison of ensemble clustering framework to base clustering algorithms with respect to fmeasure pr and note that approach with random initialization is denoted as while approach with a base clustering result as initial input is denoted as the result of is obtained with algorithm is robust and insensitive to the similarity network noisy and data coverage in addition we have compared the computational time with base clustering approaches in table and results show that yang et al bmc systems biology suppl http page of table comparison with bases clustering approaches on computational time method em time s hierarchical exactly spend more time than that by and hierarchical clustering but less than em clustering so the total time cost of metaec is the sum of all base clustering algorithms plus seconds with rapid development of computational capability we could improve the time efficiency on large amount of operations comparison of constructed consensus network with original similarity network to demonstrate the benefits of combining different base clustering results we applied symmetric nmf on original metagenomic similarity network and evaluated its performance to be fair the results of symmetric nmf on original metagenomic similarity network were obtained over the best tuned parameter the comparison of the two tested similarity network is present in figure as regard to pr and the results in figure showed that applying symmetric nmf on consensus matrix achieved better performance than that on the original similarity network these results demonstrated the benefits of combining different base clustering results if the similarity matrix was 
well constructed each element reflected the cocluster similarity the factorization of the similarity matrix would generate a clustering assignment matrix figure performance comparison of bayesian nmf based clustering algorithm applied on ensemble clustering similarity network and original microbial similarity network additive values of three measures are present for each data source for random initialization case the value of is set to and the result corresponds to we also choose the base clustering results which presents the best performance as the initial input of symmetric nmf and the result corresponds to that could well capture the cluster structure inherent in the network representation however the original network weighted the interaction via measuring the phylogenetic structure of samples in this way metagenomic samples with higher phylogenetic similarity were more likely to be involved in one cluster if the actual microbial pattern was uncorrelated with phylogenetic similarity the community detected by symmetric nmf may be unreliable in ensemble clustering framework we generated a consensus matrix that integrated the clustering results derived from different clustering algorithms each element in consensus matrix indicated the frequency of the corresponding sample pair being clustered together in these base clustering results thus applying symmetric nmf on consensus matrix could take into consideration the strength of multiple clustering patterns and output a more comprehensive and robust result interpretation of microbial community patterns on human body habitats based on clustering results recall that metagenomic samples were clustered in terms of frequency in base clustering results hence the final output clusters assembled samples to represent unique microbial patterns that are the consensus from base clustering approaches next from the clustering results we infer how microbial pattern was influenced by body habitats and host genders structural variation across body habitats through analyzing the enrichment of body habitat and host gender over six predicted clusters the results in figure revealed a stronger coherence by body habitat than host gender these clusters dominated by particular body habitats inferred that these body habitats harboured distinctive microbial patterns which was also observed in base clustering results in additional file figure although four base clustering algorithms generate clustering patterns with different criterions most clusters in additional file table were enriched with particular habitats meanwhile we observed that microbial communities at different body habitat exhibited different degree of compositional structure variation figure showed that microbial structure remained relatively stable in oral cavity compared with diverse microbial structures harboured in skin it was biologically reasonable to detect diverse patterns on skin since there were quite different places where skin microbial communities could be sampled different extend of habitat structural variation were also observed in base clustering results in additional file figure gut and oral cavity microbial community patterns were only fit with one clustering criterion gut consistent with and oral cavity with yang et al bmc systems biology suppl http page of figure sample distribution on predicted clusters with respect to body habitat and host gender hierarchical clustering contrary to gut and oral cavity cluster could be recognized by four clustering criterions in all experimental 
settings inferring skin samples have many cluster patterns with diverse microbial structures note that the proposed generates a more comprehensive community patterns with respect to since our result is an agreement by consensus of multiple base clustering approaches for example compared to hierarchical and em clustering results in additional file figure that only capture cluster ensemble clustering is able to uncover femalegut specific clusters shown in figure indicating that could reveal degree of structural variation over body habitat more comprehensively than base clustering results structural variation across host gender we further assessed microbial structure variation with respect to host gender was used to measure similarity of two metagenomic samples the results in figure indicated that over all habitats variation was significantly less within same gender samples than between opposite gender samples however these habitats perform different degree of structural variation with respect to host gender oral cavity microbiome exhibited a stable structure both among same and opposite gender individuals both above phylogenetic structure similarity and skin communities had no unique structural variation patterns regarding to host gender gut community structure was highly variable between samples from opposite gender hosts less than similarity value for opposite gender samples of gut cluster but exhibited strong coherence to same gender hosts on the other hand the enrichment study in figure showed that two gut clusters were distinct with host gender indicating that opposite sexual individuals may exhibit a distinct microbial composition in gut microbial interconnection over habitats although microbial communities reflected unique structures distributions over body habitats the interconnected microbial components among the body habitats were still observed in the clustering results for example cluster in figure contained skin samples that shared similar microbial compositions with oral cavity communities while skin cluster and harboured and oral cavity samples respectively since skin microbial pattern was closely associated with external yang et al bmc systems biology suppl http figure structural variation over host gender in oral cavity gut and clusters environment and oral cavity was an open system where microbiome from external environment was imported by breathing eating food and drinking water oral cavity and skin would respond to outside environmental conditions and gradually evolve similar microbiomes conclusions and discussions the human microbiomes are microbiomes that are hosted in gut oral mucosa and of skin etc these organisms perform functions that are useful for human host to maintain healthy yet detailed factors that attribute the microbial community structures in human body habitats and host gender remain poorly conceptualized to fully understand the roles of human microbiome in disease and health prior studies focus on particular body habitats of health individuals with specific clustering approaches based on the assumption that metagenomic samples of same body habitats would develop similar microbial structure patterns however human habitats are not isolated they are interacted and correlated to form an integrated and complex system and identified structures might be unsuccessful due to noisy sample similarity and specific topological structure within metagenomic network hence single clustering algorithm rarely achieves optimal outcome to uncover a global and comprehensive 
landscape of human microbiome we perform an ensemble clustering framework on page of scale metagenomic samples in this study our proposed algorithm has four main advantages on microbial pattern detection could effectively identify more reliable microbial communities via integrating many base clustering results as regard to the modularity of microbial communities defined as the clustering of microbial communities modularity according to the effects of their related environments or treatments the consensus clustering network is much clearer at showing such modularity property such as how environments or shape microbial communities in body habitat which is critical to healthcare and diognosis than the original metagenomic similarity network ensemble framework is robust for the coverage of metagenomic similarity network as shown in additional file figure and compared to base clustering results in additional file figure algorithm could reveal the spatial and gender patterns of microbiome as shown in figure more comprehensively as the ensemble clustering result is a general agreement by multiple base clustering approaches nevertheless it should be acknowledged that the performance of our algorithm depends on the base clustering results and quality of original metagenomic similarity network if all these base results were generated by poor clustering algorithms the ensemble outputs would be far from real microbial community similarity patterns if the original similarity network is unreliable to capture the modularity of metagenomic samples none of clustering approaches could work to address this problem we have to integrate more base clustering approaches with diverse optimization criterions and pattern assumptions to reduce the bias generated by base approaches we assume these algorithms can capture a wide variety of clustering patterns in similarity network to alleviate the effect of unreliable clustering results on the other hand the proposed nmf based mode which could be used in association study of bioinformatics domain is a more complex method to implement and convergence could be slow as shown in table with rapid development of computational capability we could improve the time efficiency on large amount of operations and the nonnegative constraints on cluster indicator matrix h may be an insufficient condition for achieving sparseness in some cases then one may set appropriate thresholds to enforce sparseness in summary is an ensemble clustering framework for metagenomic data analysis and microbial community pattern detection in the future nmf based model could be exploited to offer potential applications on bipartite model of association and disease gene prediction yang et al bmc systems biology suppl http availability the data sets and supporting experimental results of this article are available for download from http page of additional material additional file experimental design the file show the experimental design in this paper including introductory of four base clustering approaches evaluation of microbial clusters parameter setting additional file supplementary material the file presents several figures tables and additional experimental results mentioned in this paper including the efficiency of algorithm evaluation of four base clustering results sensitivity study of phylogenetic structure similarity on microbial network competing interests the authors declare that they have no competing interests authors contributions conceptualized and designed the method and drafted 
manuscript py kn responsible for the implementation py xs loy provided raw data kn xs participated in discussion and improved the method as well as revised the draft xs loy read and approved the manuscript py xs loy hnc kn acknowledgements this work is supported in part by chinese academy of sciences grant ministry of science and technology s grant and as well as national science foundation of china grant and declarations publication costs for this article were partially funded by chinese academy of sciences grant ministry of science and technology s grant and as well as national science foundation of china grant and and by the institute for infocomm research agency for science technology research a star singapore this article has been published as part of bmc systems biology volume supplement thirteenth international conference on bioinformatics systems biology the full contents of the supplement are available online at http authors details institute for infocomm research agency for science technology research a star singapore singapore biology group of single cell center shandong key laboratory of energy genetics and cas key laboratory of biofuels qingdao institute of bioenergy and bioprocess technology chinese academy of science qingdao china for computer vision and department of mathematics sun university guangzhou china published december references wilson m bacteriology of humans an ecological perspective john wiley sons dethlefsen l m relman da an ecological and evolutionary perspective on mutualism and disease nature turnbaugh pj ley re hamady m cm knight r gordon ji the human microbiome project exploring the microbial part of ourselves in a changing world nature lederberg j infectious history science eckburg pb bik em bernstein cn purdom e dethlefsen l sargent m gill sr nelson ke relman da diversity of the human intestinal microbial flora science fierer n hamady m lauber cl knight r the influence of sex handedness and washing on the diversity of hand surface bacteria proceedings of the national academy of sciences aas ja paster bj stokes ln olsen i dewhirst fe defining the normal bacterial flora of the oral cavity journal of clinical microbiology nasidze i quinque d li j li m tang k stoneking m comparative analysis of human saliva microbiome diversity by barcoded pyrosequencing and cloning approaches analytical biochemistry turnbaugh pj hamady m yatsunenko t cantarel bl duncan a ley re sogin ml jones wj roe ba affourtit jp et al a core gut microbiome in obese and lean twins nature grice ea kong hh conlan s deming cb davis j young ac nisc comparative sequencing program bouffard gg blakesley rw murray pr et al topographical and temporal diversity of the human skin microbiome science bik em long cd armitage gc loomer p emerson j mongodin ef nelson ke gill sr cm relman da bacterial diversity in the oral cavity of healthy individuals the isme journal mitreva m structure function and diversity of the healthy human microbiome nature costello ek lauber cl hamady m fierer n gordon ji knight r bacterial community variation in human body habitats across space and time science lozupone c hamady m knight r online tool for comparing microbial community diversity in a phylogenetic context bmc bioinformatics kent ad yannarell ac rusak ja triplett ew mcmahon kd synchrony in aquatic microbial community dynamics the isme journal zinger l coissac e choler p geremia ra assessment of microbial communities by graph partitioning in a study of soil fungi in two alpine meadows applied and environmental 
microbiology lloyd sp least squares quantization in pcm ieee transactions on information theory szekely gj rizzo ml hierarchical clustering via joint distances extending ward s minimum variance method journal of classification moon tk the algorithm ieee signal processing magazine devarajan k nonnegative matrix factorization an analytical and interpretive tool in computational biology plos computational biology qi q zhao y li m simon r matrix factorization of gene expression profiles a for bioinformatics zhang s li q liu j zhou xj a novel computational framework for simultaneous integration of multiple types of genomic data to identify regulatory modules bioinformatics l dai dq zhang xf protein complex detection via weighted ensemble clustering based on bayesian nonnegative matrix factorization plos one lancichinetti a fortunato s consensus clustering in complex networks scientific reports kuang d park h ding ch symmetric nonnegative matrix factorization for graph clustering sdm caporaso jg lauber cl costello ek d gonzalez a stombaugh j knights d gajer p ravel j fierer n et al moving pictures of the human microbiome genome biol su x xu j ning k efficient search for similar microbial communities based on a novel indexing scheme and similarity score for metagenomic data bioinformatics kullback s letter to the editor the distance the american statistician psorakis i roberts s sheldon b soft partitioning in networks via bayesian matrix factorization adv neural inf process syst tan vy c automatic relevance determination in nonnegative matrix factorization in spars processing with adaptive sparse structured representations yang et al bmc systems biology suppl http page of seung d lee l algorithms for matrix factorization advances in neural information processing systems greene d cagney g krogan n cunningham p ensemble matrix factorization methods for clustering interactions bioinformatics manning cd raghavan p h introduction to information retrieval cambridge university press mcguire al colgrove j whitney sn diaz cm bustillos d versalovic j ethical legal and social considerations in conducting the human microbiome project genome research dewhirst fe chen t izard j paster bj tanner ac yu wh wade wg the human oral microbiome journal of bacteriology yang p li x mei jp kwoh ck ng sk learning for disease gene identification bioinformatics mei jp kwoh ck yang p li x zheng j interaction prediction by learning from local information and neighbors bioinformatics yang p li x wu m kwoh ck ng sk inferring association via global protein complex network propagation plos one zheng x ding h mamitsuka h zhu s collaborative matrix factorization with multiple similarities for predicting interactions in acm sigkdd international conference on knowledge discovery and data mining mei jp kwoh ck yang p li x zheng j globalized bipartite local model for interaction prediction proceedings of the international workshop on data mining in bioinformatics yang p li x chua hn kwoh ck ng sk ensemble positive unlabeled learning for disease gene identification plos one cite this article as yang et al microbial community pattern detection in human body habitats via ensemble clustering framework bmc systems biology suppl submit your next manuscript to biomed central and take full advantage of convenient online submission thorough peer review no space constraints or color figure charges immediate publication on acceptance inclusion in pubmed cas scopus and google scholar research which is freely available for redistribution submit 
dynamic loop parallelisation adrian and orestis epcc may the university of edinburgh kings buildings mayfield road edinburgh uk of nested loops are a common feature of high performance computing hpc codes in shared memory programming models such as openmp these structure are the most common source of parallelism parallelising these structures requires the programmers to make a static decision on how parallelism should be applied however depending on the parameters of the problem and the nature of the code static decisions on which loop to parallelise may not be optimal especially as they do not enable the exploitation of any runtime characteristics of the execution changes to the iterations of the loop which is chosen to be parallelised might limit the amount of processors that can be utilised we have developed a system that allows a code to make a dynamic choice at runtime of what parallelism is applied to nested loops the system works using a source to source compiler which we have created to perform transformations to user s code automatically through a directive based approach similar to openmp this approach requires the programmer to specify how the loops of the region can be parallelised and our runtime library is then responsible for making the decisions dynamically during the execution of the code our method for providing dynamic decisions on which loop to parallelise significantly outperforms the standard methods for achieving this through openmp using if clauses and further optimisations were possible with our system when addressing simulations where the number of iterations of the loops change during the runtime of the program or loops are not perfectly nested i ntroduction high performance computing hpc codes and in particular scientific codes require parallel execution in order to achieve a large amount of performance increase depending on the underlying parallel platform which is used programmers use different programming models in order to achieve parallel execution in distributed memory systems the message passing programming model is the most commonly used approach for applying parallelism in the codes in shared memory systems however an attractive choice for parallel programming is through openmp the parallelisation of codes with openmp is often achieved with loop parallelisation as long as the iterations of a loop are independent they can be distributed to the available processors of the system in order to execute them in parallel a programmer is required to specify a loop that can be parallelised by placing compiler directives before the loop resolving any dependency issues between the iterations beforehand hpc codes often consist of regions with nested loops of multiple levels in order to parallelise these regions a choice must be made on how parallelism should be applied on the loops even though openmp supports a variety of strategies for parallelising nested loops only a single one can be used to parallelise the code a static choice however can not exploit any runtime characteristics during the execution of the program changes in the input parameters of the executable which affect the iterations of the loops may render the parallelisation decision suboptimal in addition to this the iterations of a loop can change at runtime due to the nature of the code a common feature of hpc codes is to organise the data into hierarchies for example blocks of arrays depending on the problem the blocks can have different shapes and sizes these parameters affect the loops that are 
responsible for accessing this data in some situations a static decision has the potential to impose a limitation on the amount of processors that can be used for the parallel execution of the loops with the current trend of chip manufactures to increase the number of cores in the processors in each generation leading to larger and larger shared memory system being readily available to computational scientists on the desktop and beyond a more dynamic approach must be considered for taking such decisions this report outlines our investigations into various strategies that can be applied at runtime in order to make a dynamic decision on how to parallelise a region with nested loops our approach is to try to automatically perform modifications to users code before compilation in order to enable the code to make these decisions dynamically at runtime specifically we investigated the possibility of having multiple versions of a loop within a region of nested loops in order to make a dynamic choice on whether a loop should be execute sequentially or in parallel ii o pen mp openmp is arguably the dominant parallel programming model currently used for writing parallel programs for used on shared memory parallel systems now at version and supported by c and fortran openmp operates using compiler directives the programmer annotates their code specifying how it should be parallelised the compiler then transforms the original code into a parallel version when the code is compiled by providing this higher level of abstraction openmp codes tend to be easier to develop debug and maintain moreover with openmp it is very easy table i s trategies for parallelising nested loop regions name description outermost inner loop nested loop parallelisation of the outermost loop parallelisation of one of the inner loops parallelisation of multiple loops with nested parallel regions collapsing the loops into a single big loop loop collapsing loop selection runtime loop selection using if clauses to develop the parallel version of a serial code without any major modifications whilst there are a number of different mechanisms that openmp provides for adding parallel functionality to programs the one that is generally used most often is loop parallelisation this involves taking independent iterations of loops and distributing them to a group of threads that perform these sets of independent operations in parallel since each of the threads can access shared data it is generally straightforward to parallelise any loop with no structural changes to the program iii n ested l oops hpc codes and particularly scientific codes deal with numerical computations based on mathematical formulas these formulas are often expressed in the form of nested loops where a set of computations is applied to a large amount of data generally stored in arrays and parallelisation can be applied to each loop individually the arrays often consist of multiple dimensions and the access on the data is achieved with the presence of nested loops furthermore it is not uncommon that the arrangement of the data is done in multiple hierarchies most commonly in blocks with multidimensional arrays where additional loops are require in order to traverse all the data when such code is presented a choice must be made on which loop level to parallelise where the parallelisation should occur a summary of the available strategies is presented in table i outermost loop the most commonly used approach is to parallelise the outermost loop of a nested loop region as 
shown in listing using this strategy the iterations of the loop are distributed to the members of the thread team the threads operate in parallel by executing the portion of iterations they are assigned to them individually the nested loops of the parallel region are executed in a sequential manner pragma omp parallel for private j for i i i for j j j work listing outer loop parallelisation of a nested loop region parallelising the outermost loop is often a good choice as it minimises the parallel overheads of the openmp implementation such as the initialisation of the parallel region the scheduling of loop iterations to threads and the synchronisation which takes place at the end of the parallel loops more extensive work on the overheads of various openmp directives can be found in despite the advantages of the outermost loop parallelisation strategy in this context there are drawbacks of this choice the maximum amount of available parallelism is limited by the number of iterations of the outerloop loop considering the example code in listing it is only possible to have i tasks being executed in parallel this restricts the number of threads the code can utilise upon execution and therefore the number of processors or cores that can be exploited b inner loop this is a variant on the outermost loop strategy with the difference that one of the inner loops of the region is chosen to be parallelised this approach will only be required or beneficial if the outer loop does not have enough iterations to parallelise efficiently as this variant on the parallelisation strategy introduces parallelisation overheads by requiring the parallelisation to be performed for each loop of the outerloop rather than once for all the loops as shown in listing further nesting of the parallelisation at deeper loop levels will further increase the performance problems the parallel overheads appear a lot more times whereas the amount of work of each iteration becomes finer for i i i pragma omp parallel for shared i for j j j work listing inner loop parallelisation of a nested loop region another issue with this strategy is the scenario where loops are not perfectly nested in this situation when there are computations the loops as shown in listing parallelising a loop of a deeper level will result in sequential execution of that work depending on the amount of the execution time which is now serialised this approach has the potential to increase the execution time of the code for i i i somework for j j j otherwork listing poorly nested loop region example nested the nested parallelisation strategy exploits the fact that more than one loop can be executed in parallel by opening multiple nested parallel regions at different levels of loops as presented in listing more threads can be utilised during the parallel execution of the code unlike the outermost loop and the inner loop approaches which can only utilise as many threads as the iterations of the loop with the biggest number of iterations this strategy can exploit further parallelisation opportunities other studies have shown that nested parallelism can give good results on systems with a large number of processors pragma omp parallel for private j for i i i pragma omp parallel for shared i for j j j work listing nested loop parallelisation of a nested loop region loop collapsing the loop collapsing strategy takes a different approach for exposing additional parallelism within nested loop regions by performing code transformations multiple nested loops are combined 
or collapsed into a single loop the newly created loop has a larger amount of iterations which can be distributed to the threads as of version openmp supports loop collapsing by using the collapse clause in the loop construct requiring the programmer to provide the number of loop levels to collapse to be able to use the collapse clause the loops have to be perfectly nested no code between the loops and the number of loop iterations when multiplied together need to be able to be regularly divided loop collapsing can produce better results than both the inner loop and nested loop strategies since the parallel overheads are minimal however it is not always available either because not all compilers support openmp version or because the conditions outlined above can not be met pragma omp parallel for collapse for i i i for j j j for k k k work parallel region is always created in either case the presence of the if clause only affects the number of threads that get assigned to the parallel region when sequential execution is triggered the code is only executed by the master thread for parallel execution all threads execute the code furthermore with the if clause programmers are still required to manually write code which makes the decision construct sensible to be evaluated and manually parallelise each loop that is a potential target for parallelisation iv dynamic loop one of the motivators for this work was a parallelisation that was undertaken of a structured code for undertaking computational fluid dynamics cfd simulation it is a structured mesh multigrid code which works with multiblock grids and includes a range of cfd solvers including steady state dual harmonic balance and timedomain the general pattern for the computations within the code is shown in listing whilst this type of computational pattern is not uncommon for scientific codes one of the challenges in the parallelisation is that as the code can use a range of different methods as previously outlined the range of these loops can vary for instance when performing a time domain simulation the harmonic loop has a single iteration however when performing a harmonic balance simulation it can have a range of values generally between and furthermore it is not uncommon to run large simulations with a single block or a small number of blocks meaning that the block loop has a very small number of iterations finally each block in the simulation can have different values for its dimensions in theory the loop collapsing strategy would be ideal for this type of simulation code as this would enable parallelisation without having to deal with the varying sizes of the nested loops however it can not be guaranteed that for all input datasets the loop iterations can be regularly divided and there are also particular areas of the code where the loops are not perfectly nested listing parallelisation of a nested loop region with loop collapsing loop selection openmp already provides a way of forcing a parallel region to execute sequentially with the use of the if clause on openmp directives the if clause of the following form if scalar expression is used to determine at runtime whether the code enclosed in the parallel region should execute sequentially or in parallel when the scalar expression of the clause evaluates to the region is executed sequentially any other value will result in parallel execution however a new parallelisation for iter iter for block block for harmonics harmonics for for perform computations listing example scientific code 
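As a concrete illustration of the loop selection approach just described, the following sketch applies OpenMP if clauses to a loop nest shaped like the example scientific code above, parallelising the block loop only when it has enough iterations to occupy every thread and otherwise parallelising the cell loop. This is an illustrative sketch, not code from the CFD application: the loop bounds, the data layout, and the simple nblocks >= nthreads test are assumptions.

    /* runtime loop selection with OpenMP if clauses on a CFD-style nest */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    /* placeholder for the real per-cell computations */
    static void compute_cell(double *cell) { *cell += 1.0; }

    int main(void)
    {
        int niter = 10, nblocks = 2, nharm = 4, ncells = 100000; /* assumed */
        double *data = calloc((size_t)nblocks * nharm * ncells, sizeof(double));
        if (!data) return 1;

        omp_set_nested(1);   /* allow the inner region to use threads when the
                                outer region is serialised by its if clause */
        int nthreads = omp_get_max_threads();
        int par_blocks = (nblocks >= nthreads); /* runtime parallelisation choice */

        for (int iter = 0; iter < niter; iter++) {
            #pragma omp parallel for if(par_blocks)
            for (int b = 0; b < nblocks; b++) {
                for (int h = 0; h < nharm; h++) {
                    #pragma omp parallel for if(!par_blocks)
                    for (int c = 0; c < ncells; c++)
                        compute_cell(&data[((size_t)b * nharm + h) * ncells + c]);
                }
            }
        }

        printf("data[0] = %f\n", data[0]);
        free(data);
        return 0;
    }

As noted above, a parallel region is still created even when its if expression evaluates to false (it simply runs on the master thread only), which is the overhead that motivates the code duplication approach developed in the following sections.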
loops given the different techniques that can be used to parallelise nested loops the occurrence of nested loops in many scientific simulation codes and the fact that the loop iterations of nested loops can change for different input datasets of a code or when performing different functions with a code we wanted a system that enabled the selection of different parallelisation choices to be available to code at runtime when the specific ranges of the nested loops are known our strategy for providing this functionality is to create code based on the provided user code that can perform a parallelisation of any of the nested loops and add decision making algorithms to dynamically choose at runtime which parallelisation is used specifically we have created tools that create multiple versions of a loop within a region of nested loops in order to make a dynamic choice on whether a loop should execute sequentially or in parallel in general code duplication is considered bad programming practice as it can amongst other issues lead to update anomalies where not all instances of the functionality are modified when modifications occur and thus damage the maintainability of the code however if the duplicate code in our instance the serial and parallel versions of each loop in the nested loop structure can be generated automatically for standard user code then it will not adversely affect the maintainability of the user program we created a compiler that recognises compiler directives within user s source code and uses them to the source code and generate a program that has the alternative parallelisation strategies encapsulated within it by exposing a simple interface to the programmers through compiler directives which are similar to the already familiar openmp compiler directives we can automatically provide the dynamic parallelisation functionality for users without requiring significant changes to the original source code furthermore this approach provides the users the choice of enabling or disabling our functionality with minimum effort to complement the code duplication we have also implemented functionality in a small runtime library that produces the code which is responsible for deciding what parallelisation to perform automatically the decision functionality considers the number of iterations of a loop in order to chose a parallelisation strategy that makes best use of the processors or cores available our implementation is currently limited to parallelising a single loop of a nested loop region taking advantage only the outermost and inner loop strategies other authors have already taken a similar approach by modifying the openmp runtime library in order to make these decisions dynamically however applying this logic in the openmp runtime library would have limited the implementation to a specific compiler using our compile approach we are aiming to transfer the logic in user code in order to maintain the portability of our solution in addition to simple heuristics we also explored the idea of a approach at runtime in order to detect the best possible parallelisation strategy with time measurements a heuristics based approach alone can not capture any information on the amount of the actual computations when making a decision on parallelising a loop whilst this is generally irrelevant for perfectly nested loops as all the work is in the lowest loop it may have more of an impact where there is work between the different loops as well there may also fig compilation process using the compiler 
be situations where a different inner loop has slightly more iterations than an outer loop so could be chosen by a simple heuristic as the place where the parallelisation occurs but the overheads associated with parallelising that inner loop actually make this a suboptimal choice providing a profiling based decision mechanism may help with both these scenarios and enable us to identify situations where for instance using less threads to parallelise an outer loop might provide a better execution time the idea of an auto tuning code has already been proposed by other researches for producing optimised code we apply similar logic s ource to source compiler our compiler acts as a preprocessor to c code which can contain openmp directives as well as our own directives the compiler parses the code and creates an internal representation of the code in the form of an abstract syntax tree ast the regions of the input code that contain our directives are translated into the semantics of the c programming language and openmp directives during the parse phase and appropriate nodes for these regions are placed in the ast the created ast is then translated back to c code with openmp directives this generated code is then compiled using a standard openmp enabled c compiler to produce a parallel executable this process is illustrated in figure our compiler implemented using the lua programming language along with the lpeg parsing library recognises a number of our own bespoke compiler directives of the form pragma preomp a loop that is preceded by a pragma preomp f or directive is considered by our compiler as a suitable candidate for applying parallelisation when such a loop is found our compiler performs the necessary code transformations so that a decision can be made at runtime whether the loop should run sequentially or in parallel and to ensure that both the sequential and parallel versions of the loop are available in the executable at runtime in addition to this a simple analysis of the loop is performed in order to facilitate the computation of a loops iterations during the making of the decision an example of such a code is presented in listing pragma preomp parallel for private j for i i pragma preomp parallel for shared i for j j work listing a nested loop region with preomp furthermore we also extend the grammar to support an additional clause the parallel threshold expression clause this is optional and when it is not present the compiler will assume a default value of this clause is used to allow control over when a loop is parallelised and will be discussed further in section vi a code duplication the main function of the compiler is to take the original user code and duplicate the loops to be parallelised so that there are both serial and parallel versions of those loops that can be selected at runtime as previously mentioned our system only allows one loop to be parallelised at any given time although which loop is parallelised can change over the runtime of a program as the parameters of the loop change but both the serial and parallel versions of all the loops to be parallelised must appear in the executable to enable a selection at runtime to take place when a loop is preceded by a pragma preomp f or directive the loop is duplicated and wrapped in a normal if else statement which evaluates a decision function from our runtime library and selects the if or else branch based on the outcome of the evaluation openmp if as a comparison to our code duplication approach we also implemented 
the same functionality uses the existing if clause of the openmp parallel construct our custom directive is translated into an openmp parallel for directive with an attached if clause in order to decide whether to execute the loop in parallel or not rather than a serial and parallel version of the loop the expression of the if clause consists of a call to a decision function of our runtime library which takes the evaluated expressions of the loops information in order to make a decision this functionality was included to allow a comparison of our approach to the standard method that developers could currently use to provide dynamic selection of parallelism with openmp however a major drawback of this approach and the reason we do not uses it for our functionality is that a parallel region will be created regardless of whether a loop is parallelised or not considering the example in figure parallelising the outer loop of two nested loops with two threads will result in three parallel regions each thread of the outer region will fig an example of using the if clause to parallelise a the outer and b the inner loop of two nested loops with two threads create a new parallel region and become its master in the case of the inner loop being parallelised two parallel regions are created for nested regions with a larger number of loops this method has the potential to produce excessive parallel overheads vi d ecision functions and the runtime library the runtime library implements the logic for deciding which version of a loop is chosen during execution once a code has been processed by the compiler it must then be linked with our runtime library to enable this functionality to be used a decision based on heuristics here we use heuristics based on information collected at runtime to decide whether a loop should execute sequentially or in parallel the idea of this approach is to look for the first loop that has enough iterations to utilise all of the available threads based on the assumption that parallelising outer loops is more efficient than parallelising inner loops as the amount of parallel overheads should be lower as the openmp parallel regions are encountered less frequently before the execution of a loop the decider checks whether a loop of an outer level is already running in parallel if this condition is met then the loop is serialised in the case that no outer loop is running in parallel the number of iterations of the loop is calculated and it is divided with the available number of threads if this results in a value that is greater than or equal to a specified threshold then the parallel version of a loop is chosen otherwise the loop is serialised as discussed in section v the default value of the threshold is there must be no idle threads although this can be controlled by the user the calculations of the iterations is based on the parameters of the loop which are extracted by the source to source compiler and are provided as arguments to the decision function in the case that the original code of the loop uses variables for its boundaries any change in their value will also be captured by the decision function during the calculation this design allows constant monitoring of any changes in the iterations of the loops which also results in dynamic adaptation of the parallelisation strategy during the execution of the program the algorithm is very simple and with minimum overheads moreover there is no need to maintain any state for the loops however the logic which is used by the 
function is of a program the profiling overhead will only be imposed in the first few iterations of the program figure outlines this with an example of three nested loops vii p erformance e valuation fig an example of the heuristics with profiling decider on three loops based on optimism it only considers the amount of parallelism exposed by the loop regardless of whether the amount of work of the loop is big enough to justify any overheads of the parallelisation or whether there is any work between loops to evaluate the performance of our new functionality we aimed to benchmark it against standard static openmp parallelisations with a range of different configurations in particular we focussed on varying the number of loop iterations the amount of work between and within loops and the number of changes that occur to loop bounds during execution to evaluate whether and when our approach is beneficial compared to a static parallelisation to undertake these benchmarks we used two different codes the first is a synthetic configurable benchmark c code shown in listing which we constructed for this evaluation the number of iterations of each loop can be configured as can the amount of work that is simulated by calling the delay function between the second and third loops and within the third loop b decision based on heuristics with profiling to address the potential issue with the basic decision based on heuristics previously discussed we also implemented a more complex decision function based on both the size of loops and some evaluation of the work in the loops in the same manner as the heuristics decider it uses the same information extracted by the source to source compiler in order to determine whether the loop should be parallelised or not however if a loop does not meet the conditions then the function reverts to a profiling mode in order to decide which version of the loop serial or parallel to choose from based on timings the first time a loop is executed the heuristics decider determines if the loop should be parallelised if the conditions are not met the sequential version of the loop is chosen and profiling is enabled for this loop at the next execution of the loop the evaluation of the heuristics is still performed if the conditions are still not met for example there where no changes in the iterations of the loop the loop is now parallelised since at this point we only have timing information for the serial version consecutive executions of the loop will first check the heuristics conditions falling back to profiling mode if the condition is not satisfied however the function will detect that timings for both versions are available and utilise the information gathered from profiling to decide what loop to parallelise providing the number of iterations of the loop have not changed with the fastest version chosen as the final decision in contrast to this if the amount of work is not the same the number of loop iterations has changed the timings get invalidated and profiling is to implement this functionality requires additional code when compared to the basic heuristic decision function this will impose an extra overhead to the produced program although if the loop iterations are static throughout the run for i for j delay for k delay listing synthetic benchmark code the second benchmark code was an extract from the cfd code outlined in listing this code is more complex than the synthetic benchmark and more representative of realistic scientific simulation codes this code is used to 
explore the performance of our solution when the loop iterations vary and when the bounds of loops are dynamic during the course of the execution of the benchmark one or more loops change their loop bound as the outer loops are progressed a benchmark environment the platform used to evaluate the dynamic loop parallelisation functionality was ness at epcc the system is composed by two parts a for development and job submission and a for job execution the management of the two parts is handled by the sun grid engine which allows submission of jobs from the that must be executed on the nodes in isolation the part of the system is composed by two sun shared memory nodes the central processing unit cpu of each node is an amd opteron processor of processing cores and gb or main memory each core has of cache for data and cache for instructions in addition there is also mb of available to each core combined for data and instructions we used the portland group pgi c compiler for the majority of the benchmarks with the following compiler flags for the benchmarking involving the openmp if functionality we used the gnc c compiler instead as the version of the pgi compiler we used does not support a thread team of a nested parallel region to have more than one threads when an outer region is serialised with the if clause this seems contrary to the openmp specification where the if clause only affects the number of threads that get assigned to a particular parallel region not the thread teams of its nested regions when using the gnu c compiler we used the following compiler flags timing information was collected using the omp get wtime function with each benchmark executed three times and the worst time taken since this is the limiting factor for the execution time a outer loop work b outer loop work c outer loop work d outer loop work b synthetic benchmark results if we consider the example code in listing the execution time of the code of the two internal nested loops when only the outer loop is parallelised with a certain amount of threads outer threads can be calculated as shown in equation tpouter is the execution time when parallelising the outer loop touter work is the time needed for the work the loops and tinner work is the time needed for the amount of work within the innermost loop tp outer in a similar fashion when parallelising the inner loop using inner threads the execution time of the loops is shown in equation tp inner if we want to have a reduction in the overall execution time by parallelising the inner loop the constraint tpinner tpouter must be satisfied solving this constraint in terms of touter work we can get the maximum allowed threshold of the execution time for the work of the outer loop as shown in equation it is worth mentioning that this model is an ideal performance model where the work is evenly distributed to the threads in reality the time of touter work might be affected by the presence of parallel overheads in order to test our hypothesis we measured the amount of time which is required by the delay function for various values with the results shown in figure the graphs in figure show the performance of four different parallelisation strategies openm p outer and openm p inner are the results from manual static parallelisations of the individual loops in the benchmark heuristics are the results from our basic decision function using a value of one only parallelise the loop if there are more iterations than threads available and heuristic p rof iler are the results from 
our system using the profiling functionality where appropriate fig synthetic benchmark results with varying levels of work between the loops from the results it is evident that when the loops are perfectly nested and regular the loop bounds are not changing then there is no benefit from using the profiling functionality the basic heuristics will choose the optimal loop to parallelise apart from when we are using threads the variation in outcomes for threads is is a consequence of the number of loop iterations chosen for the benchmark iterations of the outer loop and iterations of the inner loop the distribution of iterations to threads results in all of the threads to get assigned iteration of the outer loop each and of the threads get and extra iteration the total execution time in this case is limited by the slowest threads which is the time of iterations iterations of the outer loop multiplied by iterations of the inner one parallelising the inner loop with threads however of the thread get from iterations whereas the rest the threads get iterations each in this case the total execution time of the parallel loops is the amount of time required for iterations iterations of the inner loop multiplied by iterations of the outer loop since both decision functions only utilise the heuristics decision when the number of threads is less than the number of iterations they can not exploit this opportunity as no profiling is actually performed in this case this could be altered by setting the decision heuristic to a value other than setting the heuristic to from the graphs we can observe that our threshold value calculations hold for the parameters we used for this benchmark the calculated threshold value is approximately touter work seconds when the work of the outer work is less than the calculated threshold figures parallelising the inner loop with threads is still faster than parallelising the outer loop with threads as the amount of work increases the impact on the execution time when parallelising the inner loop table ii l oop parameters used for the cfd code benchmarking parameter value iters n cell j n cell i or or is increased since more work is now being serialised in these cases the heuristics decider makes the wrong choice figures and since its decision only concerns the amount of iterations of the loops and the available threads in contrast to this when profiling is used in the decision function it correctly detected that the fastest execution time is achieved by not parallelising the inner loop in the case where the amount of work of the outer loop exceeds the calculated threshold parallelising the inner loop even with threads increases the total execution time the benefit from using threads to parallelise the inner loop is not enough to justify the work that is serialised cfd benchmarking results the first benchmark that we performed using the extract from the cfd code was to compare the openmp if clause with our basic heuristic functionality we used as a reference the timings of the manually parallelised the n blocks n harmonics and n cell j loops and compare the execution time of the heuristics decision function for the two code generation modes of our compiler in order to avoid cases of the iterations not being evenly distributed to the threads we only consider cases of and threads the parameters used for the loop iterations are shown in table ii with varying amount of work in the inner loop we also consider cases where blocks do not have the same shape by altering the values of 
the n cell j and n cell i loops no alterations indicate that all of the blocks have a grid shape of j cell x i cell an alteration of means that the first and third blocks have a grid shape of whereas the second and fourth blocks have a shape of the performance results shown in figure highlight the fact that there is a significant difference between our implemented functionality and that provided by openmp the if clause not only is the if clause slower than the basic openmp parallelisation but it also increases the overall execution time of the code for figure where and threads are available only the loop of the outer level is parallelised in both code generation modes however the if clause mode produces a slower execution time than the code duplication mode when more than threads are used the parallelisation is applied on the n cell j loop in contrast to the code duplication mode which produces an execution time similar to the case of statically parallelising the loop the if clause mode is still slower a similar performance pattern is seen at threads moreover in the presence of alterations in the shape of the blocks as shown in figures and the if clause mode produces an even slower execution time on the other hand a small work no alterations b small work alterations c large work no alterations d large work alterations fig cfd benchmark with n blocks and n harmonics with varied alterations in the i and j cell loops and varied amount of work in the inner loop the code duplication mode can exploit this opportunity in order to utilise all of the available threads by applying parallelism on the n cell i loop increasing the amount of work in the core calculation has a positive effect on the if clause code generation mode we can observe from figure that compared to figure the difference between using the if clause and the static parallelisation is not as large for small numbers of threads this is likely to be because the performance cost of executing the if clause is proportionally smaller compared to the overall execution time however the same performance degradation is still observed when increasing the number of threads the execution times of the code using the openmp if clause raised some concerns over whether the code was operating correctly after extensive testing and verification we ascertained that both versions of the code the if clause and code duplication were correct and producing the same behaviour therefore we investigated the parallel overheads of the openmp runtime library of the gcc compiler other authors have already studied the overheads of nested parallelism on various compilers including a more recent version of the gcc compiler than the one used in this work their findings suggest that the implementation of nested parallel regions of the gcc compiler has significant overheads what is not presented in their work is whether or not the use of the if clause on nested parallel regions produces the same overheads in order to ensure that the behaviour we observed in our results is the cause of nested parallel regions and not the presence of the if clause we have constructed a simple micro benchmark table iii m icro benchmark results of gnu s c compiler s implementation of nested parallelism parallel loop execution time seconds outer inner nested with if clause nested with num threads clause a n blocks n harmonic b n blocks n harmonic no alterations alterations nested parallel micro benchmark we created four versions of a benchmark code with three nested loops and the delay function 
of the epcc microbenchmark suite in the block of the innermost loop the first version of the benchmark creates a parallel region on the loop of the second level the second version performs the same operation on the innermost loop the third version uses the if clause on both loops by serialising the outer loop with a value of and parallelising the inner loop with a value of finally the last version creates a parallel region on both of these loops however we force the number of threads on the thread team of the outer loop to using the num threads clause through this we manage to reproduce the same behaviour as with the if clause code case when the inner loop is parallelised the number of iterations of the parallel loops are the same as the number of available threads table iii presents the execution times of each case we can see that parallelising the inner loop with nested parallel regions takes seconds longer than parallelising the inner loop manually even for this small and simple benchmark moreover the two versions that contain nested parallel regions achieve very similar execution times from this test we can concluded that it is likely that the behaviour we observed from the if clause code generation mode is affected by the overheads of the implementation of the gcc compiler for nested parallel regions decision function benchmarking finally we investigated the performance of our profiling decision functionality for the cfd extract code this code is perfectly nested so the basic heuristic decision function should be optimal here as it should chose the best loop to parallelise with very little overheads whereas the profiling function has extra functionality and therefore imposes extra overheads on the performance of the code the results from our experiments are shown in figure we can observe from figures that both the decision functions make the correct choice of parallelisation strategy up to threads however the overheads of the profiling functionality have a negative impact on the overall execution time even when profiling is not actually being performed the functions which are inserted before and after the execution of each loop to count the amount of work performed at each loop level increase the overall time moreover we can observe that at threads the profiler actually chooses to parallelise the harmonics loop whereas the heuristics decider produces c n blocks n harmonic d n blocks n harmonic no alterations alterations fig cfd benchmark with varied alterations in the i and j cell loops and a large amount of work in the inner loop the correct behaviour of profiling the n cell loop the timings which are performed for each loop version during the profiling mode are sensitive to the presence of any overheads which ultimately affect the decision of the function such as the overhead of taking the timings when alterations are present in the shape of the loops as shown in figures and the heuristics decider manages to adapt its behaviour parallelising the innermost loop in order to utilise more threads and can significantly out perform the static parallelisation in all of the test cases the decision function which is based on profiling provides slower execution times than the decision function which is based on heuristics moreover the additional logic which is included in the decision function with profiling caused a suboptimal decision to be made in some situations viii i mproved profiling decisions the results from the previous benchmarks lead to considerations of the reasons behind the 
poor execution of the decision function which performs profiling comparing the functionality of this function with the simple case of the heuristics decision function there are two sources of additional overheads the first one is the logic of profiling each version of a loop in order to make a choice between the two versions of a loop the slow version must also be executed however if an actual simulation code runs for a significant amount of time this overhead should be negligible providing the loop bounds do not alter and trigger the profiling functionality too many times as it should only be incurred infrequently a small work n blocks n harmonic alterations b large work n blocks n harmonic alterations fig cfd benchmark with varied alterations in the i and j cell loops and a large amount of work in the inner loop the second source of overheads is the inclusion of additional function calls before and after each loop in order to measure the time of the execution and count the amount of work performed the elimination of the functionality for taking the slow path is not possible since this is the essence of profiling both versions of a loop must be executed in order to make a comparison between their execution time however we can relax the conditions on the validity of the timings if we only consider the number of the iterations of the specific loop which is being profiled then we can eliminate all the logic that performs the counting of the work for the internal loops when the decision function decides that a version of a loop should be profiled after the failure of the heuristics conditions the number of the iterations of the version of the loop that is going to be executed is saved in the state of the loop at that point this way the code of the function calls which are placed before and after each loop remains simple only adjusting the loop level counter of each thread as well as marking the starting and ending times of the execution of a loop which is being profiled rather than counting the iterations of internal loops as the initial profiling functionality does in order to test our theory we have created a new version of the runtime library which includes the above modifications called the relaxed profiler from the graphs in figure we can see that the removal of the additional logic which performs the counting benefits the decision function with profiling when no profiling is performed and threads the relaxed version of the decision function is faster than the accurate version and the same performance pattern holds when the profiling is performed threads and more for figure and threads and more for figure comparing the execution time of the new version of the decision function with profiling to the execution time of the heuristics decision function the latter still produces a faster execution time however the difference is not large this behaviour is expected since the presence of profiling introduces additional computations within the code itself from the functions which are placed before and after each loop moreover in the cases where the parallelisation is applied on a nested loop the decision function must execute both versions of a loop one of them being the slow version in order to make a decision finally we can see that the relaxed decision function rectifies the problem with the original profiling decision function of it choosing the wrong option in some cases for figure we can see that at and threads the relaxed profiler makes the correct choice and the same for figure at 
threads where the performance of the relaxed profile decision function is comparable to the heuristics decision function ix c onclusion the main focus of this work was to investigate the possibility of dynamically choosing at runtime the best loop of a nested loop region which best utilises the available threads we have successfully created a compiler and a runtime library in order to automatically allow a dynamic choice to be made at runtime as our solution uses a directives based approach similar to openmp we requires minimum effort and code change from the users point of view we have discovered that the current mechanism users can exploit to perform this the openmp if clause does not perform efficiently at least for the implementation we tested despite the fact that this behaviour is the result of the inefficient implementation of the gcc compiler which was used in this work the same compiler with the code duplication mode was able to provide additional speedup in the execution time of the code from this we conclude that by relying on the openmp runtime library to perform loop nesting the execution time is limited by the compilers implementation of nested parallel regions although code duplication is considered to be a bad programming practice when it is done automatically it can eliminate unnecessary parallel overheads we have also shown that some level of using profiling to select which loop to parallelise can provide performance benefits in certain circumstances for instance when loops are not perfectly nested openmp is currently generally used for small scale parallelisation of code primarily because there are very few large scale hpc resources however the current trend in processors suggests that in the near future large scale resources of order of cores are likely to be commonly available therefore sharedmemory parallelisations are likely to become more utilised and interesting for large scale scientific simulations r eferences openmp openmp application programming interface version duran silvera corbaln and labarta runtime adjustment of parallel nested loops in in proc of the international workshop on openmp applications and tools wompat chen su and yew the impact of synchronization and granularity on parallel systems in proceedings of the annual international symposium on computer architecture ser isca new york ny usa acm pp online available http tanaka taura sato and yonezawa performance evaluation of openmp applications with nested parallelism ayguade gonzalez martorell and jost employing nested openmp for the parallelization of computational fluid dynamics applications j parallel distrib vol no pp may online available http hall chame chen shin rudy and khan loop transformation recipes for code generation and pluto an automatic parallelizer and locality optimizer for online available http the lua programming online available http the lpeg online available http ness hpc online available http dimakopoulos hadjidoukas and philos a microbenchmark study of openmp overheads under nested parallelism in proceedings of the international conference on openmp in a new era of parallelism ser iwomp berlin heidelberg pp online available http
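To make the comparison of the two code-generation modes evaluated above concrete, the following C/OpenMP sketch contrasts the if-clause mode and the code-duplication mode for a doubly nested loop in the style of the synthetic benchmark listing. This is a minimal illustration under stated assumptions, not the compiler's actual output: the runtime entry point decide_parallelise, the integer loop identifiers, and the delay signature are hypothetical stand-ins for the (unnamed) runtime-library interface, and nested parallelism is assumed to be enabled (for example via omp_set_nested or OMP_NESTED).

/* Illustrative only: decide_parallelise() and the loop identifiers are assumed
 * names for the runtime library's decision function (heuristics and/or
 * profiling); delay() stands in for the EPCC-style delay routine used to
 * simulate work between and within the loops.                                */
#include <omp.h>

extern void delay(int amount);                      /* simulated work            */
extern int  decide_parallelise(int loop_id, int n); /* 1 = run loop in parallel  */

/* (a) if-clause mode: a single generated version of the nest; the decision
 *     function only toggles the if clause of each (possibly nested) region.  */
void nested_if_clause(int n_outer, int n_inner, int outer_work, int inner_work)
{
    #pragma omp parallel for if (decide_parallelise(0, n_outer))
    for (int i = 0; i < n_outer; i++) {
        delay(outer_work);
        #pragma omp parallel for if (decide_parallelise(1, n_inner))
        for (int j = 0; j < n_inner; j++)
            delay(inner_work);
    }
}

/* (b) code-duplication mode: serial and parallel versions of each loop are
 *     emitted and one is selected at run time, so a serialised loop is plain
 *     sequential code with no enclosing parallel construct.                   */
void nested_duplicated(int n_outer, int n_inner, int outer_work, int inner_work)
{
    if (decide_parallelise(0, n_outer)) {
        #pragma omp parallel for
        for (int i = 0; i < n_outer; i++) {
            delay(outer_work);
            for (int j = 0; j < n_inner; j++)
                delay(inner_work);
        }
    } else if (decide_parallelise(1, n_inner)) {
        for (int i = 0; i < n_outer; i++) {
            delay(outer_work);
            #pragma omp parallel for
            for (int j = 0; j < n_inner; j++)
                delay(inner_work);
        }
    } else {
        for (int i = 0; i < n_outer; i++) {
            delay(outer_work);
            for (int j = 0; j < n_inner; j++)
                delay(inner_work);
        }
    }
}

The design point the sketch is meant to expose is that, in the duplication mode, a loop the decision function chooses not to parallelise executes as ordinary sequential code, so its run time cannot be inflated by the compiler's handling of nested parallel regions; in the if-clause mode a deactivated outer region is still a parallel region enclosing the inner one, which is consistent with the nested-region overheads observed above for the GCC implementation.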
6
avoiding your teacher s mistakes training neural networks with controlled weak supervision mostafa aliaksei sascha jaap university of amsterdam google research dehghani severyn rothe kamps dec abstract in this paper we propose a learning method where we train two neural networks in a fashion a target network and a confidence network the target network is optimized to perform a given task and is trained using a large set of unlabeled data that are weakly annotated we propose to weight the gradient updates to the target network using the scores provided by the second confidence network which is trained on a small amount of supervised data thus we avoid that the weight updates computed from noisy labels harm the quality of the target network model we evaluate our learning strategy on two different tasks document ranking and sentiment classification the results demonstrate that our approach not only enhances the performance compared to the baselines but also speeds up the learning process from weak labels introduction deep neural networks have shown impressive results in a lot of tasks in computer vision natural language processing and information retrieval however their success is conditioned on the availability of exhaustive amounts of labeled data while for many tasks such a data is not available hence unsupervised and methods are becoming increasingly attractive using weak or noisy supervision is a straightforward approach to increase the size of the training data for instance in web search for the task of ranking the ideal training data would be rankings of documents ordered by relevance for a large set of queries however it is not practical to collect such a data in large scale and only a small set of judged pairs is available however for this task the output of heuristic methods dehghani et or clickthrough logs joachims can be used as weak or noisy signals along with a small amount of labeled data to train learning to rank models this is usually done by the network on weak data and it with true labels dehghani et severyn and moschitti however these two independent stages do not leverage the full capacity of information from true labels for instance in the stage there is no handle to control the extent to which the data with weak labels contribute in the learning process while they can be of different quality in this paper we propose a method that leverages a small amount of data with true labels along with a large amount of data with weak labels our proposed method has three main components a weak annotator which can be a heuristic model a weak classifier or even human via crowdsourcing and it is employed to annotate massive amount of unlabeled data a target network which uses a large set of weakly annotated instances by weak annotator to learn the main task and a confidence network which is trained on a small set to estimate confidence scores for instances annotated by weak annotator we train target network and confidence network in a fashion in a joint learning process target network and confidence network try to learn a suitable representation of the data and this layer is shared between them as a communication channel the target network tries to learn to predict the label of the given input under the supervision of the weak annotator in the same time the output of confidence network which are the confidence scores define the magnitude of the weight updates to the target network with respect to the loss computed based on labels from weak annotator during the propagation phase of the 
target network this way confidence network helps target network to avoid mistakes of her teacher weak annotator by the weight updates from weak labels that do not look reliable to confidence network from a perspective dehghani et the goal of the confidence network trained jointly with the target network is to calibrate the learning rate for each instance in the batch the weights w of the target network fw at step are updated as follows wt w lt b wt fwt w b where lt is the global learning rate b is the batch size l the loss of predicting fw for an input when the target label is is a scoring function learned by the confidence network taking input instance i and its noisy label and r is the regularization term thus we can effectively control the contribution to the parameter updates for the target network from weakly labeled instances based on how reliable their labels are according to the confidence network learned on a small supervised data our setup requires running a weak annotator to label a large amount of unlabeled data which is done at time for many tasks it is possible to use a simple heuristic or implicit human feedback to generate weak labels this set is then used to train the target network in contrast a small set is used to train the confidence network which estimates how good the weak annotations are controls the effect of weak labels on updating the parameters of the target network our method allows learning different types of neural architectures and different tasks where a meaningful weak annotator is available in this paper we study the performance of our proposed model by focusing on two applications in information retrieval and natural language processing document ranking and sentiment classification whilst these two applications differ considerably as do the exact operationalization of our model to these cases there are also some clear similarities first in both cases the human gold standard data is based on a cognitively complex or subjective judgments causing high interrater variation increasing both the cost of obtaining labels as the need for larger sets of labels second also in both cases the weak supervision signal is more systemic or objective which facilitates the learning of the data representation our experimental results suggest that the proposed method is more effective in leveraging large amounts of weakly labeled data compared to traditional in both tasks we also show that explicitly controlling the weight updates in the target network with the confidence network leads to faster convergence since the filtered supervision signals are more solid and less noisy in the following in section we introduce the general architecture of our model and explain the training process then we describe the details of the applications to which we apply our model in section in section we present the experimental setups for each of the tasks along with its results and analysis we then review related works and conclude the paper the proposed method in the following we describe our recipe for learning of neural networks in a scenario where along with a small training set a large set of weakly labeled instances is leveraged formally given a set of unlabeled training instances we run a weak annotator to generate weak labels this gives us the training set u it consists of tuples of training instances and their weak labels u for a small set of training instances with true labels we also apply the weak annotator to generate weak labels this creates the training set v consisting of 
triplets of training instances their weak labels and their true labels yj v yj we can generate a large amount of training data u at almost no cost using the weak annotator in contrast we have only a limited amount of data with true labels general architecture in our proposed framework we train a neural network that jointly learns the confidence score of weak training instances and the main task using controlled supervised signals the representation of the model is shown in figure it comprises a weak annotator and two neural networks namely the confidence network and the target network the goal of the weak annotator is to provide weak labels for all the instances u we have this assumption that provided by the weak annotator are imperfect estimates of true labels yi where yi are available for set v but not for set u prediction loss wrt the weak labels supervision layer prediction loss wrt the weak labels confidence network supervision layer goodness of instances representation learning weak annotator goodness of instances representation learning true labels a full supervision mode training on batches of data with true labels confidence network weak annotator true labels b weak supervision mode training on batches of data with weak labels figure learning from controlled weak supervision our proposed network for learning a target task in a fashion using a large amount of weakly labeled data and a small amount of data with true labels faded parts of the network are disabled during the training in the corresponding mode arrows show gradient propagation parameters of the parts of the network in red frames get updated in the backward pass while parameters of the network in blue frames are fixed during the training the goal of the confidence network is to estimate the confidence score of training instances it is learned on triplets from training set v input its weak label and its true label yj the score is then used to control the effect of weakly annotated training instances on updating the parameters of the target network in its backward pass during backpropagation the target network is in charge of handling the main task we want to learn or in other words approximating the underlying function that predicts the correct labels given the data instance and its weak label from the training set u the target network aims to predict the label the target network parameter updates are based on noisy labels assigned by the weak annotator but the magnitude of the gradient update is based on the output of the confidence network both networks are trained in a fashion alternating between the full supervision and the weak supervision mode in the full supervision mode the parameters of the confidence network get updated using batches of instances from training set v as depicted in figure each training instance is passed through the representation layer mapping inputs to vectors these vectors are concatenated with their corresponding weak labels generated by the weak annotator the confidence network then estimates which is the probability of taking data instance j into account for training the target network in the weak supervision mode the parameters of the target network are updated using training set u as shown in figure each training instance is passed through the same representation learning layer and is then processed by the supervision layer which is a part of the target network predicting the label for the main task we also pass the learned representation of each training instance along with its 
corresponding label generated by the weak annotator to the confidence network to estimate the confidence score of the training instance the confidence score is computed for each instance from set u these confidence scores are used to weight the gradient updating target network parameters or in other words the step size during it is noteworthy that the representation layer is shared between both networks so besides the regularization effect of layer sharing which leads to better generalization sharing this layer lays the ground for the confidence network to benefit from the largeness of set u and the target network to utilize the quality of set v model training our optimization objective is composed of two terms the confidence network loss lc which captures the quality of the output from the confidence network and the target network loss lt which expresses the quality for the main task both networks are trained by alternating between the weak supervision and the full supervision mode in the full supervision mode the parameters of the confidence network are updated using training instance drawn from training set v we use loss function for the confidence network to capture the difference between the predicted confidence score of instance j and the target score cj ranker lc log log the target score cj is calculated based on the difference of the true and weak labels with respect to the main task in the weak supervision mode the parameters of the target network are updated using training instances from u we use a weighted loss function lt to capture the difference between the predicted label by the target network and target label lt li compositionality embedding weights where li is the loss on training instance i and is the confidence score of the weakly annotated instance i estimated by the confidence network note that is treated as a constant during the weak supervision mode and there is no gradient propagation to the confidence network in the backward pass as depicted in figure we minimize two loss functions jointly by randomly alternating between full and weak supervision modes for example using a ratio during training and based on the chosen supervision mode we sample a batch of training instances from v with replacement or from u without replacement since we can generate as much train data for set u since in our setups usually the training process oversamples the instance from v the key point here is that the main task and confidence scoring task are always defined to be close tasks and sharing representation will benefit the confidence network as an implicit data augmentation to compensate the small amount of data with true labels besides we noticed that updating the representation layer with respect to the loss of the other network acts as a regularization for each of these networks and helps generalization for both target and confidence network since we try to capture all tasks which are related tasks and less chance for overfitting we also investigated other possible setups or training scenarios for instance we tried updating the parameters of the supervision layer of the target network using also data with true labels or instead of using alternating sampling we tried training the target network using controlled weak supervision signals after the confidence network is fully trained as shown in the experiments the architecture and training strategy described above provide the best performance figure the target network for the document ranking applications in this section we apply our 
method to two different tasks document ranking and sentiment classification for each task we start with an introduction of the task followed by the setup of the target network description of the representation learning layer and the supervision layer document ranking this task is the core information retrieval problem which is challenging as it needs to capture the notion of relevance between query and documents we employ a pairwise neural ranker architecture as target network dehghani et in this setting each training instance consists of a query q and two documents and the labels and y are scalar values indicating the probability of being ranked higher than with respect to q the general schema of the target network is illustrated in figure the representation learning layer is a setup proposed in dehghani et this layer is a function which learns the representation of the input data instances q and consists of three components an embedding function v rm where v denotes the vocabulary set and m is the number of embedding dimensions a weighting function v r and a compositionality function rm r n rm more formally the function is defined as q tqi tqi tdi tdi tdi tdi where tqi and tdi denote the ith term in query q respectively document the embedding function maps each term to a dense dimensional real value vector which is learned during the training phase the weighting function assigns a weight to each term in the vocabulary the compositionality function projects a set of n pairs to an dimensional representation independent from the value of n n ti ti exp ti ti exp tj which is in fact the normalized weighted elementwise summation of the terms embedding vectors it has been shown that having global term weighting function along with embedding function improves the performance of ranking as it simulates the effect of inverse document frequency idf which is an important feature in information retrieval dehghani et in our experiments we initialize the embedding function with embeddings mikolov et on google news and the weighting function with idf the supervision layer receives the vector representation of the inputs processed by the representation learning layer and outputs a prediction we opt for a simple fully connected network with l hidden layers followed by a softmax each hidden layer zk in this network computes zk wk where wk and bk denote the weight matrix and the bias term corresponding to the k th hidden layer and is the these layers follow a sigmoid output we employ the weighted cross entropy loss lt log log where bu is a batch of instances from u and is the confidence score of the weakly annotated instance i estimated by the confidence network the weak annotator is robertson et which is a unsupervised retrieval method in the pairwise documents ranking setup for a given instance q is the probability of document being ranked higher than based on the scores obtained from the annotator sq sq whereas sq d is the score obtained from the weak annotator to train the confidence network the target label cj is calculated using the absolute difference of the true label and the weak label cj where yj is calculated similar to but sq d comes from true labels created by humans sentiment classification this task aims to identify the sentiment positive negative or neutral underlying an individual sentence our target network is a convolutional model similar to deriu et severyn and moschitti b deriu et each training instance consists of a sentence s and its sentiment label the architecture of the target 
network is illustrated in figure the representation learning layer learns a representation for the input sentence s and is shared between the target network and confidence network it consists of an embedding function v rm where v denotes the vocabulary set and m is the number of embedding dimensions this function maps the sentence to a matrix s where each column represents the embedding of a word at the corresponding position in the sentence matrix s is passed through a convolution layer in this layer a set of f filters is applied to a sliding window of length h over s to generate a feature map matrix o each feature map oi for a given filter f is generated by oi j s i k j fk j where s denotes the concatenation of word vectors from position i to the concatenation of all oi produces a feature vector the vectors o are then aggregated over all f filters into a feature map matrix o we also add a bias vector b rf to the result of a convolution each convolutional layer is followed by a activation function we use relu nair and hinton which is applied afterward the output is passed to the max pooling layer which operates on columns of the feature map matrix o returning the largest value pool oi r see figure this architecture is similar to the model for twitter sentiment classification from semeval and severyn and moschitti deriu et we initialize the embedding matrix with embeddings mikolov et pretrained on a collection of tweets the supervision layer is a neural classifier pooled repr conv feature map embedding embedding figure the target network for the sentiment classification network similar to the supervision layer in the ranking task with different width and depth but with softmax instead of sigmoid as the output layer which returns the probability distribution over all three classes we employ the weighted cross entropy loss lt log where bu is a batch of instances from u and is the confidence score of the weakly annotated instance i and k is a set of classes the weak annotator for the sentiment classification task is a simple unsupervised method hamdan et kiritchenko et we use baccianella et to assign probabilities positive negative and neutral for each token in set u then a sentencelevel distribution is derived by simply averaging the distributions of the terms yielding a noisy label where is the number of classes we empirically found that using soft labels from the weak annotator works better than assigning a single hard label the target label cj for the confidence network is calculated by using the mean absolute difference of the true label and the weak label cj where yj is the onehot encoding of the sentence label over all classes experiments and results here we first describe baselines afterward we present the experimental setups for each of our tasks along with their results and analysis baselines and general setups for both tasks we evaluate the performance of our method compared to the following baselines weak annotator the unsupervised method that we used for annotating the unlabeled data weak supervision only the target network trained only on weakly labeled data full supervision only the target network trained only on true labeled data weak supervision fine tuning the target network trained on the weakly labeled data and on true labeled data weak supervision supervision layer the target network trained only on weakly labeled data and the supervision layer is on true labeled data while the representation learning layer is kept fixed weak supervision representation fine tuning except 
the supervision layer is kept fixed during fine tuning new label inference veit et is similar to our proposed neural architecture inspired by the paradigm hinton et romero et but instead of having the confidence network to predict the confidence score of the training instance there is a label generator network which is trained on set v to map the weak labels of the instances in u to the new labels the new labels are then used as the target for training the target network controlled weak supervision with joint training is our proposed neural architecture in which we jointly train the target network and the confidence network by alternating batches drawn from sets v and u as explained in section controlled weak supervision full supervision with joint training is the same as cwsjt except that parameters of the supervision layer in target network are also updated using batches from v with regards to the true labels additionally we compare the performance of cwsjt with other possible training setups separate training we consider the confidence network as a separate network without sharing the representation learning layer and train it on set v we then train the target network on the controlled weak supervision signals circular training we train the target network on set u then the confidence network is trained on data with true labels and the target network is trained again but on controlled weak supervision signals progressive training is the mixture of the two previous baselines inspired by rusu et we transfer the learned information from the converged target network to the confidence network using progressive training we then train the target network again on the controlled weak supervision signals the proposed architectures are implemented in tensorflow tang abadi et we use the adam optimizer kingma and ba and the algorithm furthermore to prevent feature we use dropout srivastava et as a regularization technique in all models in our setup the confidence network to predict is a fully connected feed forward network given that the confidence network is learned only from a small set of true labels and to speed up training we initialize the representation learning layer with parameters word embeddings we use relu nair and hinton as a activation function in both target network and confidence network in the following we describe setups and the experimental results document ranking setup results collections we use two standard trec collections for the task of retrieval the first collection consists of news articles from different news agencies as a homogeneous collection the second collection clueweb is category b a web collection with over million english documents which is considered as a heterogeneous collection spam documents were filtered out using the waterloo spam scorer cormack et with the default threshold data with true labels we take query sets that contain judgments a set of queries trec topics and for the collection and a set of queries topics for the experiments on the clueweb collection for each query we take all documents judged as relevant plus the same number of documents judged as and form pairwise combinations among them data with weak labels we create a query set q using the unique queries appearing in the aol http query logs pass et this query set contains web queries initiated by real users in the aol search engine that were sampled from a period from march to may we applied standard dehghani et a on the queries we filtered out a large volume of navigational queries 
containing url substrings http we also removed all characters from the queries for each dataset we took queries that have at least ten hits in the target corpus using our weak annotator method applying all these steps we collect million queries to train on in and million queries for clueweb to prepare the weakly labeled training set u we take the top retrieved documents using for each query from training query set q which in total leads to training instances parameters and settings we conducted a nested cross validation with split in each fold all hyperparameters of all models and baselines were tuned individually on the validation set using batched gp bandits with an expected improvement acquisition function desautels et the size and number of hidden layers for the ranker and the confidence network were separately selected from and respectively the initial learning rate and the dropout parameter were selected from and respectively we considered embedding sizes of the batch size in our experiments was set to in all experiments the parameters of the network are optimized employing the adam optimizer kingma and ba and using the computed gradient of the loss to perform the algorithm at inference time for each query we take the top retrieved documents using as candidate documents and them using the trained models we use the implementation of with default parameters and results and discussions we evaluate on set v and report two standard evaluation metrics mean average precision map of the documents and normalized discounted cumulative gain calculated for the top retrieved documents ndcg statistical significant differences of map and ndcg values are determined using https table performance of the proposed method and baseline models on different datasets or indicates that the improvements or degradations are statistically significant at the level using the paired for all model the is with respect to the weak supervision only baseline wso for cwsjt the improvement over all baselines is considered and the bonferroni correction is applied on the significant tests method wso nli fso cwsjt table performance of the variants of the proposed method on different datasets indicates that the improvements or degradations are statistically significant at the level using the paired for all model the is with respect to the weak supervision only baseline wso on table for cwsjt the improvement over all baselines is considered and the bonferroni correction is applied on the significant tests clueweb map ndcg map ndcg the paired with p value with bonferroni correction table shows the performance on both datasets based on the results cw s jt provides a significant boost on the performance over all datasets there are two interesting points we want to highlight first among the experiments updating all parameters of the target network is the best fine tuning strategy updating only the parameters of the representation layer based on the true labels works better than updating only parameters of the supervision layer this supports our designed choice of a shared embedding layer which gets updated on set v second while it seems reasonable to make use of true labels for updating all parameters of the target network achieves no better results than cwsjt it also performs mostly even worse than this is because during training the direction of the parameter optimization is highly affected by the type of supervision signal and while we control the magnitude of the gradients we do not change their directions so alternating 
between two sets with different label qualities different supervision signal types weak and string confuses the supervision layer of the target network in fine tinning we don not have this problem since we optimize the parameters with respect to the supervision from these two sets in two separate stages it is noteworthy that we have also tried with another objective function for the target network taking both weak and true labels into account which was slightly better but gives no method a b c cwsst cwsct cwspt cwsjt clueweb map ndcg map ndcg improvement over cwsjt in the ranking task the target network is designed in particular to be trained on weak annotations dehghani et hence training the network only on weak supervision performs better than fso this is due to the fact that ranking is a complex task requiring many training instances while relatively few true labels are available the performance of nli is worse than cwsjt as learning a mapping from imperfect labels to accurate labels and training the target network on new labels is essentially harder than learning to filter out the noisy labels hence needs a lot of supervised data the reason is that for the ranking due to a few training instances with regards to the task complexity nli fails to generate better new labels hence it directly misleads the target network and completely fails to improve the performance table shows the performance of different training strategies as shown cwsjt and cwsct perform better than other strategies cwsct is to let the confidence network to be trained separately while still being able to enjoy shared learned information from the target network however it is less efficient as we need two rounds of training on weakly labeled data cwsst performs poorly since the training data v is too small to train a confidence network without taking advantage of the vast amount of weakly annotated data in u we also noticed that this strategy leads to a slow convergence compared to wso also transferring learned information from target network to confidence network via progressive training cwspt performs no better than full sharing of the representation learning layer table performance of the baseline models as well as the proposed method on different datasets indicates that the improvements or degradations are statistically significant at the level using the paired for all model the is with respect to the weak supervision only baseline wso for cwsjt the improvement over all baselines is considered and the bonferroni correction is applied on the significant tests method walexicon wso nli fso cwsjt table performance of the variants of the proposed method for sentiment classification task on different datasets indicates that the improvements or degradations are statistically significant at the level using the paired for all model the is with respect to the weak supervision only baseline wso on table for cwsjt the improvement over all baselines is considered and the bonferroni correction is applied on the significant tests method a b c cwsst cwsct cwspt cwsjt sentiment classification setup results collections we test our model on the twitter sentiment classification of task rosenthal et datasets of subsume the test sets from previous editions of semeval and each tweet was preprocessed so that urls and usernames are masked data with true labels we use train tweets and development tweets data from for training and tweets for validation to make our results comparable to the official runs on semeval we use tweets and tweets as 
test sets rosenthal et nakov et data with weak labels we use a large corpus containing tweets collected during two months for both training the word embeddings and creating the weakly annotated set u using the method explained in section parameters and settings similar to the ment ranking task we tuned the for each model including baselines separately with respect to the true labels of the validation set using batched gp bandits with an expected improvement acquisition function desautels et the size and number of hidden layers for the classifier and the confidence network were separately selected from and respectively we tested the model with both and convolutional layers the number of convolutional feature maps and the filter width is selected from and respectively the initial learning rate and the dropout parameter were selected from and respectively we considered embedding sizes of and the batch size in these experiments was set to results and discussion we report the performance of our model and the baseline models in terms of official semeval metric in table we have also report statistical significance of improvements using paired with p value with bonferroni correction our method is the best performing among all the baselines unlike the ranking task training the network only on data with true labels tso performs rather good in the sentiment classification task learning representation of input which is a sentence tweet is simpler than the ranking task in which we try to learn representation for query and long documents consequently we need fewer data to be able to learn a suitable representation and with the amount of available data with true labels we can already capture a rather good representation without helps of weak data while it was impossible in the ranking task however as the results suggest we can still gain improvement using in this task behaviors of different experiments are similar to the ranking task furthermore updating parameters of the supervision layer with respect to the true labels model does not perform better than cwsjt which again supports our choice of updating just the representation learning layer with respect to the signals from data with true labels in the sentiment classification task the performance of nli is acceptable compared to the ranking task this is first of all because generating new classification labels is essentially simpler secondly in this task we need to learn to represent a simpler input and learn a simpler function to predict the labels but a relatively bigger set of supervised data which helps to generate new labels however the performance of nli is still lower than cwsjt we can argue that cwsjt is a more conservative approach it is in fact equipped with a soft filter that decreases the effect of noisy training examples from set u on parameter updates during training this is a smoother action as we just the gradient while nli might change the direction of the gradient by generating a completely new label and consequently it is prone to more errors especially when there is not enough training data to learn to generate better labels in the sentiment classification task besides the general baselines we also report the best performing systems which are also models rouvier and favre on deriu et al on our proposed model outperforms the best system on both datasets table also presents the results of different training strategies for the sentiment classification task as shown similar to the ranking task cwsjt and cwsct perform better than other 
strategies although cwsct is slightly better not statistically significant in terms of effectiveness compared to cwsjt it is not as efficient as cwsjt during training compared to the ranking task for sentiment classification it is easier to estimate the confidence score of instances with respect to the amount of available supervised data therefore cwsst is able to improve the performance over wso significantly moreover cwspt fails compared to the strategies where the representation learning layer is shared between the target network and the confidence network faster learning pace controlling the effect of supervision to train neural networks not only improves the performance but also provides the network with more solid signals which speeds up the learning process figure illustrates the loss for both networks compared to the loss of training the target network with weak supervision along with their performance on test sets with respect to different amounts of training data for the sentiment classification as shown in the training the loss of the target network in our model lt is higher than the loss of the network which is trained only on weakly we have observed similar in the learning process of the ranking task however we skip bringing its plots due to space limit since we have nested for the ranking task and a set of plots for each fold figure loss of the target network lt and the confidence network lc compared to the loss of wso lwso on set and performance of cws wso and wa on test sets with respect to different amount of training data on sentiment classification supervised data lwso however since these losses are calculated with respect to the weak labels not true labels having very low training loss can be an indication of overfitting to the imperfection in the weak labels in other words regardless of the general problem of lack of generalization due to overfitting in the setup of learning from weak labels predicting labels that are similar to train labels very low training loss is not necessarily a desirable incident in the validation set however lt decreases faster than lwso which supports the fact that lwso overfits to the imperfection of weak labels while our setup helps the target network to escape from this imperfection and do a good job on the validation set in terms of the performance compared to wso the performance of cws on both test sets increases very quickly and cws is able to pass the performance of the weak annotator by seeing much fewer instances annotated by the weak annotator related work learning from weak or noisy labels has been studied in the literature and verleysen we briefly review research most relevant to our work there are learning supervised learning algorithms zhu developed to utilize weakly or even unlabeled data rosenberg et or lee tries to predict labels of unlabeled data this unlabeled data is provided additionally in particular for neural networks methods use greedy of weights using unlabeled data alone followed by supervised deriu et severyn and moschitti a go et other methods learn unsupervised encodings at multiple levels of the architecture jointly with a supervised signal ororbia ii et weston et from the perspective our approach is similar to andrychowicz et al where a separate recurrent neural network called optimizer learns to predict an optimal update rule for updating parameters of the target network the optimizer receives a gradient from the target network and outputs the adjusted gradient matrix as the number of parameters in modern 
neural networks is typically on the order of millions the gradient matrix becomes too large to feed into the optimizer so the approach of andrychowicz et al is applied to very small models in contrast our approach leverages additional weakly labeled data where we use the confidence network to predict scores that calibrate gradient updates for the target network direct learning with labels many studies tried to address learning in the condition of imperfect labels some noise cleansing methods were proposed to remove or correct mislabeled instances brodley and friedl other studies showed that weak or noisy labels can be leveraged by employing a particular architecture or defining a proper loss function to avoid overfitting the training data imperfection dehghani et patrini et beigman and klebanov zeng et bunescu and mooney modeling imperfection there is also research trying to model the pattern of the noise or weakness in the labels some methods leverage generative models to denoise weak supervision sources that a discriminative model can learn from ratner et rekatsinas et varma et other methods aim to capture the pattern of the noise by inserting an extra layer or a separated module sukhbaatar et veit et infer better labels from noisy labels and use them to supervise the training of the network this is inspired by the paradigm hinton et romero et xiao et in which the teacher generates a new label given the training instance with its corresponding weak or noisy label however as we show in our experiments this approach is not sufficient when the amount of supervised data is not enough to generate better labels conclusion and future directions training neural networks using large amounts of weakly annotated data is an attractive approach in scenarios where an adequate amount of data with true labels is not available in this paper we propose a neural network architecture that unifies learning to estimate the confidence score of weak annotations and training neural networks to learn a target task with controlled weak supervision using weak labels to updating the parameters but taking their estimated confidence scores into account this helps to alleviate updates from instances with unreliable labels that may harm the performance we applied the model to two tasks document ranking and sentiment classification and empirically verified that the proposed model speeds up the training process and obtains more accurate results as a promising future direction we are going to understand to which extent using weak annotations has the potential of training models with neural networks and understand the exact conditions under which our proposed method works references abadi et al tensorflow machine learning on heterogeneous systems software available from http marcin andrychowicz misha denil sergio gomez matthew w hoffman david pfau tom schaul and nando de freitas learning to learn by gradient descent by gradient descent in advances in neural information processing systems pages stefano baccianella andrea esuli and fabrizio sebastiani sentiwordnet an enhanced lexical resource for sentiment analysis and opinion mining in lrec volume pages eyal beigman and beata beigman klebanov learning with annotation noise in proceedings of the joint conference of the annual meeting of the acl and the international joint conference on natural language processing of the afnlp volume association for computational linguistics pages carla e brodley and mark a friedl identifying mislabeled training data journal of artificial 
intelligence research razvan bunescu and raymond mooney learning to extract relations from the web using minimal supervision in acl gordon cormack mark smucker and charles clarke efficient and effective spam filtering and for large web datasets inf retr mostafa dehghani sascha rothe enrique alfonseca and pascal fleury learning to attend copy and generate for query suggestion in proceedings of the international conference on information and knowledge management cikm mostafa dehghani aliaksei severyn sascha rothe and jaap kamps learning to learn from weak supervision by full supervision arxiv preprint mostafa dehghani hamed zamani aliaksei severyn jaap kamps and bruce croft neural ranking models with weak supervision in proceedings of the international acm sigir conference on research and development in information retrieval jan deriu maurice gonzenbach fatih uzdilli aurelien lucchi valeria de luca and martin jaggi swisscheese at task sentiment classification using an ensemble of convolutional neural networks with distant supervision proceedings of semeval pages jan deriu aurelien lucchi valeria de luca aliaksei severyn simon mark cieliebak thomas hofmann and martin jaggi leveraging large amounts of weakly supervised data for multilanguage sentiment classification in proceedings of the international international world wide web conference www pages alec go richa bhayani and lei huang twitter sentiment classification using distant supervision project report stanford hussam hamdan frederic and patrice bellot experiments with dbpedia wordnet and sentiwordnet as resources for sentiment analysis in in second joint conference on lexical and computational semantics sem volume pages geoffrey hinton oriol vinyals and jeff dean distilling the knowledge in a neural network arxiv preprint thorsten joachims optimizing search engines using clickthrough data in proceedings of the eighth acm sigkdd international conference on knowledge discovery and data mining acm pages diederik kingma and jimmy ba adam a method for stochastic optimization arxiv preprint svetlana kiritchenko xiaodan zhu and saif m mohammad sentiment analysis of short informal texts journal of artificial intelligence research lee the simple and efficient learning method for deep neural networks in workshop on challenges in representation learning icml volume page tomas mikolov ilya sutskever kai chen greg s corrado and jeff dean distributed representations of words and phrases and their compositionality in nips pages vinod nair and geoffrey e hinton rectified linear units improve restricted boltzmann machines in proceedings of the international conference on machine learning pages preslav nakov alan ritter sara rosenthal fabrizio sebastiani and veselin stoyanov task sentiment analysis in twitter proceedings of semeval pages alexander g ororbia ii c lee giles and david reitter learning a deep hybrid model for text classification in proceedings of the conference on empirical methods in natural language processing emnlp thomas desautels andreas krause and joel w burdick parallelizing tradeoffs in gaussian process bandit optimization journal of machine learning research greg pass abdur chowdhury and cayley torgeson a picture of search in infoscale and michel verleysen classification in the presence of label noise a survey ieee transactions on neural networks and learning systems giorgio patrini alessandro rozza aditya menon richard nock and lizhen qu making neural networks robust to label noise a loss correction approach arxiv preprint 
alexander j ratner christopher m de sa sen wu daniel selsam and christopher data programming creating large training sets quickly in advances in neural information processing systems pages theodoros rekatsinas xu chu ihab f ilyas and christopher holoclean holistic data repairs with probabilistic inference arxiv preprint stephen robertson hugo zaragoza et al the probabilistic relevance framework and beyond foundations and in information retrieval adriana romero nicolas ballas samira ebrahimi kahou antoine chassang carlo gatta and yoshua bengio fitnets hints for thin deep nets arxiv preprint chuck rosenberg martial hebert and henry schneiderman of object detection models in seventh ieee workshop on applications of computer vision sara rosenthal preslav nakov svetlana kiritchenko saif m mohammad alan ritter and veselin stoyanov task sentiment analysis in twitter in proceedings of the international workshop on semantic evaluation semeval pages mickael rouvier and benoit favre at task polarity embedding fusion for robust sentiment analysis proceedings of semeval pages andrei a rusu neil c rabinowitz guillaume desjardins hubert soyer james kirkpatrick koray kavukcuoglu razvan pascanu and raia hadsell progressive neural networks arxiv preprint aliaksei severyn and alessandro moschitti twitter sentiment analysis with deep convolutional neural networks in proceedings of the international acm sigir conference on research and development in information retrieval acm pages aliaksei severyn and alessandro moschitti unitn training deep convolutional neural network for twitter sentiment classification in proceedings of the international workshop on semantic evaluation semeval association for computational linguistics denver colorado pages nitish srivastava geoffrey hinton alex krizhevsky ilya sutskever and ruslan salakhutdinov dropout a simple way to prevent neural networks from overfitting mach learn res sainbayar sukhbaatar joan bruna manohar paluri lubomir bourdev and rob fergus training convolutional networks with noisy labels arxiv preprint yuan tang tensorflow s module for distributed machine learning arxiv preprint paroma varma bryan he dan iter peng xu rose yu christopher de sa and christopher socratic learning correcting misspecified generative models using discriminative models arxiv preprint andreas veit neil alldrin gal chechik ivan krasin abhinav gupta and serge belongie learning from noisy datasets with minimal supervision in the conference on computer vision and pattern recognition jason weston ratle hossein mobahi and ronan collobert deep learning via semisupervised embedding in neural networks tricks of the trade springer pages tong xiao tian xia yi yang chang huang and xiaogang wang learning from massive noisy labeled data for image classification in proceedings of the ieee conference on computer vision and pattern recognition pages daojian zeng kang liu yubo chen and jun zhao distant supervision for relation extraction via piecewise convolutional neural networks in emnlp pages xiaojin zhu learning literature survey
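The controlled weak supervision approach described above weights updates from weakly labeled instances by a confidence score produced by a confidence network that shares its representation-learning layer with the target network. The sketch below illustrates that idea in PyTorch on a toy text-classification setup; the module names, layer sizes, and training details are illustrative assumptions rather than the authors' implementation, and the confidence network is assumed to have been trained separately on the small set of instances with true labels.

# Minimal sketch of confidence-weighted training on weakly labeled data.
# Module and variable names (SharedEncoder, target_head, confidence_net, ...)
# are illustrative assumptions, not taken from the paper's implementation.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Representation-learning layer shared by target and confidence networks."""
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, emb_dim)   # mean-pooled bag of embeddings
        self.fc = nn.Linear(emb_dim, hidden)

    def forward(self, token_ids, offsets):
        return torch.relu(self.fc(self.emb(token_ids, offsets)))

encoder = SharedEncoder()
target_head = nn.Linear(128, 3)              # e.g. a 3-way sentiment classifier
confidence_net = nn.Sequential(              # scores reliability of a weak label
    nn.Linear(128 + 3, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt = torch.optim.Adam(list(encoder.parameters()) + list(target_head.parameters()))
ce = nn.CrossEntropyLoss(reduction="none")   # keep per-instance losses

def weak_step(token_ids, offsets, weak_labels):
    """One update on a weakly labeled batch, scaled by estimated confidence."""
    reps = encoder(token_ids, offsets)
    logits = target_head(reps)
    with torch.no_grad():                    # confidence net is trained separately on true labels
        onehot = torch.nn.functional.one_hot(weak_labels, 3).float()
        conf = confidence_net(torch.cat([reps, onehot], dim=1)).squeeze(1)
    loss = (conf * ce(logits, weak_labels)).mean()   # down-weight unreliable labels
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with a batch of two "documents" of token ids:
ids = torch.tensor([1, 2, 3, 4, 5]); offs = torch.tensor([0, 3])
print(weak_step(ids, offs, torch.tensor([0, 2])))

Scaling the per-instance cross-entropy by the predicted confidence keeps the update direction given by the weak label but shrinks its magnitude for instances the confidence network judges unreliable, which corresponds to the soft-filter behaviour discussed above, in contrast to approaches that generate entirely new labels.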
9
machine learning application in the life time of materials xiaojiao yu abstract materials design and development typically takes several decades from the initial discovery to commercialization with the traditional trial and error development approach with the accumulation of data from both experimental and computational results data based machine learning becomes an emerging field in materials discovery design and property prediction this manuscript reviews the history of materials science as a disciplinary the most common machine learning method used in materials science and specifically how they are used in materials discovery design synthesis and even failure detection and analysis after materials are deployed in real application finally the limitations of machine learning for application in materials science and challenges in this emerging field is discussed keywords machine learning materials discovery and design materials synthesis failure detection introduction materials science has a long history that can date back to the bronze age however only until the century first book on metallurgy was published marking the beginning of systematic studies in materials science researches in materials science were purely empirical until theoretical models were developed with the advent of computers in the last century numerical methods to solve theoretical models became available ranging from dft density functional theory based quantum mechanical modeling of electronic structure for optoelectronic properties calculation to continuum based finite element modeling for mechanical properties multiscale modeling that bridge various time and spatial scales were also developed in the materials science to better simulate the real complex system even so it takes several decades from materials discovery to development and commercialization even though physical modeling can reduce the amount of time by guiding experiment work the limitation is also obvious dft are only used for functional materials optoelectronic property calculation and that is only limited to materials without defect the assumption itself is far off from reality new concept such as multiscale modeling is still far away from large scale real industrial application traditional ways of materials development are impeding the progress in this field and relevant technological industry with the large amount of complex data generated by experiment especially simulation results from both published and archived data including materials property value processing conditions and microstructural images analyzing them all becoming increasingly challenging for researchers inspired by the human genome initiative obama government launched a materials genome initiative hoping to reduce current materials development time to half with the increase of computing power and the development of machine learning algorithms materials informatics has increasingly become another paradigm in the field researchers are already using machine learning method for materials property prediction and discovery machine learning forward model are used for materials property prediction after trained on data from experiments and physical simulations bhadeshia et al applied neural network nn technique to model creep property and phase structure in steel crystal structure prediction is another area of study for machine learning thanks to the large amount of structural data in crystallographic database k neighbor s method was used to identify materials structure type based on its 
neighbors structure types machine learning is also applied for materials discovery by searching compositional structural space for desired properties which is essentially solving a constrained optimization problem baerns et al was able to find an effective multicomponent catalyst for oxidation of lowconcentration propane with a genetic algorithm and neural network there are a few reviews on machine learning application in materials science already dane morgan and gerbrand ceder reviewed the data mining methods in materials development tim mueller aaron gilad kusne and rampi ramprasad also reviewed the progress and application of machine learning in materials science more specifically in phase diagram crystal structural and property prediction however their reviews are mostly based on applications in fundamental of materials science here we are taking a more practical approach of reviewing machine learning application in material design development and stages after deployment we first discuss data problems specifically in materials science then machine learning concept and most widely used methods are introduced reviews on machine leaning application in materials discovery design development deployment and recall is conducted the relation between data driven research and traditional experimental and physical modeling is discussed afterwards finally challenges and future endeavors of machine learning based materials science research is pointed out for researchers in this niche area data problem in materials science the successful application of informatics in biology astronomy and business has inspired similar application in materials science however materials science differs from other subjects due to its unique characteristics some researchers are debating whether there is a big data problem in materials science after all the size of materials data is nothing comparable to biology data the largest existing database based on experimental results from materials has data records however the rapid progress in computational science and microscopy techniques is resulting in enormous amounts of output data furthermore materials science data tends to be complex and heterogeneous in terms of their sources and types ranging from discrete numerical values to qualitative descriptions of materials behavior and imaging data data in materials science also exhibit the veracity characteristics of big data problem by that we acknowledge the practical reality of data missing and uncertainties with the data according to the volume variety velocity veracity characteristics of big data materials science does have a big data problem with the emergence of this big data in materials science how to extract hidden information from the complex data and interpret resulted information is becoming increasingly important for materials design and development machine learning methods machine learning a branch of artificial intelligence is about computer learning from existing data without being explicitly programmed and make predictions on new data by building a model from input samples depending on the assigned task machine learning can be classified into three categories supervised learning machine learning algorithms are trained with a set of input value and labeled output value first then they are used to predict output values for corresponding unseen input values unsupervised learning where there is no labelled output value for training data and machine learning algorithm is used to discover patterns in the input 
value reinforcement learning program interact with environment dynamically to maximize accumulated rewards reinforcement learning is not used in materials science field hence it is not introduced in detail in this manuscript supervised learning can either be a classification problem or a regression problem depends on the whether the output value is discrete or continuous method workflow machine learning method typically comprise several steps including raw data collection data preprocessing filling in missing data handling outliers data transformation feature engineering for feature selection and extraction principle component analysis model selection training validations and testing a detailed workflow is presented in fig to select the best algorithm for a particular task model evaluation is important different algorithms are evaluated with different metrics for instance a classifier s evaluation metrics include confusion matrix auc area under curve precision recall f measure kolomogorov smirnov chart confusion matrix is a matrix with four elements true positive tp true negative tn false positive fp false negati ve fn other accuracy measures are sensitivity true positive specificity true negative auc is the area under roc curve which consider the relation between sensitivity and specificity the greater the area under the curve the more accurate is the model precision is recall is the true positive rate defined as above shows the fraction of predictions that are false positive f measure is also a measure of the model accuracy and is defined as the weighted harmonic mean of the precision and recall of the test f is the balance between precision and recall evaluate how the model separates between the positive and negative distributions higher ks value means better separation for regression algorithms evaluation metric includes mean absolute error root mean squared error rmse coefficient of determination measures the percent of total variability that is explained by the regression model fig flowchart of a typical machine learning method method comparison some of the most common machine learning algorithms are svm support vector machine ann artificial neural network logistic regression decision trees support vector machine algorithms are used to find the hyperplane that separate different classes with highest margin the advantage of svm is that the solution is global and unique computation complexity of svm does not depend on the dimension of the input space and is less prone to overfitting however svm does not work well on unbalanced data artificial neural network is inspired by biological brain where artificial neurons are connected to mimic the connection of neurons in the brain multiple hidden layers and neurons can add to the complexity of the neuron network architecture the strength of ann is that they are flexible and can represent any nonlinear and linear function however it needs large amount of training data and is prone to overfitting hyperparameter tuning is tedious and troublesome for ann decision tree is another commonly used basis classification algorithm which comprises a root node internal node branch leaf node and depth decision tree progressively splits the tested data based on input feature value decision process follows the branch which is the collection between an internal node and its parent node until it reaches a leaf node ensemble methods such as random forest and adaboost which are based on constructing a large number of trees with bootstrap samples and iteratively 
build an ensemble of weak learners in an attempt to generate a strong overall model ensemble methods usually perform better than basic machine learning algorithms in terms of reducing variance and bias machine learning application in materials discovery and design an important concept in materials science field is relationship developing materials that meet the required performance and property goes back to control processing conditions structural and compositions of the materials hence understanding how processing condition structural and compositions affect materials property and performance is the first step towards materials design traditionally controlled experiments are conducted to isolate the effect of one variable however variables often are correlated with each other it is infeasible to isolate some variable for experimental testing data mining can help revealing hidden relations between large amount of materials parameters processing conditions and their re lations with dependent materials properties traditional ways of materials development can be disrupted and reshaped by making the use of available data materials property prediction materials design first of all requires understanding of how de sired properties such as materials yield strength toughness ultimate tensile strength and fatigue life etc are affected by intrinsic microstructure chemical composition crystal structure and external processing loading conditions and temperatures machine learning algorithm can derive the quantitative relation between the independent and dependent variables and hence make prediction with enough training data when physical model does not exist or is too complicated to apply neural network algorithm has been used in ferritic steel welds toughness prediction due to their ability to handle complex models toughness was studied as a function of chemical composition microstructure welding process and testing temperature their influence on toughness was shown in fig the interaction between different variables can also be predicted with neural network algorithm as shown in fig the cross of the two toughness curves as a function of temperature and manganese compositions indicates at higher temperatures the influence of manganese on toughness was not only reduced but also negative fig bar chart showing a measure of the significance of each of the input variable in influencing toughness fig variation in the normalized toughness as a function of the manganese concentration and the test temperature ann can also be used to predict constitutive relations for instance the constitutive flow behavior of steel is predicted with strain log strain rate and temperature as input and flow stress as output predicted results show good correlation with experimental value indicating excellent capacity of the developed model in predicting flow stress fig austenite stainless steel grade and ultimate tensile strength yield strength tensile elongation rate strain hardening exponent and strength coefficient were also able to be predicted by ann with a function of temperature and strain rate the optimum architecture is for ass and for ass using feed forward back propagation learning model accuracy is verified with correlation coefficient average absolute error and its standard deviation fatigue properties has always been among the most difficult ones to predict due to the high cost and long time for fatigue testing and the prevalence of structural failure caused by fatigue existing physical models are either lacking of 
generality or fail to give quantitative indications agrawal et al predicted the fatigue strength of steel using data from the japan national institute of materials science nims matnavi database they used predictive model among them neural network decision tree and multivariate polynomial regression were able to achieve a high r value of fig comparison between experimental value and predicted flow stress of steel using bp ann a predicted training data b predicted testing data inversed design of materials understanding how mechanical properties are influenced by materials internal and external factors help reducing searching space in the inversed materials design task however the inverse problem is more challenging because of the possibility of multiple solutions and the enormous structural dimension machine learning application has shown promise in inversed materials discovery and design by reducing searching path and searching region ruoqian liu et al developed a machine learning method for the inverse design of alloy microstructure with enhanced elastic plastic and magnetostrictive properties a systematic approach consisting of random data generation feature selection and classification was developed firstly features that can quantitatively describe microstructures and properties were developed then randomly generated structural and properties pairs were simulated to form the most desired and least desired classes two crucial steps search path refinement and search space reduction are conducted prior to the actual searching to find the most efficient orders of features in search and the most promising search regions of features this method was validated with five design problems which involves identification of microstructures that satisfy both linear and nonlinear property constraints this framework shows supremacy comparing with traditional optimization methods in reducing as much as of running time and achieving optimality that would not be attained machine learning application in materials processing and synthesis design of materials can be facilitated with the data driven machine learning approach however the commercialization of materials is still impeded by the availability to synthesize them to disrupt the trial and error synthesis methods olivetti group in mit is working on creating a predictive synthesis system for advanced materials processing they are building a curated database of solid state materials and their synthesis methods compiled from thousands of materials synthesis journal articles the database also contains algorithms developed through machine learning approaches which are capable of predicting synthesis routes for novel materials based on chemical formulae and other known physical input data even failed experiments can be used by the machine learning algorithm for materials discovery and synthesis which truly shows the power of data mining and machine learning after all onl y a small amount of information is published in the research work most of the data are archived and not been used to its full potential paul raccuglia et al trained a machine learning model based on failed hydrothermal syntheses data to predict reaction outcomes under different conditions such as temperature concentration reactant quantity and acidity the model was validated and tested with previously untested data and shown better performance than human researchers who have years experience it was able to predict conditions for new organically templated inorganic product formation with a 
success rate of machine learning application in microstructure recognition and failure analysis microstructure damage and failure is another area that machine learning find its applications traditionally materials scientist examines the sem opm images of samples for failure analysis similar to medical doctors analyze images of patients with the increasing penetration of machine learning methods in medical imaging analysis the same kind of application in materials imaging is expect to happen as well in fact there are already reports on machine learning and computer vision researches on materials microstructure automatic recognition aritra et al applied computer vision methods to identify images that contain dendritic morphology and then classify whether the dendrites are al ong the longitudinal direction or traverse direction if they do exist in the image to extract features and reduce feature dimensions they used visual bag of words texture and shape statistics and pre convolutional neural network classification was conducted using support vector machine nearest neighbors and random forest models it was shown that convolutional neural network performs best in terms of micrograph recognition and feature extraction which confirmed with other reports classification methods were able to reach great accuracy for both task another example is the automatic measurement of ferrite volume fraction from the binary phase structures based on gpf graph processing framework algorithm developed by hafiz muhammad tanveer et al machine learning algorithm can also be used in failure detection by examining microstructure images matthias demant et al introduced an enhanced machine learning algorithm for crack detection in photoluminescence pl images of wafers the detection algorithm is based on a classification of cracks due to the comparison of the crack descriptions with previous trained crack data crack centers are identified by detecting features appearing as star or structure grain boundary information is extracted from additional images in the visible range to avoid false detections support vector machine is used to train labelled data for crack and structures classification the algorithm is able to achieve a high precision of and sensitivity of for crack length greater than mm elaheh rabiei et al developed a dynamic bayesian network dbn based on the variation of modulus of elasticity to estimate damages from a prognostic approach when crack is not observable yet various sources of information were taken into account to reduce uncertainties dbn was applied to relate the variables and their causal or correlation relationship degradation model parameters are learned with joint particle filtering technique support vector regression models was applied to define unknown nonparametric and nonlinear correlation between the input variables more precise damage estimation and crack initiation prediction in a metallic alloy under fatigue was confirmed by experimental observations this method is different from traditional empirical damage models paris law since direct damage indicators such as crack is not required to predict damage stage thus underling damages can be monitored at an earlier stage it is easy to imagine manufacturing companies such as ge can monitor their jet engine data to predict whether it needs inspection or maintenance fig overview of the crack detection algorithm limitations of machine learning in materials science applications although machine learning has been widely used in a lot of fields 
and increasingly been used in materials science machine learning is by no means a panacea without understanding its limitations and blindly apply it to every possible area can lead to wrongful predictions and a waste of time and effort first of all machine learning system are opaque making them very hard to debug machine learning prediction heavily relies on training data machine learning often have overfitting or overfitting problems that needs to be concerned when taking their prediction results into consideration input data quality needs to be ensured interpolation and extrapolation can lead to problems when training data is not sufficient in the interpolated or extrapolated regime or when training data is noisy hence error bar prediction is needed for evaluating prediction accuracy machine learning does not explain the results from the physics point of view materials scientists often are interested in understanding the mechanism of certain phenomena machine learnin g can not elucidate the mechanism since it works on data driven model training and prediction interpretation of the machine learning results needs domain knowledge without understanding the underline physics nonsense predictions can t be recognized even in the process of feature selection a good understanding of the causal relationship between these variable and dependent properties can be helpful for selecting most effective features and build less complicated models machine learning is also inseparable from experiment and physical simulation it is typically used as a supplemental tool for materials discovery design and property prediction machine learning training data are either from experimental results or physical simulation results machine learning models also rely on experiments or simulations for validation to advance this field people from different discipline both experimentalist and computational scientist should collaborate on data collection storage and curation interdisciplinary researchers need to be trained to understand both materials science and machine learning literature gnesin on the origin of metallurgical technologies in the bronze age powder metall met no karl alfred von zittel history of geology and palaeontology hafner christopher wolverton and gerbrand ceder mrs bulletin volume september ashkan vaziri arvind gopinath and vikram deshpande journal of mechanics of materials and structures vol no merryn tawhai jeff bischoff daniel einstein ahmet erdemir trent guess and jeff reinbolt ieee eng med biol mag lesar richard alan and bryden multiscale design of materials ames laboratory conference papers posters and presentations whittingham june electrical energy storage and intercalation chemistry science neugebauer tilmann hickel wiley interdiscip rev comput mol sci sep holdren et al material genome initiative strategic plan technical report december https national science and technology council design of ferritic steels isij sourmail bhadeshia and mackay neural network model of creep strength of austenitic stainless steels bergerhoff hundt sievers and brown the inorganic compu white rodgers and lepage crystmet a database of structures and powder patterns of metals and acta cryst b rodemerck wolf buyevskaya and baerns synthesis and screening of catalytic study on the search for a catalyst for the oxidation of chem eng dane morgan and gerbrand ceder handbook of materials modeling tim mueller aaron gilad kusne rampi ramprasad abby parrill kenny lipkowitz reviews in computational chemistry volume doi villars 
p iwata pauling file verifies reveals principles in materials science supporting four cornerstones given by nature chem met alloys belianinov a vasudevan r strelcov e et al big data and deep data in scanning and electron microscopies deriving functionality from multidimensional data sets adv str chem imaging krishna rajan materialstoday volume issue pages xinjian guo yilong yin cailing dong gongping yang guangtong zhou fourth international conference on natural computation volume jesse davis mark goadrich proceeding icml proceedings of the international conference on machine learning pages pittsburgh pennsylvania usa june matthew boutell jiebo luo xipeng shen christopher brown pattern recognition volume issue september pages luengo herrera soft comput chen wang tourism management volume issue february pages boser guyon and vapnik an algorithm for optimal margin classifiers in fifth annual workshop on computational learning theory pages pittsburgh computational materials science theodoridis koutroumbas pattern recognition fourth academic press massachusetts jianchang mao mohiuddin computer volume issue pages mar diertrich heller b yang b data science and big data analytics indianapolis wiley suneetha et al ijcse international journal on computer science and engineering vol no abraham wyner matthew olson justin bleich apr chih hang tung journal of semiconductor technology and science no september bhadeshia mackay and svensson materials science and technology lin jun zhang jue zhong computational materials science raghuram karthik desu hansoge nitin krishnamurthy aditya balu amit kumar gupta swadesh kumar singh j mater res technol kumar materials science engineering a april issn kumar materials science and engineering a doi schooling the modelling of fatigue in nickel base alloys thesis university of cambridge agrawal deshpande cecen and kalidindi exploration of data science techniques to predict strength of steel from composition and processing parameters integr mater manuf innovation ankit agrawal and alok choudhary apl materials ruoqian liu abhishek kumar zhengzhang chen ankit agrawal veera sundararaghavan alok choudhary scientific reports doi paul raccuglia katherine elbert philip adler casey falk malia wenny aurelio mollo matthias zeller sorelle friedler joshua schrier alexander norquist nature may miles wernick yongyi yang jovan brankov grigori yourganov and stephen strother ieee signal process mag jul aritra chowdhury elizabeth kautz bulent yener daniel lewis computational materials science gatys ecker and bethge exture synthesis and the controlled generation of natural stimuli using convolutional neural networks gatys ecker and bethge a neural algorithm of artistic style hafiz muhammad tanveer hafiz muhammad tahir mustafa waleed asif munir ahmad ijacsa international journal of advanced computer science and applications vol no matthias demant marcus oswald tim welschehold sebastian nold sebastian bartsch stephan schoenfelder and stefan rein presented at the european pv solar energy conference and exhibition september amsterdam the netherlands elaheh rabiei enrique lopez droguett and mohammad modarres advances in mechanical engineering vol mohsen ostad shabani ali mazahery metallurgical and materials transactions a volume june ashley a white mrs bulletin volume august
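The property-prediction work surveyed above follows a common supervised-regression recipe: assemble composition and processing variables as features, fit a model such as a neural network or tree ensemble, and report mean absolute error, root mean squared error and the coefficient of determination on held-out data. The following scikit-learn sketch runs that recipe end to end on synthetic data; the feature names and the target are invented purely for illustration and do not correspond to any dataset cited in the review.

# Illustrative materials-property regression workflow on synthetic data.
# Feature names and the target are made up for the example only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0.0, 2.0, n),     # e.g. manganese content (wt%)
    rng.uniform(0.0, 0.5, n),     # e.g. carbon content (wt%)
    rng.uniform(20, 700, n),      # e.g. test temperature (deg C)
])
# Synthetic "toughness-like" target with noise, just to exercise the pipeline.
y = 100 + 30 * X[:, 0] - 80 * X[:, 1] - 0.05 * X[:, 2] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

print("MAE :", mean_absolute_error(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
print("R^2 :", r2_score(y_te, pred))

As the limitations section above notes, such a fit says nothing about the underlying physics, so cross-validation, error-bar estimation and domain scrutiny of the selected features would be needed before trusting predictions, especially outside the range of the training data.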
5
to detect and segment cysts in lung ct images without manual annotation ling vissagan le ronald joel jianhua jan radiology and imaging sciences department cardiovascular and pulmonary branch nhlbi national institutes of health nih bethesda md abstract image segmentation is a fundamental problem in medical image analysis in recent years deep neural networks achieve impressive performances on many medical image segmentation tasks by supervised learning on large manually annotated data however expert annotations on big medical datasets are tedious expensive or sometimes unavailable weakly supervised learning could reduce the effort for annotation but still required certain amounts of expertise recently deep learning shows a potential to produce more accurate predictions than the original erroneous labels inspired by this we introduce a very weakly supervised learning method for cystic lesion detection and segmentation in lung ct images without any manual annotation our method works in a manner where segmentation generated in previous steps first by unsupervised segmentation then by neural networks is used as ground truth for the next level of network learning experiments on a cystic lung lesion dataset show that the deep learning could perform better than the initial unsupervised annotation and progressively improve itself after index convolutional neural networks weakly supervised learning medical image segmentation graph cuts introduction image segmentation is a fundamental problem in medical image analysis classic segmentation algorithms are usually formulated as optimization problems relying on cues from image features in recent years deep learning has made much progress on image segmentation tasks fcn hed achieved dominant performances on many medical image segmentation benchmarks unet is competitive enough for many applications the success of deep learning based segmentation requires supervised learning on large manually annotated data however expert annotations on big medical datasets are expensive to obtain this research was supported in part by the intramural research program of the national institutes of health clinical center the authors thank li zhang from beijing institute of big data research for his inspiring discussion and nvidia for the titan x pascal gpu donation mild moderate severe fig examples of the cystic lung lesions with different severity levels in ct image and their manual annotation red or even unavailable for example manual annotation of hundreds of cysts in ct volume dataset examples shown in fig is not feasible for a recent clinical study of lymphangioleiomyomatosis lam to alleviate the annotation burden researchers exploit weakly supervised methods for deep learning based segmentation one direction is to reduce the effort time expertise for annotation by combining fcn and active learning training data is needed to train a model with comparable performance as training on all data another direction applies annotation by incorporating fcn in a multiple instance learning framework however expertise from physicians are still needed such as assigning imagelevel annotations and estimating the lesion size recently deep learning has shown a potential to beat the teacher perform better than the training data labels or even to be an expert without human knowledge in alphago zero specifically for some classification and semantic segmentation tasks when provided with data labels with certain amount of errors deep learning could produce lower errors than the original 
erroneous labels in addition with assisting by algorithm monte carlo tree search in go game grabcut in image segmentation training can be generated to iteratively or recursively update the neural network parameters to achieve better performance transfer unsupervised segmentation segmentation net level transfer segmentation net level segmentation net level n data stream annotation stream fig learning to segment medical images without manual annotation segmentation networks level level n are recursively trained with the previous network segmentation as training labels in this paper we propose a very weakly supervised approach for lam cyst detection and segmentation as shown in fig the detection and segmentation of cysts is a challenging task due to the large number of cysts greatly variation of cyst sizes severe touching of cysts inconsistent image quality and image noise and motion artifact etc moreover it is infeasible to obtain manual segmentation on lam studies our method differs from weakly supervised methods can automatically learn from medical images without any manual or annotation and without a segmentation network on other labeled datasets starting from classic segmentation techniques specifically unsupervised clustering with spatial information followed by graph cuts refinement the initial annotation is generated and serves as labels for a segmentation network unet in this paper learning new networks are then recursively trained with the previous network predictions as training labels an improved segmentation network could be trained under two hypotheses deep learning might generate better predictions than the training data labels and better training data labels produce better predictions note that the value of k in clustering is the only value provided to the framework methods given a medical image dataset without manual annotation our method works in a manner fig where the previously generated first by unsupervised segmentation then by segmentation networks annotations serve as inputs for the next level of network learning unsupervised segmentation clustering is an unsupervised segmentation approach by involving pixel intensity average and median pixel intensities of a local window into a feature space a spatial classifies the image by grouping similar pixels in the feature space into clusters the number of ters k needs to be manually set in different applications for the cyst segmentation in ct images we set k to obtain three clusters and are the cluster centers indicating cyst lung tissue and others respectively with and we construct a graph with the energy function consisting of a data term and a pixel continuity term as in the data term is assigned as the squared intensity differences between pixels and the cluster centers the pixel continuity term is when two neighboring pixels values are the same and otherwise through empirical evaluation on our data then algorithm is used to optimize the energy function and the global optimal pixel labels are obtained segmentation network after obtaining the initial annotation for all the images in the dataset by using spatial graph cuts unet is used as the network architecture to learn a better segmentor because of its efficiency and accuracy for medical image segmentation unet is constituted of four layers of contraction pooling and four layers of expansion skip connections from contracting path to expansive path strengthen context information in higher resolution layers during unet training the inputs are raw ct images with original 
resolution and the outputs are annotations loss is utilized the training focuses on distinguishing between cysts and lung tissues and ignoring background labels one critical problem in training unet for medical images is that the distribution can be highly imbalanced much more positive samples than negative or vice versa in our experiments we use the distribution of cysts and lung tissues in the image to balance the positive and negative classes in loss function as in we also avoid sampling empty ct slices no cyst in the slice in the training recursive learning the trained unet will become its own teacher it is applied to segment all the ct images in training set to generate a new set of cyst labels which will be used as the new ground truth to train a next level unet the network parameters of the previous unet are transferred to initialize the next network and a lower learning rate is used to train the next network the terminates when the similarity between successive segmentation is larger than a threshold experimental methods in this study we evaluated our method on a lam dataset a total of ct volumes from patients with lam in a natural history protocol were studied high resolution ct scans of the chest were obtained the scans contained slices and the slice thickness ranged from to mm at intervals each ct slice is with pixels the unet is implemented using caffe we train the unet model from scratch three unet models are trained progressively in the recursive framework named as and respectively the initial learning rate is for and decreases by a factor of for every next level thanks to transfer learning from previous level each is trained for iterations of image since it provides better performance than etc in a preliminary experiment the proposed method is tested on a dell tower workstation with ghz xeon cpu gb ram and a nvidia titan x pascal gpu of gb of memory our model is trained on ct volumes the remaining volumes including mild moderate and severe cases are left out as unseen testing data to evaluate the segmentation performance a medical student manually detect and segment one slice from each of the testing volumes the manual segmentation was tedious that it took working days quantification metrics include dice coefficient and absolute difference of cyst scores adcs cyst score is defined as the percentage of lung region occupied by cysts which is a critical clinical factor in lam assessment it s worth mentioning that differing from traditional concept of training set our model does not learn from any manual annotation from the training data which is not available therefore these data can also be seen as testing data for performance evaluation six images from ct volumes with large adcs between unsupervised segmentation results and unet results are additionally selected from the dataset manual segmentation is then conducted on these slices for evaluation of the progressive improvement of our framework in addition we compare our method with the cyst segmentation method in where thresholding followed by some postprocessing techniques were used results table shows the performance on unseen images out of the images are with good image quality while are noisy table performance comparison on unseen ct images spatial graph cuts adcs absolute difference of cyst scores bold indicates the best results dice adcs teacher student student student table performance comparison on ct images with large adcs between and unet from learning set spatial graph cuts adcs absolute difference of cyst scores 
bold indicates the best results dice adcs teacher student student student student unet learning could achieve higher segmentation accuracy than its teacher spatial graph cut but the seems to stop at level the same trends could be observed in table where the performance on images from the learning set is shown in these ct images with large adcs between and unet compared to manual annotation unet learning performs substantially better than the lower dice of unet in table compared to which in table is mainly caused by the lower dice values from the mild cases where both and unet have dice values around our proposed method is also more accurate than the method three examples in fig show how the proposed strategy recursively improves the segmentation performance itself given inaccurate segmentation provided by one level of unet learning can already correct most oversegmentation and undersegmentation of cysts thus achieve both higher sensitivity and higher specificity higher levels of unet tend to obtain more accurate cyst boundaries especially for the overlapping cysts the whole training process takes about hours and testing is conclusions we report the first results of very weakly supervised learning to detect and segment cysts in lung ct images without manual annotation by first learning from classic unsupervised segmentation deep learning shows its potential to perform even better after a few levels of in future work we will extend this method to segment other medical images ct slice manual annotation fig three examples good image quality and noisy show segmentation results obtained by and given manual annotation as reference is not shown due to space constraint references sonka hlavac and boyle image processing analysis and machine vision cengage learning j long shelhamer and darrell fully convolutional networks for semantic segmentation in cvpr pp xie and tu edge detection in iccv pp ronneberger fischer and brox convolutional networks for biomedical image segmentation in miccai pp yao jones julienwilliams stylianou and moss sustained effects of sirolimus on lung function and cystic lung lesions in lymphangioleiomyomatosis am respir crit care vol no pp yang zhang chen zhang and chen suggestive annotation a deep active learning framework for biomedical image segmentation in miccai jia huang chang and xu constrained deep weak supervision for histopathology image segmentation ieee tmi guan gulshan dai and hinton who said what modeling individual labelers improves classification arxiv preprint khoreva benenson hosang hein and schiele simple does it weakly supervised instance and semantic segmentation in cvpr silver schrittwieser simonyan antonoglou huang guez hubert baker lai bolton chen lillicrap hui sifre van den driessche graepel and hassabis mastering the game of go without human knowledge nature vol pp boykov and kolmogorov an experimental comparison of algorithms for energy minimization in vision tpami vol no pp li lu liu and yin cytoplasm and nucleus segmentation in cervical smear images using radiating gvf snake pattern recognition vol no pp jia caffe an open source convolutional architecture for fast feature embedding http
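Two concrete pieces of the pipeline described above are easy to sketch: the unsupervised initial annotation (k = 3 clustering of each pixel's intensity together with the mean and median intensity of a local window) and the evaluation metrics (Dice coefficient and cyst score, the percentage of the lung region occupied by cysts). The code below is a minimal NumPy/SciPy/scikit-learn sketch of those pieces; the 5-pixel window and the darkest-cluster heuristic are assumptions, and the graph-cut refinement and U-Net training steps are omitted.

# Sketch of the unsupervised initial labeling (k = 3 clustering on intensity
# features) and of the evaluation metrics (Dice, cyst score) described above.
# The 5x5 window and the darkest-cluster rule are assumptions for illustration.
import numpy as np
from scipy.ndimage import uniform_filter, median_filter
from sklearn.cluster import KMeans

def initial_cyst_labels(ct_slice, window=5):
    """Cluster pixels into three groups (cyst / lung tissue / other) using spatial features."""
    feats = np.stack([
        ct_slice,
        uniform_filter(ct_slice, size=window),   # local mean intensity
        median_filter(ct_slice, size=window),    # local median intensity
    ], axis=-1).reshape(-1, 3)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)
    labels = labels.reshape(ct_slice.shape)
    # Heuristic (an assumption): take the lowest-intensity cluster as the cyst class.
    cyst_cluster = int(np.argmin([ct_slice[labels == k].mean() for k in range(3)]))
    return labels == cyst_cluster

def dice(pred, truth):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + 1e-8)

def cyst_score(cyst_mask, lung_mask):
    """Percentage of the lung region occupied by cysts."""
    return 100.0 * np.logical_and(cyst_mask, lung_mask).sum() / (lung_mask.sum() + 1e-8)

In the full framework described above, the clustering output is further refined with graph cuts and then serves as the training labels for the first U-Net, whose predictions in turn become the labels for the next level of recursive training, with the absolute difference of cyst scores between prediction and reference used alongside Dice for evaluation.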
1
jan entity retrieval and text mining for online reputation monitoring pedro dos santos saleiro da cruz departamento de engenharia faculdade de engenharia da universidade do porto in partial fulfillment of requirements for the degree of doctor in informatics engineering feup supervisor carlos soares eduarda mendes rodrigues faculdade de engenharia universidade do porto rua roberto frias porto portugal copyright by pedro saleiro doctoral committee oliveira full professor at feup university of porto mark carman senior lecturer at monash university bruno martins assistant professor at ist university of lisbon torgo associate professor at fcup university of porto carlos soares associate professor at feup university of porto esta tese dedicada minha maria de lurdes pelo seu amor e constantes acknowledgements first i would like to thank everybody that contributed somehow to this work from to reviewers colleagues and faculty staff i will probably forget to mention someone in particular and i sincerely apologize for that this work was funded by sapo labs fct and microsoft research without their financial support i would not be able to conclude this thesis i am deeply grateful to my supervisor carlos soares although i was not working in carlos always showed a genuine enthusiasm about this work thank you for giving me the freedom to grow independently as a researcher and to pursuit my own ideas even in the moments i was not delivering at the rhythm you expected i believe you made me more pragmatic after all we had really interesting and thorough discussions during the last years we have not tried all the cool ideas but i hope we will do it someday last i would like to thank you for all the support and protection as well as for believing that i should expand my horizons to the us i will send you a postcard from chicago i am also thankful to eduarda mendes rodrigues of this work it all started with you thank you for receiving me at feup back in october i still remember the day we drew the first draft of our framework for orm you always have encouraged me and your positive feedback was a source of inspiration and motivation even at distance you have always been available when i needed i must also mention your decisive role in helping me pursuing a summer internship in a top notch place such as microsoft research being a graduate student is an opportunity to collaborate with new and inspirational people and that was what happened when i had the chance to start working with natasa at microsoft research when we are around natasa we believe we can make things happen thank you for your patience support and motivation i hope we can keep our collaboration for many years i show my gratitude to oliveira who helped me with a smooth transition to liacc and always had the door open to discuss my issues furthermore was very enthusiastic about my work even not being my supervisor and i iv sincerely appreciate that thank you for your advices and i will miss our conversations about the past and future of ai i wish to thank sarmento for introducing me to the world of data science and text mining in particular i just regret we did not had the chance to collaborate more often i must thank two special friends that are also graduate students jorge teixeira and rodrigues jorge you were my brother in arms throughout this journey and i will always be thankful my friend and colleague for more than years is the friend i searched for sharing the ups and downs of being graduate student thank you for your support and motivation i 
would like to thank cristina ribeiro for the administrative support regarding my funding throughout these years i must also address rosaldo rossetti for believing in my abilities and for starting our productive collaboration and of course for being a really enjoyable and funny colleague arian pasquali also deserves a personal mention for all the support as well as gomes rei amir tiago cunha and gustavo laboreiro i would like also to thank all the members of the popstar project specially pedro nesta hora poderia deixar de estender os agradecimentos aos amigos e em especial queria referir o grupo do rumo ao penta onde todos os me proporcionam grandes momentos de boa sem os quais seria dos problemas do dia a dia um grande para o miguel jorge e queria deixar aqui uma palavra para a minha prima xana pela amizade desde sempre como tenho imenso a agradecer minha a quem dedico esta tese obrigado pelo amor e pela liberdade que sempre me deste em todas as minhas escolhas como deixaria de ser sempre foste uma entusiasta deste meu desafio que agora chega ao fim a ti deixo um beijinho minha guida ao querido e ao diogo que ainda conheci mas a vida de emigrante tem destas coisas e por fim deixo o meu sentido agradecimento minha namorada maria foste crucial nesta caminhada ao meu lado todos os dias para mais um pouco sei que este trabalho comprometeu muito o nosso tempo a dois mas a e os incentivos constantes para levar isto ao fim prometo compensar no futuro ah e claro tenho que agradecer ao bobby pela companhia que me fez durante a escrita e por me conseguir fazer sorrir mesmo quando a vida madrasta abstract online reputation monitoring orm is concerned with the use of computational tools to measure the reputation of entities online such as politicians or companies in practice current orm methods are constrained to the generation of data analytics reports which aggregate statistics of popularity and sentiment on social media we argue that this format is too restrictive as end users often like to have the flexibility to search for information that is not available in predefined charts as such we propose the inclusion of entity retrieval capabilities as a first step towards the extension of current orm capabilities however an entity s reputation is also influenced by the entity s relationships with other entities therefore we address the problem of retrieval in which the goal is to search for multiple connected entities this is a challenging problem which traditional entity search systems can not cope with besides retrieval we also believe orm would benefit of prediction capabilities such as predicting entity popularity on social media based on news events or the outcome of political surveys however none of these tasks can provide useful results if there is no effective entity disambiguation and sentiment analysis tailored to the context of orm consequently this thesis address two computational problems in online reputation monitoring entity retrieval and text mining we researched and developed methods to extract retrieve and predict information spread across the web we proposed a new probabilistic modeling of the problem of retrieval together with two design patterns for creating representations of both entities and relationships furthermore we propose the dependence model a novel supervised model based on the markov random field framework for retrieval together with a new method to create test collections for retrieval we released a new test collection for that purpose that will foster research in this 
We performed experiments at scale, with results showing that it is possible to perform E-R retrieval without using fixed and predefined entity and relationship types, enabling a wide range of queries to be addressed. We tackled entity filtering and financial sentiment analysis using a supervised learning approach and studied several possible features for that purpose; we participated in two well-known external competitions on both tasks, obtaining state-of-the-art performance. Moreover, we performed an analysis of the predictive power of a wide set of signals extracted from online news to predict the popularity of entities on Twitter. We also studied several sentiment aggregate functions on Twitter, to study the feasibility of using sentiment on social media to predict political opinion polls. Finally, we created and released an adaptable entity retrieval and text mining framework that puts together all the building blocks necessary to perform ORM and can be reused in multiple application scenarios, from computational journalism to politics and finance. This framework is able to collect texts from online media, identify entities of interest, perform entity filtering and E-R retrieval, as well as classify sentiment polarity and intensity. It supports multiple data aggregation methods, together with visualization and modeling techniques that can be used for both descriptive and predictive analytics.
table of contents

list of figures
list of tables
Introduction: thesis statement; objectives; research methodology; contributions and applications; foundations; thesis outline
Background and related work: online reputation monitoring; related frameworks; entity retrieval and semantic search; markov random field for ir; sequential dependence model; mrf for entity retrieval; named entity disambiguation; sentiment analysis; word embeddings; predicting collective attention; political data science
Entity retrieval for online reputation monitoring: e-r retrieval queries; modeling e-r retrieval; design patterns for e-r retrieval; early fusion; association weights; early fusion example; late fusion; late fusion example; implementation; entity-relationship dependence model; graph structures; feature functions; ranking; discussion; summary of the contributions
E-R retrieval over a web corpus: relink query collection; tabular data and entity relationships; selection of tables; formulation of queries; collection statistics; experimental setup; data and indexing; retrieval method and parameter tuning; test collections; results and analysis; summary of the contributions
Entity filtering and financial sentiment analysis: entity filtering; task overview; features; experimental setup; results; financial sentiment analysis; task overview; financial word embeddings; analysis; approach; experimental setup; results and analysis; concluding remarks; summary of the contributions
Entity-centric prediction: exploring online news for reputation monitoring; approach; experimental setup; results and discussion; predicting political polls using twitter sentiment; methodology; data; experimental setup; results and discussion; feature importance; outlook; summary of the contributions
A framework for online reputation monitoring: framework overview; relink; texrep; relink use case; news processing pipeline; demonstration; texrep use case; data aggregation; visualization; learning word embeddings for orm; neural word embedding model; experimental setup; results and analysis; concluding remarks; summary of the contributions
Conclusions: summary and main contributions; limitations and future work
References

list of figures
Entity retrieval and text mining as computational problems of ORM
Markov random field document and term dependencies
Bayesian networks for E-R retrieval with queries of different lengths
Markov random field dependencies for E-R retrieval
Example of Wikipedia table row
Example of metadata provided to editors
Illustration of indexing from a web corpus
Values of λ for ERDM obtained using sum normalization
Results grouped by entity's category
Daily popularity on Twitter of entities under study
Training and testing sliding window (first iterations)
Individual feature type score for TP at k
Negatives share (berminghamsovn) of political leaders in Twitter
Representation of the monthly poll results of each political candidate
Error predictions for polls results
Error predictions for polls results
Variation of mean absolute error: buzz vs. sentiment aggregate functions
Feature importance in the random forests models
Overview on the ORM framework
RELink framework architecture overview
Architecture and data flows of the TexRep framework
News processing pipeline
Cristiano Ronaldo egocentric network
Twitter buzz share of political leaders
Continuous line represents loss in the training data while dashed line represents loss in the validation data
Left side: effect of increasing [...]; right side: effect of varying the amount of training data used

list of tables
E-R retrieval definitions
Illustrative example of the entity index in early fusion
Illustrative example of the relationship index in early fusion
Illustrative example of the document index in late fusion
Clique sets and associated feature functions by type and input nodes
Examples of query annotations
RELink collection statistics
Extraction statistics
Description of query sets used for evaluation
Early fusion and ERDM comparison using LM and [...]
Results of ERDM compared with three baselines
RepLab filtering task dataset description
Entity filtering versions description
Official results for each version plus our validation set accuracy
Training set examples for both sub-tasks
Microblog results with all features on validation and test sets
Features performance breakdown on test set using RF
News headlines results with all features on validation and test sets
Features performance breakdown on test set using MLP
Summary of the four types of features we consider
Score of popularity = high as a function of TP, for k equal to [...] and [...], respectively
Distribution of positive, negative and neutral mentions per political candidate
Number of [...] available for training for different sizes of target vocabulary
Overall statistics for combinations of models learned varying [...] and volume of training data
Results observed after training for [...] epochs
Evaluation of resulting embeddings using class membership, class distinction and word equivalence tests for different thresholds of cosine similarity

chapter
introduction

Nowadays people have pervasive access to connected devices, applications and services that enable them to obtain and share information almost instantly, on a continuous basis. With social media growing at an astonishing speed, user opinions about people, companies and products quickly spread over large communities. Consequently, companies and personalities are under thorough scrutiny, with every event and every statement potentially observed and evaluated by a global audience, which reflects on one's perceived reputation. Van Riel and Fombrun define reputation as the overall assessment of organizations by their stakeholders. The authors use the term "organization" in the definition, but it may as well apply to individuals (e.g., politicians) or products (e.g., mobile phone brands). A stakeholder is someone who has some relationship with the organization, such as employees, customers or shareholders. This definition, and other similar ones, focus on the perspective that reputation represents the perceptions that others have of the target entity. However, the rise of social media and online news publishing has brought about wider public awareness of the entities' activities, influencing people's perceptions about their reputation. While traditional reputation
analysis is mostly manual and focused on particular entities with online media it is possible to automate much of the process of collecting preparing and understanding large streams of content to identify facts and opinions about a much wider set of entities online reputation monitoring orm addresses this challenge the use of computational tools to measure the reputation of entities from online media content early orm started with counting occurrences of a brand name in social media as a channel to estimate the of a brand introduction there are several challenges to collect process and mine online media data for these purposes social media texts are short informal with many abbreviations slang jargon and idioms often the users do not care about the correct use of grammar and therefore the text tends to have misspellings incomplete and unstructured sentences furthermore the lack of context poses a very difficult problem for tasks relevant in the context of text mining such as named entity disambiguation or sentiment analysis once we classify the sentiment polarity of a given document tweet or news title it is necessary to aggregate several document scores to create meaningful indicators these tasks are technically complex for most of the people interested in tracking entities on the web for this reason most research has focused on investigating parts of this problem leading to the development of tools that only address of this endeavor text data usually includes a large number of entities and relationships between them we broadly define an entity to be a thing or concept that exists in the world such as a person a company organization an event or a film entities exist as mentions across documents and in external knowledge resources in recent years entities have gained increased importance as the basic unit of information to answer particular information needs instead of entire documents or text snippets the volume of data is rapidly increasing on the web including rdf and linked data facebook s open graph and google s knowledge graph describing entities footballers and coaches and relationships between them manages these developments have a great impact in online reputation monitoring as it is mainly focused on entities more specifically the orm process consists in searching and tracking an entity of interest the personality the company organization or under analysis on the other hand news stories topics and events discussed in the news or social media usually contain mentions of entities or concepts represented in a knowledge base thus we can say that entities are the gravitational force that drives the online reputation monitoring process thesis statement the ultimate goal of orm is to track everything that is said on the web about a given target entity and consequently to the impact on its reputation from our perspective this goal is very hard to achieve for two reasons the first reason has to do with the difficulty of computationally processing interpreting and accessing the huge amount of information published online everyday the second thesis statement reason is inherent to the definition of reputation as being intangible but having tangible outcomes more specifically fombrun and van riel and later stacks found a correlation between several indicators such as reputation or trust and financial indicators such as sales or profits however this finding does not imply causality as financial indicators can be influenced by many factors besides stakeholders perceived reputation in conclusion 
there is no consensus on how to measure reputation neither intrinsically nor extrinsically to the best of our knowledge current orm is still very limited and naive the most standard approach consists in counting mentions of entity names and applying sentiment analysis to produce descriptive reports of aggregated entity popularity and overall sentiment we propose to make progress in orm by tackling two computational problems entity retrieval and text mining figure online reputation monitoring text and entities entity retrieval text mining fig entity retrieval and text mining as computational problems of orm we believe that a orm platform besides providing aggregated statistics and trends about entity popularity and sentiment on the news and social media would benefit from providing entity retrieval capabilities end users often like to have the flexibility to search for specific information that is not available in predefined charts however orm has some specificities that traditional entity search systems can not cope with more specifically an entity s reputation is also influenced by the entity s relationships with other entities introduction for instance the reputation of apple was severely damaged with the so called apple foxconn scandal foxconn was one of the several contractor companies in apple s supply chain that was accused of exploiting chinese workers although the facts were not directly concerned with apple itself its relationship with foxconn triggered bad public opinion about apple the same happened recently with the weinstein sex scandal as accusations of sexual harassment aimed at harvey weinstein created a wave of damage to companies and personalities associated with the disgraced hollywood producer therefore a orm platform should provide search capabilities retrieval is a complex case of entity retrieval where the goal is to search for multiple unknown entities and relationships connecting them contrary to traditional entity queries queries expect tuples of connected entities as answers for instance us technology companies contracts chinese electronics manufacturers can be answered by tuples apple foxconn while companies founded by disgraced hollywood producer is expecting tuples miramax harvey weinstein in essence an query can be decomposed into a set of that specify types of entities and types of relationships between entities on the other hand orm requires accurate and robust text processing and data analysis methods text mining plays an essential enabling role in developing better orm there are several challenges with collecting and extracting relevant information from raw text data it is necessary to filter noisy data otherwise downstream processing tasks such as sentiment analysis will be compromised more specifically it is essential to develop named entity disambiguation approaches that can distinguish relevant text passages from named entities are often ambiguous for example the word bush is a surface form for two former presidents a music band and a shrub the ambiguity of named entities is particularly problematic in social media texts where users often mention entities using a single term orm platforms would be even more useful if they would be able to predict if social media users will talk a lot about the target entities or not for instance on april the uk david cameron was mentioned on the news regarding the panama papers story he did not acknowledge the story in detail on that day however the news cycle kept mentioning him about this topic in the following days 
and his mentions on social media kept very high he had to publicly address the issue on april when his reputation had already been severely damaged blaming himself for not providing further details earlier thus we also want to study the feasibility of objectives using knowledge extracted from social media and online news to predict real world surveys results such as political polls objectives the work reported on this dissertation aimed to understand formalize and explore the scientific challenges inherent to the problem of using unstructured text data from different web sources for online reputation monitoring we now describe the specific research challenges we proposed to overcome retrieval existing strategies for entity search can be divided in and approaches the former usually rely on statistical language models to match and rank terms in the proximity of the target entity the latter consists in creating a sparql query and using it over a structured knowledge base to retrieve relevant rdf triples neither of these paradigms provide good support for retrieval recent work in search tackled retrieval by extending sparql to support joins of multiple query results and creating an extended knowledge graph extracted entities and relationships are typically stored in a knowledge graph however it is not always convenient to rely on a structured knowledge graph with predefined and constraining entity types in particular orm is interested in transient information sources such as online news or social media general purpose knowledge graphs are usually fed with more stable and reliable data sources wikipedia furthermore predefining and constraining entity and relationship types such as in semantic approaches reduces the range of queries that can be answered and therefore limits the usefulness of entity search particularly when one wants to leverage to the best of our knowledge retrieval using approaches is a new and unexplored research problem within the information retrieval research community one of the objectives of our research is to explore to what degree we can leverage the textual context of entities and relationships terminology to relax the notion of an entity or relationship type instead of being characterized by a fixed type person country place the entity would be characterized by any contextual term the same applies to the relationships traditional knowledge graphs have fixed schema of relationships child of created by works for while our approach relies on contextual terms in the text proximity of introduction every two entities in a raw document relationships descriptions such as criticizes hits back meets or interested in would be possible to search for this is expected to significantly reduce the limitations which structured approaches suffer from enabling a wider range of queries to be addressed entity filtering and sentiment analysis entity filtering is a of named entity disambiguation ned in which we have a named entity mention and we want to classify it as related or not related with the given target entity this is a relatively easy problem in well formed texts such as news articles however social media texts pose several problems to this task we are particularly interested in entity filtering of tweets and we aim to study a large set of features that can be generated to describe the relationship between a given target entity and a tweet as well as exploring different learning algorithms to create supervised models for this task sentiment analysis has been thoroughly studied in 
the last decade there have been several phd thesis entirely dedicated to this subject it is a broad problem with several ramifications depending on the text source and specific application within the context of orm we will focus in a particular domain finance sentiment analysis on financial texts has received increased attention in recent years neverthless there are some challenges yet to overcome financial texts such as microblogs or newswire usually contain highly technical and specific vocabulary or jargon making the development of specific lexical and machine learning approaches necessary prediction we hypothesize that for entities that are frequently mentioned on the news politicians it is possible to establish a predictive link between online news and popularity on social media we cast the problem as a supervised learning classification approach to decide whether popularity will be high or low based on features extracted from the news cycle we aim to assess if online news are valuable as source of information to effectively predict entity popularity on twitter more specifically we want to find if online news carry different predictive power based on the nature of the entity under study and how predictive performance varies with different times of prediction we propose to explore different features and how particular ones affect the overall predictive power and specific entities in particular on the other hand we will study if it is possible to use knowledge extracted from social media texts to predict the outcome of public opinion surveys the automatic content analysis of mass media in the social sciences has become necessary and possible research methodology with the rise of social media and computational power one particularly promising avenue of research concerns the use of sentiment analysis in microblog streams however one of the main challenges consists in aggregating sentiment polarity in a timely fashion that can be fed to the prediction method a framework for orm the majority of the work in orm consists in studies where researchers collect data from a given social network and produce their specific analysis or predictions often unreproducible the availability of open source platforms in this area is scarse researchers typically use specific apis and software modules to produce their studies however there has been some effort among the research community to address these issues through open source research platforms we therefore aim to create an adaptable text mining framework specifically tailored for orm that can be reused in multiple application scenarios from politics to finance this framework is able to collect texts from online media such as twitter and identify entities of interest and classify sentiment polarity and intensity the framework supports multiple data aggregation methods as well as visualization and modeling techniques that can be used for both descriptive analytics such as analyze how political polls evolve over time and predictive analytics such as predict elections research methodology we adopted distinct research methodologies in the process of developing the research work described in this thesis the origin of this work was the popstar project popstar public opinion and sentiment tracking analysis and research was a project that developed methods for the collection measurement and aggregation of political opinions voiced in microblogs twitter in blogs and online news a first prototype of the framework for orm was implemented and served as the backend 
of the popstar website http the ground work concerned with the development of a framework for orm was carried in the scope of the project therefore the popstar website served as use case for validating the effectiveness and adaptability of the framework the entity filtering and sentiment analysis modules of the framework were evaluated using well known external benchmarks resulting in performance we participated in replab filtering task and evaluated our entity filtering method using the dataset created for the competition one of our submissions obtained the first place at the competition we also participated in semeval task introduction sentiment analysis on financial microblogs and news we were ranked using one of the metrics at the microblogs we performed two experiments regarding the entity centric predictions for predicting entity popularity on twitter based on the news cycle we collected tweets and news articles from portugal using the socialbus twitter collector and online news from different news outlets collected by sapo we used the number of entity mentions on twitter as target variable and we extracted features from the news datasets both datasets were aligned in time we used the same twitter dataset for studying different sentiment aggregate functions to serve as features for predicting political polls of a private opinion studies company eurosondagem improvements of retrieval techniques have been hampered by a lack of test collections particularly for complex queries involving multiple entities and relationships we created a method for generating test queries to support comprehensive search experiments queries and relevance judgments were created from content that exists in a tabular form where columns represent entity types and the table structure implies one or more relationships among the entities editorial work involved creating natural language queries based on relationships represented by the entries in the table we have publicly released the relink test collection comprising queries and relevance judgments obtained from a sample of wikipedia tables we evaluated the new methods proposed for retrieval using the relink query collection together with two other smaller query collections created by research work in semantic retrieval we used a large web corpus the containing million web pages for creating retrieval tailored indexes for running our experiments moreover we implemented a demo using a large news collection of million portuguese news articles resulting in the best demo award at ecir contributions and applications this work resulted in the following contributions a text mining framework that puts together all the building blocks required to perform orm the framework is adaptable and can be reused in different application scenarios such as finance and politics the framework provides entityspecific text mining functionalities that enable the collection disambiguation sentiment analysis aggregation prediction and visualization of information from heterogeneous web data sources furthermore given that it is contributions and applications built using a modular architecture providing abstraction layers and well defined interfaces new functionalities can easily be integrated generalization of the problem of search to cover entity types and relationships represented by any attribute and predicate respectively rather than a set a general probabilistic model for retrieval using bayesian networks proposal of two design patterns that support retrieval approaches using the model 
proposal of a dependence model that builds on the basic sequential dependence model sdm to provide extensible representations and dependencies suitable for complex queries an indexing and retrieval approach including learning to fusion methods that can handle entity and relationships ranking and merging of results the proposal of a method and strategy for automatically obtaining relevance judgments for queries we make publicly available queries and relevance judgments for the previous task entity filtering and financial sentiment analysis methods tailored for twitter that is able to cope with short informal texts constraints analysis of the predictive power of online news regarding metrics on twitter such as popularity or sentiment analysis of how to combine knowledge obtained from heterogeneous sources for prediction tasks we believe this work can be useful in a wide range of applications from which we highlight six reputation management is concerned with influencing and controlling company or individual reputation and consequently tracking what is said about entities online is one of the main concerns of this area for instance knowing if a given news article will have a negative impact on entity s reputation would be crucial for damage control introduction digital libraries are special libraries comprising a collection of digital objects text or images stored in a electronic media format they are ubiquitous nowadays from academic repositories to biomedical databases law enforcement repositories etc we believe the contributions we make to the retrieval research problem can be applied to any digital library enabling a new wide range of search capabilities fraud detection and inside trading detection is an area where information about entities individuals and companies and relationships between entities is very useful to discover hidden relationships and contexts of entities that might represent conflicts of interests or even fraud journalism or more specifically computational journalism would benefit of a powerful search tool in which journalists could investigate how entities were previously mentioned on the web including online news through time as well as relationships among entities and their semantics political science has given a lot of attention to social media in recent years due to the sheer amount of people reactions and opinions regarding politically relevant events being able to analyze the interplay between online news and social media from a political entity perspective can be very interesting for political scientists on the other hand it is becoming increasingly difficult to obtain pollsresponses via telephone and it is necessary to start testing alternative approaches social media marketing focuses on communicating through social networks with company potential and effective customers evaluating the success of a given campaign is a key aspect of this area therefore assessing the volume and polarity of mentions of a given company before and after a campaign would be very useful foundations most of the material of this thesis was previously published in journal conference and workshop publications rodrigues soares oliveira texrep a text mining framework for online reputation monitoring new generation computing volume number foundations saleiro rodrigues soares relink a research framework and test collection for retrieval international acm sigir conference on research and development in information retrieval sigir saleiro rodrigues soares early fusion strategy for retrieval 
the first workshop on knowledge graphs and semantics for text retrieval and analysis sigir saleiro sarmento rodrigues soares oliveira learning word embeddings from the portuguese twitter stream a study of some practical aspects progress in artificial intelligence epia saleiro rodrigues soares oliveira feup at task predicting sentiment polarity and intensity with financial word embeddings international workshop on semantic evaluation semeval acl saleiro and soares learning from the news predicting entity popularity on twitter in advances in intelligent data analysis xv ida saleiro teixeira soares oliveira timemachine search and visualization of news archives in advances in information retrieval european conference on ir research ecir saleiro gomes soares sentiment aggregate functions for political opinion polling using microblog streams in international c conference on computer science and software engineering saleiro amir silva soares popmine tracking political opinion on the web in ieee international conference on computer and information technology ubiquitous computing and communications dependable autonomic and secure computing pervasive intelligence and computing iucc saleiro rei pasquali soares et popstar at replab name ambiguity resolution on twitter in fourth international conference of the clef initiative clef introduction thesis outline in chapter we discuss related work to this thesis in chapter we present a formalization of the problem of retrieval using a approach we provide two design patterns for retrieval early fusion and late fusion we end the chapter by introducing a new supervised early entity relationship dependence model erdm that can be seen as an extension of the mrf framework for retrieval adapted to retrieval in chapter we describe a set of experiments on retrieval over a web corpus first we introduce a new query collection relink qc specifically tailored to this problem we developed a approach to collect relevance judgments from tabular data and the editorial work consisted in creating queries answered by those relevance judgments we run experiments using the as dataset and provide evaluation results for the new proposed methods for retrieval chapter is dedicated to entity filtering and financial sentiment analysis we evaluate our approaches using well known external benchmarks namely replab and semeval in chapter we present two experiments of predictions in the first experiment we try to predict the popularity of entities on social media using solely features extracted from the news cycle on the second experiment we try to assess which sentiment aggregate functions are useful in predicting political polls results in chapter we present an unified framework of orm the framework is divided in two major containers relink entity retrieval and texrep text mining we present the data flow within the framework and how it can be used as a reference open source framework for researching in orm we also present some case studies of using this framework we end this thesis with chapter which is dedicated to the conclusions chapter background and related work this chapter introduces an overview of the background concepts and previous research work on the tasks addressed in this dissertation we start by presenting a brief description of the task of online reputation monitoring orm including related frameworks for orm we then survey previous research work in entity retrieval and semantic search including a detailed explanation of the markov random field model for retrieval and its 
variations we describe the tasks of named entity disambiguation sentiment analysis and previous work on training word embeddings we end this chapter by providing an overview of related work on predictions including predicting social media attention or the outcome of political elections online reputation monitoring the reputation of a company is important for the company itself but as well for the stakeholders more specifically stakeholders make decisions about the company and its products faster if they are aware of the image of the company from the company perspective reputation is an asset as it attracts stakeholders and it can represent economic profit at the end in newell and goldsmith used questionnaire and survey methodologies to introduce the first standardized and reliable measure of credibility of companies from a consumer perspective there have been also studies that find a correlation between company indicators such as reputation trust and credibility and financial indicators such as sales and profits these studies found that although reputations are intangible they influence tangible assets following this reasoning fombrum created a very successful measurement framework named reptrak background and related work a different methodology compared to questionnaires is media analysis news tv and radio broadcasts typically the analysis involves consuming and categorizing media according to stakeholder and polarity positive negative towards the company recently social media analysis is becoming an important proxy of people opinion originating the field of online reputation monitoring while traditional reputation monitoring is mostly manual online media pose the opportunity to process understand and aggregate large streams of facts about about a company or individual orm requires some level of continuous monitoring it is crucial to detect early the changes in the perception of a company or personality conveyed in social media online buzz may be good or bad and consequently companies must react and address negative trends it also creates an opportunity to monitor the reputation of competitors in this context text mining plays a key enabling role as it offers methods for deriving information from textual content for instance gonzalo identifies different text mining research areas relevant to orm entity filtering topic tracking reputation priority detection user profiling and automatic social media as a new way of communication and collaboration is an influence for every stakeholder of society such as personalities companies or individuals social media users share every aspect of their lives and that includes information about events news stories politicians brands or organizations companies have access to all this sharing which opens new horizons for obtaining insights that can be valuable to them and their online reputation companies also invest a big share of their public relations on social media building a strong reputation can take long time and effort but destroying it can take place overnight therefore as the importance of social media increased so did the importance of having powerful tools that deal with this enormous amount of data related frameworks the great majority of work in orm consists in studies and platforms for orm are usually developed by private companies that do not share internal information however there are some open source research projects that can be considered as related frameworks to this work trendminer is one of such platforms that enables real time 
analysis of twitter data but has a very simple sentiment analysis using word counts and lacks flexibility in order to support data processing a framework for orm should be entity retrieval and semantic search collect process and aggregate texts and information extracted from those texts in relation to the entities being monitored context addresses adaptability and reusability by allowing a modular interface and allowing plugin components to extend their framework specially from the perspective of the data sources and text analysis modules for instance it does not support sentiment analysis module by default but it could be plugged in neverthless context does not support the plugin of aggregation and prediction modules which makes it not suitable for orm the fora framework is specifically tailored for orm it creates an ontology based on fuzzy clustering of texts but it is only concerned with extracting relevant linguistic units regarding the target entities and does not include automatic sentiment analysis and it does not allow the plugin of new modules popmine was the first version of our text mining framework for orm and it was developed specifically in the context of a project in political data science it comprises a richer set of modules including cross media data collection twitter blog posts and online news and trend analysis based on entity filtering and sentiment analysis modules in fact our current version of texrep our text mining framework for orm can be seen as an extension of the popmine architecture by creating a more general purpose framework for orm which is not restricted to political analysis while it would be possible to adapt popmine s entity disambiguation and sentiment analysis modules its aggregations are specific to the political scenarios on the other hand texrep supports users to define and plug aggregate functions moreover popmine has limited user configurations lacks support for word embeddings and does not include predictive capabilities entity retrieval and semantic search information retrieval deals with the search for information it is defined as the activity of finding relevant information resources usually documents that meet an information need usually a query from within a large collection of resources of an unstructured nature usually text in early boolean retrieval systems documents were retrieved if the exact query term was present and they were represented as a list of terms with the introduction of the vector space model each term represents a dimension in a space and consequently each document and query are represented as vectors values of each dimension of the document vector correspond to the term frequency background and related work tf of the term in the document therefore the ranking list of documents is produced based on their spatial distance to the query vector the concept of inverse document frequency idf was later introduced to limit the effect of common terms in a collection a term that occurs in many documents of the collection has a lower idf than terms that occur less often the combination and variants such as became commonly used weighting statistics for vector space model recently it has been observed that when people have focused information needs entities better satisfy those queries than a list of documents or large text snippets this type of retrieval is called entity retrieval or retrieval and includes extra information extraction tasks for processing documents such as named entity recognition ner and named entity disambiguation 
ned entity retrieval is closely connected with question answering qa though qa systems focus on understanding the semantic intent of a natural language query and deciding which sentences represent the answer to the user considering the query british politicians in panama papers the expected result would be a list of names rather than documents related to british politics and the panama papers news story there are two search patterns related to entity retrieval first the user knows the existence of a certain entity and aims to find related information about it for example a user searching for product related information second the user defines a predicate that constrains the search to a certain type of entities searching for movies of a certain genre online reputation monitoring systems usually focus on reporting statistical insights based on information extracted from social media and online news mentioning the target entity however this kind of interaction limits the possibility of users to explore all the knowledge extracted about the target entity we believe entity retrieval could enhance online reputation monitoring by allowing free text search over all mentions of the target entity and consequently allow users to discover information that descriptive statistical insights might not be able to identify entity retrieval differs from traditional document retrieval in the retrieval unit while document retrieval considers a document as the atomic response to a query in entity retrieval document boundaries are not so important and entities need to be identified based on occurrence in documents the focus level is more granular as the objective is to search and rank entities among documents however traditional entity retrieval systems does not exploit semantic relationships between terms in the entity retrieval and semantic search query and in the collection of documents if there is no match between query terms and terms describing the entity relevant entities tend to be missed entity retrieval has been an active research topic in the last decade including various specialized tracks such as expert finding track inex entity ranking track trec entity track and sigir eos workshop previous research faced two major challenges entity representation and entity ranking entities are complex objects composed by a different number of properties and are mentioned in a variety of contexts through time consequently there is no single definition of the atomic unit entity to be retrieved additionally it is a challenge to devise entity rankings that use various entity representations approaches and tackle different information needs there are two main approaches for tackling entity retrieval profile based approach and voting approach the profile based approach starts by applying ner and ned in the collection in order to extract all entity occurrences then for each entity identified a is created by concatenating every passage in which the entity occurs an index of entity is created and a standard document ranking method is applied to rank with respect to a given query one of the main challenges of this approach is the transformation of original text documents to an index including the collection in order to extract all entities and their context in the voting approach the query is processed as typical document retrieval to obtain an initial list of documents entities are extracted from these documents using ner and ned techniques then score functions are calculated to estimate the relation of entities captured 
and the initial query for instance counting the frequency of occurrence of the entity in the top documents combined with each document score relevance to the query another approach consists in taking into account the distance between the entity mention and the query terms in the documents recently there is an increasing research interest in entity search over linked data also referred as semantic search due to the availability of structured information about entities and relations in the form of knowledge bases semantic search exploits rich structured entity related in machine readable rdf format expressed as a triple entity predicate object there are two types of search and natural language based search regardless of the search type the objective is to interpret the semantic structure of queries and translate it to the underlying schema of the target knowledge base most of the research focus is on interpreting the query intent while others focus on how to devise a ranking framework that deals with background and related work similarities between different attributes of the entity entry in the kb and the query terms relationship queries li et al were the first to study relationship queries for structured querying entities over wikipedia text with multiple predicates this work used a query language with typed variables for both entities and entity pairs that integrates text conditions first it computes individual predicates and then aggregates multiple predicate scores into a result score the proposed method to score predicates relies on redundant contexts yahya et al defined relationship queries as spo queries joined by one or more relationships the authors cast this problem into a structured query language sparql and extended it to support textual phrases for each of the spo arguments therefore it allows to combine both structured triples and text simultaneously it extended the yago knowledge base with triples extracted from clueweb using an open information extraction approach in the scope of relational databases graph search has been widely studied including ranking however these approaches do not consider full documents of graph nodes and are limited to structured data while searching over structured data is precise it can be limited in various respects in order to increase the recall when no results are returned and enable prioritization of results when there are too many elbassuoni et al propose a for ranking results similarly the models like entityrank by cheng et al and shallow semantic queries by li et al relax the predicate definitions in the structured queries and instead implement proximity operators to bind the instances across entity types yahya et al propose algorithms for application of a set of relaxation rules that yield higher recall entity retrieval and proximity web documents contain term information that can be used to apply pattern heuristics and statistical analysis often used to infer entities as investigated by conrad and utt petkova and croft rennie and jaakkola in fact early work by conrad and utt demonstrates a method that retrieves entities located in the proximity of a given keyword they show that using a window around can be effective for supporting search for people and finding relationship among entities similar considerations of the statistics have been used to identify salient terminology keyword to include in the document index entity retrieval and semantic search markov random field for ir in this section we detail the generic markov random field mrf 
model for retrieval and its variation, the Sequential Dependence Model (SDM). As we later show, this model is the basis for our E-R retrieval model.

The Markov Random Field (MRF) model for retrieval was first proposed by Metzler and Croft to model query term and document dependencies in the context of retrieval. The objective is to rank documents by computing the posterior $P(D|Q)$; given a document $D$ and a query $Q$:

$P(D|Q) = \frac{P(Q,D)}{P(Q)}$

For that purpose, a MRF is constructed from a graph $G$, which follows the local Markov property: every random variable in $G$ is independent of its non-neighbors given observed values for its neighbors. Therefore, different edge configurations imply different independence assumptions.

[Figure: Markov random field document and term dependencies]

Metzler and Croft defined that $G$ consists of query term nodes $q_i$ and a document node $D$, as depicted in the figure. The joint probability mass function over the random variables in $G$ is defined by

$P_{G,\Lambda}(Q,D) = \frac{1}{Z_\Lambda} \prod_{c \in C(G)} \psi(c;\Lambda)$

where $Q = q_1, \ldots, q_n$ are the query term nodes, $D$ is the document node, $C(G)$ is the set of maximal cliques in $G$, and $\psi(c;\Lambda)$ is a potential function over clique configurations. The parameter $Z_\Lambda = \sum_{Q,D} \prod_{c \in C(G)} \psi(c;\Lambda)$ is the partition function that normalizes the distribution. It is generally unfeasible to compute, due to the exponential number of terms in the summation, and it is ignored as it does not influence ranking.

The potential functions are defined as compatibility functions between nodes in a clique. For instance, a score can be measured to reflect the aboutness between a query term $q_i$ and a document $D$. Metzler and Croft propose to associate one or more real-valued feature functions with each clique in the graph. The potential functions are defined using an exponential form:

$\psi(c;\Lambda) = \exp[\lambda_c f(c)]$

where $\lambda_c$ is a feature weight, a free parameter in the model associated with the feature function $f(c)$. The model allows parameter and feature function sharing across cliques of the same configuration, i.e., the same size and type of nodes (e.g., one query term node and one document node). For each query $Q$, we construct a graph representing the query term dependencies, define a set of potential functions over the cliques of this graph, and rank documents in descending order of $P(D|Q)$:

$P(D|Q) \stackrel{rank}{=} \log P(D|Q) \stackrel{rank}{=} \log P(Q,D) - \log P(Q) \stackrel{rank}{=} \sum_{c \in C(G)} \log \psi(c;\Lambda) \stackrel{rank}{=} \sum_{c \in C(G)} \log \exp[\lambda_c f(c)] \stackrel{rank}{=} \sum_{c \in C(G)} \lambda_c f(c)$

Metzler and Croft concluded that, given its general form, the MRF can emulate most of the retrieval and dependence models, such as language models.

Sequential dependence model

The Sequential Dependence Model (SDM) is the most popular variant of the MRF retrieval model. It defines two clique configurations, represented in the following potential functions: $\psi_T(q_i, D;\Lambda)$ and $\psi_{O,U}(q_i, q_{i+1}, D;\Lambda)$. Basically, it considers sequential dependency between adjacent query terms and the document node. The potential function of the cliques containing a single query term node and the document node is represented as

$\psi_T(q_i, D;\Lambda) = \exp[\lambda_T f_T(q_i, D)]$

The clique configuration containing two contiguous query terms and the document node is represented by two real-valued feature functions: the first considers exact ordered matches of the two query terms in the document, while the second aims to capture unordered matches within fixed window sizes. Consequently, the second potential function is

$\psi_{O,U}(q_i, q_{i+1}, D;\Lambda) = \exp[\lambda_O f_O(q_i, q_{i+1}, D) + \lambda_U f_U(q_i, q_{i+1}, D)]$

Replacing $\psi(c;\Lambda)$ by these potential functions in the ranking equation above and factoring out the parameters $\lambda$, the SDM can be represented as a mixture model computed over term, phrase and proximity feature classes:

$P(D|Q) \stackrel{rank}{=} \lambda_T \sum_{q_i \in Q} f_T(q_i, D) + \lambda_O \sum_{q_i, q_{i+1} \in Q} f_O(q_i, q_{i+1}, D) + \lambda_U \sum_{q_i, q_{i+1} \in Q} f_U(q_i, q_{i+1}, D)$

where the free parameters must follow the constraint $\lambda_T + \lambda_O + \lambda_U = 1$.
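To make the SDM ranking function more concrete, the following is a minimal, illustrative Python sketch of SDM scoring for a single query-document pair. It is not the implementation used in this thesis: the λ values, the Dirichlet prior μ, the window size, the 0.5 floor on collection frequencies and the simplified counting of unordered matches are assumptions made for the example only.

```python
import math
from collections import Counter

def dirichlet_feature(tf, doc_len, cf, coll_len, mu=2500.0):
    # log of the Dirichlet-smoothed estimate used by f_T, f_O and f_U
    return math.log((tf + mu * (cf / coll_len)) / (doc_len + mu))

def ordered_matches(bigram, doc):
    # exact ordered matches: w1 immediately followed by w2
    w1, w2 = bigram
    return sum(1 for a, b in zip(doc, doc[1:]) if a == w1 and b == w2)

def unordered_matches(bigram, doc, window=8):
    # rough approximation of the #uwN operator: both terms within `window` positions
    w1, w2 = bigram
    pos1 = [i for i, t in enumerate(doc) if t == w1]
    pos2 = [j for j, t in enumerate(doc) if t == w2]
    return sum(1 for i in pos1 for j in pos2 if i != j and abs(i - j) < window)

def sdm_score(query, doc, unigram_cf, bigram_cf, coll_len,
              lambdas=(0.85, 0.10, 0.05), mu=2500.0, window=8):
    """Score one document for one query with the SDM mixture (lambdas sum to 1)."""
    lam_t, lam_o, lam_u = lambdas
    doc_len, tf = len(doc), Counter(doc)
    score = 0.0
    for q in query:                                    # term features f_T
        score += lam_t * dirichlet_feature(tf[q], doc_len,
                                           unigram_cf.get(q, 0.5), coll_len, mu)
    for big in zip(query, query[1:]):                  # bigram features f_O and f_U
        cf_b = bigram_cf.get(big, 0.5)                 # 0.5 floor avoids log(0)
        score += lam_o * dirichlet_feature(ordered_matches(big, doc), doc_len,
                                           cf_b, coll_len, mu)
        score += lam_u * dirichlet_feature(unordered_matches(big, doc, window), doc_len,
                                           cf_b, coll_len, mu)
    return score

# toy usage: collection statistics would normally come from an inverted index
doc = "apple signs new contract with foxconn in china".split()
query = "apple foxconn contract".split()
print(sdm_score(query, doc,
                unigram_cf={"apple": 120, "foxconn": 15, "contract": 300},
                bigram_cf={}, coll_len=1_000_000))
```

In practice these statistics come from an inverted index, and the λ parameters are learned with coordinate ascent, as described next.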
Coordinate ascent was chosen to learn the optimal $\lambda$ values that maximize mean average precision, using training data. Considering $tf_{q_i,D}$ the frequency of the term $q_i$ in the document $D$, and $cf_{q_i}$ the frequency of the term $q_i$ in the entire collection $C$, the feature functions in SDM are set as

$f_T(q_i, D) = \log \left[ \frac{tf_{q_i,D} + \mu \frac{cf_{q_i}}{|C|}}{|D| + \mu} \right]$

$f_O(q_i, q_{i+1}, D) = \log \left[ \frac{tf_{\#1(q_i,q_{i+1}),D} + \mu \frac{cf_{\#1(q_i,q_{i+1})}}{|C|}}{|D| + \mu} \right]$

$f_U(q_i, q_{i+1}, D) = \log \left[ \frac{tf_{\#uwN(q_i,q_{i+1}),D} + \mu \frac{cf_{\#uwN(q_i,q_{i+1})}}{|C|}}{|D| + \mu} \right]$

where $\mu$ is the Dirichlet prior for smoothing, $\#1(q_i, q_{i+1})$ is a function that searches for exact matches of the phrase "$q_i\, q_{i+1}$", and $\#uwN(q_i, q_{i+1})$ is a function that searches for co-occurrences of $q_i$ and $q_{i+1}$ within a window of $N$ terms across document $D$. SDM has shown strong performance in document retrieval when compared with several bigram dependence models and standard retrieval models, across short and long queries.

MRF for entity retrieval

The current methods in entity retrieval from knowledge graphs are based on the MRF framework. The Fielded Sequential Dependence Model (FSDM) extends SDM for structured document retrieval and is applied to entity retrieval from knowledge graphs. In this context, entity documents are composed of fields representing metadata about the entity. Each entity document has five fields: names, attributes, categories, similar entity names and related entity names. FSDM builds individual language models for each field in the knowledge base. This corresponds to replacing the SDM feature functions with those of the Mixture of Language Models. The feature functions of FSDM are defined as

$\tilde{f}_T(q_i, D) = \log \sum_{j=1}^{F} w_j^T \frac{tf_{q_i,D_j} + \mu_j \frac{cf_{q_i,j}}{|C_j|}}{|D_j| + \mu_j}$

$\tilde{f}_O(q_i, q_{i+1}, D) = \log \sum_{j=1}^{F} w_j^O \frac{tf_{\#1(q_i,q_{i+1}),D_j} + \mu_j \frac{cf_{\#1(q_i,q_{i+1}),j}}{|C_j|}}{|D_j| + \mu_j}$

$\tilde{f}_U(q_i, q_{i+1}, D) = \log \sum_{j=1}^{F} w_j^U \frac{tf_{\#uwN(q_i,q_{i+1}),D_j} + \mu_j \frac{cf_{\#uwN(q_i,q_{i+1}),j}}{|C_j|}}{|D_j| + \mu_j}$

where $\mu_j$ are the Dirichlet priors for each field and $w_j$ are the weights for each field, which must be non-negative with the constraint $\sum_{j=1}^{F} w_j = 1$. Coordinate ascent was used, in two stages, to learn the $w_j$ and $\lambda$ values.

The Parameterized Fielded Sequential Dependence Model (PFSDM) extends the FSDM by dynamically calculating the field weights $w_j$ for different query terms. Features are applied to capture the relevance of query terms to specific fields of entity documents. For instance, the NNP feature is positive if query terms are proper nouns, and therefore the query terms should be mapped to the names field. The field weight contributions of a given query term $q_i$, and of a query bigram $q_i, q_{i+1}$, in a field $j$ are a linear weighted combination of features:

$w_{q_i,j} = \sum_{k} \alpha_k^U \, \phi_k^U(q_i, j)$

$w_{q_i q_{i+1},j} = \sum_{k} \alpha_k^B \, \phi_k^B(q_i, q_{i+1}, j)$

where $\phi_k^U(q_i, j)$ is the $k$-th feature function of a query unigram for the field $j$ and $\alpha_k^U$ is its respective weight; for bigrams, $\phi_k^B(q_i, q_{i+1}, j)$ is the $k$-th feature function of a query bigram for the field $j$ and $\alpha_k^B$ is its respective weight. Consequently, PFSDM has $F \cdot U + F \cdot B + 3$ total parameters, where $F$ is the number of fields, $U$ is the number of field mapping features for unigrams, $B$ is the number of field mapping features for bigrams, plus the three $\lambda$ parameters. Their estimation is performed in a two-stage optimization: first, the field mapping parameters are learned separately for unigrams and then for bigrams, which is achieved by setting the corresponding parameters to zero; in the second stage, the $\lambda$ parameters are learned. Coordinate ascent is used in both stages. The ELR model exploits entity mentions in queries by defining a dependency between entity documents and entity links in the query.
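As an illustration of how the fielded feature functions differ from the flat SDM ones, the sketch below computes an FSDM-style unigram feature as a weighted mixture of per-field Dirichlet-smoothed language models. The five field names follow the description above; the field weights, μ, the 0.5 frequency floor and the collection statistics are placeholders, not values from the thesis.

```python
import math

FIELDS = ["names", "attributes", "categories",
          "similar_entity_names", "related_entity_names"]

def fsdm_unigram_feature(term, entity_doc, field_cf, field_coll_len,
                         field_weights, mu=2500.0):
    """
    entity_doc:     {field: list of tokens in that field of the entity document}
    field_cf:       {field: {term: collection frequency of the term in that field}}
    field_coll_len: {field: total number of tokens of that field in the collection}
    field_weights:  {field: w_j}, non-negative and summing to 1
    """
    mixture = 0.0
    for f in FIELDS:
        tokens = entity_doc.get(f, [])
        tf = tokens.count(term)
        cf = field_cf.get(f, {}).get(term, 0.5)   # 0.5 floor avoids zero probabilities
        p_f = (tf + mu * cf / field_coll_len[f]) / (len(tokens) + mu)
        mixture += field_weights[f] * p_f
    return math.log(mixture)

# toy usage with made-up statistics and hand-set field weights
weights = {"names": 0.4, "attributes": 0.3, "categories": 0.1,
           "similar_entity_names": 0.1, "related_entity_names": 0.1}
entity = {"names": "barack obama".split(),
          "attributes": "44th president of the united states".split(),
          "categories": "politician".split()}
coll_len = {f: 1_000_000 for f in FIELDS}
print(fsdm_unigram_feature("president", entity, {}, coll_len, weights))
```

In FSDM the field weights are fixed per feature class, while in PFSDM they are themselves computed from the field mapping features, as in the equations above.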
Named entity disambiguation

Given a mention in a document, named entity disambiguation (NED), or entity linking, aims to predict the entity in a reference knowledge base that the string refers to, or NIL if no such entity is available. Usually the reference knowledge base (KB) includes a set of documents, where each document describes one specific entity; Wikipedia is by far the most popular reference KB. Previous research typically performs three steps to link an entity mention to a KB: (1) representation of the mention: extend the entity mention with relevant knowledge from the background document; (2) candidate generation: find all possible KB entries that the mention might refer to, and their representation; (3) disambiguation: compute the similarity between the represented mention and the candidate entities.

Entity filtering, or targeted entity disambiguation, is a special case of NED in which there is only one candidate entity: the entity that is being monitored. There is an increasing interest in developing entity filtering methods for social media texts, considering their specificities and limitations. These approaches focus on finding relevant keywords for positive and negative cases, using web and collection-based features. Another line of work creates entity extraction systems where entities belong to a certain topic and are used as evidence to disambiguate the short message given its topic. Similarly, Hangya et al. create features representing topic distributions over tweets using Latent Dirichlet Allocation (LDA).

The majority of research work in NED is usually applied to disambiguate entities in reasonably long texts, such as news or blog posts. In recent years there has been an increasing interest in developing NED methods for social media texts and their specificities and limitations. A survey and evaluation of NER and NED for tweets concluded that current approaches do not perform robustly on terse and linguistically compressed microblog texts: some methods reach relatively high F-measures but are still behind the results obtained on news texts. Social media texts are too short to provide sufficient information to calculate context similarity accurately. In addition, most approaches leverage neighboring entities in the documents, but once again tweets are short and do not have more than one or two entities mentioned. Most of them extract information obtained from other tweets and disambiguate entity mentions in these tweets collectively; the assumption is that Twitter users are content generators and tend to scatter their interests over many different messages they broadcast, which is not necessarily true.

Entity filtering has also been studied in the context of classification. Davis et al. propose a pipeline containing three stages: clearly positive examples are exploited to create filtering rules comprising collocations, users and hashtags; the remaining examples are classified using an Expectation-Maximization (EM) model trained using the clearly positive examples. Recently, Habib et al. proposed a hybrid approach where the authors first query Google to retrieve a set of possible candidate homepages and then enrich the candidate list with text from Wikipedia. They extract a set of features for each candidate, namely a language model and overlapping terms between tweet and document, as well as URL length and string similarity. In addition, a prior probability of the mention corresponding to a certain entity in the YAGO knowledge base is also used. Recent work in NED, or entity linking, includes graph-based algorithms for collective entity disambiguation, such as TAGME, Babelfy and WAT. Word and entity embeddings have also been used for entity disambiguation; more specifically, Fang and Moreno propose to learn an embedding space for both entities and words and then compute similarity features based on the combined representations.
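The entity filtering setting described above reduces to a binary classification problem over features that relate a tweet to the monitored entity. The snippet below is a minimal sketch of that setup, with a handful of hand-crafted features and a logistic regression classifier; the features, the keyword list and the toy examples are illustrative assumptions, not the feature set studied later in this thesis.

```python
from sklearn.linear_model import LogisticRegression

def filtering_features(tweet, entity_name, entity_keywords):
    """Simple relatedness features between a tweet and a target entity."""
    tokens = set(tweet.lower().split())
    name_tokens = set(entity_name.lower().split())
    keywords = set(k.lower() for k in entity_keywords)
    return [
        1.0 if entity_name.lower() in tweet.lower() else 0.0,   # exact name match
        len(tokens & name_tokens) / max(len(name_tokens), 1),   # partial name overlap
        len(tokens & keywords) / max(len(keywords), 1),         # context keyword overlap
        float(len(tokens)),                                     # tweet length
    ]

# toy training data: label 1 = tweet refers to the monitored entity, 0 = it does not
entity, keywords = "Apple", ["iphone", "ipad", "tim", "cook", "mac"]
tweets = [
    ("new iphone from apple looks great", 1),
    ("apple pie recipe for the weekend", 0),
    ("tim cook presents the new mac", 1),
    ("picked an apple from the tree", 0),
]
X = [filtering_features(t, entity, keywords) for t, _ in tweets]
y = [label for _, label in tweets]

clf = LogisticRegression().fit(X, y)
print(clf.predict([filtering_features("apple unveils a new ipad today", entity, keywords)]))
```

Real systems replace these toy features with the web, collection-based and language-model features surveyed above, and train on much larger annotated datasets.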
huge interest from the text mining research community a typical task in sentiment analysis is text polarity classification and in the context of this work can be formalized as follows given a text span that mentions sentiment analysis a target entity decide whether it conveys positive negative or neutral sentiment towards the target with the rise of social media research on sentiment analysis shifted towards twitter new challenges have risen including slang misspelling emoticons poor grammatical structure a number of competitions were organized such as semeval leading to the creation of resources for research there are two main approaches to sentiment polarity classification using a dictionary of terms and phrases with annotated polarity or supervised learning building a model of the differences in language associated with each polarity based on training examples in the supervised learning approach a classifier is specifically trained for a particular type of text tweets about politics consequently it is possible to capture peculiarities of the language used in that context as expected this reduces the generality of the model as it is biased towards a specific domain supervised learning approaches require training data in twitter most of previous work obtained training data by assuming that emoticons represent the tweet polarity positive negative neutral or by using third party software such as the stanford sentiment analyzer approaches have shown to work effectively on conventional text but tend to be ill suited for twitter data with the purpose of overcoming this limitation an algorithm that uses a lexicon specifically tailored to social media text was introduced sentistrength has become a reference in recent years due to its relatively good performance and consistent performance on polarity classification of social media texts nevertheless it is confined to a fixed set of words and it is context independent the recent interest in deep learning led to approaches that use deep learned word embeddings as features in a variety of text mining tasks in sentiment analysis recent work integrated polarity information of text into the word embedding by extending the probabilistic document model obtained from latent dirichlet allocation while others learned embeddings from an existing embedding and sentences with annotated polarity or learning polarity specific word embeddings from tweets collected using emoticons and directly incorporating the supervision from sentiment polarity in the loss functions of neural networks background and related work word embeddings the most popular and simple way to model and represent text data is the vector space model a vector of features in a feature space represents each lexical item a word in a document and each item is independent of other items in the document this allows to compute geometric operations over vectors of lexical items using well established algebraic methods however the vector space model faces some limitations for instance the same word can express different meanings in different contexts the polysymy problem or different words may be used to describe the same meaning the synonymy problem since a variety of different methods lda and resources dbpedia have been developed to try to assign semantics or meaning to concepts and parts of text word embedding methods aim to represent words as real valued continuous vectors in a much lower dimensional space when compared to traditional models moreover this low dimensional space is able to capture 
lexical and semantic properties of words. Statistics are the fundamental information that allows creating such representations. Two approaches exist for building word embeddings: one creates a low-rank approximation of the word co-occurrence matrix, as in the case of Latent Semantic Analysis and GloVe; the other consists in extracting internal representations from neural network models of text. Levy and Goldberg showed that the two approaches are closely related. Although word embedding research goes back several decades, it was the recent developments in deep learning and the word2vec framework that captured the attention of the NLP community. Moreover, Mikolov et al. showed that embeddings trained using word2vec models (CBOW and skip-gram) exhibit linear structure, allowing analogy questions of the form man:woman :: king:queen, and can boost the performance of several text classification tasks. In this context, the objective is to maximize the likelihood that words are predicted given their context. word2vec has two models for learning word embeddings: the skip-gram model (SG) and the continuous bag-of-words model (CBOW). Here we focus on CBOW. More formally, every word is mapped to a unique vector, represented by a column in a projection matrix $W$, with $d$ as the embedding dimension and $V$ as the total number of words in the vocabulary. Given a sequence of words $w_1, w_2, \ldots, w_T$, the objective is to maximize the average log probability

$\frac{1}{T} \sum_{t=1}^{T} \log p(w_t \mid w_{t-c}, \ldots, w_{t+c})$

where $c$ is the size of the context window and $w_{t+j}$ is a word in the context window of the center word $w_t$. The context vector is obtained by averaging the embeddings of each context word, and the prediction of the center word $w_t$ is performed using a softmax multiclass classifier over the whole vocabulary $V$:

$p(w_t \mid w_{t-c}, \ldots, w_{t+c}) = \frac{e^{y_{w_t}}}{\sum_{i=1}^{V} e^{y_i}}$

where each $y_i$ is the unnormalized score for the output word $i$. After training, a low-dimensionality embedding matrix $E$, encapsulating information about each word in the vocabulary and its surrounding contexts, is learned, transforming a sparse representation of words into compact real-valued embedding vectors of size $d$. This matrix can then be used as input to other learning algorithms tailored for specific tasks to further enhance performance. For large vocabularies it is infeasible to compute the partition function (normalizer) of the softmax; therefore, Mikolov proposes to use the hierarchical softmax objective function or to approximate the partition function using a technique called negative sampling. Stochastic gradient descent is usually applied for training the softmax, where the gradient is obtained via backpropagation. There are several approaches to generating word embeddings: one can build models that explicitly aim at generating word embeddings, such as word2vec or GloVe, or one can extract such embeddings as a by-product of more general models that implicitly compute word embeddings in the process of solving other language tasks. One of the issues of recent work in training word embeddings is the variability of the experimental setups reported. For instance, in the paper describing GloVe, the authors trained their model on five corpora of different sizes and built a vocabulary of the most frequent words, while the models of Mikolov et al. were trained with vocabularies of different sizes. Recently, Arora et al. proposed a generative model for learning embeddings that tries to provide some theoretical justification for nonlinear embedding models such as word2vec and GloVe, as well as for some hyper-parameter choices; the authors evaluated their model using yet another vocabulary. The organizers of the SemEval Sentiment Analysis in Twitter task report that participants either used general-purpose word embeddings or trained them from a tweet dataset or from some other sort of dataset; however, participants typically report neither the size of the vocabulary used nor the possible effect it might have on the task-specific results.
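To make the CBOW prediction step described above concrete, the following is a minimal NumPy sketch of a single forward pass: context embeddings are looked up in a projection matrix, averaged, scored against an output matrix, and normalized with a softmax over the vocabulary. The toy vocabulary, matrix shapes, and variable names are illustrative assumptions, not the exact word2vec implementation, which in practice trains with SGD and replaces the full softmax with negative sampling or hierarchical softmax.

```python
import numpy as np

# Illustrative toy vocabulary; real models use hundreds of thousands of words.
vocab = ["the", "cat", "sat", "on", "mat"]
V, d = len(vocab), 8                      # vocabulary size and embedding dimension
word_to_id = {w: i for i, w in enumerate(vocab)}

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, d))   # input (projection) embeddings
W_out = rng.normal(scale=0.1, size=(V, d))  # output embeddings

def cbow_probability(context_words, center_word):
    """P(center | context) under a CBOW model with a full softmax."""
    context_ids = [word_to_id[w] for w in context_words]
    h = W_in[context_ids].mean(axis=0)       # average of the context embeddings
    scores = W_out @ h                       # unnormalized scores y_i, one per word
    scores -= scores.max()                   # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()   # softmax over the vocabulary
    return probs[word_to_id[center_word]]

# Probability of predicting "sat" from its window, before any training.
print(cbow_probability(["the", "cat", "on", "the"], "sat"))
```

In practice, toolkits such as gensim implement this training loop efficiently, including the hierarchical softmax and negative sampling variants mentioned above.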
Recently, Rodrigues et al. created and distributed the first general-purpose word embeddings for Portuguese. The gensim implementation of word2vec was used, and the authors report results with different values for the parameters of the framework. Furthermore, the authors used experts to translate well-established word embedding test sets to the Portuguese language, which they also made publicly available; we use some of those in this work.

Predicting Collective Attention

Online reputation monitoring systems would be even more useful if they were able to know in advance whether social media users will talk a lot about the target entities or not. In recent years, a number of research works have studied the relationship and predictive behavior of user response to the publication of online media items, such as commenting on news articles, playing YouTube videos, sharing URLs, or retweeting patterns. The first attempt to predict the volume of user comments for online news articles used both metadata from the news articles and linguistic features. The prediction was divided into two binary classification problems: whether an article would get any comments, and whether it would receive a high or low number of comments. Similarly, other studies found that shallow linguistic features, or sentiment and named entities, have good predictive power. Research work more in line with ours tries to predict the popularity of news articles, in terms of URL sharing on Twitter, based on content features. The authors considered the news source, the article's category, the article's author, the subjectivity of the language in the article, and the number of named entities in the article as features. Recently, there was a large study of the life cycle of news articles in terms of the distribution of visits, tweets, and shares over time across different sections of the publisher. Their work was able to improve, for some content types, the prediction of web visits by using data from social media collected ten to twenty minutes after publication. Other lines of work focused on temporal patterns of user activities and have consistently identified broad classes of temporal patterns based on the presence of a clear peak of activity; classes are differentiated by the specific amount and duration of activity before and after the peak. Crane and Sornette define the endogenous or exogenous origin of events based on whether they are triggered by internal aspects of the social network or by external ones, respectively, and find that hashtag popularity is mostly influenced by exogenous factors rather than epidemic spreading. Other work extends these classes by creating distinct clusters of activity based on the distributions of activity in different periods before, during, and after the peak, which can be interpreted based on the semantics of hashtags; consequently, the authors applied text mining techniques to semantically describe the hashtag classes. Yang and Leskovec propose a new measure of time series similarity and clustering; the authors obtain six classes of temporal shapes of popularity of a given phrase (meme) associated with a recent event, as well as the ordering of the media sources' contributions to its popularity. Recently, Tsytsarau et al. studied the time series of news events and their relation to changes of sentiment time series expressed on related topics on social media. The authors proposed a novel framework using time series convolution between the importance of events and a media response function specific to the media and event
type their framework is able to predict time and duration of events as well as shape through time political data science content analysis of mass media has an established tradition in the social sciences particularly in the study of effects of media messages encompassing topics as diverse as those addressed in seminal studies of newspaper editorials media or the uses of political rhetoric among many others by riffe and freitag reported an increase in the use of content analysis in communication research and suggested that digital text and computerized means for its extraction and analysis would reinforce such a trend their expectation has been fulfilled the use of automated content analysis has by now surpassed the use of hand coding the increase in the digital sources of text on the one hand and current advances in computation power and design on the other are making this development both necessary and possible while also raising awareness about the inferential pitfalls involved one avenue of research that has been explored in recent years concerns the use of social media to predict present and future political events namely electoral results although there is no consensus about methods and their consistency summarizes the differences between studies conducted so far by stating that they vary about period and method of data collection data cleansing and techniques prediction approach and performance evaluation one particular challenge when using sentiment is how to aggregate opinions in a timely fashion that can be fed to the prediction method two main strategies have been used to predict elections buzz number of tweets mentioning a given candidate or background and related work party and the use of sentiment polarity different computational approaches have been explored to process sentiment in text namely machine learning and linguistic based methods in practice algorithms often combine both strategies johnson et al concluded that more than predicting elections social media can be used to gauge sentiment about specific events such as political news or speeches defending the same idea diakopoulos el al studied the global sentiment variation based on twitter messages of an obama vs mccain political tv debate while it was still happening tumasjan et al used twitter data to predict the federal election in germany they stated that the mere number of party mentions accurately reflects the election result bermingham et al correctly predicted the irish general elections also using twitter data et al also tested the share of volume as predictor in the us senate special election in massachusetts on the other hand several other studies use sentiment as a polls result indicator connor et al used a sentiment aggregate function to study the relationship between the sentiment extracted from twitter messages and polls results they defined the sentiment aggregate function as the ratio between the positive and negative messages referring an specific political target they used the sentiment aggregate function as predictive feature in the regression model achieving a correlation of between the results and the poll results capturing the important trends bermingham et al also included in their regression model sentiment features bermingham et al introduced two novel sentiment aggregate functions for sentiment they modified the share of volume function to represent the share of positive and negative volume for sentiment they used a log ratio between the number of positive and negative mentions of a given party 
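To make these sentiment aggregate functions concrete, the sketch below computes three common variants from per-party mention counts: share of volume (buzz), the ratio of positive to negative mentions as used by O'Connor et al., and the log ratio of positive to negative mentions as used by Bermingham et al. The function names, the toy counts, and the add-one smoothing are illustrative assumptions; the cited studies differ in smoothing and normalization details.

```python
import math

# Hypothetical daily counts of tweets mentioning each party,
# already classified by polarity (illustrative numbers only).
counts = {
    "party_a": {"positive": 120, "negative": 80, "neutral": 300},
    "party_b": {"positive": 60,  "negative": 90, "neutral": 250},
}

def share_of_volume(counts, party):
    """Buzz: fraction of all party mentions that refer to `party`."""
    total = sum(sum(c.values()) for c in counts.values())
    return sum(counts[party].values()) / total

def polarity_ratio(counts, party, smoothing=1):
    """Ratio of positive to negative mentions (O'Connor-style aggregate)."""
    c = counts[party]
    return (c["positive"] + smoothing) / (c["negative"] + smoothing)

def log_polarity_ratio(counts, party, smoothing=1):
    """Log ratio of positive to negative mentions (Bermingham-style aggregate)."""
    return math.log(polarity_ratio(counts, party, smoothing))

for party in counts:
    print(party,
          round(share_of_volume(counts, party), 3),
          round(polarity_ratio(counts, party), 3),
          round(log_polarity_ratio(counts, party), 3))
```

Variants differ mainly in whether neutral mentions are counted and in how the counts are smoothed or aggregated over time before being fed to a regression model.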
moreover they concluded that the inclusion of sentiment features augmented the effectiveness of their model et al introduced a different aggregate function in a race all negative messages on party are interpreted as positive on party and in summary suggestions for potentially independent or in other words predictive metrics appear in a wide variety of forms the mention share that a party received within all party mentions during a given the mention share of political candidates the share of positive mentions a party received the positive mention share of candidates the share of users commenting on a candidate or party the share of mentions for a candidate followed by a word indicative of electoral success or failure the relative increase of positive mentions of a candidate or simply a collection of various potentially political data science politically relevant words identified by their statistical relationship with polls or political actors in the past suggestions for the dependent variable metrics of political success show a similar variety they include the vote share that a party received on election day the vote share of a party adjusted to include votes only for parties included in the analysis the vote share of candidates on election day campaign tracking polls politicians job approval ratings and the number of seats in parliament that a party received after the election chapter entity retrieval for online reputation monitoring we start by presenting a formal definition of queries and how can we model the retrieval problem from a probabilistic perspective we assume that a query can be formulated as a sequence of individual each targeting a specific entity or relationship if we create specific representations for entities context terms as well as for pairs of entities relationships then we can create a graph of probabilistic dependencies between and representations we show that these dependencies can be depicted in a probabilistic graphical model a bayesian network therefore answering an query can be reduced to a computation of factorized conditional probabilities over a graph of and documents however it is not possible to compute these conditional probabilities directly from raw documents in a collection such as with traditional entity retrieval documents serve as proxies to entities and relationships representations it is necessary to fuse information spread across multiple documents we propose two design patterns inspired from model and model of balog et al to create centric and document centric representations the first design pattern early fusion consists in aggregating context terms of entity and relationship occurrences to create two dedicated indexes the entity index and the relationship index then it is possible to use any retrieval method to compute the relevance score of entity and relationship documents given the the second design pattern late fusion can be applied on top of a standard document index alongside a set of entity occurrences in each document first we compute the relevance score of documents given a then based on entity retrieval for online reputation monitoring the entity occurrences of the top k results we compute individual entity or relationship scores once again any retrieval method can be used to score documents when combined with traditional retrieval methods language models or these design patterns can be used to create unsupervised baselines for retrieval finally we follow a recent research line in entity retrieval which exploits term dependencies 
using the markov random field mrf framework for retrieval we introduce the dependence model erdm a novel supervised early model for retrieval that creates a mrf to compute term dependencies of queries and documents retrieval retrieval is a complex case of entity retrieval queries expect tuples of related entities as results instead of a single ranked list of entities as it happens with general entity queries for instance the query ethnic groups by country is expecting a ranked list of tuples ethnic group country as results the goal is to search for multiple unknown entities and relationships connecting them table retrieval definitions q qei i dei i qe qr de dr te query congresswoman hits back at us president entity in q congresswoman relationship in q hits back at representation of an entity frederica wilson representative congresswoman we use the terminology representation and document interchangeably representation of a relationship frederica wilson donald trump hits back we use the terminology representation and document interchangeably the set of entity in a query congresswoman us president the set of relationship in a query the set of entity documents to be retrieved by a query the set of relationship documents to be retrieved by a query query length corresponding to the number of entity and relationship the entity tuple to be retrieved frederica wilson donald trump retrieval in this section we present a definition of queries and a probabilistic formulation of the retrieval problem from an information retrieval perspective table presents several definitions that will be used throughout this chapter queries queries aim to obtain a ordered list of entity tuples te en as a result contrary to entity search queries where the expected result is a ranked list of single entities results of queries should contain two or more entities for instance the complex information need silicon valley companies founded by harvard graduates expects company founder as results in turn european football clubs in which a brazilian player won a trophy expects triples club player trophy as results each pair of entities ei in an entity tuple is connected with a relationship r ei a complex information need can be expressed in a relational format which is decomposed into a set of that specify types of entities e and types of relationships r ei between entities for each relationship there must be two one for each of the entities involved in the relationship thus a query q that expects is mapped into a triple of q where and are the entity attributes queried for and respectively and is a relationship attribute describing r ei if we consider a query as a chain of entity and relationship q n qen and we define the length of a query as the number of then the number of entity must be and the number of relationship equal to consequently the size of each entity tuple te to be retrieved must be equal to the number of entity for instance the query soccer players who dated a top model with answers such as cristiano ronaldo irina shayk is represented as three soccer players dated top model automatic mapping of terms from a query q to qei or i is out of the scope of this work and can be seen as a problem of query understanding we assume that the information needs are decomposed into constituent entity and relationship using natural language processing techniques or by user input through an interface that enforces the structure q n qen entity retrieval for online reputation monitoring modeling retrieval our approach to retrieval 
assumes that we have a raw document collection news articles and each document dj is associated with one or more entities ei in other words documents contain mentions to one or more entities that can be related between them since our goal is to retrieve tuples of related entities given a query that expresses entity attributes and relationship attributes we need to create representations for both entities and relationships we denote a representation of an entity ei as dei in retrieval we are interested in retrieving tuples of entities te en as a result the number of entities in each tuple can be two three or more depending on the structure of the particular query when a query aims to get tuples of more than two entities we assume it is possible to combine tuples of length two for instance we can associate two tuples of length two that share the same entity to retrieve a tuple of length three therefore we create representations of relationships as pairs of entities we denote a representation of a relationship r ei as i considering the example query which spiritual leader won the same award as a us vice president it can be formulated in the relational format as spiritual leader won award won us vice president associating the tuples of length two dalai lama nobel peace prize and nobel peace prize al gore would result in the expected dalai lama nobel peace prize al gore for the sake of clarity we now consider an example query with three this query aims to retrieve a tuple of length two a pair of entities connected by a relationship based on the definition of a query each entity in the resulting tuple must be relevant to the corresponding entity qe moreover the relationship between the two entities must also be relevant to the relationship qr instead of calculating a simple posterior p as with traditional information retrieval in retrieval the objective is to rank tuples based on a joint posterior of multiple entity and relationship representations given a query such as p when queries can be seen as chains of interleaved entity and relationship subqueries we take advantage of the chain rule to formulate the joint probability p q as a product of conditional probabilities formally we want to rank entity and relationship candidates in descending order of the joint posterior p as retrieval p q p q q q q rank p d d p q rank p rank p q q rank p we consider conditional independence between entity representations within the joint posterior the probability of a given entity representation dei being relevant given a query is independent of knowing that entity is relevant as well as an example consider the query action movies starring a british actor retrieving entity representations for action movies is independent of knowing that tom hardy is relevant to the british actor however it is not independent of knowing the set of relevant relationships for starring if a given action movie is not in the set of relevant for starring it does not make sense to consider it as relevant consequently p q p q since queries can be decomposed in constituent entity and relationship subqueries ranking candidate tuples using the joint posterior p is rank proportional to the product of conditional probabilities on the corresponding entity and relationship and we now consider a longer query aiming to retrieve a triple of connected entities this query has three entity and two relationship thus as we previously explained when there are more than one relationship we need to join relevant to each relationship that have one entity 
in common from a probabilistic point of view this can be seen as conditional dependence from the retrieved from the previous relationship p q p to rank entity and relationship candidates we need to calculate the following joint posterior entity retrieval for online reputation monitoring rank p p q p q q p q rank p q q p q q rank p p when compared to the previous example the joint posterior for shows that entity candidates for are conditional dependent of both and in other words entity candidates for must belong to candidates for both relationships representations that are connected with and we are now able to make a generalization of retrieval as a factorization of conditional probabilities of a joint probability of entity representations dei relationship representations i entity qei and relationship i these set of random variables and their conditional dependencies can be easily represented in a probabilistic directed acyclic graph a bayesian network in bayesian networks nodes represent random variables while edges represent conditional dependencies every other nodes that point to a given node are considered parents bayesian networks define the joint probability of a set of random variables as a factorization of the conditional probability of each random variable conditioned q on its parents formally p xn p xi where pai represents all parent nodes of xi figure depicts the representation of retrieval for different query lengths using bayesian networks we easily conclude that graphical representation contributes to establish a few guidelines for modeling retrieval first each points to the respective document node second relationship document nodes always point to the contiguous entity representations last when there are more than one relationship relationship documents also point to the subsequent relationship document once we draw the graph structure for the number of in q we are able to compute a product of conditional probabilities of each node given its parents adapting retrieval a b c fig bayesian networks for retrieval with queries of different lengths the general joint probability formulation of bayesian networks to retrieval we come up with the following generalization rank y p de dr p dei i dri qei p dri i qri y we denote d as the set of all candidate relationship documents in the graph and de the set of all candidate entity documents in the graph in information retrieval is often convenient to work in the as it does not affect ranking and transforms the product of conditional probabilities in a summation as follows r rank p de dr log p de dr rank x logp dei i dri qei x logp dri i qri entity retrieval for online reputation monitoring we now present two design patterns to compute each conditional probability for every entity and relationship candidate documents design patterns for retrieval traditional document retrieval approaches create direct representations of raw documents a retrieval model language models is then used to match the information need expressed as a keyword query against those representations however retrieval requires collecting evidence for both entities and relationships that can be spread across multiple documents it is not possible to create direct representations raw documents serve as proxy to connect queries with entities and relationships abstractly speaking entity retrieval can be seen as a problem of object retrieval in which the search process is about fusing information about a given object such as in the case of verticals google finance recently 
zhang and balog presented two design patterns for object retrieval the first design pattern early fusion is an approach where a termbased representation of objects is created earlier in the retrieval process first it creates by aggregating term counts across the documents associated with the objects later it matches queries against these using standard retrieval methods the second design pattern late fusion is a approach where relevant documents to the query are retrieved first and then later in the retrieval process it ranks objects associated with top documents these design patterns represent a generalization of balog s model and model for expertise retrieval in essence retrieval is an extension or a more complex case of where besides ranking objects we need to rank tuples of objects that satisfy the relationship expressed in the query this requires creating representations of both entities and relationships by fusing information spread across multiple raw documents we propose novel design patterns for retrieval that are inspired from the design patterns presented by zhang and balog for single we extend those design patterns to accommodate the specificities of retrieval we hypothesize that it should be possible to generalize the term dependence models to represent and achieve effective retrieval without entity or relationship type restrictions categories as it happens with the semantic web based approaches design patterns for retrieval early fusion the early fusion strategy presented by zhang and balog consists in creating a representation for each object under retrieval a containing all terms in the proximity of every object mention across a document collection as described in previous section queries can be formulated as a sequence of multiple entity queries qe and relationship queries qr in a early fusion approach each of these queries should match against a previously created representation since there are two types of queries we propose to create two types of representations one for entities and other for relationships our early fusion design pattern is similar to model of balog et al it can be thought as creating two types of de and dr a dei is created by aggregating the context terms of the occurrences of ei across the raw document collection on the other hand for each each pair of entities and ei that close together across the raw document collection we aggregate context terms that describe the relationship to create a i in our approach we focus on sentence level information about entities and relationships although the design pattern can be applied to more complex segmentations of text dependency parsing we rely on entity linking methods for disambiguating and assigning unique identifiers to entity mentions on raw documents we collect entity contexts across the raw document collection and index them in the entity index the same is done by collecting and indexing entity pair contexts in the relationship index we define the pseudo frequency of a term t for an entity dei as follows f t dei n x f t ei dj w ei dj where n is the total number of raw documents in the collection f t ei dj is the term frequency in the context of the entity ei in a raw document dj w ei dj is the association weight that corresponds to the weight of the document dj in the mentions of the entity ei across the raw document collection similarly the term pseudo frequency of a term t for a relationship i is defined as follows f t d i n x f t i dj w i dj entity retrieval for online reputation monitoring where f t 
i dj is the term frequency in the context of the pair of entity mentions corresponding to the relationship i in a raw document dj and w i dj is the association weight in this work we use binary associations weights indicating the of an entity mention in a raw document as well as for a relationship however other weight methods can be used the relevance score for an entity tuple te can then be calculated using the posterior p de dr defined in previous section equation we calculate the individual conditional probabilities as a product of a retrieval score with an association weight formally we consider logp dei i dri qei score dei qei w ei i ri logp dri i qri score dri qri w ri i where score dri qri represents the retrieval score resulting of the match of the query terms of a relationship qri and a relationship dri the same applies to the retrieval score score dei qei which corresponds to the result of the match of an entity qei with a entity dei for computing both score dri qri and score dei qei any retrieval model can be used different scoring functions will be introduced below we use a binary association weight for w ei i ri which represents the presence of a relevant entity ei to a qei in its contiguous relationships in the bayesian network i and ri which must be relevant to the i and qri this association weight is the building block that guarantees that two entities relevant to qe that are also part of a relationship relevant to a qr will be ranked higher than tuples where just one or none of the entities are relevant to the entity qe on the other hand the association weight w ri i guarantees that consecutive relationships share one entity between them in order to create triples or of entities for longer queries the relevance score of an entity tuple te given a query q is calculated by summing individual relationship and entity relevance scores for each i and qei in q we define the score for a tuple te given a query q as follows design patterns for retrieval e r rank p d d score dei qei w ei i ri x x score dri qri w ri i considering dirichlet smoothing unigram language models lm the constituent retrieval scores can be computed as follows scorelm dri qri log scorelm dei qei x f t c r r r ri f t d x f t c e e f t d e log ei where t is a term of a qei or qri f t dei and f t dri are the pseudo frequencies defined in equations and the collection frequencies f t c e f t c r represent the frequency of the term t in either the entity index c e or in the relationship index c r represent the total number of terms in a while r and e represent the total number of terms in a collection of finally and are the dirichlet prior for smoothing which generally corresponds to the average document length in a collection association weights both early fusion and late fusion share three components w ri dj w ei dj and w ei ri the first two represent document associations which determine the weight a given raw document contributes to the relevance score of a particular entity tuple te the last one is the association which indicates the strength of the connection of a given entity ei within a relationship ri in our work we only consider binary association weights but other methods could be used according to the binary method we define the weights as follows w ri dj if r ei dj otherwise entity retrieval for online reputation monitoring w ei dj if ei dj otherwise w ei i ri if ei i and ei dri otherwise w ri i if ei i and ei dri otherwise under this approach the weight of a given association is independent of the 
number of times an entity or a relationship occurs in a document a more general approach would be to assign real numbers to the association weights depending on the strength of the association for instance uniform weighting would be proportional to the inverse of the number of documents where a given entity or relationship occurs other option could be a approach early fusion example let us consider an illustrative example of the early fusion design pattern for retrieval using unigram language models and the query q soccer players who dated a top model this query can be decomposed in three qei soccer players top model and qri dated the first two target the entity index and the last targets the relationship index table presents a toy entity index with entities as example for each of the two entity including the term frequency f t dei for each term table illustrative example of the entity index in early fusion ei tom brady cristiano ronaldo lionel messi figo gisele bundchen irina shayik helen svedin f t dei design patterns for retrieval considering the remaining variables required to calculate the scorelm dei qei e f soccer c e f player c e f top c e f model c e we calculate the scorelm dei qei for the respective entities and for the first entity query soccer players the ranked list of relevant entities and the respective lm score would be the following lionel messi cristiano ronaldo figo tom brady for the second entity query top models gisele bundchen irina shayik helen svedin table shows relationships entity pairs relevant to the dated and the respective term frequency f t dri considering the remaining variables required to calculate the scorelm dri qri r f dated c r entity retrieval for online reputation monitoring table illustrative example of the relationship index in early fusion ri gisele bundchen tom brady irina shayik cristiano ronaldo helen svedin figo f t dri we calculate the scorelm dri qri for the respective relationship and the and we obtain the following ranked list gisele bundchen tom brady irina shayik cristiano ronaldo helen svedin figo we can now sum up individual scores for each and calculate the final score for the early fusion design pattern score te q using the equation the final ranked list of tuples is the following irina shayik cristiano ronaldo helen svedin figo gisele bundchen tom brady the entity tuple irina shayik cristiano ronaldo is the most relevant to the query soccer players who dated a top model although gisele bundchen tom brady has higher individual scores in two top model and dated it ranks last due to the poor relevance of tom brady to the soccer player the entity lionel messi is the most relevant entity to the soccer player but it is not relevant to the relationship therefore it is excluded from the final ranked list of entity tuples late fusion the late fusion design pattern presented by zhang and balog is a documentcentric strategy first we query raw individual documents then we aggregate the associated objects with the relevant documents instead of creating representations of entities and relationships pairs of entities in late fusion we use the design patterns for retrieval raw documents as hidden variables separating the query from the relevant entity tuples to be retrieved our vision of orm implies processing raw documents to detect entities occurrences and extract sentence level information that will be used in downstream entity retrieval and text mining tasks therefore we are not interested in applying a late fusion strategy in this work 
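Before turning to the late fusion formulation, the sketch below ties together the early fusion example above: toy entity and relationship pseudo-documents are scored with Dirichlet-smoothed language models and combined, using binary association weights, into a single score per entity tuple. The entity names, pseudo-frequencies, the Dirichlet prior value, and the fixed assignment of tuple positions to sub-queries are illustrative assumptions, not the exact figures of the worked example.

```python
import math
from collections import Counter

# Toy pseudo-documents (term -> pseudo frequency), standing in for the
# entity index and the relationship index built by early fusion.
entity_index = {
    "cristiano_ronaldo": Counter(soccer=9, player=7, portugal=3),
    "lionel_messi":      Counter(soccer=11, player=8, argentina=4),
    "tom_brady":         Counter(football=8, quarterback=6, player=5),
    "irina_shayk":       Counter(top=6, model=9, fashion=5),
    "gisele_bundchen":   Counter(top=8, model=10, fashion=7),
}
relationship_index = {
    ("irina_shayk", "cristiano_ronaldo"): Counter(dated=4, couple=2),
    ("gisele_bundchen", "tom_brady"):     Counter(married=5, dated=1),
}

def dirichlet_lm_score(query_terms, doc, collection, mu=100):
    """Sum over query terms of log((f(t,d) + mu * f(t,C)/|C|) / (|d| + mu))."""
    coll_tf = Counter()
    for d in collection.values():
        coll_tf.update(d)
    coll_len = sum(coll_tf.values())
    doc_len = sum(doc.values())
    score = 0.0
    for t in query_terms:
        p_c = coll_tf[t] / coll_len
        score += math.log((doc[t] + mu * p_c) / (doc_len + mu) + 1e-12)
    return score

def rank_tuples(q_entity_1, q_rel, q_entity_2):
    """Early fusion: score each relationship tuple plus its two entities.

    Iterating only over tuples in the relationship index realizes the binary
    association weights: entities must belong to the relationship. For
    simplicity each tuple position is matched to a fixed sub-query, whereas
    the worked example assigns each entity to the sub-query that maximizes
    the final score.
    """
    results = []
    for (e1, e2), rel_doc in relationship_index.items():
        s = dirichlet_lm_score(q_rel, rel_doc, relationship_index)
        s += dirichlet_lm_score(q_entity_1, entity_index[e1], entity_index)
        s += dirichlet_lm_score(q_entity_2, entity_index[e2], entity_index)
        results.append(((e1, e2), round(s, 3)))
    return sorted(results, key=lambda x: x[1], reverse=True)

# "top model" / "dated" / "soccer players" from the example query.
print(rank_tuples(["top", "model"], ["dated"], ["soccer", "player"]))
```

The same toy data could equally be scored with a document-centric (late fusion) strategy, which, as noted above, is not pursued experimentally in this work.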
however we believe it makes sense to present a theoretical formulation of a late fusion design pattern for retrieval we leave the practical experiments with late fusion for future work in the context of generic retrieval the process of retrieving entity tuples using our late fusion strategy consists in processing each independently as in the early fusion strategy but in this case we use a single index comprising a term based representation of the collection of raw documents a retrieval model is used to calculate a relevance score of each individual raw document and a given once we have the relevant documents we use entity linking to extract the entities that are mentioned in each relevant raw document following this strategy we calculate aggregated counts of entity occurrences weighted by the individual relevance score of the individual raw documents at the end we join the results of each and calculate the overall relevance score of the entity tuples formally we define the relevance score of an entity tuple te given a query q as follows rank p de dr n x x score dj qei w ei dj w ei i ri n x x score dj qri w ri dj w ri i where score dj qri represents the retrieval score resulting of the match of the query terms of a relationship qri and a raw document dj the same applies to the retrieval score score dj qei which corresponds to the result of the match of an entity qei with a raw document dj the weights w ri dj and w ei dj represent association weights between relationships and raw documents and entities and raw documents respectively we use binary association weights in this work but other weights can be used we also use a binary association weight entity retrieval for online reputation monitoring for w ei i ri and w ri i which represent the association weights similarly to what happens with the case of early fusion for computing both score dj qri and score dj qei any retrieval model can be used considering the scores can be computed as follows scorebm dj qri x log n n t f t dj n t f t dj b b avg f t dj n n t n t f t dj b b avg ei ri where t is a term of a q or q and f t dj is the query term frequency in a raw document dj the inverse document frequency idf t is computed t as log nn t with n as the number of documents on the collection and n t the number of documents where the term is the total number of terms in a raw document dj and avg is the average document length and b are free parameters usually chosen as and in the absence of specific optimization scorebm dj qei x log late fusion example considering the same toy example query introduced in the previous we now have a single index the document index as illustrated in table the remaining parameters required for calculating the scorebm dj qei and scorebm dj qri are the following n soccer n player n dated n top n model avg design patterns for retrieval table illustrative example of the document index in late fusion dj f t dj ei cristiano ronaldo lionel messi cristiano ronaldo figo gisele bundchen gisele bundchen tom brady irina shayik gisele bundchen adriana lima tom brady irina shayik cristiano ronaldo figo helen svedin for the first entity soccer players the relevant documents ranked by the scorebm dj qei are the following cristiano ronaldo lionel messi figo cristiano ronaldo figo helen svedin gisele bundchen adriana lima tom brady for the second entity top model irina shayik entity retrieval for online reputation monitoring gisele bundchen figo helen svedin gisele bundchen adriana lima tom brady for the relationship dated gisele bundchen 
tom brady irina shayik cristiano ronaldo gisele bundchen adriana lima tom brady figo helen svedin since in late fusion there is no relationship that could be used directly as entity tuples we need to extract the candidate tuples from the raw documents retrieved using the relationship when there are more than two entity associations in a relevant document we combine entities to create tuples for instance has three entity associations therefore we extract three candidate tuples gisele bundchen tom brady gisele bundchen adriana lima and adriana lima tom brady for each candidate tuple we sum up scorebm dj qri w ri dj over every relevant document dj for the relationship that is associated with each entity tuple the same applies to individual entities from the candidate tuples that are associated with relevant documents for each entity for instance for the entity soccer players we sum score dj qei w ei dj w ei ri over the relevant documents that mentioned an entity that belongs to a candidate tuple when both entities of the candidate tuple are mentioned in relevant documents for both entity helen svedin figo we assign each entity to the that maximizes the final score score te q we use the scores of the entity soccer player for figo and the entity top model for helen svedin the final ranked list of entity tuples is the following irina shayik cristiano ronaldo helen svedin figo gisele bundchen tom brady design patterns for retrieval gisele bundchen adriana lima adriana lima tom brady once again lionel messi is excluded from the final ranked list of entity tuples because he is not associated with any document relevant to the relationship dated on the other hand adriana lima is included in the final ranking although it is not true that she has dated either tom brady or gisele bundchen in this example the top three entity tuples are ranked in the same order as in the early fusion strategy example implementation in this section we proposed two design patterns for retrieval early fusion ef and late fusion lf both can be seen as a flexible framework for ranking tuples of entities given a query expressed as a sequence of entity and relationship this framework is flexible enough to allow using any retrieval method to compute individual retrieval scores between document and query nodes in a graph structure when using language models lm or as scoring functions these design patterns can be used to create unsupervised baseline methods for retrieval in the case of early fusion there is some overhead over traditional document search since we need to create two dedicated indexes that will store entity and relationship the entity index is created by harvesting the context terms in the proximity of every occurrence of a given entity across the raw document collection this process must be carried for every entity in the raw document collection a similar process is applied to create the relationship index for every two entities occurring close together in a raw document we extract the text between both occurrences as a representation of the relationship between the two once again this process must be carried for every pair of entities in sentences across the raw document collection late fusion requires less overhead and can be implemented on top of a web search engine with reduced effort we only need to have a list of entity occurrences alongside each document therefore there is no need to create a separate index es on the other hand it requires more processing on query time since we need to first rank raw documents 
for each and then aggregate entity occurrences at the top k documents retrieved moreover it does not contain any information entity retrieval for online reputation monitoring on the entity occurrences so two entities occurring very far in the text might be considered as relationship candidates it might be prone to a higher false positive rate one advantage of early fusing lies in its flexibility as we need to create two separate indexes for retrieval it is possible to combine data from multiple sources in seamless way for instance one could use a well established knowledge base dbpedia as entity index and use a specific collection such as a news collection or a social media stream for harvesting relationships having a more transient nature common to both design patterns is a challenge inherent to the problem of retrieval the size of the search space although the problem is formulated as a sequence of independent the results of those must be joined together consequently we have a search space in which we need to join results based on shared entities this problem becomes particularly hard when are short and contain very popular terms let us consider actor as qei there will be many results to this probably thousands there is a high probability that will need to process thousands of before finding one entity that is also relevant to the relationship i if at the same time we have computational power constraints we will probably apply a strategy of just considering top k results for each which can lead to reduced recall in the case of short with popular terms dependence model in this section we present the dependence model erdm a novel supervised early model for retrieval recent approaches to entity retrieval have demonstrated that using models based on markov random field mrf framework for retrieval to incorporate term dependencies can improve entity search performance this suggests that mrf could be used to model query term dependencies among entities and relationships documents one of the advantages of the mrf framework for retrieval is its flexibility as we only need to construct a graph g representing dependencies to model define a set of potential functions over the cliques of g and to learn the parameter vector to score each document d by its unique and unnormalized joint probability with q under the mrf the potential functions are defined using an exponential form c exp f c where is a feature weight which is a free parameter in the model dependence model associated with feature function f c learning to rank is then used to learn the feature weights that minimize the loss function the model allows parameter and feature functions sharing across cliques of the same configuration same size and type of nodes of one query term node and one document node graph structures the dependence model erdm creates a mrf for modeling implicit dependencies between terms entities and relationships each entity and each relationship are modeled as document nodes within the graph and edges reflect term dependencies contrary to traditional retrieval using mrf sdm where the objective is to compute the posterior of a single document given a query the erdm allows the computation of a joint posterior of multiple documents entities and relationships given a query which consists also of multiple fig markov random field dependencies for retrieval the graph structures of the erdm for two queries one with and other with are depicted in figure and figure respectively both graph structures contain two different types of 
query nodes and document nodes entity query and relationship query nodes qe and qr plus entity and relationship document nodes de and dr within the mrf framework de and dr are considered documents but they are not actual real documents but rather objects representing an entity or a relationship between two entities unlike real documents these objects do not have direct and explicit representations usually it is necessary to gather evidence across multiple real documents that mention the given object in order to be able to match them against keyword queries therefore erdm can be seen as early retrieval model the existence of two different types of documents implies two different indexes the entity index and the relationship index entity retrieval for online reputation monitoring fig markov random field dependencies for retrieval the dependencies of erdm are found in the formed by one entity document and one relationship document i dei i and for dei dri and i dri the graph structure does not need to assume any explicit dependence between entity documents given a relationship document they have an implicit connection through the dependencies with the relationship document the likelihood of observing an entity document dei given a relationship document i is not affected by the observation of any other entity document explicit dependence between the two entity documents could be used to represent the direction of the relationship between the two entities to support this dependence relationship documents would need to account the following constraint r ei r ei i c r with c r representing the relationship index then we would compute an ordered feature function between entities in a relationship similar to the ordered bigram feature function in sdm in this work we do not explicitly model asymmetric relationships for instance if a user searches for the relationship entity a criticized entity b but was in fact entity b who criticized entity a we assume that the entity tuple entity a entity b is still relevant for the information need expressed in the query erdm follows the sdm dependencies between query terms and documents due to its proved effectiveness in multiple contexts therefore erdm assumes a dependence between neighboring terms ei ei p qjei l p qjei dependence model r r r r r i i i ei p qj i i i i l d p qj mrf for retrieval requires the definition of the sets of cliques maximal or nonmaximal within the graph that one or more feature functions is to be applied to the set of cliques in erdm containing at least one document are the following t e set of containing an entity document node and exactly one term in a entity oe set of containing an entity document node and two ordered terms in a entity t r set of containing a relationship document node and exactly one term in a relationship or set of containing a relationship document node and two ordered terms in a relationship s er set of containing one entity document node and one relationship document node s rer set of containing one entity document node and two consecutive relationship document nodes the joint probability mass function of the mrf is computed using the set of potential functions over the configurations of the maximal cliques in the graph potential functions are constructed from one or more real valued feature functions associated with the respective feature weights using an exponential form feature functions erdm has two types of feature functions textual and textual feature functions measure the textual similarity between one or 
more terms and a document node feature functions measure compatibility between entity and relationship documents if they share a given entity table presents an overview of the feature functions associated with clique sets and the type of input nodes although we could define a wide set of different feature functions we decided to adapt sdm textual feature functions to erdm clique configurations therefore we define unigram based feature functions fte and ftr to entity retrieval for online reputation monitoring table clique sets and associated feature functions by type and input nodes clique set te oe tr or s er s rer feature functions fte foe and fue ftr for and fue fser fsrer type textual textual textual textual input nodes qjei dei ei qjei dei i qj i r i qj i i dei i dei i dri containing a single term and a entity or relationship document node for containing consecutive terms and a document node we define two feature functions one considers consecutive terms and matches ordered bigrams with entity or relationship documents this feature function is denoted as foe and for depending if the clique is oe or or the second feature function matches bigrams with documents using an unordered window of terms it matches bigrams with documents if the two terms of the bigram occur with a maximum of other terms between each other this feature function is denoted as fue and fur depending if the clique is oe or or for each textual feature function we decided to use two variants dirichlet smoothing language models lm and we now present the summary of the textual feature functions used in this work e qjei dei log ft lm e f q i c e e f qj i dei j e e e f q i q i c e e ei j f qj i dei e e e f q i q i c e e ei j f qj i dei e e e i ei e fo lm qjei dei log ei e fu lm qjei dei log dependence model i f qj r r qj i i log ft lm i r c j r i i f q i i r f q q c j r i i i r f q q c i r j i f qj i r i r r i f qj r i r qj i i log fo lm r i i r r i r fu lm qj i i log r here f qjei dei and f qj i i represent the term frequencies in a entity document and relationship document respectively the collection frequencies r f qjei c e f qj i c r represent the frequency of term in either the entity index c e or in the relationship index c r the variants of these functions f and f represent ordered and unordered bigram matching frequency represent the total number of terms in a while r and e represent the total number of terms in a collection of finally and are the dirichlet prior for smoothing which generally corresponds to the average document length in a collection e ei e ei ft bm qj d log n e qj i e n qj i e f qj i dei e f qj i dei ei avg e entity retrieval for online reputation monitoring ei ei e ei fo bm qj d ei n e n qjei ei ei n qj ei f qjei dei ei ei f qjei dei b b avg e ei e ei fu bm qj d ei n e n qjei ei ei n qj ei dei f qjei ei ei dei b b avg f qjei e r i r ft bm i qj n r n qj i r n qj i r f qj i i r i f qj i i b b avg r dependence model r r r i i r fo bm i qj r i n r n qj i r r i n qj i r r i f qj i i r i r i f qj i i b b avg r r r i i r fu bm i qj i n r n qj i r r i n qj i r r i i f qj i r i r i i b b avg f qj i r here n e and n r represent the total number of documents in the entity index and relationship index respectively the document frequency of unigrams and bigrams is represented using n n and n and i are the total number of terms in a entity or relationship document while avg and avg are the average entity or relationship document length and b are free parameters usually chosen as and in the absence of specific 
optimization we define two features in erdm the first one fter is assigned to composed by one entity document and one relationship document and it is inspired in the feature function fe of hasibi and balog s elr model it is defined as follows h i fser dei i f dei i n e nr i where the linear interpolation implements the smoothing method with and f dei i which measures if the entity ei represented entity retrieval for online reputation monitoring in dei belongs to the relationship r ei represented in i the background model employs the notion of entity popularity within the collection of relationship documents n dei represents the number of relationship documents dr that contain the entity ei and n r represents the total number of relationship documents in the relationship index for queries with more than one relationship we draw an edge between consecutive relationship documents within the erdm graph this edge creates a containing two relationship documents and one entity document the feature function fsrer measures if a given entity ei is shared between consecutive relationship documents within the graph we opted to define a simple binary function fser dei i dri if ei dei i dri otherwise in summary we described the set of feature functions associated with each clique configuration within the erdm graph we leave for future work the possibility of exploring other type of features to describe textual similarity and compatibility between different nodes in the erdm graph such as neural language models ranking we have defined the set of clique configurations and the real valued feature functions that constitute the potential functions over the cliques in the graph of erdm we can now formulate the calculation of the posterior p de dr using the probability mass function of the mrf as follows dependence model rank de dr x f c g rank e xx o xx u xx t x x fte qjei dei e qei ei foe qjei dei e qei ei dei fue qjei e qei r ftr qj i i r qri j o x x u x x r r r r i i for qj i r qri j i i fur qj i r qri j s xx r s fser dei i e xx r fser dei i dri e in essence retrieval using the erdm corresponds to ranking candidate entity tuples using a linear weighted sum of the feature functions over the cliques in the graph therefore we can apply any linear learning to rank algorithm to optimize the ranking with respect to the vector of feature weights given a training set t composed by relevance judgments a ranking of entity tuples and an evaluation function e t that produces a real valued output our objective is to find the values of the vector that maximizes as explained in we require e to only consider the ranking produced and not individual scores this is the standard characteristic among information retrieval evaluation metrics map or ndcg discussion in this section we introduced the dependence model erdm a novel supervised early model for retrieval inspired by recent work in entity retrieval we believe that modeling term dependencies between and documents can increase search performance entity retrieval for online reputation monitoring erdm can be seen as an extension of the sdm model for document retrieval in a way that besides modeling query term dependencies we create graph structures that depict dependencies between entity and relationship documents consequently instead of computing a single posterior p we propose to use the mrf for retrieval for computing a joint posterior of multiple entity and relationship documents given a query p de dr moreover since erdm is a supervised model we believe that tuning 
Discussion. In this section we introduced the entity-relationship dependence model (ERDM), a novel supervised early-fusion model for E-R retrieval, inspired by recent work in entity retrieval. We believe that modeling term dependencies between queries and entity and relationship documents can increase search performance. ERDM can be seen as an extension of the SDM model for document retrieval: besides modeling query term dependencies, we create graph structures that depict dependencies between entity and relationship documents. Consequently, instead of computing a single posterior P(d | q), we propose to use the MRF for retrieval to compute a joint posterior of multiple entity and relationship documents given a query, P(d^E, d^R | q). Moreover, since ERDM is a supervised model, we believe that tuning the weights of the feature functions, besides optimizing search performance, can also help to explain the dependencies between query terms and the respective documents, as well as how entity documents and relationship documents contribute to the overall relevance of entity tuples given a query.

Summary of the contributions. In this chapter we presented several contributions to the problem of E-R retrieval from an IR perspective:
- a generalization of the problem of E-R search to cover entity types and relationships represented by any attribute and predicate, respectively, rather than a predefined set;
- a general probabilistic model for E-R retrieval using Bayesian networks;
- the proposal of two design patterns that support E-R retrieval approaches using the model;
- the proposal of an entity-relationship dependence model that builds on the basic sequential dependence model (SDM) to provide extensible representations and dependencies suitable for complex E-R queries.

Chapter: Retrieval over a Web Corpus

We start this chapter by presenting a new method for generating E-R test collections, together with a new test collection, the RELink Query Collection (RELink QC). We leverage web tabular data containing entities and the relationships among them, as they share the same row in a table. We exploit the Wikipedia tree of articles containing lists of entities in the form of tables, and we developed a table parser that extracts tuples of entities from these tables together with associated metadata. This information is then provided to editors that create queries fulfilled by the extracted tuples. We then report a set of evaluations of the ERDM model using four different query sets.

In order to leverage information about entities and relations in a corpus, it is necessary to create a representation of entity-related information that is amenable to E-R search. In our approach we focus on sentence-level information about entities, although the method can be applied to more complex segmentations of text. Our experiments are based on a web data set with text annotations that refer to entities found in the text, including the variances of their surface forms. Each entity is designated by its unique ID, and for each unique entity instance we created entity documents comprising a collection of sentences that contain the entity. These context documents are indexed, comprising the entity index. The same is done by creating entity-pair documents and the entity-pair index. These two indexes enable us to execute E-R queries using different retrieval models, including the ERDM, which models the dependence between entities and relationships.

RELink Query Collection. Improvements of E-R search techniques have been hampered by a lack of test collections, particularly for complex queries involving multiple entities and relationships. In this section we describe a method for generating E-R test queries to support comprehensive E-R search experiments. Queries and relevance judgments are created from content that exists in tabular form, where columns represent entity types and the table structure implies one or more relationships among the entities. Editorial work involves creating natural language queries based on the relationships represented by the entries in the table. We have publicly released the RELink test collection, comprising queries and relevance judgments obtained from a sample of Wikipedia tables; the latter comprise tuples of entities that are extracted from columns and labelled by the corresponding entity types and relationships they represent. Improvement of methods for both extraction and search is hampered by a lack of query sets and relevance judgments, i.e.,
gold standards that could be used to compare effectiveness of different methods in this section we introduce a method for acquiring instances of entities and entity relationships from tabular data relink query collection qc of queries with corresponding relevance judgments essential to our approach is the observation that tabular data typically includes entity types as columns and entity instances as rows the table structure implies a relationship among table columns and enables us to create queries that are answered by the entity tuples across columns following this approach we prepared and released the relink qc comprising queries and relevance judgments based on a sample of wikipedia tables the query collection and the research framework are publicly enabling the community to expand the relink framework with additional document collections and alternative indexing and search methods it is important to maintain and enhance the relink qc by providing updates to the existing entity types and creating new queries and relevant instances from additional tabular data the material contained in this section was published in saleiro rodrigues soares relink a research framework and test collection for retrieval https relink query collection tabular data and entity relationships information that satisfies complex queries is likely to involve instances of entities and their relationships dispersed across web documents sometimes such information is collected and published within a single document such as a wikipedia page in such cases traditional search engines can provide excellent search results without applying special techniques or considering entity and relationship types indeed the data collection aggregation and tabularization has been done by a wikipedia editor that also means that a tabular wikipedia content comprising various entities can be considered as representing a specific information need the need that motivated editors to create the page in the first place such content can in fact satisfy many different information needs we focus on exploiting tabular data for exhaustive search for types in order to specify queries we can use column headings as entity types all the column entries are then relevance judgments for the entity query similarly for a given pair of columns that correspond to distinct entities we formulate the implied relationship for example the pair car manufacturing plant could refer to is made in or is manufactured in relationships the instances of entity pairs in the table then serve as evidence for the specific relationship this can be generalized to more complex information needs that involve multiple entity types and relationships automated creation of queries from tabular content is an interesting research problem for now we asked human editors to provide natural language and structured queries for specific entity types once we collect sufficient amounts of data from human editors we will be able to automate the query creation process with machine learning techniques for the relink qc we compiled a set of queries with relevance judgments from wikipedia lists about topic areas selection of tables wikipedia contains a dynamic index the lists of lists of lists which represents the root of a tree that spans curated lists of entities in various domains we used a wikipedia snapshot from october to traverse the lists of lists of lists tree starting from the root page and following every hyperlink of type list of and their children this resulted in a collection of list 
pages while most of the pages contain tabular data only include tables with consistent column and row structure as in we restrict content extraction to wikitable html class that http retrieval over a web corpus typically denotes data tables in wikipedia we ignore other types of tables such as infoboxes in this first instance we focus on relational tables the tables that have a key column referring to the main entity in the table for instance the list of books about skepticism contains a table books with columns author category and title among others in this case the key column is title which contains titles of books about skepticism we require that any relationship specified for the entity types in the table must contain the title type involve the title column in order to detect key columns we created a table parser that uses the set of heuristics adopted by lehmberg et al the ratio of unique cells in the column or text length once the key column is identified the parser creates entity pairs consisting of the key column and one other column in the table the content of the column cells then constitutes the set of relevant judgments for the relationship specified by the pair of entities for the sake of simplicity we consider only those wikipedia lists that contain a single relational table furthermore our goal is to create queries that have verifiable entity and entity pair instances therefore we selected only those relational tables for which the key column and at least one more column have cell content linked to wikipedia articles with these requirements we collected tables in the final step we selected tables by performing stratified sampling across semantic domains covered by wikipedia lists for each new table we calcuated the jaccard similarity scores between the title of the corresponding wikipedia page and the titles of pages associated with tables already in the pool by setting the maximum similarity threshold to we obtained a set of tables the process of creating relink queries involves two steps automatic selection of tables and columns within tables and manual specification of information needs for example in the table grammy award for album of the year the columns winner work were automatically selected to serve as entity types in the query figure the relationship among these entities is suggested by the title and we let a human annotator to formulate the query the relink query set was created by annotators we provided the annotators with access to the full table metadata table title or the first paragraph of the page and entity pairs or triples to be used to specify the query figure for each entity pair or triple the annotators created a natural language information need and an query in the relational format q i qei as shown in table relink query collection fig example of wikipedia table row fig example of metadata provided to editors formulation of queries the relational query format is introduced to support a variety of experiments with queries in essence a complex information need is decomposed into a set of subqueries that specify types of entities e and types of relationships r ei between entities for each relationship query there is one query for each entity involved in the relationship thus a query q that expects a pair of entities for a given relationship is mapped into three i qei where and qei are the entity types for and ei respectively and i is a relationship type describing r ei collection statistics relink qc covers thematic areas from the in wikipedia mathematics 
and logic religion and belief systems technology and applied sciences miscellaneous people geography and places natural and physical sciences general reference and culture and the arts the most common thematic areas are culture and the arts with queries and geography and places with queries in table we show the characteristics of the natural language and relational queries among queries refer to entity pairs and to entity triples as retrieval over a web corpus table examples of query annotations id nl query relational format what are the regiments held by the indian army regiment held by indian army in which seasons nhl players scored more than goals and the team they represented nhl season scored more than goals in nhl player played for nhl team table relink collection statistics total queries avg queries length avg qe length avg qr length uniq entity attributes qe uniq relationships qr avg relevant judgments all expected natural language descriptions of queries are longer on average characters compared to queries characters we further analyze the structure of relational queries and their components entity queries qe that specify the entity type and relationship queries qr that specify the relationship type across queries there are unique entity types qe out of total occurrences they are rather unique across queries only entity types occur in more than one query and occur in exactly queries the most commonly shared entity type is country present in queries in the case of relationships there are unique relationship types qr out of occurrences with a dominant type located in that occurs in queries this is not surprising since in many domains the key entity is tied to a location that is included in one of the columns nevertheless there are only relationship types qr occurring more than once implying that relink qc is a diverse set of queries including relationship types occurring only once experimental setup experimental setup in this section we detail how we conducted our experiments in retrieval since we only have access to test collections comprising general purpose queries we decided to use a web corpus as dataset more precisely dataset was created to support research on information retrieval and related human language technologies and contains billion web pages the part b is a subset of the most popular million english web pages including the wikipedia part b was created as a resource for research groups without processing power for processing the all collection we used the web collection with text span annotations linked to wikipedia entities to show how relink can be used for retrieval over web content we developed our prototype using apache lucene for indexing and search we used a specific python library pylucene that allowed our customized implementation tailored for retrieval data and indexing entity index relationship index fig illustration of indexing from a web corpus as a text corpus we use combined with text span annotations with links to wikipedia entities via freebase the entity linking precision and recall in is estimated to be and respectively for our experiments we created two main indexes one for entity extractions and one for entity pairs https retrieval over a web corpus relationships extractions we extract entity and pairs occurrences using an open information extraction method like ollie over the annotated corpus as follows for each entity annotation we extract the sentence where it occurred as an entity context for pairs of entities we look for entities in the same 
sentence and we extract the separating string the context of the relationship connecting them figure illustrates the indexing process adopted in this work we obtained million entity extractions and million entity pairs extractions as described in table in order to compute and i we incrementally updated two auxiliary indices containing the number of terms per entity and per entity pair respectively we ran our experiments using apache lucene and made use of groupingsearch for grouping extractions by entity and entity pair at query time to get the statistics for ordered and unordered bigrams we made use of spannearquery table extractions statistics entities entity pairs total unique avg doc len retrieval method and parameter tuning for experiments using erdm we adopted a three stage retrieval method first queries qei are submitted against the entity index and i is submitted against the index initial sets of top results grouped by entity or respectively are retrieved using lucene s default search settings second the feature functions of the specific retrieval model are calculated for each set using an implementation this process is easily parallelized the final ranking score for each is then computed using the learned weights evaluation scores are reported on the top results parameter tuning for erdm and baselines was directly optimized with respect to the mean average precision map we make use of the ranklib s implementation of the coordinate ascent algorithm under the sum normalization and constraints with random restarts coordinate ascent is a commonly used optimization technique that iteratively optimizes a single parameter while holding all other parameters fixed parameters are estimated using cross validation for each of the query sets separately to be able to use the same train and test folds throughout all experiments experimental setup we first randomly create fixed train and test folds from the initial result set for each query set all reported evaluation metrics were over folds we do not optimize the dirichlet priors and in language models and set them equal to the traditional average document length the average entity and entity pairs extractions length respectively the unordered window size n for fue and fur is set to be as suggested in test collections we ran experiments with a total of queries we decided to just perform experiments using queries aiming of entities we leave for future work the evaluation of queries aiming at triples besides relink qc we used other relationshipcentric query sets with pairs of wikipedia entities as answers relevance judgments the query sets cover a wide range of domains as described in table query sets for retrieval are scarce generally entity retrieval query sets are not table description of query sets used for evaluation query set count domains geography and places politics and society culture and the arts technology and science erq complex relink award city club company film novel person player song university cinema music books sports computing military conflicts general reference culture and the arts geography and places mathematics and logic natural and physical sciences people religion and belief systems society and social sciences technology and applied science total one exception is the query set used in the collection it contains a subset of relational queries who designed the brooklyn bridge most of relational queries in have a fixed relevant entity brooklyn bridge and can be easily transformed from single entity relevance judgments into 
pairs from the relational queries in we identified with no fixed relevant entity retrieval over a web corpus in the query give me the capitals of all countries in in these cases for provided single entity relevance judgment we needed to annotate the missing entity manually to create a pair for instance given a capital city in africa we identified the corresponding african country in addition we used two benchmarks created in previous work using approaches erq and complex neither erq nor complex provide complete relevance judgments and consequently we manually evaluated each answer in our experiments erq consists of queries that were adapted from and however of the queries have a given fixed entity in the query find eagles songs only queries are asking for pairs of unknown entities such as find films starring robert de niro and please tell directors of these complex queries were created with a approach it contains queries from which we removed that expect of entities this query set consists of pure queries for unknown pairs of entities such as currency of the country whose president is james mancham kings of the city which led the peloponnesian and who starred in a movie directed by hal ashby we used four different retrieval metrics mean average precision at results map precision at p mean reciprocal rank mrr and normalized discounted cumulative gain at ndcg results and analysis we start by performing a simple experiment for comparing early fusion and erdm using both language models lm and as retrieval functions since we are only interested in comparing relative performance we opted to scale down our experimental setup instead of computing the term frequency for every extraction for a given entity or relationship we cap to the number for each group of documents retrieved in the first passage we tried several different values and for values below extraction the performance reduced significantly for while the performance reduces it is not dramatic this setup reduces the experimental runtime and since we had limited resources this proved to be useful table depicts the results for this comparative evaluation we decided to only use the three test collections specifically tailored for relationship retrieval as we can see the results are very similar between ef and erdm for both lm and variants in the three test collections erdm presents slightly better performance than the corresponding ef variant however when performing statistical results and analysis significance tests we obtained above when comparing ef and erdm this is very interesting as it shows that for general purpose evaluation the overhead of computing sequential dependencies does not carry significant improvements table early fusion and erdm comparison using lm and erq map p mrr ndcg complex map p mrr ndcg relink queries map p mrr ndcg on the other hand we detect sensitivity to the retrieval function used in erq both and outperform but the opposite happens for complex and relink this sensitivity means that we can not generalize the assumption that one of the retrieval functions is more adequate for retrieval another important observation has to do with the overall lower results on the relink test collection in comparison with erq and complex contrary to our expectations has very low coverage of entity tuples relevant to the relink test collection we now present the results of comparing erdm with three baselines using sequential dependence to evaluate the impact of modeling dependencies between query terms the first baseline method 
baseee consists in submitting two queries against the entity index i and i qei are created by cross product of the two entity results set retrieved by each query for each method we compute the sequential dependence model sdm scores retrieval over a web corpus the second baseline method basee consists in submitting again a single query q towards the entity index used in erdm are created by cross product of the entity results set with itself the third baseline method baser consists in submitting a single query q towards an index this index is created using the full sentence for each in instead of just the separating string as in erdm this approach aims to capture any entity context that might be present in a sentence erdm relies on the entity index for that purpose in this evaluation we decided to not cap the number of extractions to compute term frequencies inside each group of results returned from the first passage with lucene groupingsearch due to the low coverage of clueweb for the entire relink collection we decided to just perform the evaluation using the top queries with highest number of relevance judgments in our indexes we also include results for the adapted test collection table results of erdm compared with three baselines baseee basee baser erdm map baseee basee baser erdm map baseee basee baser erdm map baseee basee baser erdm map p mrr ndcg erq p mrr ndcg complex p mrr ndcg relink queries p mrr ndcg results and analysis table presents the results of our experiments on each query set we start by comparing the three baselines among each other as follows from table baser baseline outperforms baseee and basee on all query sets while baseee is the worst performing baseline the baser retrieval is the only approach from the three baselines as its document collection comprises that in corpus baseee and basee retrieve entity pairs that are created in a step which reduces the probability of retrieving relevant results this results shows the need for a document collection when aiming to answer queries erdm significantly outperform all baselines on all query sets we performed statistical significance testing of map using erdm against each baseline obtaining below on all the query sets this results show that our early fusion approach using two indexes one for entities and other for relationships is adequate and promising we believe this approach can become a reference for future research in retrieval from an perspective nevertheless based on the absolute results obtained on each evaluation metric and for each query set we can conclude that retrieval is still very far from being a solved problem there is room to explore new feature functions and retrieval approaches this is a very difficult problem and the methods we proposed are still far from optimal performance queries such as find world war ii flying aces and their services or which mountain is the highest after annnapurna are examples of queries with zero relevant judgments returned on the other hand erdm exhibits interesting performance in some queries with high complexity such as computer scientists who are professors at the university where frederick terman was a we speculate about some aspects that might influence performance one aspect has to do with the lack of query relaxation in our experimental setup the relevant entity tuples might be in our indexes but if the query terms used to search for entity tuples do not match the query terms harvested from it is not possible to retrieve those relevant judgments query relaxation 
approaches should be tried in future work more specifically with the recent advances in word embeddings it is possible to expand queries with alternative query terms that are in the indexes on the other hand we adopted a very simple approach for extracting entities and relationships the use of dependency parsing and more complex methods of relation extraction would allow to filter out noisy terms we also leave this for future work moreover to further assess the influence of the extraction method we propose to use retrieval over a web corpus selective text passages containing the target entity pairs and the query terms associated as well then different extraction methods could be tried and straightforward evaluation of their impact a b c fig values of for erdm a all b c b and c were obtained using sum normalization to understand how much importance is attributed to the different types of clique sets we plot the values of the lambda parameters parameters represent the feature importance of the set of functions targeting the dependence between entity query terms and the entity documents in overall ranking score for represent the importance of the feature functions of the relationship type queries and finally the value for which is assigned to the feature function that evaluates if each entity retrieved from both entity type queries belongs to the retrieved from the relationship type query summary of the contributions we plot the feature weights learned on each query set as depicted in figure we see that and t weight for the unigram language model in the entity type queries dominate the ranking function we further evaluated the relative weights for each one of the three functions using a sum normalization of the three weights for both entity documents and documents we observe that t dominates on r every query set however the same does not happen with for relationship type queries the bigram features have higher values for complex and relink summary of the contributions in this chapter we presented the following contributions to the retrieval research area indexing method that supports generalization of entity types and to any attribute and predicate respectively a method for generating test collections which resulted in the relink query collection comprising queries results of experiments at scale with a comprehensive set of queries and corpora chapter entity filtering and financial sentiment analysis in this chapter we present the work developed to tackle two fundamental text mining problems in orm entity filtering and sentiment analysis we start by describing our participation at the filtering task of replab we developed a supervised method to classify tweets as relevant or to given target entity this method obtained the first place at the competition entity filtering can be seen as target based named entity disambiguation ned given a target entity under study we need to develop a binary classifier to filter out tweets that are not talking about the target entity this task is fundamental in orm as downstream tasks such as sentiment analysis or predictions would produce misleading results if noisy signals were used sentiment analysis has been widely studied over the last decade it is a research area with several ramifications as it is dependent on the type of texts and the objective of the analysis we decided to focus our efforts in a not so well explored of sentiment analysis semeval task focused on sentiment analysis of financial news and microblogs as one of the use cases of orm is to 
track the online reputation of companies and try to assess its impact on the stock market we decided it was a specific task within sentiment analysis in which we could make a contribution we obtained the fourth place in the microblogs using one of the evaluation metrics the task consisted in predicting a real continuous variable from to representing the polarity and intensity of sentiment concerning mentioned in short texts we modeled it as a regression analysis problem entity filtering and financial sentiment analysis entity the relationship between people and public entities has changed with the rise of social media online users of social networks blogs and are able to directly express and spread opinions about public entities such as politicians artists companies or products online reputation monitoring orm aims to automatically process online information about public entities some of the common tasks within orm consist in collecting processing and aggregating social network messages to extract opinion trends about such entities twitter one of the most used online social networks provides a search system that allows users to query for tweets containing a set of keywords orm systems often use twitter as a source of information when monitoring a given entity however search results are not necessarily relevant to that entity because keywords can be ambiguous for instance a tweet containing the word columbia can be related with several entities such as a federal state a city or a university furthermore tweets are short which results in a reduced context for entity disambiguation when monitoring the reputation of a given entity on twitter it is first necessary to guarantee that all tweets are relevant to that entity consequently other processing tasks such as sentiment analysis will benefit from filtering out noise in the data stream in this work we tackle the aforementioned problem by applying a supervised learning approach given a set of entities e ei a stream of texts s si tweets we are interested in monitoring the mentions of an entity ei on the stream s the discrete function fm ei s we cast the prediction of fm as a supervised learning classification problem in which we want to infer the target variable ei s we implemented a large set of features that can be generated to describe the relationship between an entity representation and a text mention we use metadata entity names category provided in the user configurations text represented with similarity between texts and wikipedia freebase entities disambiguation feature selection of terms based on frequency and feature matrix transformation using svd the learning algorithms from python library that were tested for entity filtering include naive bayes svm random forests logistic regression and multilayer perceptron most of the material contained in this section was published in rodrigues soares oliveira texrep a text mining framework for online reputation monitoring entity filtering task overview replab focused on monitoring the online reputation of entities on twitter the filtering task consisted in determining which tweets are relevant to each entity the corpus consists of a collection of tweets obtained by querying the twitter search api with entity names during the period from the june until the december the corpus contains tweets both in english and spanish the balance between both languages varies for each entity tweets were manually annotated as related or unrelated to the respective target entity the data provided to participants 
consists in tweets and a list of entities for each tweet in the corpus we have the target entity id the language of the tweet the timestamp and the tweet id the content of each url in the tweets is also provided due to twitter s terms of service the participants were responsible to download the tweets using the respective id the data related with entities contain the query used to collect the tweets bmw the official name of the entity bayerische motoren werke ag the category of the entity automotive the content of its homepage and both wikipedia articles in english and spanish the entity filtering module includes methods to normalize texts by removing all punctuation converting text to lower case removing accents and converting characters to their ascii equivalent lists of stop words for several languages are also available which are used to filter out non relevant words we rely on the natural language toolkit nltk to provide those lists contrary to other types of online texts news or blog posts tweets contain informal and language including emoticons spelling errors wrong letter casing unusual punctuation and abbreviations therefore when dealing with tweets the entity filtering module uses a tokenizer optimized for segmenting words in tweets after tokenization we extract user mentions and urls and hashtags textual content features many different types of features can be used to optimize relevance classification including language models keyword similarities between tweets and entities as well as external resources projections we implemented a large number of those we assume that future users of our framework for orm will provide data entity filtering and financial sentiment analysis content prior to training and configuring the entity filtering module language model text is encapsulated in a single feature to avoid high dimensionality issues when adding other features a representation of unigrams bigrams and trigrams for training a text classifier which calculates the probability of a text being related to the expected entity the output probabilities of the classifier are used as a feature keyword similarity similarity scores between metadata and the texts obtained by calculating the ratio of the number of common terms in the texts and the terms of query and entity name similarities at character level are also available in order to include possible spelling errors in the text web similarity similarity between the text and the normalized content of the entity s homepage and normalized wikipedia articles are also available the similarity value is the number of common terms multiplied by logarithm of the number of terms in tweet freebase for each keyword of the entity s query that exists in the text two bigrams are created containing the keyword and the word these are submitted to the freebase search api and the list of retrieved entities are compared with the id of the target entity on freebase a freebase score is computed by using the inverse position of the target entity in the list of results retrieved if the target entity is the first result the score is if it is the second the score is and so on if the target entity is not in the results list the score is zero the feature corresponds to the maximum score of the extracted bigrams of each text category classifier a sentence category classifier is created using the wikipedia articles of each entity each sentence of the wikipedia articles is annotated with the category of the corresponding entity for unigrams bigrams and trigrams are 
calculated and a classifier svm is trained to classify each text the feature is the probability of the text being relevant to its target class experimental setup the dataset used for the competition consists of a collection of tweets both in english and spanish possibly relevant to entities from four domains automotive banking entity filtering dataset related unrelated total training development validation test table replab filtering task dataset description universities and dataset consists of a collection of tweets obtained by querying the twitter search api with entity names during the period from the june until the december the balance between both languages varies for each entity the complementary data about each target entity is the following query used to collect the tweets bmw official name of the entity bayerische motoren werke ag category of the entity automotive content of entity homepage wikipedia article both in english and spanish tweets were manually annotated as related or unrelated to the respective target entity the dataset is divided in training test and development table the training set consists in a total of tweets from which we were able to download approximately of tweets in the training set are labeled as related we split the training dataset into a development set and a validation set containing and of the original respectively we adopted a randomly stratified split approach per entity we group tweets of each target entity and randomly split them preserving the balance of related unrelated tweets the test dataset consists of tweets from which we were able to download we used the development set for trying new features and test algorithms we divided the development set in folds generated with the randomly stratified approach we used the validation set to validate the results obtained in the development set the purpose of this validation step is to evaluate how well the entity filtering classifier generalizes from its training data to the validation data and thus estimate how well it will generalize to the test set it allows us to spot overfitting after validation we trained the classifier using all of the data in the training dataset and evaluated in the test set entity filtering and financial sentiment analysis results we created different classifier runs using different learners features and we also created entity specific models as explained in table we applied selection of features based on frequency and transformation of content representation using svd the learners tested include naive bayes nb svm random forests rf logistic regression lr and multilayer perceptron mlp the evaluation measures used are accuracy and the official metric of the competition which is the harmonic mean of reliability and sensitivity we present results for the top models regarding the we replicated the best system at replab in the run run learner features no of models svm rf rf all all all global global per entity table entity filtering versions description table shows the results of top performing runs and the official baseline of the competition this baseline classifies each tweet with the label of the most similar tweet of target entity in the training set using jaccard similarity coefficient the baseline results were obtained using of the test set run acc val set official baseline best replab acc r s table official results for each version plus our validation set accuracy based on the results achieved we are able to conclude that the models of our classifier are able to generalize 
successfully results obtained in the validation set are similar to those obtained in the test set during development solutions based on one model per entity were consistently outperformed by solutions based on global models we also noticed during development that language specific models english and spanish did not exhibit improvements in global accuracy therefore we opted to use language as a feature results show that the best model uses the random forests entity filtering classifier with estimators for training a global model though the language modeling feature encapsulates text by using a specific model trained just with of of tweets we performed a break down analysis for each one of the four categories of replab using run model as depicted in figure we observe that university banking and automotive categories exhibit similar average results all above in contrast results for music shows it is a rather difficult category of entities to disambiguate achieving of in fact some of the entity names of this category contain very ambiguous tokens such as alicia keys the wanted or the script fig results grouped by entity s category using run the main goal of this task was to classify tweets as relevant or not to a given target entity we have explored several types of features namely similarity between keywords language models and we have also explored external resources such as freebase and wikipedia results show that it is possible to achieve an accuracy over and an of in a test set containing more than tweets of entities in future work we expect to include the possibility of using embedding to learn a joint embedding space of entities and words similar to entity filtering and financial sentiment analysis financial sentiment sentiment analysis on financial texts has received increased attention in recent years nevertheless there are some challenges yet to overcome financial texts such as microblogs or newswire usually contain highly technical and specific vocabulary or jargon making the development of specific lexical and machine learning approaches necessary most of the research in sentiment analysis in the financial domain has focused in analyzing subjective text labeled with explicitly expressed sentiment however it is also common to express financial sentiment in an implicit way business news stories often refer to events that might indicate a positive or negative impact such as in the news title company x will cut jobs economic indicators such as unemployment and change over time such as drop or increase can also provide clues on the implicit sentiment contrary to explicit expressions subjective utterances factual text types often contain objective statements that convey a desirable or undesirable fact recent work proposes to consider all types of implicit sentiment expressions the authors created a fine grained sentiment annotation procedure to identify polar expressions implicit and explicit expressions of positive and negative sentiment a target company of interest is identified in each polar expression to identify the sentiment expressions that are relevant the annotation procedure also collected information about the polarity and the intensity of the sentiment expressed towards the target however there is still no automatic approach either or machine learning based that tries to model this annotation scheme in this work we propose to tackle the aforementioned problem by taking advantage of unsupervised learning of word embeddings in financial tweets and financial news headlines to 
construct a syntactic and semantic representation of words. We combine these embeddings with traditional approaches, such as bag-of-words and financial lexicon-based features, to train a regressor for sentiment polarity and intensity. We study how different regression algorithms perform using all features in two different subtasks, microblogs and news headlines mentioning companies, and we compare how different combinations of features perform in both. The system source code and the word embeddings developed for the competition are publicly available. The material contained in this section was published in Saleiro, Rodrigues, Soares and Oliveira, "FEUP at SemEval Task: Predicting Sentiment Polarity and Intensity with Financial Word Embeddings".

Financial sentiment analysis: task overview. The SemEval task consisted of sentiment analysis of financial short texts and was divided into two subtasks, based on the type of text. The microblogs subtask consisted of StockTwits and tweets focusing on stock market events and assessments from investors and traders; companies were identified using stock symbols, the so-called cashtags ($AMZN for the company Amazon, for example). The news headlines subtask consisted of sentences extracted from Yahoo Finance and other financial news sources on the internet; in this case, companies were identified using their canonical names and were previously annotated by the task organizers.

Table: training set examples for both subtasks.
  Microblogs — company: JPMorgan; text span: "its time to sell banks"
  News headlines — company: Glencore; text span: "Glencore's annual results beat forecasts"

The goal of both subtasks was the following: predict the sentiment polarity and intensity for each of the companies mentioned in a short text instance (microblog message or news sentence). The sentiment score is a real continuous variable ranging from very negative to very positive, with the midpoint of the scale designating neutral sentiment. The table above presents two examples from the training set. The task organizers provided separate sets of microblog messages for training and testing, and likewise separate sets of news sentences for training and testing. Submissions were evaluated using the cosine similarity.

Financial word embeddings. Mikolov et al. created word2vec, a computationally efficient method to learn distributed representations of words, where each word is represented by a distribution of weights (embeddings) across a fixed set of dimensions. Furthermore, Mikolov et al. showed that this representation is able to encode syntactic and semantic similarities in the embedding space. The training objective of the skip-gram model defined by Mikolov et al. is to learn the target word representation (embeddings) that maximizes the prediction of its surrounding words in a context window. Given the word w_t in a vocabulary, the objective is to maximize the average log probability

\frac{1}{T} \sum_{t=1}^{T} \sum_{-c \le j \le c, \, j \ne 0} \log p(w_{t+j} \mid w_t),

where c is the size of the context window, T is the total number of words in the vocabulary and w_{t+j} is a word in the context window of w_t. After training, a low-dimensionality embedding matrix E encapsulates information about each word in the vocabulary and its use (surrounding contexts).

We used word2vec to learn word embeddings in the context of financial texts, using unlabeled tweets and news headlines mentioning companies in the S&P index. Tweets were collected using the Twitter streaming API, with cashtags of stock titles serving as request parameters; the Yahoo Finance API was used to request financial news feeds by querying the canonical names of the companies. The collected datasets comprise unlabeled tweets and news titles. We learned separate word embeddings for tweets and news headlines using the skip-gram model. We tried several configurations of hyperparameters (number of dimensions, minimum word count, context window size and number of negative samples per positive example) and selected the setup resulting in the best performance in both subtasks. Even though the text collections for training the embeddings were relatively small, the resulting embedding space exhibited the ability to capture semantic word similarities in the financial context. We performed simple algebraic operations to capture semantic relations between words, as described in Mikolov et al.; for instance, the model trained on tweets shows that vector("bearish") − vector("loss") + vector("gain") results in a vector whose most similar word representation is vector("bullish").
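As an illustration of this training setup, the snippet below sketches how such embeddings could be trained with the gensim implementation of word2vec (gensim >= 4 API) and how the analogy above could be queried; it also includes a small helper that averages word vectors over a text span, which is how the bag-of-embeddings features used below are built. The toy corpus and every hyperparameter value are placeholders, not the configuration used in the competition.

```python
import numpy as np
from gensim.models import Word2Vec

# Toy stand-in for the unlabeled financial tweets / headlines collections;
# in practice these would be the pre-processed external corpora.
sentences = [
    "bearish on banks expecting another loss this quarter".split(),
    "bullish on tech expecting a strong gain this quarter".split(),
    "glencore annual results beat forecasts shares gain".split(),
    "time to sell banks before another loss".split(),
]

# Skip-gram (sg=1) with negative sampling; all values are illustrative only.
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1,
                 sg=1, negative=5, epochs=50, workers=1, seed=1)

# Analogy in the spirit of: bearish - loss + gain ~ bullish.
print(model.wv.most_similar(positive=["bearish", "gain"], negative=["loss"], topn=2))

def average_embedding(tokens, wv):
    """Bag-of-embeddings feature: average of the word vectors of the tokens
    found in the vocabulary (zero vector if none are present)."""
    vectors = [wv[t] for t in tokens if t in wv]
    return np.mean(vectors, axis=0) if vectors else np.zeros(wv.vector_size)

print(average_embedding("its time to sell banks".split(), model.wv)[:5])
```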
Approach. In this section we describe the implementation details of the proposed approach.

Pre-processing. A set of operations is applied to every microblog message and news sentence in the training and test sets of both subtasks, as well as to the external collections used for training the word embeddings. Character encoding and stopwords: every message and headline was normalized to a single character encoding, and standard English stopword removal is applied. Company and cash obfuscation: both cashtags and canonical company-name strings were replaced by placeholder strings; dollar or euro signs followed by numbers were likewise replaced by a placeholder string. Mapping numbers and signs: numbers were mapped to strings using bins; minus and plus signs were converted to "minus" and "plus"; "B" and "M" were converted to "billions" and "millions", respectively; the "%" symbol was converted to "percent"; question and exclamation marks were also converted to strings. Tokenization, punctuation and lowercasing: tokenization was performed using Twokenizer, the remaining punctuation was removed and all characters were converted to lowercase.

Features. We combined three different groups of features: bag-of-words (BoW), sentiment lexicon features (Lex) and bag-of-embeddings (BoE). BoW: we apply standard bag-of-words representations as features; we tried unigrams and larger n-grams, with unigrams proving to obtain higher cosine similarity in both subtasks. Sentiment lexicon features: we incorporate knowledge from manually curated sentiment lexicons for generic sentiment analysis, as well as lexicons tailored for the financial domain. The financial sentiment dictionary has several types of word classes (positive, negative, constraining, litigious, uncertain and modal); for each word class we create a binary feature for the match with a word in the text span, and a polarity score feature (positive minus negative, normalized by the text span length). As a generic sentiment lexicon we use MPQA, and created binary features for positive, negative and neutral words, as well as the corresponding polarity score feature. BoE: we create bag-of-embeddings features by taking the average of the word vectors of each word in a text span, using the corresponding embedding matrix trained on the external Twitter and Yahoo Finance collections for the microblogs and news headlines subtasks, respectively.

Experimental setup. In order to avoid overfitting, we created a validation set from the original training datasets provided by the organizers. We sampled the validation set using the same distribution as the original training set: we sorted the examples in the training set by the values of the target variable and selected examples at a regular interval. Results are evaluated using cosine similarity and mean absolute error (MAE); the former gives more importance to differences in the polarity of the predicted sentiment, while the latter is concerned with how well the system predicts the intensity of the sentiment. We opted to model both subtasks as single-target regression problems. Three different regressors were applied: Random Forests (RF), Support Vector Machines (SVM) and MultiLayer Perceptron (MLP). Parameter tuning was carried out using cross-validation on the training sets.

Results and analysis. In this section we present
the experimental results obtained in both we provide comparison of different learning algorithms using all features as well as a comparison of different subsets of features to understand the information contained in each of them and also how they complement each other task microblogs table presents the results obtained using all features in both validation set and test sets results in the test set are worse than in the validation set with the exception of mlp the official score obtained in was using random forests rf which is the regressor that achieves higher cosine similarity and lower mae in both training and validation set regressor set rf val rf test svr val svr test mlp val mlp test table microblog results with all cosine mae features on validation and test sets financial sentiment analysis we compared the results obtained with different subsets of features using the best regressor rf as depicted in table interestingly bow and boe complement each other obtaining better cosine similarity than the system using all features financial word embeddings boe capture relevant information regarding the target variables as a single group of features it achieves a cosine similarity of and mae of it is also able to boost the overall performance of bow with gains of more than in cosine similarity and reducing mae more than the individual group of features with best performance is while the worst is a system trained using lex only features while lex alone exhibits poor performance having some value but marginal when combined with another group of features it improves the results of the latter as in the case of boe lex and bow lex features cosine mae lex boe bow boe lex bow lex bow boe all table features performance breakdown on test set using rf task news headlines results obtained in news headlines are very different from the ones of the previous proving that predicting sentiment polarity and intensity in news headlines is a completely different problem compared to microblogs table shows that mlp obtains the best results in the test set using both metrics while svr obtains the best performance in the validation set the best regressor of rf is outperformed by both svr and mlp the official result obtained at was a cosine similarity of using mlp table shows the results of the different groups of features in for mlp regressor the most evident observation is that word embeddings are not effective in this scenario on the other hand lexical based features have significantly better performance in news headlines than in microblogs despite this the best results are obtained using all features entity filtering and financial sentiment analysis regressor set rf val rf test svr val svr test mlp val mlp test table news headlines results with features boe lex bow boe lex bow lex bow boe all table features performance cosine mae all features on validation and test sets cosine mae breakdown on test set using mlp analysis financial word embeddings were able to encapsulate valuable information in microblogs but not so much in the case of news headlines we hypothesize that as we had access to a much smaller dataset for training financial word embeddings for news headlines this resulted in reduced ability to capture semantic similarities in the financial domain other related works in sentiment analysis usually take advantage of a much larger dataset for training word embeddings on the other hand lexical features showed poor performance in microblog texts but seem to be very useful using news headlines the fact that 
microblogs have poor grammar slang and informal language reveals that financial lexicons created using well written and formal financial reports result better in news headlines rather than in microblog texts after inspecting microblog texts and headlines in which our models showed poor performance we believe it would be important to also encapsulate syntactic and semantic dependencies in our models for instance our model predicted a sentiment score of for the microblog message was right to reject the offer while the true value is similar examples include glencore shares in record crash as profit fears grow and i would rather be a buyer at these levels then trying to sell in which our models summary of the contributions has absolute errors around other type of errors have to do with intensity of the sentiment in which our model correctly predicts the polarity but still has a large error concluding remarks work reported here reported is concerned with the problem of predicting sentiment polarity and intensity of financial short texts previous work showed that sentiment is often depicted in an implicit way in this domain we created continuous word representations in order to obtain domain specific syntactic and semantic relations between words we combined traditional and lexicalbased features with to train a regressor of both sentiment polarity and intensity results show that different combination of features attained different performances on each future work will consist on collecting larger external datasets for training financial word embeddings of both microblogs and news headlines we also have planned to perform the regression analysis using deep neural networks summary of the contributions in this chapter we present some contributions to two fundamental text mining problems in orm a supervised learning approach for entity filtering on tweets achieving performance using a relatively small training set created and made available word embeddings trained from financial texts a supervised learning approach for sentiment analysis of financial texts chapter prediction in this chapter we explore the predictive power of information in online news and social media in the context of orm we address two different predictive tasks the first is concerned with predicting entity popularity on twitter based on signals extracted from the news cycle we aim to study different sets of signals extracted from online news mentioning specific entities that could influence or at least are correlated with future popularity of those entities on twitter we know that entity popularity on social media can be influenced by several factors but we are only interested in exploring the interplay between online news and social media for entities that are frequently mentioned on the news cycle such as politicians or footballers this could be particularly interesting for anticipating public relations damage control once a polemic news article is published or even for editorial purposes to maximize buzz on social media the second predictive task consists in using sentiment polarity extracted from tweets to predict political polls there has been several research work trying to assess the predictive power of social media to predict the outcome of political opinion surveys or elections however each study proposes its own method of aggregating polarity scores over time however there is not a consensus on which sentiment aggregate function is the most adequate for this problem we propose to use and contrast several sentiment 
aggregate functions reported in the literature by assessing their predictive power on a specific case comprising data collected during the portuguese bailout prediction exploring online news for reputation monitoring on twitter online publication of news articles has become a standard behavior of news outlets while the public joined the movement either using desktop or mobile terminals the resulting setup consists of a cooperative dialog between news outlets and the public at large latest events are covered and commented by both parties in a continuous basis through the social media such as twitter when sharing or commenting news on social media users tend to mention the most predominant entities mentioned in the news story therefore entities such as public figures organizations companies or geographic locations can act as latent connections between online news and social media online reputation monitoring orm focuses on continuously tracking what is being said about entities on social media and online news automatic collection and processing of comments and opinions on social media is now crucial to understand the reputation of individuals and organizations and therefore to manage their public relations however orm systems would be even more useful if they would be able to know in advance if social media users will talk a lot about the target entities or not we hypothesize that for entities that are frequently mentioned on the news politicians it is possible to establish a predictive link between online news and popularity on social media we cast the problem as a supervised learning classification approach to decide whether popularity will be high or low based on features extracted from the news cycle we define four set of features signal textual sentiment and semantic we aim to respond to the following research questions is online news a valuable source of information to effectively predict entity popularity on twitter do online news carry different predictive power based on the nature of the entity under study how do different thresholds for defining high and low popularity affect the effectiveness of our approach does the performance remain stable for different prediction times what is the most important feature set for predicting entity popularity on twitter based on the news cycle the material contained in this section was published in saleiro and soares learning from the news predicting entity popularity on twitter exploring online news for reputation monitoring on twitter do individual sets of features exhibit different importance for different entities approach the starting point of our hypothesis is that for entities that are frequently mentioned on the news politicians it is possible to predict popularity on social media using signals extracted from the news cycle the first step towards a solution requires the definition of entity popularity on social media entity popularity there are different ways of expressing the notion of popularity on social media for example the classical way of defining it is through the number of followers of a twitter account or the number of likes in a facebook page another notion of popularity associated with entities consists on the number of retweets or replies on twitter and post likes and comments on facebook we define entity popularity based on named entity mentions in social media messages mentions consist of specific surface forms of an entity name for example cristiano ronaldo might be mentioned also using just ronaldo or given an set of 
entities E = {e_i}, a daily stream of social media messages S = {s_t} and a daily stream of online news articles N = {n_t}, we are interested in monitoring the mentions of an entity e_i in the social media stream S_t through the discrete function f_M(e_i, S_t). Let T be a daily time frame, T = [t_p, t_p + Δ], where t_p is the time of prediction and Δ is the prediction horizon. We want to learn a target popularity function f_P on the social media stream S as a function of the given entity e_i, the online news stream N and the time frame T:

f_P(e_i, N, T) = Σ_{t ∈ T} f_M(e_i, S_t),

which corresponds to integrating f_M(e_i, S) over T. Given a day d_i and a time of prediction t_p, we extract features from the news stream N until t_p and predict f_P until the prediction horizon. We measure popularity on a daily basis and, consequently, the prediction horizon extends until the end of the day under prediction. For example, if t_p corresponds to a given hour of day d_i, we extract features from N until that hour and predict f_P in the interval between that hour and the end of d_i; in the case of t_p equal to midnight, we extract prediction features from N over the hours of the previous day to predict f_P for the hours of d_i.

We cast the prediction of f_P(e_i, N, T) as a supervised learning classification problem in which we want to infer the target variable ŷ(e_i, N, T), defined as

ŷ(e_i, N, T) = low if f_P(e_i, N, T) ≤ F^{-1}(k), and high if f_P(e_i, N, T) > F^{-1}(k),

where F^{-1}(k) is the inverse of the cumulative distribution function at k of f_P(e_i, N, T), as measured in the training set (a similar approach to Tsagkias et al.). For instance, k corresponding to the median of f_P(e_i, N, T) in the training set yields a balanced split, and higher values of k mean that f_P(e_i, N, T) has to be higher than a fraction k of the examples in the training set to be considered high, resulting in a reduced number of training examples of the positive class (high).
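Purely as an illustration of the definitions above, and not the thesis code, the following sketch derives the high/low label from daily mention counts: f_P is the count of mentions in the prediction window and the decision threshold is the empirical k-quantile (inverse CDF at k) of past popularity values. The DataFrame layout, column names and the helper name popularity_label are assumptions.

```python
import pandas as pd

def popularity_label(mentions, entity, t_p, horizon_hours, k, train_values):
    """f_P over [t_p, t_p + horizon) binarised with the empirical k-quantile
    (inverse CDF at k) of popularity values measured on the training set."""
    start = pd.Timestamp(t_p)
    end = start + pd.Timedelta(hours=horizon_hours)
    window = mentions[(mentions["entity"] == entity)
                      & (mentions["timestamp"] >= start)
                      & (mentions["timestamp"] < end)]
    f_p = len(window)                      # f_P = sum of mention counts f_M over T
    threshold = train_values.quantile(k)   # F^{-1}(k) estimated on the training set
    return "high" if f_p > threshold else "low"

# Tiny example with made-up data (entities, dates and counts are illustrative only).
mentions = pd.DataFrame({
    "entity": ["e1", "e1", "e2"],
    "timestamp": pd.to_datetime(["2015-01-02 13:00", "2015-01-02 18:30", "2015-01-02 14:00"]),
})
train_popularity = pd.Series([1, 3, 5, 2, 4])   # past daily f_P values of e1
print(popularity_label(mentions, "e1", "2015-01-02 12:00", 12, 0.5, train_popularity))
```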
News features. Previous work has focused on the influence of characteristics of the social media stream S on the adoption and popularity of memes and hashtags. In contrast, the main goal of this work is to investigate the predictive power of the online news stream N. Therefore, we extract four types of features from N, which we label (i) signal, (ii) textual, (iii) sentiment and (iv) semantic, as depicted in the summary table below. One important issue is how to filter news items relevant to e_i. There is no consensus on how to link a news stream N with a social media stream: some works use URLs from N shared on S to filter simultaneously relevant news articles and social media messages. As our work is entity oriented, we select news articles with mentions of e_i as our relevant N.

Signal features. This type of features depicts the signal of the news cycle mentioning e_i, and we include a set of counting variables as features, focusing on the total number of news mentioning e_i in specific time intervals, mentions in news titles, the average length of news articles and the number of different news outlets that published news mentioning e_i, as well as features specific to the day of the week, to capture any seasonal trend in popularity. The idea is to capture the dynamics of news events: for instance, if e_i has a sudden peak of mentions on N, a relevant event might have happened, which may influence f_P.

Textual features. To collect textual features, we build a daily profile of the news cycle by aggregating all titles of online news articles mentioning e_i for the daily time frame until t_p in d_i. We select the top most frequent terms (unigrams and longer n-grams) in the training set and create a term matrix. Two distinct methods were applied to capture textual features. The first method is to apply TF-IDF weighting to this matrix, followed by singular value decomposition (SVD) to capture similarity between terms and reduce dimensionality; it computes a low-rank linear approximation of the term matrix. The final set of features for training and testing is the TF-IDF weighted matrix R combined with the SVD projection, which produces real-valued latent features. When testing, the system uses the same terms from the training data and calculates the weights using the IDF from the training data, as well as for applying the SVD on test data. The second method consists in applying Latent Dirichlet Allocation (LDA) to generate a topic model with a fixed number of topics as features. The system learns a topic distribution and a word distribution over topics using the training data for a given entity e_i. When testing, the system extracts the word distribution of the news title vector r on a test day; then, using the topic model learned on the training data, it calculates the probability of r belonging to each of the topics learned before. The objective of extracting this set of features is to create a characterization of the news stream that mentions e_i, namely which are the most salient terms and phrases on each day d_i, as well as the latent topics associated with e_i. By learning our classifier, we hope to obtain correlations between certain terms and topics and f_P.

Sentiment features. We include several types of word-level sentiment features. The assumption here is that subjective words in the news will result in more reactions on social media, as reported in previous work. Once again, we extract features from the titles of news mentioning e_i for the daily time frame until t_p. We use a sentiment lexicon, such as SentiWordNet, to extract subjective terms from the titles' daily profile and label them as positive, neutral or negative polarity. We compute count features for the number of positive, negative and neutral terms, as well as the difference and the ratio of positive and negative terms. Similarly to the textual features, we create a TF-IDF weighted matrix R using the subjective terms from the titles and apply SVD to compute real-valued sentiment latent features.

Semantic features. We use the number of different named entities recognized in N on day d_i until t_p, as well as the number of distinct news category tags extracted from the news feeds' metadata. These tags, common in news articles, consist of author-annotated terms and phrases that describe a sort of semantic hierarchy of news categories, topics and news stories (e.g., "european debt crisis"). We create TF-IDF weighted entity and tag matrices and apply SVD to each of them to reduce dimensionality. The idea is to capture interesting entities and tags, as well as news stories that are less transient in time and might be able to trigger popularity on Twitter.

[Table: summary of the four types of features we consider.]
Signal — news (three counting windows): number of news mentions of e_i in specific time intervals up to t_p on d_i; titles: number of title mentions in news of e_i until t_p on d_i; avg content: average content length of news of e_i until t_p on d_i; sources: number of different news sources of e_i until t_p on d_i; weekday: day of the week; is weekend: true if weekend, false otherwise.
Textual — tfidf titles: TF-IDF (with SVD) of news titles until t_p on d_i; lda titles: LDA topics of news titles until t_p on d_i.
Sentiment — pos: number of positive words in news titles until t_p on d_i; neg: number of negative words; neu: number of neutral words; ratio: positive over negative words; diff: positive minus negative words; subjectivity: share of subjective words (pos and neg) over all labeled words (pos, neg and neu); tfidf subj: TF-IDF (with SVD) of subjective words.
Semantic — entities: number of entities in news until t_p on d_i; tags: number of tags in news until t_p on d_i; tfidf entities: TF-IDF (with SVD) of entities in news until t_p on d_i; tfidf tags: TF-IDF (with SVD) of news tags until t_p on d_i.
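To make the two textual-feature methods concrete, here is a minimal scikit-learn sketch under the assumption that each training day is represented by one aggregated string of news titles. The dummy corpus, the n-gram range and the small dimensionalities are illustrative assumptions, not the values used in the experiments.

```python
# Sketch of the textual features: TF-IDF over daily news-title profiles reduced
# with SVD, plus LDA topic proportions per day.
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import TruncatedSVD, LatentDirichletAllocation

daily_titles = [                     # one string per training day d_i (dummy data)
    "party leader announces new austerity budget measures",
    "coalition partners clash over austerity budget vote",
    "leader visits flood victims and promises support funds",
    "opposition demands debate on budget and flood response",
]

tfidf = TfidfVectorizer(ngram_range=(1, 2))
x_tfidf = tfidf.fit_transform(daily_titles)          # TF-IDF weighted term matrix
svd = TruncatedSVD(n_components=2, random_state=0)
x_latent = svd.fit_transform(x_tfidf)                # real-valued latent features

counts = CountVectorizer()
x_counts = counts.fit_transform(daily_titles)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
x_topics = lda.fit_transform(x_counts)               # per-day topic distribution

# At test time the fitted objects are reused with the training vocabulary and IDF:
# svd.transform(tfidf.transform(test_titles)) and lda.transform(counts.transform(test_titles))
```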
Learning framework. Let x be the feature vector extracted from the online news stream N on day d_i until t_p. We want to learn the probability P(ŷ | x). This can be done using the inner product between x and a weighting parameter vector w ∈ R^n, i.e. w·x, using logistic regression; for binary classification, with ŷ ∈ {−1, +1} (low, high), one can unify the definition of P(ŷ = high | x) and P(ŷ = low | x) with

P(ŷ | x) = 1 / (1 + e^{−ŷ w·x}).

Given a set of Z pairs (x_i, ŷ_i), with i = 1, …, Z and ŷ_i ∈ {−1, +1}, we solve the penalized logistic regression optimization problem

min_w  (1/2) wᵀw + C Σ_{i=1}^{Z} log(1 + e^{−ŷ_i w·x_i}),

where C > 0 is the regularization parameter. We apply this approach on an entity-specific basis: we train an individual model for each entity. Given a set of entities E to which we want to apply our approach and a training set of example days D = {d_i}, we extract a feature vector x_i for each entity e_i on each training day d_i. Therefore, we are able to learn a model w for each e_i. The assumption is that popularity on social media, f_P, is dependent on the entity e_i, and consequently we extract entity-specific features from the news stream N; for instance, the top words of the news titles mentioning e_i are not the same as for e_j.

Experimental setup. This work uses Portuguese news feeds and tweets collected from January to January, consisting of millions of tweets and of online news articles. To collect and process raw Twitter data we use a crawler which recognizes and disambiguates named entities on Twitter. News data is provided by a Portuguese online news service; this service handles online news from a large number of Portuguese news outlets and is able to recognize entities mentioned in the news. We choose the two most common news categories, politics and football, and select the entities with the highest number of mentions in the news for both categories. The politicians are two former prime ministers, Pedro Passos Coelho being one of them, and the incumbent, Costa; the football entities are two coaches, Jorge Jesus and Mourinho, and the most famous Portuguese football player, Cristiano Ronaldo. The figure below depicts the behavior of the daily popularity of the six entities on the selected community stream of Twitter users, for each day from July until July of the collection period. As expected, it is easily observable that on some days the popularity on Twitter exhibits bursty patterns, for instance when one of the former prime ministers was arrested in November or when Cristiano Ronaldo won the FIFA Ballon d'Or in January. (The dataset is available for research purposes upon request.)

[Figure: daily popularity on Twitter of the entities under study (Pedro Passos Coelho, Costa, Jorge Jesus, Cristiano Ronaldo, Mourinho), August through June.]

We defined the first two years as training set and the whole final year as test set. We applied a monthly sliding-window setting in which we start by predicting entity popularity for every day of January of the test set, using a model trained on the days of the previous months (the training set); then we use February as the test set, using a new model trained on the previous months; then March, and so on, as depicted in the figure below. We perform this evaluation process, rolling the training and test sets, until December, resulting in a full year of days under evaluation.

[Figure: training and testing sliding window, first iterations.]

The process is applied for each one of the six entities, for different times of prediction t_p and for different values of the decision boundary k. We report results for the different experimental settings for each one of the six entities. The goal is to understand how useful the news cycle is for predicting entity popularity on Twitter, for different entities, at different hours of the daily cycle and with different thresholds for considering popularity as high or low.
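A minimal sketch of the per-entity learning setup described above, using scikit-learn's penalized logistic regression; the synthetic data, the regularization value C and the entity names are assumptions for illustration, not the experimental configuration.

```python
# One penalised logistic regression model per entity: rows are training days d_i,
# columns are the news features, labels are the high/low popularity classes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
entities = ["entity_a", "entity_b"]                  # placeholders for e_i
models = {}
for entity in entities:
    X_train = rng.normal(size=(60, 10))              # entity-specific feature vectors x_i
    y_train = (X_train[:, 0] > 0).astype(int)        # 1 = "high", 0 = "low" (toy labels)
    clf = LogisticRegression(penalty="l2", C=1.0, solver="liblinear")
    clf.fit(X_train, y_train)                        # learns one weight vector w per e_i
    models[entity] = clf

x_today = rng.normal(size=(1, 10))                   # features extracted from N until t_p
p_high = models["entity_a"].predict_proba(x_today)[0, 1]
print(f"P(high | x) = {p_high:.2f}")
```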
Results and discussion. Results are depicted in the table below. We report the score on the positive class, since in online reputation monitoring it is more valuable to be able to predict high popularity than low. Nevertheless, we also calculated overall accuracy results, which were better than the ones reported here; consequently, this means that our system is fairly capable of predicting low popularity. We organize this section based on the research questions presented at the beginning of this section.

Is online news a valuable source of information to effectively predict entity popularity on Twitter? Do online news carry different predictive power based on the nature of the entity under study? Results show that performance varies with each target entity e_i. In general, results are better in the case of predicting the popularity of politicians. In the case of football public figures, Jorge Jesus exhibits results similar to the three politicians, but Mourinho and especially Cristiano Ronaldo represent the worst results in our setting. For instance, when Cristiano Ronaldo scores three goals in a match, the burst in popularity is almost immediate and not possible to predict in advance. Further analysis showed that online news failed to be informative of popularity in the case of live events covered by other media: TV interviews and debates, on one hand, and live football games, on the other, consist of events with unpredictable effects on popularity. Cristiano Ronaldo can be considered a special case in our experiments: he is by far the most famous entity in our experiments and, in addition, he is also an active Twitter user with an extremely large number of followers. This work focuses on assessing the predictive power of online news and its limitations; we assume that for Cristiano Ronaldo endogenous features from Twitter itself would be necessary to obtain better results.

[Table: score for the positive class (high popularity) as a function of t_p and k, for each of the entities under study (Costa, Pedro Passos Coelho, Cristiano Ronaldo, Jorge Jesus, Mourinho, among the six).]

How do different thresholds for defining high and low popularity affect the effectiveness of our approach? Our system exhibits top performance with the lowest value of k, which corresponds to balanced training sets with the same number of high and low popularity examples in each training set. Political entities exhibit the highest scores with that value of k. On the other hand, as we increase k, performance deteriorates: we observe that for the highest values of k the system predicts a very high number of false positives. It is very difficult to predict extreme values of popularity on social media before they happen. We plan to tackle this problem in the future by also including features about the target variable in the current and previous hours as additional components.

Does performance remain stable for different times of prediction? Results show that the time of prediction affects the performance of the system, especially for the political entities: in their case, the score is higher when the time of prediction is noon, which is evidence that, in politics, most of the news events that trigger popularity on social media are broadcast by news outlets in the morning.

[Figure: individual feature type score per time of prediction t_p and threshold k.]

It is very interesting to compare results for midnight
and the former use the news articles from the previous day as explained in section while the latter use news articles from the first hours of the day under prediction in some examples twitter popularity was triggered by events depicted on the news from the previous day and not from the current day what is the most important feature set for predicting entity popularity on twitter based on the news cycle do individual set of features exhibit different importance for different entities figure tries to answer these two questions the first observation is that the combination of all groups of features does not lead to substantial improvements semantic features alone achieve almost the same score as the combination of all features however in the case of mourinho and ronaldo the combination of all features lead to worse results than the semantic set alone prediction sentiment features are the second most important for all entities except mourinho signal and textual features are less important and this was somehow a surprise signal features represent the surface behavior of news articles such as the volume of news mentions of ei before tp and we were expecting an higher importance regarding textual features we believe that news articles often refer to terms and phrases that explain past events in order to contextualize a news article in future work we consider alternative approaches for predicting future popularity of entities that do not occur everyday on the news but do have social media public accounts such as musicians or actors in opposition entities that occur often on the news such as economics ministers and the like but do not often occur in the social media pose also a different problem predicting political polls using twitter surveys and polls using the telephone are widely used to provide information of what people think about parties or political entities surveys randomly select the electorate sample avoiding selection bias and are designed to collect the perception of a population regarding some subject such as in politics or marketing however this method is expensive and time consuming furthermore over the years it is becoming more difficult to contact people and persuade them to participate in these surveys on the other hand the rise of social media namely twitter and facebook has changed the way people interact with news this way people are able to react and comment any news in real time one challenge that several research works have been trying to solve is to understand how opinions expressed on social media and their sentiment can be a leading indicator of public opinion however at the same time there might exist simultaneously positive negative and neutral opinions regarding the same subject thus we need to obtain a value that reflects the general image of each political target in social media for a given time period to that end we use sentiment aggregate functions in summary a sentiment aggregate function calculates a global value based on the number of positive negative and neutral mentions of each political target in a given period we conducted an exhaustive study and collected and implemented several sentiment aggregate functions from the state of the art the material contained in this section was published in saleiro gomes soares sentiment aggregate functions for political opinion polling using microblog streams predicting political polls using twitter sentiment thus the main objective of our work is to study and define a methodology capable of successfully estimating the 
poll results based on opinions expressed on social media represented by sentiment aggregators we applied this problem to the portuguese bailout case study using tweets from a sample of the portuguese tweetosphere and portuguese polls as gold standard given the monthly periodicity of polls we needed to aggregate the data by month this approach allows each aggregate value to represent the monthly sentiment for each political party due to the absence of a general sentiment aggregate function suitable for different case studies we decided to include all aggregate functions as features of the regression model therefore the learning algorithm is able to adapt to the most informative aggregate functions through time methodology to collect and process raw twitter data we use an online reputation monitoring platform which can be extended by researchers interested in tracking political opinion on the web it collects tweets from a predefined sample of users applies named entity disambiguation and generates indicators of both frequency of mention and polarity of mentions of entities over time in our case tweets are collected from the stream of thousand different users representing a sample of the portuguese community on twitter this sample was obtained by expanding a manually annotated seed set of users using heuristics such as as language of posts language of followers posts or the platform automatically classifies each tweet according to its sentiment polarity if a message expresses a positive negative or neutral opinion regarding an entity politicians it is classified as positive negative or neutral mention respectively the sentiment classifier uses a corpus of annotated tweets as training set and it has achieved an accuracy over using cross validation these tweets were manually annotated by political science students mentions of entities and respective polarity are aggregated by counting positive negative neutral and total mentions for each entity in a given period sentiment aggregate functions use these cumulative numbers as input to generate a new value for each specific time period since we want to use sentiment aggregate functions as features of a regression model to produce an estimate of the political opinion we decided to use traditional poll results as gold standard prediction sentiment aggregate functions let mei be a mention on twitter of an entity ei then and are positive neutral and negative classified mentions of entity ei on twitter therefore given a time frame t a month sentiment aggregate functions applied to the aggregated data between polls are the following p entitybuzz t mei the sum of the number of mentions buzz of a given entity in the time frame t entitypositives t sum of the positively classified mentions of a given entity in the time frame t p entityneutrals t the sum of the neutral classified mentions of a given entity in a time frame t p entitynegatives t the sum of the negatively classified mentions of a given entity in a time frame t p m p ei t p ei entitysubjectivity the ratio of positive and negative classified m e i t mentions of entity ei over its buzz in a time frame t p entitypolarity pt the ratio of positive over negative classified mentions in ei t a time frame t p ei t berminghamsovn p p the ratio of the negative classified mentions of t e entity ei over the total number of negative mentions of all entities in time frame p t p p bermingham t berminghamsovp t e p pt connor t p mei mei e p t p p gayo t polarity e p t p p p t polarityon eutral p p t p t t t p i 
predicting political polls using twitter sentiment p polarityot otal p p t p subjot otal subjn euv t m ei t p m ei t p t p t t t p t p p p e ei ei m m subjsov pt pei m t subjv ol p t share p m ei t p p t e m ei shareof n egdistribution p m p t ei mei t p p t i p e t in the poll where n is the number of political entities m ei p m pt meei i t p pt t mei p pt ei t m ei the sentiment aggregate functions are used as features in the regression models prediction fig negatives share berminghamsovn of political leaders in twitter data the data used in this work consists of tweets mentioning portuguese political party leaders and polls from august to december this period corresponds to the portuguese bailout when several austerity measures were adopted by the incumbent right wing governmental coalition of the psd and cds parties twitter table distribution of positive negative and neutral mentions per political party psd ps cds cdu be negative positive neutral total mentions the twitter data set contains classified messages collected from a network of thousand different users classified as portuguese table presents the distribution of positive negative and neutral mentions of the political leaders of the most voted political parties in portugal psd ps cds pcp and be the negative mentions represent the majority of the total mentions except for cdu where the number of negative mentions is smaller than the neutral ones the positive mentions represent less than of the total mentions of each party except for be where they represent predicting political polls using twitter sentiment of the total mentions the most mentioned parties are ps psd and cds the total mentions of these three parties represent of the data sample total mentions figure depicts the time series of the berminghamsovn negatives share sentiment aggregate function the higher the value of the function the higher is the percentage of negative tweets mention a given political entity in comparison with the other entities as expected pedro passos coelho psd as is the leader with the higher score throughout the whole time period under study paulo portas cds leader of the other party of the coalition and also member of the government is the second most negatively mentioned in the period while seguro ps is in some periods the second higher psd and cds are the incumbent parties while ps is the main opposition party in the time frame under study psd and cds as government parties were raising taxes and cutting salaries ps was the incumbent government during the years that led to the bailout and a fraction of the population considered responsible for the financial crisis the bailout and the consequent austerity measures could explain the overwhelming percentage of negative mentions although we verified that in other time periods the high percentage of negatives mentions remains we can say that twitter users of this sample when mentioning political leaders on their tweets tend to criticize them political opinion polls the polling was performed by eurosondagem a portuguese private company which collects public opinion this data set contains the monthly polls results of the five main portuguese parties from june to december figure represents the evolution of portuguese polls results we can see two main party groups the first group where both psd and ps are included has a higher value of vote intention above psd despite starting as the preferred party in vote intention has a downtrend along the time losing the leadership for ps in september on the other 
hand, PS has in general an uptrend. The second group, composed of CDS, PCP and BE, has a lower range of vote intention. While CDS has a downtrend in public opinion, PCP has an ascendant one. Despite these fairly constant tendencies (up and down trends), we noticed that the maximum variation observed between two consecutive months occurs in June: there was a political crisis in the government, when CDS threatened to leave the government coalition due to the austerity measures being implemented, and it corresponds to the moment when PS takes the lead in the polls.

[Figure: representation of the monthly poll results of each political candidate.]

Experimental setup. We defined the period up to December as training set and the whole following year as test set. We applied a sliding-window setting in which we predict the poll results of a given month using the previous months as training set: the training set contains the monthly values of the aggregators (both sentiment and buzz aggregators) for the months prior to the month intended to be predicted, and the test set contains the values of the aggregators (both sentiment and buzz aggregators) of the month intended to be predicted. We start by predicting the poll results of January using the previous months as training set: we select the values of the aggregators of the months prior to January (September to December) and use that data to train our regression model; then we input the aggregator values of January (the first record of the test set) into the trained model to obtain the poll results prediction. We then select the next month of the test set and repeat the process until all months are predicted. The models are created using two regression algorithms: a linear regression algorithm, Ordinary Least Squares (OLS), and Random Forests (RF). We also run an experiment using the derivative of the polls time series as gold standard (poll results variations from poll to poll); thus, we also calculate the variations of the aggregate functions from month to month as features. Furthermore, we repeat each experiment including and excluding the lagged self of the polls, i.e. the last poll result for a given candidate, or the last poll result variation when predicting poll variations. We use the Mean Absolute Error (MAE) as evaluation measure to determine the absolute error of each prediction; then we calculate the average of the twelve MAEs so we know the global prediction error of our model:

MAE = (1/n) Σ_{i=1}^{n} |f_i − y_i|,

where n is the number of forecasts, f_i is the model's forecast and y_i the real outcome.

Results and discussion. In this section we explain in detail the experiments and their results. We perform two different experiments: using absolute values and using monthly variations.

Predicting polls results. In this experiment the sentiment aggregators take absolute values in order to predict the absolute values of the poll results. Mathematically speaking, this experiment can be seen as y = f(BuzzAggregators, SentimentAggregators). In the figure below we see the global errors we obtained. The results show the MAE for the parties' poll results over the twelve months using Ordinary Least Squares and using Random Forests. The lagged self of the polls, i.e. assuming the last known poll result as prediction, results in a low MAE, which was expectable since the polls exhibit slight changes from month to month. This experiment shows that the inclusion of the lagged self as a feature produces average errors similar to the lagged-self prediction itself.

[Figure: prediction errors for polls results.]
[Figure: prediction errors for polls results variation.]
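The following sketch mirrors the monthly sliding-window evaluation and the MAE computation described above, with synthetic data standing in for the aggregate-function features and the poll values; the window size, the feature dimensionality and the number of months are assumptions chosen only so that twelve test months are produced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
X = rng.normal(size=(28, 12))        # one row per month: sentiment/buzz aggregate features
y = rng.normal(size=28)              # gold standard: monthly poll result (or its variation)

window = 16                          # train on the preceding months, test on the next one
maes = {"ols": [], "rf": []}
for t in range(window, len(X)):      # 12 test months in this toy setup
    X_tr, y_tr = X[t - window:t], y[t - window:t]
    X_te, y_te = X[t:t + 1], y[t:t + 1]
    for name, model in (("ols", LinearRegression()),
                        ("rf", RandomForestRegressor(n_estimators=100, random_state=0))):
        model.fit(X_tr, y_tr)
        maes[name].append(mean_absolute_error(y_te, model.predict(X_te)))

# Global prediction error: average of the per-month MAEs, as in the text.
print({name: round(float(np.mean(v)), 3) for name, v in maes.items()})
```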
predicting polls results variation according to our exploratory data analysis the polls results have a small variation between two consecutive months thus instead of predicting the absolute value of poll results we tried to predict the variation in this particular experiment the inclusion of the as feature in the regression model has not a determinant role figure including that feature we could not obtain lower mae than excluding it it means that the real monthly poll variation is not constant over the year in general using a regression algorithm we obtain lower mae the results show that when leading with polls results with slight predicting political polls using twitter sentiment fig mean absolute error buzz vs sentiment changes from poll to poll it makes sense to transform the dataset by taking differences between consecutive buzz and sentiment several studies state that the buzz has predictive power and reflects correctly the public opinion on social media following that premise we trained our models with buzz and sentiment aggregators separately to predict polls variations this experiment allowed us to compare the behavior of buzz and sentiment aggregators according to figure buzz and sentiment aggregators have similar results although the ols algorithm combined only with buzz aggregators has a slightly lower error than the other models it is not a significant improvement these results also show that random forests algorithm performs the best when combined only with sentiment aggregators feature selection one of the main goals of our work is to understand which aggregator or group of aggregators better suits our case study according to the previous experiments we can achieve lower prediction errors when training our model with buzz and sentiment aggregators separately however when training our model with these two kinds of aggregators separately we are implicitly performing feature selection we only have prediction two buzz features share and due to that small amount of features it was not necessary to perform any feature selection technique within buzz features thus we decided to apply a feature selection technique to the sentiment aggregators in order to select the most informative ones to predict the monthly polls results variation we use univariate feature selection selecting of the sentiment features total of features using this technique the random forests global error rose from to however ols presents an mae drop from to another important fact to notice is that if we perform univariate feature selection to all aggregators buzz and sentiment we will achieve the same mae value that when applied only to sentiment aggregators it means that buzz aggregators are discarded by the feature selection technique we try a different approach and perform a recursive feature elimination technique in this technique features are eliminated recursively according to a initial score given by the external estimator this method allows us to determine the number of features to select thus also selecting features the ols mae drops to once again none of the buzz features were selected furthermore both feature selection techniques select different features for each monthly prediction feature importance we select the random forest model of monthly variations to study the features importance as depicted in figure the higher the score the more important the feature is the importance of a feature is computed as the normalized total reduction of the criterion brought by that feature it is also known as the 
gini importance values correspond to the average of the gini importance over the different models trained in the experiments the single most important feature is the bermingham aggregate function followed by neutrals it is important to notice that when combining all the aggregate functions as features in a single regression model the buzz does not comprise a high gini importance even though when used as a single feature it produces similar results to the sentiment aggregate functions in general the standard deviation of the gini importance is relatively high this has to do with our experimental setup as the values depicted in the bar chart correspond to the average of the gini importance over different models months of testing set therefore feature importances vary over time while the mae tends to remain unchanged we can say that different features have different informative value over time and consequently it is useful to combine all the sentiment aggregation functions as features of the regression models over time predicting political polls using twitter sentiment fig aggregate functions importance in the random forests models outlook we studied a large set of sentiment aggregate functions to use as features in a regression model to predict political opinion poll results the results show that we can estimate the polls results with low prediction error using sentiment and buzz aggregators based on the opinions expressed on social media we introduced a strong baseline for comparison the lagged self of the polls in our study we built a model where we achieve the lowest mae using the linear algorithm ols combined only with buzz aggregators using monthly variations the model has an mae of we performed two feature selection techniques univariate feature selection and recursive feature elimination applying the recursive technique to the sentiment features we can achieve an mae of matching our best model furthermore the chosen features are not the same in every prediction regarding feature importance analysis prediction our experiments showed that bermingham aggregate function represents the highest gini importance in the random forests model summary of the contributions in this chapter we presented research work about prediction for orm making the following contributions analysis of the predictive power of online news regarding entity popularity on twitter for entities that are frequently mentioned on the news analysis of how to combine different sentiment aggregate functions to serve as features for predicting political polls chapter a framework for online reputation monitoring in this chapter we present a framework that puts together all the building blocks required to perform orm the framework is divided in two distinct components one is dedicated to entity retrieval and the other to text mining in practice these two components can act as two separate frameworks both are adaptable and can be reused in different application scenarios from computational journalism to finance or politics we start with a framework overview description and then we focus specifically on each of the two components the first component is relink a research framework for retrieval we carried the experiments on retrieval described in chapter using relink furthermore since we did not have access to training data based on news articles we describe a case study of using relink for entity retrieval from a large news collection we then describe the texrep framework which is responsible for text mining related tasks for orm 
such as entity filtering sentiment analysis or predictive tasks the experiments described in both chapter and chapter were carried out using texrep we also provide further detail how texrep was used as backend of the popstar project finally we perform an independent study of practical aspects of general purpose word embeddings from the twitter stream to serve as resource for future users of texrep framework overview the framework provides entity retrieval and text mining functionalities that enable the collection disambiguation retrieval of entities and relationships sentiment analysis data aggregation prediction and visualization of information from a framework for online reputation monitoring heterogeneous web data sources furthermore given that both components are built using modular architectures providing abstraction layers and well defined interfaces new functionalities or methods can be easily integrated the framework is divided in two components relink and texrep both can work as independently dedicated frameworks using specific data sources or can be put together in a unifying setup for orm as depicted in figure when working together relink and texrep are connected through the entity occurrences warehouse this is the central module of our framework for orm the entity occurrences warehouse contains extractions from occurrences of the entities of interest across the web data sources relink entity retrieval entity occurrences warehouse texrep text mining fig overview on the orm framework the data flow starts with texrep collecting data from web text data sources extraction of text passages containing entity mentions and disambiguation entitycentric text passages are then stored in the entity occurrences warehouse this data can then be used for retrieval indexing using relink or for downstream text mining tasks sentiment analysis using other modules of texrep we now describe relink and texrep architectures and internal data flow relink the relink framework is designed to facilitate experiments with retrieval query collections the formulation of queries in natural language and relational format i qei provide opportunities to define and explore a range of query formulations and search algorithms although relink provides support for late fusion design patterns it is mostly tailored for early fusion approaches where it is necessary to create entity and relationship representations at indexing time a typical early fusion retrieval experimental setup would involve search over a collection to extract relevant instances of entity tuples and then verify their correctness against the relevance judgments the key enabling components therefore are test collections of documents with annotated entity instances that could be framework overview extracted during search an indexing facility and a retrieval module to process queries and rank results fig relink framework architecture overview figure depicts the architecture of relink used in the experiments described in chapter we include the modules responsible for deriving relevance judgments from wikipedia the table parser module is described in section in chapter currently the relink framework includes the collection combined with text span annotations with links to wikipedia entities via freebase the entity linking precision and recall in are estimated at and respectively the relink extractor part of indexer applies an open information extraction method over the annotated corpus the two additional components are corpus index and retrieval both 
depicted in figure the implementation of all modules in retrieval and the indexer http a framework for online reputation monitoring module in corpus index are based on apache lucene and the letor module serves as a wrapper for indexing and retrieval based on the collection we create two essential resources entity index and entity pair relationship index for the entities that occur in the corpus for a given entity instance the er indexer identifies terms within the same sentence and considers them as entity types for the observed entity instance similarly for a given pair of entities the er indexer verifies whether they occur in the same sentence and extracts the separating string that string is considered a context term for the entity pair that describes their relationship type we obtain entity and entity pair extractions with corresponding sentences that are processed by the indexer once the inverted index er index is created any instance of an entity or entity pair can be retrieved in response to the contextual terms entity types and relationship types specified by the users search process the retrieval process is managed by the relinker module figure the query analyzer module processes information requests and passes queries in the structured format to the retriever query search is performed in stages to allow for experimentation with different methods and parameter settings first the retriever provides an initial set of results using lucene s default search settings and groups them by entity or entity pairs on query time using the lucene s groupingsearch the scorer then generates and applies feature functions of specific retrieval models with required statistics currently the scorer has implementations for early fusion variants and erdm the relinker is responsible for and providing final results based on the scores provided by the scorer and the parameter weights learned by letor texrep texrep is a research framework that implements text mining techniques to perform online reputation monitoring orm in various application domains such as computational social sciences political data science computational journalism computational finance or online marketing http framework overview texrep was designed with two main challenges in mind it should be able to cope with the text mining problems underlying orm and it should be flexible adaptable and reusable in order to support the specificities of different application scenarios we define that a text mining based system for online reputation monitoring must follow a set of technical and operational requirements batch and operation such a system must naturally be able to operate in collecting data as it is generated processing it and updating indicators however it is also important to be able to operate in batch mode in which it collects specific data from a period indicated by the user if available and then processes it the system should use a distributed approach to deal with great volumes of data hadoop it should also be able to operate autonomously for long periods of time measured in months adaptability the system should be able to adapt its models polarity classification through time as well as across different applications updating models often requires manually annotated data ned therefore the system should provide a flexible annotation interface modularity researchers should be able to plug in specific modules such as a new data source and respective crawler or a different visualization the system interfaces should use rest apis and json 
data format which allow users to add new modules that interact with other data sources wikipedia or facebook reusability the system should enable repeatability of all experiments to allow the research community to obtain equal results we will make the software package of a prototype publicly available as well as the data sources and configuration parameters used in experiments language independence each component of the system should apply a statistical language modeling completely agnostic to the language of the texts we decompose the use of text mining for orm into four distinct but interconnected tasks data collection entity filtering sentiment analysis and analytics each task is accomplished by one or more software modules for instance analytics tasks usually involves the use of the aggregation prediction and visualization modules figure presents the texrep architecture including the data flow between modules a framework for online reputation monitoring pipeline manager sentiment analysis configurations training data data collection server data collection client visualization data collection client entity occurrences warehouse entity filtering aggregation prediction texrep data sources knowledge base fig architecture and data flows of the texrep framework entity filtering and sentiment analysis represent the most challenging text mining problems tackled in the texrep framework when tracking what is being said online about the target entities it is necessary to disambiguate mentions when this is done incorrectly the knowledge obtained by the other modules is negatively affected consequently other text mining tasks such as sentiment analysis will benefit from filtering non relevant texts the current implementation of the entity filtering module uses the python library as the machine learning library interface providing access to texrep users to the most suitable learning algorithm and parameter tuning for their specific needs we studied a large set of features that describe the relationship between the target entity representation and a given text and we tried several different supervised learning algorithms that are available through the framework such as support vector machines svm and random forests rf the sentiment analysis module also uses implementation of supervised learning algorithms in order to predict sentiment polarity and intensity in short texts framework overview using regression analysis we use unsupervised learning of word embeddings in short texts to construct syntactic and semantic representations of words the sentiment analysis module combines word embeddings with traditional approaches such as techniques and features to train a classifier for sentiment polarity and a regressor for sentiment intensity analytics modules include aggregation visualization and prediction these modules are application specific and depend on user configurations for instance in the political domain it is common to create aggregate functions that represent relative popularity indicators between political parties or candidates these indicators are then used to predict elections on the other hand if we consider the financial domain due to its high volatility aggregation is usually performed with lower granularity minutes instead of days and target prediction variables are individual stock prices or variations texrep implements various aggregation functions and allows custom of tailored prediction models based on each application therefore texrep is able to adapt itself to the specificities of 
different application scenarios by implementing a modular and flexible design through user configurations and abstraction layers data collection depends on the specified data sources thus texrep decouples implementations from the data collection process management using a rest api if a user needs a different data collection client from the ones provided by default she is able to implement a specific client that is easily integrated into the framework the same applies to the analytics modules which are extensible by loading methods through an abstraction layer furthermore if users wish to extend texrep with topic modeling they only need to the new module and write topic assignments through the entity occurrences warehouse new aggregation functions could be implemented that use the topic of each mention as input in order to create topic trends visualizations the framework can be fully configured using configuration files that are processed in the pipeline manager which is the module responsible for forwarding specific parameterization to the other modules it is possible to specify the entities of interest data sources aggregate functions and prediction time windows module specific configurations are also specified in this module such as which training data should be used by the modules that rely on machine learning as explained texrep addresses the two aforementioned challenges of developing a text mining framework for orm the current version of the framework is implemented in python uses mongodb as nosql database and implements the mapreduce paradigm for aggregations the external and pluggable resources used are the a framework for online reputation monitoring learn library and the matplotlib for visualization though users can replace these two resources by others of their preference we provide the implementations of each module that we believe are the most generic as possible within the context of orm nevertheless users are also able to extend each module with the methods they see fit such as new features or data steps we now describe in detail how the different modules interact with each other as well as a detailed explanation of the current implementation of the entity filtering sentiment analysis and analytics modules data flow texrep collects data continuously and performs processing and analytics tasks the standard data flow is organized as follows first the user defines the entities of interest in the configurations files including canonical and alternative names these configurations are processed by the pipeline manager and forwarded to the data collection clients to search for texts news articles and tweets using entity names as queries on each data api the data collection clients implement api clients such as the case of twitter and yahoo finance for instance if the user is interested in collecting rss feeds of news outlets then the data collection client can be adapted to subscribe to those feeds and process them accordingly once collected texts are stored in the entity occurrences warehouse entity filtering classifies each text as relevant or not for each target entity using a supervised learning approach a knowledge base freebase is used to extract target entity representations and to compute similarity features with extracted mentions contexts once the texts are filtered sentiment analysis takes place the framework implements both polarity classification and sentiment regression for sentiment intensity detection then analytics modules are able to aggregate and create 
visualizations of trends in data or predictions of application specific dependent variables data collection the data collection server communicates with each data collection client using a rest api and therefore it allows modularity and a plugin approach for adapting to specific data sources the task of data collection is based on entity configurations containing the list of entities under study each data source has specific web interfaces rss feeds yahoo finance api or twitter api the data collection server manages the data collection clients through specific interfaces plugins that are adequate for the corresponding source for instance collecting data relink use case from twitter poses some challenges namely due to the limits on the amount of data collected we opted to create by default a data collection client for socialbus a distributed twitter client that enables researchers to continuously collect data from particular user communities or topics while respecting the established limits some data sources allow query by topics entity names while others do not rss feeds moreover in the case of twitter we might be interested in continuously monitoring a fixed group of twitter users the accounts of the entities of interest in such cases when we can not search directly by entity name in the specific data source we use the list of entity names to process collected texts that might be relevant the data collection server applies a sequential classification approach using a prefix tree to detect mentions this method can be seen as first step of filtering but it is still prone to noisy mentions for instance a tweet with the word cameron can be relative to several entities such as a former uk prime minister a filmmaker or a company consequently this problem is later tackled by the entity filtering module collected texts news or tweets are stored in a centralized nosql database mongodb the entities occurrence warehouse this setup provides modularity and flexibility allowing the possibility of developing specific data collection components tailored to specific data sources and is completely agnostic to the data format retrieved from each data source the data collection server annotates each text with the target entity which will be used by the entity filtering module to validate that annotation relink use case in this section we present a use case of the relink framework in the context of orm applied to computational journalism never before has computation been so tightly connected with the practice of journalism in recent years the computer science community has researched and new ways of processing and exploring news archives to help journalists perceiving news content with an enhanced perspective we created a demo the timemachine that brings together a set of natural language processing text mining and information retrieval technologies to automatically extract and index entity related knowledge from the news articles it allows users to issue queries containing keywords and phrases about news stories or events and retrieves the most relevant entities mentioned in the news articles through newsexplorer ibm watson http a framework for online reputation monitoring time timemachine provides readable and insights and a temporal perspective of news stories and mentioned entities it visually represents relationships among public figures in news articles as a social network graph using a force atlas algorithm layout for the interactive and clustering of entities news processing pipeline the news 
processing pipeline depicted in figure starts with a news cleaning module which performs the boilerplate removal from the raw news files once the news content is processed we apply the nerd module which recognizes entity mentions and disambiguates each mention to an entity using a set of heuristics tailored for news such as job descriptors barack obama president of usa and linguistic patterns well defined for the journalistic text style we use a bootstrap approach to train the ner system our method starts by annotating entity names on a dataset of news items this is performed using a simple dictionarybased approach using such training set we build a classification model based on conditional random fields crf we then use the inferred classification model to perform additional annotations of the initial seed corpus which is then used for training a new classification model this cycle is repeated until the ner model stabilizes the fig news processing pipeline entity snippet extraction consists of collecting sentences containing mentions to a given entity all snippets are concatenated generating an entity document which is then indexed in the entity index the entity index represents the frequency of of each entity with each term that it occurs with in the news therefore by relying on the redundancy of news terms and phrases associated with an entity we are able to retrieve the most relevant entity to a given input keyword or phrase query as we also index the snippet datetime it is possible to filter query results based on a time span for instance the keyword corruption might retrieve a different entity list results in different time periods quotations are typically short and very informative sentences which may directly or indirectly quote a given entity quotations are automatically relink use case extracted refer to quotations extraction module using linguistic patterns thus enriching the information extracted for each entity finally once we have all mentioned entities in a given news articles we extract entity tuples representing of entities in a given news article and update the entity graph by incrementing the number of occurrences of a node entity and the number of occurrences of the edge relation between any two mentions demonstration the setup for demonstration uses a news archive of portuguese news it comprises two different datasets a repository from the main portuguese news agency and a stream of online articles provided by the main web portal in portugal sapo which aggregates news articles from online newspapers the total number of news articles used in this demonstration comprises over million news articles the system is working on a daily basis processing articles as they are collected from the news stream timemachine allows users to explore its news archive through an entity search box or by selecting a specific date both options are available on the website homepage and in the top bar on every page there are a set of stories recommendations on the homepage suited for first time visitors the entity search box is designed to be the main entry point to the website as it is connected to the entity retrieval module of timemachine fig cristiano ronaldo egocentric network users may search for surface names of entities cristiano ronaldo if they know which entities they are interested to explore in the news although the most a framework for online reputation monitoring powerful queries are the ones containing keywords or phrases describing topics or news stories such as eurozone crisis 
or ballon d or nominees when selecting an entity from the ranked list of results users access the entity profile page which contains a set of automatically extracted entity specific data name profession a set of news articles quotations from the entity and related entities an entity timeline is also provided to allow users to navigate entity specific data through time by selecting a specific period different news articles quotations and related entities are retrieved furthermore users have the option of view network which consists in a interactive network depicting connections among entities in news articles for the selected time span an example of such visualization is depicted in figure and it is implemented using the graph drawing library sigma js together with force atlas algorithm for the clustered layout of entities nodes consist of entities and edges represent a of mentioned entities in the same news articles the size of the nodes and the width of edges is proportional to the number of mentions and respectively different node colors represent specific news topics where entities were mentioned by selecting a date interval on the homepage instead of issuing a query users get a global interactive network of mentions and of the most frequent entities mentioned in the news articles for the selected period of time texrep use case this section describes the design and implementation of the popmine system an use case of the proposed framework developed in the scope of the popstar project it is an open source platform which can be used and extended by researchers interested in tracking reputation of political entities on the web popmine operates either in batch or online mode and is able to collect texts from conventional media news items in mainstream media sites and social media blogs and twitter to process those texts recognizing topics and political entities to analyze relevant linguistic units to generate indicators of both frequency of mention and polarity of mentions to political entities across sources types of sources and across time as a proof of concept we present these indicators in a web application tailored for tracking political opinion in portugal the popstar website the system is available as an open source software package that can be used by other researchers from social sciences but also from any other area that is interested in tracking public opinion on the web texrep use case we opted to use data from news articles tweets and blog posts and each of these data sources requires its specific crawler news articles and blog posts are collected using rss feeds which eases the implementations of a specific crawler collecting data from twitter poses some challenges the need for large amounts of data coupled with twitter s imposed limits demand for a distributed system we opted to use which enables researchers to continuously collect data from particular user communities while respecting twitter s imposed limits the data collection components crawl data from specific data sources which implement specific web interfaces rss feeds twitter api each data source must have its own data collection module which in turn connects to the popmine system using rest services popmine stores data collected in a document oriented nosql database mongodb this configuration allows modularity and flexibility allowing the possibility of developing specific data collection components tailored to specific data sources the default setting of data collection modules comprise the following components 
news data from online news are provided by the service verbetes e from labs sapo this service handles online news from over portuguese news sources and is able to recognize entities mentioned in the news blogs blog posts are provided by the blogs monitoring system from labs sapo which includes all blogs with domain blogger and wordpress blogs written in portuguese twitter tweets are collected using the platform socialbus responsible for the compilation of messages from portuguese users of twitter tweets are collected in real time and submitted to a language classification in our experiments we opted to collect the tweets written in portuguese the information extraction component comprises a knowledge base containing metadata about entities names or jobs using a knowledge base is crucial to filter relevant data mentioning politicians such as news tweets and blog posts in our application scenario we opted to use verbetes a knowledge base which comprises names alternative names and professions of portuguese people mentioned often in news articles the information extraction components address two tasks named entity recognition and named entity disambiguation we envision an application scenario where we http a framework for online reputation monitoring need to track political entities usually this type of entities are well known therefore we opted to use a knowledge base to provide metadata about the target entities namely the most common surface forms of their names once we had the list of surface forms to search for we applied a sequential classification approach using a prefix tree to detect mentions this method is very effective on news articles and blog posts but can result in noisy mentions when applied to twitter for instance a tweet containing the word cameron can be related with several entities such as the former uk prime minister a filmmaker or a company furthermore tweets are short which results in a reduced context for entity disambiguation we then apply the entity filtering approach of texrep the opinions warehouse contains the messages filtered by the information extraction component and applies polarity classification to those messages using an external resource the opinionizer classifier one of the requirements of the opinionizer is to use manually labeled data to train the classifier we developed an online annotation tool for that effect we create opinion and polls indicators using the aggregator which is responsible to apply aggregation functions and smoothing techniques once we obtain the aggregated data we make available a set of web services that can be consumed by different applications such as the popstar website or other research experiences such as polls predictions using social media opinions data aggregation buzz is the daily frequency with which political leaders are mentioned by twitter users bloggers and online media news we use two types of indicators the first type is the relative frequency with which party leaders are mentioned by each medium twitter blogs and news on each day this indicator is expressed for each leader of each party as a percentage relative to the total number of mentions to all party leaders the second indicator is the absolute frequency of mentions a simple count of citations for each political leader to estimate trends in buzz we use the kalman filter we allow users to choose the smoothing degree for each estimated trend users can choose between three alternatives a fairly reactive one where trend is highly volatile allowing close 
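The sequential matching of knowledge-base surface forms over a prefix tree, mentioned above for mention detection, can be sketched as follows. The toy trie works over whitespace-separated tokens and applies longest-match; the surface forms inserted below are illustrative entries and not the actual contents of the Verbetes knowledge base.

```python
class SurfaceFormTrie:
    """Toy prefix tree over whitespace tokens: longest-match detection of
    known entity surface forms in a token sequence."""

    def __init__(self):
        self.root = {}

    def add(self, surface_form, entity_id):
        node = self.root
        for token in surface_form.lower().split():
            node = node.setdefault(token, {})
        node["_entity"] = entity_id

    def find_mentions(self, text):
        tokens = text.lower().split()
        mentions, i = [], 0
        while i < len(tokens):
            node, j, last = self.root, i, None
            while j < len(tokens) and tokens[j] in node:
                node = node[tokens[j]]
                j += 1
                if "_entity" in node:
                    last = (i, j, node["_entity"])  # keep the longest match
            if last:
                mentions.append(last)
                i = last[1]
            else:
                i += 1
        return mentions

trie = SurfaceFormTrie()
trie.add("Pedro Passos Coelho", "e1")   # illustrative knowledge-base entries
trie.add("Passos Coelho", "e1")
print(trie.find_mentions("ontem passos coelho falou ao pais"))   # [(1, 3, 'e1')]
```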
monitoring of variations; a very smooth one, ideal to capture long-term trends; and an intermediate option, displayed by default.

After identifying the polarity in each of the tweets, there are several ways to quantify the overall sentiment regarding political leaders. We can, for instance, look at each target independently or in relative terms, compare positive with negative references or simply look at one side of the polarity, and look at daily, weekly or monthly data records. In this first prototype we opted to present two separate indicators and their evolution across time, using in both cases the day as reference period.

The first indicator is the logarithm of the ratio of positive and negative tweets by political leader (party leaders and the president). In other words, a positive sign means that the political leader under consideration received more positive than negative tweets that day, while a negative result means that he received more negative than positive tweets. In mathematical notation:

$\mathrm{LogSentiment}_{i} = \log\left(\mathrm{Positives}_{i} / \mathrm{Negatives}_{i}\right)$

The second approach is to simply look at the negative tweets (the vast majority of tweets in our base classifier) and calculate their relative frequency for each leader. In this way it is possible to follow, each day, which party leaders were, in relative terms, more or less subject to tweets with negative polarity. In mathematical notation:

$\mathrm{NegativesShare}_{i,d} = \mathrm{Negatives}_{i,d} / \sum_{p} \mathrm{Negatives}_{p,d}$

(Fig.: Twitter buzz share of political leaders.)

Visualization. We created a website to allow interactive visualization of the data collected and processed in real time by the PopMine platform. The site was developed within the scope of the POPSTAR project (Public Opinion and Sentiment Tracking, Analysis and Research) and presents the following data: (a) mentions to Portuguese party leaders on Twitter, in the blogosphere and in online news; (b) sentiment conveyed through tweets regarding party leaders; (c) voting intentions for the main political parties measured by traditional polls; and (d) evaluation of the performance of said party leaders measured by polls. An example chart is depicted in the figure. Besides providing our indicators in the form of charts, the website also has a dashboard offering a more compact view of trends across indicators for all politicians.

Learning word embeddings for ORM. Word embeddings have great practical importance, since they can be used as pre-computed features for ML models, significantly reducing the amount of training data required in a variety of text mining tasks. We aim to provide general-purpose word embeddings for the text mining tasks in ORM. We are particularly interested in learning word embeddings from the Twitter stream due to the specificities of user-generated content. It is relatively easy to get access to word embeddings trained from well-formed texts such as Wikipedia or online news; however, to the best of our knowledge, there are no publicly available word embeddings learned from the Portuguese Twitter stream. There are several challenges with computing and consistently distributing word embeddings, concerning the intrinsic properties of the embeddings: how many dimensions do we actually need to store all the useful semantic information? How big should the embedded vocabulary be to have practical value? How do these two factors interplay? Concerning the type of model used for generating the embeddings: there are multiple possible models, and it is not obvious which one is the best, both in general and in the context of a specific type of application. The material
contained in this section was published in saleiro sarmento rodrigues soares oliveira learning word embeddings from the portuguese twitter stream a study of some practical aspects learning word embeddings for orm the size and properties of training data what is the minimum amount of training data needed should we include out of vocabulary words in the training optimization techniques to be used model hyperparameter and training parameters not only the space of possibilities for each of these aspects is large there are also challenges in performing a consistent evaluation of the resulting embeddings this makes systematic experimentation of alternative configurations extremely difficult in this work we make progress in trying to find good combinations of some of the previous parameters we focus specifically in the task of computing word embeddings for processing the portuguese twitter stream content such as twitter messages tends to be populated by words that are specific to the medium and that are constantly being added by users these dynamics pose challenges to nlp systems which have difficulties in dealing with out of vocabulary words therefore learning a semantic representation for those words directly from the stream and as the words arise would allow us to keep up with the dynamics of the medium and reduce the cases for which we have no information about the words starting from our own implementation of a neural word embedding model which should be seen as a flexible baseline model for further experimentation our research tries to answer the following practical questions how large is the vocabulary the one can realistically embed given the level of resources that most organizations can afford to buy and to manage as opposed to large clusters of gpu s only available to a few organizations how much data as a function of the size of the vocabulary we wish to embed is enough for training meaningful embeddings how can we evaluate embeddings in automatic and consistent way so that a reasonably detailed systematic exploration of the previously describe space of possibilities can be performed by answering these questions based on a reasonably small sample of twitter data we hope to find the best way to proceed and train embeddings for twitter vocabulary using the much larger amount of twitter data available but for which parameter experimentation would be unfeasible this work can thus be seen as a preparatory study for a subsequent attempt to produce and distribute a database of embeddings for processing portuguese twitter data a framework for online reputation monitoring neural word embedding model the neural word embedding model we use is the continuous cbow given a sequence of words wi the task the model tries to perform is that of predicting the middle word wi based on the two words on the left and the two words on the right p wi this should produce embeddings that closely capture distributional similarity so that words that belong to the same semantic class or which are synonyms and antonyms of each other will be embedded in close regions of the embedding the neural model is composed of the following layers a input word embedding layer that maps each of the input words represented by a vectors with dimensions into a low dimension space bits the projections matrix winput is shared across the inputs this is not be the embedding matrix that we wish to produce a merge layer that concatenates the previous embeddings into a single vector holding all the context information the concatenation 
operation ensures that the rest of the model has explicit information about the relative position of the input words using an additive merge operation instead would preserve information only about the presence of the words not their sequence a intermediate context embedding dense layer that maps the preceding representation of words into a lower dimension space still representing the entire context we have fixed this context representation to dimensions this ultimately determines the dimension of the resulting embeddings this intermediate layer is important from the point of view of performance because it isolates the still relatively input space x bits input word embeddings from the very output space a final output dense layer that maps the takes the previous representation of the entire input context and produces a vector with the dimensionality of the word output space dimensions this matrix woutput is the one that stores the word embeddings we are interested in a softmax activation layer to produces the final prediction over the word space that is the p wi distribution learning word embeddings for orm all neural activations in the model are sigmoid functions the model was implemented using the library which relies on keras for model development and we train the model using the adam optimizer with the default parameters experimental setup we are interested in assessing two aspects of the word embedding process on one hand we wish to evaluate the semantic quality of the produced embeddings on the other we want to quantify how much computational power and training data are required to train the embedding model as a function of the size of the vocabulary we try to embed these aspects have fundamental practical importance for deciding how we should attempt to produce the database of embeddings we will provide in the future all resources developed in this work are publicly apart from the size of the vocabulary to be processed the hyperparamaters of the model that we could potentially explore are i the dimensionality of the input word embeddings and ii the dimensionality of the output word embeddings as mentioned before we set both to bits after performing some quick manual experimentation full hyperparameter exploration is left for future work our experimental testbed comprises a desktop with a nvidia titan x pascal intel core quad gb ram and a ssd drive training data we randomly sampled tweets from a corpus of tweets collected from the portuguese twitter community the comprise a total of words approx words per tweets in average from those tweets we generated a database containing distinct along with their frequency counts in this process all text was to help anonymizing the information we substituted all the twitter handles by an artificial token we also substituted all http links by the token link we prepended two special tokens to complete the generated from the first two words of the tweet and we correspondingly appended two other special tokens to complete centered around the two last tokens of the tweet tokenization was perform by trivially separating tokens by blank space no linguistic such as for example separating punctuation from words was made we https https a framework for online reputation monitoring table number of available for training for different sizes of target vocabulary opted for not doing any for not introducing any linguistic bias from another tool tokenization of user generated content is not a trivial problem the most direct consequence of not performing any 
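A minimal Keras sketch of the CBOW-style architecture described above: a shared input embedding applied to the four context words, concatenation of the resulting vectors, a sigmoid context layer, and a final dense layer followed by a softmax over the vocabulary, trained with the Adam optimizer. The vocabulary size and layer dimensionalities below are placeholders, since the values used in the original experiments are not recoverable from this text, and the loss/target encoding is likewise an assumption of this sketch.

```python
# Sketch of the CBOW-style model described above. Sizes are placeholders.
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 2 ** 12   # placeholder |V|
EMB_DIM = 64           # placeholder input embedding dimensionality
CTX_DIM = 64           # placeholder context embedding dimensionality
WINDOW = 4             # two words on the left + two words on the right

inputs = keras.Input(shape=(WINDOW,), dtype="int32")
# shared projection matrix W_input applied to every context word
embedded = layers.Embedding(VOCAB_SIZE, EMB_DIM)(inputs)
# concatenation keeps the relative position of the context words
context = layers.Flatten()(embedded)
# intermediate context embedding with sigmoid activation, as described above
context = layers.Dense(CTX_DIM, activation="sigmoid")(context)
# output layer W_output: the matrix holding the embeddings of interest
logits = layers.Dense(VOCAB_SIZE)(context)
outputs = layers.Activation("softmax")(logits)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```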
linguistic is that of increasing the vocabulary size and diluting token counts however in principle and given enough data the embedding model should be able to learn the correct embeddings for both actual words ronaldo and the words that have punctuation attached ronaldo in practice we believe that this can actually be an advantage for the downstream consumers of the embeddings since they can also relax the requirements of their own tokenization stage overall the dictionary thus produced contains approximately distinct entries our dictionary was sorted by frequency so the words with lowest index correspond to the most common words in the corpus we used the information from the database to generate all training data used in the experiments for a fixed size of the target vocabulary to be embedded we scanned the database to obtain all possible for which all tokens were among the top words of the dictionary the top most frequent words in the corpus depending on different numbers of valid training were found in the database the larger the more valid would pass the filter the number of examples collected for each of the values of is shown in table since one of the goals of our experiments is to understand the impact of using different amounts of training data for each size of vocabulary to be embedded we will run experiments training the models using and of the data available metrics related with the learning process we tracked metrics related to the learning process itself as a function of the vocabulary size to be embedded and of the fraction of training data used learning word embeddings for orm and for all possible configurations we recorded the values of the training and validation loss cross entropy after each epoch tracking these metrics serves as a minimalistic sanity check if the model is not able to solve the word prediction task with some degree of success if we observe no substantial decay in the losses then one should not expect the embeddings to capture any of the distributional information they are supposed to capture tests and data for intrinsic evaluation using the gold standard data described below we performed three types of tests class membership tests embeddings corresponding to members of the same semantic class months of the year portuguese cities smileys should be close since they are supposed to be found in mostly the same contexts class distinction test this is the reciprocal of the previous class membership test embeddings of elements of different classes should be different since words of different classes ere expected to be found in significantly different contexts word equivalence test embeddings corresponding to synonyms antonyms abbreviations porque abbreviated by pq and partial references slb and benfica should be almost equal since both alternatives are supposed to be used be interchangeable in all contexts either maintaining or inverting the meaning therefore in our tests two words are considered distinct if the cosine of the corresponding embeddings is lower than or to belong to the same class if the cosine of their embeddings is higher than or equivalent if the cosine of the embeddings is higher that or we report results using different thresholds of cosine similarity as we noticed that cosine similarity is skewed to higher values in the embedding space as observed in related work we used the following sources of data for testing class membership data this data was collected from the evaluation data provided by these correspond to semantic classes a framework 
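The preprocessing and training-data generation steps described earlier — lowercasing, substituting Twitter handles and links by artificial tokens, padding each tweet with two special tokens on each side, trivial blank-space tokenization, and keeping only the 5-grams whose tokens all fall inside the target vocabulary — could be sketched as follows. The token names and regular expressions are illustrative, not the ones used in the original pipeline.

```python
import re
from collections import Counter

USER, LINK, PAD = "@user", "link", "<s>"   # illustrative artificial tokens

def preprocess(tweet):
    """Lowercase, replace handles and links by artificial tokens, and pad the
    tweet with two special tokens on each side, as described above."""
    text = tweet.lower()
    text = re.sub(r"@\w+", USER, text)
    text = re.sub(r"https?://\S+", LINK, text)
    tokens = text.split()                  # trivial blank-space tokenization
    return [PAD, PAD] + tokens + [PAD, PAD]

def fivegram_counts(tweets):
    counts = Counter()
    for tweet in tweets:
        toks = preprocess(tweet)
        for i in range(len(toks) - 4):
            counts[tuple(toks[i:i + 5])] += 1
    return counts

def training_tuples(counts, vocab):
    """Keep only the 5-grams whose tokens are all inside the target vocabulary
    (the top-|V| most frequent words), as done when generating training data."""
    return [g for g in counts if all(t in vocab for t in g)]

tweets = ["@user o Benfica ganhou! https://t.co/xyz", "o benfica ganhou outra vez"]
counts = fivegram_counts(tweets)
vocab = {USER, LINK, PAD, "o", "benfica", "ganhou!", "ganhou", "outra", "vez"}
print(len(training_tuples(counts, vocab)))
```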
for online reputation monitoring collected manually by the authors by checking top most frequent words in the dictionary and then expanding the classes these include the following sets number of elements in brackets smileys months countries names surnames portuguese cities for the class distinction test we pair each element of each of the gold standard classes with all the other elements from other classes removing duplicate pairs since ordering does not matter and we generate pairs of words which are supposed belong to different classes for word equivalence test we manually collected equivalente pairs focusing on abbreviations that are popular in twitters qt quanto or lx lisboa and on frequent acronyms slb benfica in total we compiled equivalence pairs for all these tests we computed a coverage metric our embeddings do not necessarily contain information for all the words contained in each of these tests so for all tests we compute a coverage metric that measures the fraction of the goldstandard pairs that could actually be tested using the different embeddings produced then for all the test pairs actually covered we obtain the success metrics for each of the tests by computing the ratio of pairs we were able to correctly classified as i being distinct cosine or ii belonging to the same class cosine or and iii being equivalent cosine or it is worth making a final comment about the gold standard data although we do not expect this gold standard data to be sufficient for a evaluation of the resulting embeddings it should be enough for providing us clues regarding areas where the embedding process is capturing enough semantics and where it is not these should still provide valuable indications for planning how to produce the much larger database of word embeddings results and analysis we run the training process and performed the corresponding evaluation for combinations of size of vocabulary to be embedded and the volume of training data available that has been used table presents some overall statistics after training for epochs the average time per epoch increases first with the size of the vocabulary to embed because the model will have more parameters and then for each with the volume of training data using our testbed section the total time of learning in our experiments varied from a minimum of seconds with and learning word embeddings for orm table overall statistics for combinations of models learned varying and volume of training data results observed after training epochs embeddings training data tuples avg training loss validation loss data data data data data data data data data data data data of data to a maximum of hours with and using of the training data available extracted from tweets these numbers give us an approximate figure of how time consuming it would be to train embeddings from the complete twitter corpus we have consisting of tweets we now analyze the learning process itself we plot the training set loss and validation set loss for the different values of figure left with epochs and using all the available data as expected the loss is reducing after each epoch with validation loss although being slightly higher following the same trend when using we see no model overfitting we can also observe that the higher is the higher are the absolute values of the loss sets this is not surprising because as the number of words to predict becomes higher the problem will tend to become harder also because we keep the dimensionality of the embedding space constant dimensions it 
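The three intrinsic tests described above (class membership, class distinction and word equivalence), together with the coverage metric, reduce to comparing cosine similarities of gold-standard word pairs against fixed thresholds. The sketch below uses illustrative thresholds and random toy vectors, since the actual threshold values and embeddings are not recoverable from this text.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate_pairs(embeddings, pairs, threshold, same=True):
    """Coverage and accuracy for a list of gold-standard word pairs.
    same=True  -> pairs should have cosine >= threshold (membership/equivalence)
    same=False -> pairs should have cosine <  threshold (class distinction)."""
    covered, correct = 0, 0
    for w1, w2 in pairs:
        if w1 not in embeddings or w2 not in embeddings:
            continue                      # pair not covered by the vocabulary
        covered += 1
        sim = cosine(embeddings[w1], embeddings[w2])
        if (sim >= threshold) == same:
            correct += 1
    coverage = covered / len(pairs) if pairs else 0.0
    accuracy = correct / covered if covered else 0.0
    return coverage, accuracy

# toy embeddings; real ones would come from the trained W_output matrix
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=8) for w in ["janeiro", "fevereiro", "lisboa", "porto", "pq", "porque"]}
months = [("janeiro", "fevereiro")]
cities_vs_months = [("janeiro", "lisboa"), ("fevereiro", "porto")]
equivalents = [("pq", "porque"), ("slb", "benfica")]   # second pair is uncovered

print(evaluate_pairs(emb, months, threshold=0.7, same=True))
print(evaluate_pairs(emb, cities_vs_months, threshold=0.7, same=False))
print(evaluate_pairs(emb, equivalents, threshold=0.85, same=True))
```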
becomes increasingly hard to represent and differentiate larger vocabularies in the same we believe this is a specially valuable indication for future experiments and for deciding the dimensionality of the final embeddings to distribute on the right side of figure we show how the number of training and validation examples affects the loss for a fixed we varied the amount of data used for training from to three trends are apparent as we train with more data we obtain better validation losses this was expected the second trend is that by using less than of the data available the model tends to overfit the data as indicated by the consistent increase in the validation loss after about epochs check a framework for online reputation monitoring fig continuous line represents loss in the training data while dashed line represents loss in the validation data left side effect of increasing using of training data right side effect of varying the amount of training data used with dashed lines in right side of figure this suggests that for the future we should not try any drastic reduction of the training data to save training time finally when not overfitting the validation loss seems to stabilize after around epochs we observed no effects the model seems simple enough for not showing that type of behavior this indicates we have a practical way of safely deciding when to stop training the model intrinsic evaluation table presents results for the three different tests described in section the first expected result is that the coverage metrics increase with the size of the vocabulary being embedded because the word equivalence test set was specifically created for evaluating embedding when embedding words we achieve almost test coverage on the other hand for the class distinction test set which was created by taking the cross product of the test cases of each class in class membership test set we obtain very low coverage figures this indicates that it is not always possible to previously compiled data and that it will be important to compile data directly from twitter content if we want to perform a more precise evaluation the effect of varying the cosine similarity decision threshold from to for class membership test shows that the percentage of test cases that are classified as correct drops significantly however the drop is more accentuated when training with learning word embeddings for orm only a portion of the available data the differences of using two alternative thresholds values is even higher in the word equivalence test the word equivalence test in which we consider two words equivalent word if the cosine of the embedding vectors is higher than revealed to be an extremely demanding test nevertheless for the results are far superior and for a much larger coverage than for lower the same happens with the class membership test on the other hand the class distinction test shows a different trend for larger values of but the coverage for other values of is so low that it would not make sense to hypothesize about the reduced values of true negatives tn percentage obtained for the largest it would be necessary to confirm this behavior with even larger values of one might hypothesize that the ability to distinguish between classes requires larger thresholds when is large also we can speculate about the need of increasing the number of dimensions to be able to encapsulate different semantic information for so many words table evaluation of resulting embeddings using class membership class 
distinction and word equivalence tests for different thresholds of cosine similarity embeddings data class membership coverage acc acc class distinction word equivalence tn tn coverage acc acc coverage further analysis regarding evaluation metrics despite already providing interesting practical clues for our goal of trying to embed a larger vocabulary using more of the training data we have available these results also a framework for online reputation monitoring revealed that the intrinsic evaluation metrics we are using are overly sensitive to their corresponding cosine similarity thresholds this sensitivity poses serious challenges for further systematic exploration of word embedding architectures and their corresponding which was also observed in other recent works by using these absolute thresholds as criteria for deciding the similarity of words we create a dependency between the evaluation metrics and the geometry of the embedded data if we see the embedding data as a graph this means that metrics will change if we apply scaling operations to certain parts of the graph even if its structure relative position of the embedded words does not change for most practical purposes including training downstream ml models absolute distances have little meaning what is fundamental is that the resulting embeddings are able to capture topological information similar words should be closer to each other than they are to words that are dissimilar to them under the various criteria of similarity we care about independently of the absolute distances involved it is now clear that a key aspect for future work will be developing additional performance metrics based on topological properties we are in line with recent work proposing to shift evaluation from absolute values to more exploratory evaluations focusing on weaknesses and strengths of the embeddings and not so much in generic scores for example one metric could consist in checking whether for any given word all words that are known to belong to the same class are closer than any words belonging to different classes independently of the actual cosine future work will necessarily include developing this type of metrics concluding remarks producing word embeddings from tweets is challenging due to the specificities of the vocabulary in the medium we implemented a neural word embedding model that embeds words based on information extracted from a sample of the portuguese twitter stream and which can be seen as a flexible baseline for further experiments in the field work reported in this paper is a preliminary study of trying to find parameters for training word embeddings from twitter and adequate evaluation tests and data results show that using less than of the available training examples for each vocabulary size might result in overfitting the resulting embeddings obtain reasonable performance on intrinsic evaluation tests when trained a vocabulary containing the most frequent words in a twitter sample of relatively small size nevertheless results exhibit a skewness in the cosine similarity scores that should be further explored summary of the contributions in future work more specifically the class distinction test set revealed to be challenging and opens the door to evaluation of not only similarity between words but also dissimilarities between words of different semantic classes without using absolute score values therefore a key area of future exploration has to do with better evaluation resources and metrics we made some initial effort in 
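One way the threshold-free, topology-oriented evaluation suggested above could be implemented is to check, for each word, whether every word of the same semantic class ranks closer than any word of a different class, independently of the absolute cosine values. The function below is a sketch of that idea under toy data, not a metric used in the reported experiments.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def class_separation_score(embeddings, classes):
    """Fraction of words for which every same-class word is closer than every
    word of a different class, regardless of the absolute cosine values."""
    ok, total = 0, 0
    for label, words in classes.items():
        others = [w for l, ws in classes.items() if l != label for w in ws]
        for w in words:
            if w not in embeddings:
                continue
            same = [cosine(embeddings[w], embeddings[x]) for x in words if x != w and x in embeddings]
            diff = [cosine(embeddings[w], embeddings[x]) for x in others if x in embeddings]
            if not same or not diff:
                continue
            total += 1
            if min(same) > max(diff):
                ok += 1
    return ok / total if total else 0.0

rng = np.random.default_rng(1)
emb = {w: rng.normal(size=8) for w in ["janeiro", "fevereiro", "marco", "lisboa", "porto", "braga"]}
classes = {"months": ["janeiro", "fevereiro", "marco"], "cities": ["lisboa", "porto", "braga"]}
print(class_separation_score(emb, classes))
```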
this front however we believe that developing new intrinsic tests agnostic to absolute values of metrics and concerned with topological aspects of the embedding space and expanding data with cases tailored for content is of fundamental importance for the progress of this line of work furthermore we plan to make public available word embeddings trained from a large sample of tweets collected from the portuguese twitter stream this will require experimenting with and producing embeddings with higher dimensionality to avoid the cosine skewness effect and training with even larger vocabularies also there is room for experimenting with some of the of the model itself activation functions dimensions of the layers which we know have impact on final results summary of the contributions the work reported in this chapter makes the following contributions a framework that supports research in entity retrieval and text mining tasks in the context of online reputation monitoring this framework is composed by two major components that can act as independent frameworks relink and texrep the relink framework that supports comprehensive research work in retrieval supporting the creating of test queries as well as early fusion based approaches for retrieval the texrep framework that is able to collect texts from online media such as twitter or online news and identify entities of interest classify sentiment polarity and intensity the framework supports multiple data aggregation methods as well as visualization and modeling techniques that can be used for both descriptive analytics such as analyze how political polls evolve over time and predictive analytics such as predict elections a framework for online reputation monitoring a study of some practical aspects namely vocabulary size training data size and intrinsic evaluation for the training and publishing word embeddings from the portuguese twitter stream that can be later used for orm related tasks chapter conclusions in this thesis we have addressed two computational problems in online reputation monitoring entity retrieval and text mining entities are the gravitational force that drives the orm process and consequently the work reported in this thesis gravitates around entities and their occurrences across the web we researched and developed methods for extraction retrieval analysis and prediction of information spread across the web the main objectives of this thesis were achieved resulting in several contributions to the problem of online reputation monitoring several competitive baselines were developed which we believe represent significant progress in a research area where open source work is scarce however there are still many issues to be addressed in the future recent developments in deep neural networks create opportunities to improve performance in several tasks we addressed in this thesis once we have access to larger quantities of training data it will be possible to easily adapt our research framework to include these techniques summary and main contributions retrieval we have established that orm benefits from entity retrieval capabilities and should not be constrained to classic data analytics reports users ought to be able to search for information from social media and online news furthermore reputation is not an isolated asset and depends also of the reputation of neighboring entities we studied the problem of retrieval using a perspective and we made several contributions to this line of research conclusions generalization of the 
problem of search to cover entity types and relationships represented by any attribute and predicate respectively rather than a predefined set a general probabilistic model for retrieval using bayesian networks proposal of two design patterns that support retrieval approaches using the model proposal of a dependence model that builds on the basic sequential dependence model sdm to provide extensible representations and dependencies suitable for complex queries proposal of an indexing method that supports a retrieval approach to the above problem a method for generating test collections which resulted in the relink query collection comprising queries results of experiments at scale with a comprehensive set of queries and corpora retrieval is a complex case of entity retrieval where the goal is to search for multiple unknown entities and relationships connecting them contrary to entity retrieval from structured knowledge graphs approaches to retrieval are more adequate in the context of orm this happens due to the dynamic nature of the data sources which are much more transient than other more stable sources of information wikipedia used in general entity retrieval consequently we developed retrieval methods that do not rely on fixed and predefined entity types and relationships enabling a wider range of queries compared to semantic approaches we started by presenting a formal definition of queries where we assume that a query can be decomposed as a sequence of each containing keywords related to a specific entity or relationship then we adopted a probabilistic formulation of the retrieval problem when creating specific representations for entities context terms and for pairs of entities relationships it is possible to create a graph of probabilistic dependencies between and entity plus relationship representations we use a bayesian network to depict these dependencies in a probabilistic graphical model to the best of our knowledge this represents the first probabilistic model of retrieval summary and main contributions however these conditional probabilities can not be computed directly from raw documents in a collection in fact this is a condition inherent to the problem of entity retrieval documents serve as proxies to entities and relationship representations and consequently we need to fuse information spread across multiple documents to be able to create those representations we proposed two design patterns early fusion and late fusion inspired from model and model of balog et al however in the context of orm we are only interested in early fusion early fusion aggregates context terms of entity and relationship occurrences to create two dedicated indexes the entity index and the relationship index once we have the two indexes it is possible to apply any retrieval method to compute the relevance scores of entity and relationship documents representations given the the joint probability to retrieve the final entity tuples is computed using a factorization of the conditional probabilities the individual relevance scores on the other hand late fusion consists in matching the directly on a standard document index alongside a set of entity occurrence in each document once we compute the individual relevance scores of each document given a we then aggregate the entity occurrences of the top k results to compute the final joint probability when using traditional retrieval models such as language models or these design patterns can be used to create unsupervised baselines for retrieval since 
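A toy sketch of the early fusion design pattern described above: entity and relationship representations are bags of context terms aggregated from their occurrences, each sub-query is scored against the corresponding representation with a retrieval function, and the individual scores are combined to rank entity tuples. The term-frequency scoring and the small indexes below are illustrative stand-ins only; the actual systems use proper retrieval models (language models or BM25) over dedicated entity and relationship indexes.

```python
from collections import Counter

def tf_score(query, doc):
    """Toy relevance score: term-frequency overlap between a sub-query and a
    bag-of-words representation built by early fusion of context terms."""
    return sum(doc[t] for t in query.lower().split())

def rank_entity_pairs(entity_index, relationship_index, q_e1, q_rel, q_e2, k=3):
    """Score every candidate (E1, E2) tuple by combining the scores of the two
    entity sub-queries and of the relationship sub-query, mirroring the
    factorization of the joint relevance used in early fusion."""
    scored = []
    for (e1, e2), rel_doc in relationship_index.items():
        s = tf_score(q_e1, entity_index.get(e1, Counter())) \
            * tf_score(q_rel, rel_doc) \
            * tf_score(q_e2, entity_index.get(e2, Counter()))
        if s > 0:
            scored.append(((e1, e2), s))
    return sorted(scored, key=lambda x: -x[1])[:k]

# toy early-fusion indexes: terms aggregated from the contexts of occurrences
entity_index = {
    "lisbon":   Counter("capital city portugal tagus".split()),
    "portugal": Counter("country iberian republic".split()),
}
relationship_index = {
    ("lisbon", "portugal"): Counter("capital of located in".split()),
}
print(rank_entity_pairs(entity_index, relationship_index, "city", "capital of", "country"))
```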
our objective was to explore an early fusion approach to retrieval we developed a novel supervised early model for retrieval the entityrelationship dependence model erdm it uses markov random field to model term dependencies of and documents erdm can be seen as an extension of the sequential dependence model sdm for document retrieval in a way that it relies on query term dependencies but creates a more complex graph structure that connects terms of multiple queries and multiple documents to compute the probability mass function under the mrf one of the difficulties we faced while researching retrieval was the lack of test collections we therefore decided to contribute to this research problem by creating a method for creating test collections we realized that web tabular data often include implicit relationships between entities that belong to the same row in a table we developed a table parser that extracts tuples of related entities from wikipedia tables we then extract metadata such as table title or column name and provide it to editors together with the list of entity tuples we conclusions asked editors to create queries in which the list of entity tuples could serve as relevance judgments this process resulted in the creation and publication of the relink query collection comprising queries we believe relink qc will foster research work in retrieval we performed experiments at scale using the web corpus from which we extracted and indexed more than million entity and relationship occurrences we evaluated our methods using four different query sets comprising a total of queries as far as we know this is the largest experiment in retrieval considering the size of the query set and the data collection results show consistently better performance of the erdm model over three proposed baselines when comparing language models and as feature functions we observed variance on the performance depending on the query set furthermore using unsupervised early fusion proved to be very competitive when compared to erdm suggesting that it can be used in some application scenarios where the overhead of computing sequential dependencies might be unfeasible entity filtering and sentiment analysis entity filtering and sentiment analysis are two fundamental text mining problems in orm we participated in two well known external benchmark competitions in both tasks resulting in performance we made the following contributions to these two problems a supervised learning approach for entity filtering on tweets achieving performance using a relatively small training set created and made available word embeddings trained from financial texts a supervised learning approach for sentiment analysis of financial texts entity filtering can be seen as targeted named entity disambiguation we developed a supervised method that classifies tweets as relevant or to a given target entity this task is fundamental in orm as downstream tasks such as prediction can be highly affected by noisy input data we implemented a large set of features that can be generated to describe the relationship between a tweet mentioning a entity and a reference entity representation summary and main contributions we relied on metadata such as entity categories text represented with similarity between tweets and wikipedia entity articles freebase entities disambiguation feature selection of terms based on frequency and feature matrix transformation using svd although our approach can be perceived as relatively simple and low cost we achieved 
first place with an accuracy over at the filtering task of replab in a test set containing more than thousand tweets and different target entities regarding sentiment analysis we decided to focus our efforts in a not so well explored namely financial texts we participated in semeval task which focused on sentiment analysis of financial news and microblogs the task consisted in predicting a real continuous variable from to representing the polarity and intensity of sentiment concerning mentioned in short texts we modeled it as a regression analysis problem previous work in this domain showed that financial sentiment is often depicted in an implicit way we created word embeddings in order to obtain domain specific syntactic and semantic relations between words in this context we combined traditional features and to train a regressor of both sentiment and intensity results showed that different combination of features attained different performances on each nevertheless we were able to obtain cosine similarities above in both and mean average errors below in a scale range of representing less than of the maximum possible error prediction we explored two prediction problems in the context of orm performing analysis of the predictive power of information on the news to predict entity popularity on twitter as well as a study of sentiment aggregate functions to predict political opinion we made the following contribution in this research area analysis of the predictive power of online news regarding entity popularity on twitter for entities that are frequently mentioned on the news analysis of how to combine different sentiment aggregate functions to serve as features for predicting political polls we are aware that entity popularity on social media can be influenced by endogenous and exogenous factors but we are only interested in exploring the interplay between conclusions online news and social media reactions this could be useful for anticipating public relations damage control or even for editorial purposes to maximize attention and consequently revenue we explored different sets of signal extracted from online news mentioning entities that are frequently mentioned on the news such as politicians of footballers these signals could influence or at least are correlated with future popularity of those entities on twitter results show that performance varies depending on the target entity in general results are better in the case of predicting popularity of politicians due to the high unpredictability of live events associated with sports this is a general conclusion of this study as online news do not have predictive power for live events as twitter reactions happen quickly than the publication of the news for such cases results also show that the time of prediction affects the performance of the models for instance in the case of politicians score is higher when time of prediction occurs after lunch time which is an evidence that in politics most of the news events that trigger social media reactions are reported in the morning news the second predictive studied we carried out consisted in using sentiment polarity extracted from tweets to predict political polls there is no consensus on previous research work on what sentiment aggregate functions is more adequate to predict political results we explored several sentiment aggregate functions described in the literature to assess which one or combination would be more effective on predicting polls during the portuguese bailout in our study we 
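One family of entity filtering features mentioned above — the textual similarity between a tweet and a reference representation of the target entity (for example its Wikipedia article), optionally after an SVD projection of the term space — can be sketched with scikit-learn as follows. The texts are placeholders, and the real feature set is considerably richer (entity categories, Freebase information, frequency-based term selection).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

entity_article = "luxury car maker founded in germany producing sports cars"
tweets = [
    "just test drove the new sports car, what a machine",   # likely relevant
    "had dinner at a great restaurant tonight",              # likely irrelevant
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform([entity_article] + tweets)

# raw TF-IDF similarity feature between each tweet and the entity article
sim_raw = cosine_similarity(X[0], X[1:]).ravel()

# the same feature after reducing the term space with SVD
svd = TruncatedSVD(n_components=2, random_state=0)
Z = svd.fit_transform(X)
sim_svd = cosine_similarity(Z[:1], Z[1:]).ravel()

for tweet, a, b in zip(tweets, sim_raw, sim_svd):
    print(f"{a:.2f} {b:.2f}  {tweet}")
```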
achieved the lowest mean average error using a combination of buzz aggregation functions to predict monthly poll variations instead of absolute values on the other hand the most important individual feature was an aggregate function consisting on the logarithm of the ration positive and negative classified tweets a framework for orm we also created a framework specifically tailored for orm that puts together the we tackled throughout this thesis we believe this framework represents a significant contribution and paves the way to future research in the computational problems inherent to the process of monitoring reputation online more precisely we make the following contributions a framework that supports research in entity retrieval and text mining tasks in the context of online reputation monitoring this framework is composed by two major components that can act as independent frameworks relink and texrep summary and main contributions the relink framework that supports comprehensive research work in retrieval supporting the creating of test queries as well as early fusion based approaches for retrieval the texrep framework that is able to collect texts from online media such as twitter or online news and identify entities of interest classify sentiment polarity and intensity the framework supports multiple data aggregation methods as well as visualization and modeling techniques that can be used for both descriptive analytics such as analyze how political polls evolve over time and predictive analytics such as predict elections a study of some practical aspects namely vocabulary size training data size and intrinsic evaluation for the training and publishing word embeddings from the portuguese twitter stream that can be later used for orm related tasks the framework is divided in two distinct components one is dedicated to entity retrieval and the other to text mining in practice these two components can act as two separate frameworks both are adaptable and can be reused in different application scenarios from computational journalism to finance or politics relink framework is designed to facilitate experiments with retrieval query collections texrep was designed with two main challenges in mind it should be able to cope with the text mining problems underlying orm and it should be flexible adaptable and reusable in order to support the specificities of different application scenarios we also presented two use cases of our framework for orm in the first we use relink in the context of computational journalism while in the second we described the design and the implementation of the popmine system an use case of the proposed framework in the scope of the popstar project furthermore we presented a study of the practical aspects of learning word embeddings from the twitter stream our goal was to try to assess the feasibility of producing and publishing general purpose word embeddings for orm results showed that using less than of the available training examples for each vocabulary size might result in we obtained interesting performance on intrinsic evaluation when trained a vocabulary containing most frequent words in a twitter sample of relatively small size we proposed a set of gold standard data for intrinsic evaluation of word embeddings from user generated content nevertheless we realized that evaluation metrics using absolute values as thresholds might not be suitable due to the cosine skewness effect on large dimensional embedding spaces we propose to develop topological intrinsic 
evaluation metrics in future work conclusions limitations and future work one of the major obstacles we faced during the course of this thesis was the limited availability of labeled data for training and evaluation of the different tasks we tackled this is a common limitation in the scope of online reputation monitoring due to this obstacle we did not have the chance to perform extensive experimentation using more than one data source and language for each task this aspect reduces the generalization of the results obtained since they might be biased towards the available datasets we had access to therefore we leave for future work experimentation on each task with multiple datasets using different data sources and languages to perform comparable evaluations we also recognize that we tried to address many different tasks which reduced our capability of addressing every task with the same level of depth nevertheless we believe that exploring several new tasks in the scope of orm constitutes a strong contribution to foster future research work in this area during the course of this thesis we did not have the possibility of performing user studies to assess the global usefulness of our framework for orm we would like to leave that as future work while we had the objective of applying retrieval in online news and social media which represent the natural data sources for orm it was not possible to evaluate our approaches using these type of data sources research work in retrieval is still in its early stages and we believed it was necessary to first contribute to general retrieval and leave for future work specific evaluation in the context of orm we implemented and created a demo of the early fusion approach since it is unsupervised however it was not possible to apply erdm to online news due to the lack of training queries and relevance judgments for parameter tuning in either cases we aim to conduct an user experience in a near future to collect queries and relevance judgments in the context of orm recent work in deep neural networks makes the opportunity to beat the baselines we created in this thesis however most of the tasks we addressed do not have enough labeled data to use these techniques one of the most interesting avenues we would like to explore would be the use of neural networks as feature functions of the erdm model since we have a dataset of more than million entity and relationship extractions this represents an ideal scenario for deep learning we propose to use a window based prediction task similar to the cbow model for training word embeddings given a fixed window size one would learn a neural network that would provide a ranked list of given an input query we believe this approach would reduce limitations and future work the computational costs of the current erdm feature functions since we would not need to keep two huge indexes at query time we would like also to explore different priors in entity and relationship documents within erdm for instance creating source and time sensitive rankings would be useful when using transient information sources another promising avenue is transfer learning specially due to the lack of training resources in the context of orm the possibility of bilingual training or politics to finance transfer knowledge would constitute a major progress in this area references cees bm van riel charles j fombrun et al essentials of corporate communication implementing practices for effective reputation management routledge mats atvesson organization 
from substance to image organization studies diana maynard kalina bontcheva and dominic rout challenges in developing opinion mining tools for social media proceedings of nlp can u tag usergeneratedcontent gianluca demartini claudiu s firan tereza iofciu ralf krestel and wolfgang nejdl why finding entities in wikipedia is difficult sometimes information retrieval jeffrey pound peter mika and hugo zaragoza object retrieval in the web of data in proceedings of the international conference on world wide web pages acm charles j fombrun and cees bm van riel fame fortune how successful companies build winning reputations ft press don stacks a practioner s guide to public relations research measurement and evaluation business expert press krisztian balog yi fang maarten de rijke pavel serdyukov luo si et al expertise retrieval foundations and in information retrieval tom heath and christian bizer linked data evolving the web into a global data space synthesis lectures on the semantic web theory and technology mohamed yahya denilson barbosa klaus berberich qiuyue wang and gerhard weikum relationship queries on extended knowledge graphs in proceedings of the ninth acm international conference on web search and data mining pages acm anastasia giachanou and fabio crestani like it or not a survey of twitter sentiment analysis methods acm comput june issn doi references michela nardo marco and naltsidis walking down wall street with a tablet a survey of stock market predictions using the web journal of economic surveys jasmina miha nada and martin streambased active learning for sentiment analysis in the financial domain information sciences pedro saleiro eduarda mendes rodrigues carlos soares and oliveira texrep a text mining framework for online reputation monitoring new generation doi pedro saleiro natasa eduarda mendes rodrigues and carlos soares relink a research framework and test collection for retrieval in proceedings of the international acm sigir conference on research and development in information retrieval shinjuku tokyo japan august pages doi pedro saleiro natasa eduarda mendes rodrigues and carlos soares early fusion strategy for retrieval in proceedings of the first workshop on knowledge graphs and semantics for text retrieval and analysis with the international acm sigir conference on research and development in information retrieval sigir shinjuku tokyo japan august pages pedro saleiro sarmento eduarda mendes rodrigues carlos soares and oliveira learning word embeddings from the portuguese twitter stream a study of some practical aspects in progress in artificial intelligence epia conference on artificial intelligence epia porto portugal september proceedings pages doi pedro saleiro eduarda mendes rodrigues carlos soares and oliveira feup at task predicting sentiment polarity and intensity with financial word embeddings in proceedings of the international workshop on semantic evaluation pages association for computational linguistics doi pedro saleiro and carlos soares learning from the news predicting entity popularity on twitter in advances in intelligent data analysis xv international symposium ida stockholm sweden october proceedings pages doi pedro saleiro jorge teixeira carlos soares and oliveira timemachine search and visualization of news archives in advances in information retrieval european conference on ir research ecir padua italy march proceedings pages doi pedro saleiro gomes and carlos soares sentiment aggregate functions for political opinion polling using microblog 
2
International Journal of Computing and Business Research (IJCBR), ISSN (Online), Volume, Issue, September

Time Efficient Approach to Offline Hand Written Character Recognition Using Associative Memory Net

Tirtharaj Dash, Final Year Student, Department of Information Technology, National Institute of Science and Technology, India

Abstract. In this paper, an efficient offline handwritten character recognition algorithm is proposed based on an associative memory net (AMN). The AMN used in this work is basically auto-associative. The implementation is carried out completely in the C language. To make the system perform at its best with minimal computation time, a parallel algorithm is also developed using the API package OpenMP. The characters are mainly English alphabets (small and capital) collected from the system and from different persons. The characters collected from the system are used to train the AMN, and the characters collected from different persons are used to test the recognition ability of the net. The detailed analysis showed that the network recognizes the handwritten characters with a recognition rate of ___ in the average case; in the best case it recognizes the collected handwritten characters with ___. The developed network consumes ___ sec on average in the serial implementation and ___ sec on average in the parallel implementation using OpenMP.

Keywords: offline handwritten character, associative memory net, OpenMP, serial, parallel.

Introduction

In recent years, handwritten character recognition has been a challenging and interesting research area in the field of pattern recognition and image processing (Impedovo et al.; Mori et al.). It contributes mainly to the interaction between man and machine and improves the interface between the two (Pradeep et al.). Other human cognition methods, viz. face, speech and thumb-print recognition, are also great areas of research (Imtiaz and Fattah; Khurana and Singh; Kurian and Balakrishnan). Generally, character recognition can be broadly characterized into two types: (i) offline and (ii) online. In the offline method, the pattern is captured as an image and taken for testing, whereas in the online approach each point of the pattern is a function of time, pressure, slant, strokes, etc. Both methods are best suited to their respective applications in the field. Yielding the best accuracy with minimal cost of time is a crucial precondition for a pattern recognition system; therefore, handwritten character recognition continues to be a broad area of research.

In this work, an approach for offline character recognition is proposed using an associative memory network (AMN); to make it time efficient, a parallel algorithm has been developed for the implementation of the AMN using OpenMP (Open Multi-Processing). An AMN is a neural network which can store patterns as memories. When the network is tested with a key pattern, it responds by producing the stored pattern which most closely resembles the key pattern. Based on the testing pattern, an AMN can be of two types: (i) an auto-associative memory net or (ii) a hetero-associative memory net. Both networks contain two layers, (a) an input layer and (b) an output layer. In the auto-associative memory net the input and target patterns are the same (Sivanandam and Deepa), whereas in the hetero-associative memory net the two patterns differ. This work uses the auto-associative net, as the character to be tested is the same as the stored character. The characters considered in this work are the English alphabets, both small and capital letters.
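For concreteness, the storage and recall rules of such an auto-associative memory net can be written as follows. This is the standard textbook formulation (e.g. Sivanandam and Deepa) rather than a formula reproduced from this paper; the bipolar encoding and the threshold \theta are illustrative assumptions.

  W = \sum_{p=1}^{P} s^{(p)} \big(s^{(p)}\big)^{\top}, \qquad
  y_{\mathrm{in},j} = \sum_{i=1}^{n} x_i\, w_{ij}, \qquad
  y_j = f\big(y_{\mathrm{in},j}\big) = \begin{cases} +1, & y_{\mathrm{in},j} \ge \theta \\ -1, & \text{otherwise.} \end{cases}

Here s^{(p)} denotes the p-th stored (system) pattern, x is the key (handwritten) pattern presented at the input layer, and y is the recalled pattern; recognition then amounts to comparing y with the stored patterns.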
This paper is organized as follows. Section 1 presented a general introduction to character recognition systems and methods. Section 2 gives a brief literature review of some methods proposed for character recognition. Section 3 describes the proposed methodology of this work. Section 4 is the results and discussion section, which gives a detailed analysis of the work. The paper is concluded in Section 5 with a note on future work.

Literature Review

The available literature conveys that various algorithms and techniques have been used to accomplish the task of character recognition; some studies are described below. The sources of the literature are Google Scholar, Scopus and the IEEE library. A neural network (NN) has been the backend of character classification in most of the methods, owing to its fast and reliable computation. The methods used in the front end could be (a) statistical approaches, (b) kernel methods, (c) support vector methods, or (d) hybrids with fuzzy logic controllers.

A multilayer perceptron (MLP) was used for Bangla alphabet recognition by Basu et al.; the accuracy achieved in this work was ___ and ___ on the training and testing samples, respectively. Manivannan and Neil proposed and demonstrated an optical network architecture for pattern recognition, with the English alphabet used as patterns for the training and testing process. Pal and Singh proposed an NN-based English character recognition system; an MLP with one hidden layer was used, and about ___ tests were carried out to evaluate the performance of the design. The best-case accuracy obtained in this work was ___. Perwej and Chaturvedi worked on English alphabet recognition using an NN; binary pixels of the alphabets were used to train the NN, and the accuracy achieved was found to be ___. Pal et al. proposed a modified quadratic classifier approach for handwritten numerals of six popular Indian scripts with a high level of recognition accuracy. Dinesh et al. used horizontal and vertical strokes and end points as features for handwritten numerals; this method reported an accuracy rate of ___ in the best case, but it relied on a thinning step that results in loss of features. Yanhua and Chuanjun recommended a novel Chinese character recognition algorithm based on a minimum distance classifier; the algorithm works with two classes of features, structure and statistics, where the statistical feature decides the primary class and the structural feature is used to identify the Chinese character. A good method of character recognition was proposed by Huiqin et al.: a distribution-based algorithm built on image segmentation and the distribution of pixels, with a deflection correction method adopted for flexibility as well as reduction of matching error. This work avoided the burden of extracting the skeleton from the character; the method gave excellent results and was robust.

Methodology

The proposed methodology is demonstrated in Figure 1.

Figure 1: Proposed methodology — (i) collection of English alphabets, both small and capital, from the system and from persons (handwritten); (ii) extraction of pixels from the characters; (iii) implementation of the auto-associative AMN, training and testing, using both serial and parallel algorithms; (iv) comparison of the results from serial and parallel processing with respect to time of execution.

Generation of English alphabets. English alphabets, both small and capital, are designed on the system using MS Paint in Arial font (no bold), in BMP file format; the dimension of the BMP file is ___ with a bit depth of ___. Some of these alphabets are shown in Figure 2.
Figure 2: English alphabets of the system.

Handwritten English alphabets are collected, each one from different persons; these characters are shown in Figure 3.

Figure 3: English alphabets collected from different persons.

Extraction of pixels from the characters. Pixels are extracted from the character images (bitmap files) using a standard image function of MATLAB, imread. The function extracts the decimal value associated with each pixel, and the pixels are then stored in a text file for the experiment.

Auto-associative memory net implementation.

Serial algorithm:
  initialize the weights w to zero
  set the target pattern as the system's pattern
  input the handwritten pattern to the first layer of the AMN
  for i = 1 to n do
    for j = 1 to n do
      update the weight as w_ij(new) = w_ij(old) + x_i * t_j
    end
  end
  for j = 1 to n do
    compute the net input to output node j as y_in,j = sum over i of (x_i * w_ij)
    if y_in,j reaches the threshold then set y_j to the active output value, else to the inactive value
  end

Parallel algorithm:
  initialize the weights w to zero
  set the target pattern as the system's pattern
  input the handwritten pattern to the first layer of the AMN
  #pragma omp parallel shared(w, y_in, chunk, p) private(tid, i, j)
  {
    #pragma omp for schedule(static, chunk)
    for i = 1 to n do
      for j = 1 to n do
        update the weight as w_ij(new) = w_ij(old) + x_i * t_j
      end
    end
    #pragma omp for schedule(static, chunk)
    for j = 1 to n do
      compute the net input to output node j as y_in,j = sum over i of (x_i * w_ij)
      if y_in,j reaches the threshold then set y_j to the active output value, else to the inactive value
    end
  }

System specification. A computer system having ___ GB RAM and four processors is used for the complete work; the operating system is Ubuntu Linux. For auto-optimization by the compiler, the corresponding optimization tag is used in the compilation command.
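To make the two algorithms above concrete, the following is a minimal, self-contained C/OpenMP sketch of training an auto-associative memory net with the outer-product rule and recalling a stored pattern from a noisy key. It is not the authors' original program: the pattern length N, the number of stored patterns P, the function names train, recall and match_percent, the bipolar (+1/-1) encoding, the sign-threshold activation and the random test data are all illustrative assumptions consistent with the pseudocode above.

  /* amn.c - illustrative sketch, not the paper's original implementation. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <omp.h>

  #define N 64           /* pattern length (flattened pixels); assumed value  */
  #define P 4            /* number of stored "system" patterns; assumed value */

  static int w[N][N];    /* weight matrix of the auto-associative net */

  /* Training: outer-product rule, w_ij = sum over patterns of s_i * s_j. */
  static void train(int s[P][N])
  {
      #pragma omp parallel for collapse(2) schedule(static)
      for (int i = 0; i < N; ++i)
          for (int j = 0; j < N; ++j) {
              int acc = 0;
              for (int p = 0; p < P; ++p)
                  acc += s[p][i] * s[p][j];
              w[i][j] = acc;
          }
  }

  /* Recall: y_in,j = sum_i x_i * w_ij, followed by a sign (threshold) activation. */
  static void recall(const int x[N], int y[N])
  {
      #pragma omp parallel for schedule(static)
      for (int j = 0; j < N; ++j) {
          long net = 0;
          for (int i = 0; i < N; ++i)
              net += (long)x[i] * w[i][j];
          y[j] = (net >= 0) ? 1 : -1;
      }
  }

  /* Percentage of components on which two bipolar patterns agree. */
  static double match_percent(const int a[N], const int b[N])
  {
      int same = 0;
      for (int i = 0; i < N; ++i)
          if (a[i] == b[i]) ++same;
      return 100.0 * same / N;
  }

  int main(void)
  {
      static int s[P][N];          /* stored (system) patterns */
      int key[N], out[N];

      srand(1);
      for (int p = 0; p < P; ++p)  /* random bipolar patterns stand in for printed alphabets */
          for (int i = 0; i < N; ++i)
              s[p][i] = (rand() & 1) ? 1 : -1;

      for (int i = 0; i < N; ++i)  /* the key mimics a handwritten sample: pattern 0 with flipped pixels */
          key[i] = s[0][i];
      for (int k = 0; k < 5; ++k)
          key[rand() % N] *= -1;

      double t0 = omp_get_wtime();
      train(s);
      recall(key, out);
      double t1 = omp_get_wtime();

      printf("match with stored pattern 0: %.1f%% (%.6f s)\n",
             match_percent(out, s[0]), t1 - t0);
      return 0;
  }

Under these assumptions the sketch can be built with a command such as gcc -fopenmp amn.c -o amn (optionally adding an optimization flag such as -O2); setting the environment variable OMP_NUM_THREADS=1 gives a serial baseline for the kind of serial-versus-parallel timing comparison reported below.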
Results and Discussion

The contribution of this work is a detailed analysis of the recognition accuracy for all the handwritten English alphabets; the total computation time has been noted for both the serial and the parallel algorithm in order to compare decision-making speed.

Recognition accuracy. Table 1 shows the result of testing the developed AMN on a set of handwritten characters. It should be noted that the network is trained with the machine's alphabets and tested with the handwritten alphabets. For reliability, each handwritten character is checked ___ times and the matching percentage is the average of the results.

Table 1: Recognition accuracy of the AMN for offline handwritten character recognition. (Columns in the original table: system's alphabet; handwritten alphabet for which the highest match is achieved; recognition accuracy; time of computation in sec, serial and parallel.)

  System's alphabet:        A B C D E F G H I J K L M N O P Q R S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z
  Highest-match alphabet:   A B C O F F G H I J K L H H O P O R S T U V U X Y Z o b e d e p y b i j k l m n o p q r s t u v w x y z

Table 1 can be viewed as a detailed analysis of the performance of the developed auto-associative AMN for offline handwritten English alphabet recognition. The network recognizes the handwritten character j with the highest matching of ___. However, the network does not recognize some alphabets, such as d, e, m, n, q, w, a, c, f, g, h, j and k, which are recognized as o, f, h, h, o, u, o, e, p, y, b, i and k respectively, with some matching error.

Level of matching of each alphabet. A plot is given in Figure 4 to show the level up to which each English alphabet is matched by the AMN; the alphabets which are not recognized are awarded the lowest matching score.

Figure 4: Level of matching of each alphabet (x-axis: English alphabet; y-axis: level of matching).

Time efficiency. Since the network is implemented with two algorithms, (i) serial and (ii) parallel, it is worth checking the timing variation in both cases. The plot in Figure 5 shows the speed-up achieved by the execution of the parallel algorithm.

Figure 5: Decision-making speed of the serial and parallel algorithms (x-axis: alphabet serial number; y-axis: decision-making speed in sec).
Conclusion

In this paper, an offline English character recognition system has been proposed. The system is developed using an auto-associative memory net. To make the developed system faster and more reliable, a parallel algorithm has been developed and tested successfully. The experimental study showed that the system recognizes characters with an average recognition rate of ___; the character j is recognized with the highest accuracy rate of ___. The average time required by the serial algorithm to recognize a character is ___ sec, whereas the parallel algorithm takes only ___ sec on average. Automatic checking of a sequence of characters by the network would play a great role in the world of character recognition; the author is currently working on this issue.

References

Basu et al. Handwritten Bangla alphabet recognition using an MLP based classifier. Proceedings of the National Conference on Computer Processing of Bangla.
Dinesh et al. Isolated handwritten Kannada numeral recognition using structural feature and cluster. IISN.
Huiqin et al. The research of algorithm for handwritten character recognition in correcting assignment system. Proceedings of the International Conference on Image and Graphics (ICIG).
Impedovo et al. Optical character recognition. International Journal of Pattern Recognition and Artificial Intelligence.
Imtiaz and Fattah. A local dominant feature selection scheme for face recognition. International Journal of Computing and Business Research.
Khurana and Singh. A model for human cognition. International Journal of Computing and Business Research.
Kurian and Balakrishnan. Continuous speech recognition system for Malayalam language using PLP cepstral coefficient. Journal of Computing and Business Research.
Manivannan and Neil. Optical network hybrid system for many patterns recognition. Proceedings of the International Symposium on Intelligent Systems and Informatics (SISY).
Mori et al. Historical review of OCR research and development. Proceedings of the IEEE.
Pal et al. Handwritten numeral recognition of six popular scripts. International Conference on Document Analysis and Recognition.
Pal and Singh. Handwritten English character recognition using neural network. International Journal of Computer Science and Communication.
Perwej and Chaturvedi. Neural networks for handwritten English alphabet recognition. International Journal of Computer Applications.
Pradeep et al. Diagonal based feature extraction for handwritten alphabet recognition system using neural network. International Journal of Computer Science and Information Technology.
Sivanandam and Deepa. Principles of Soft Computing.
Yanhua and Chuanjun. A recognition algorithm for Chinese character based on minimum distance classifier. Proceedings of the International Workshop on Computer Science and Engineering.
9
oct on the analysis of yiyuan she department of statistics florida state university tallahassee fl abstract in modern data analysis optimization methods are usually favored to obtain sparse estimators in high dimensions this paper performs theoretical analysis of a class of iterative thresholding based estimators defined in this way oracle inequalities are built to show the nearly minimax rate optimality of such estimators under a new type of regularity conditions moreover the sequence of iterates is found to be able to approach the statistical truth within the best statistical accuracy geometrically fast our results also reveal different benefits brought by convex and nonconvex types of shrinkage introduction big data naturally arising in machine learning biology signal processing and many other areas call for the need of scalable optimization in computation although for problems newton or methods converge fast and have efficient implementations they typically do not scale well to high dimensional data in contrast optimization methods have recently attracted a great deal of attention from researchers in statistics computer science and engineering they iterate based on the gradient or a subgradient of the objective function and have each iteration step being in high dimensional statistics a algorithm typically proceeds in the following manner p t t where p is an operator that is easy to compute denotes the gradient of the loss function l and gives the stepsize such a simple iterative procedure is suitable for optimization and converges in arbitrarily high dimensions provided is properly small p can be motivated from the perspective of statistical shrinkage or regularization and is necessary to achieve good accuracy when the dimensionality is moderate or high for example a proximity operator parikh and boyd is associated with a convex penalty function but the problems of interest may not always be convex quite often p is taken as a certain thresholding rule in statistical learning such as scad fan and li the resulting estimators which we call are fixed points of to study the behavior of regardless of the sample size and dimensionality we will establish some oracle inequalities during the last decade people have performed rigorous analysis of many estimators defined as globally optimal solutions to some convex or nonconvex bunea et al zhang and huang bickel et al lounici et al zhang and zhang she among many others pose some new questions first although nicely an associated optimization criterion can be constructed for any given the objective may not be convex and the estimator may not correspond to any functional local or global minimum second there are various types of due to the abundant choices of but a comparative study regarding their statistical performance in high dimensions is lacking in the literature third are usually computed in an inexact way on big datasets indeed most practitioners have to terminate before full computational convergence these disconnects between theory and practice when using iterative thresholdings motivate our work the rest of the paper is organized as follows section introduces the the associated iterative and some necessary notation section presents the main results including some oracle inequalities and sequential analysis of the iterates generated by tisp section provides proof details background and notation thresholding functions definition thresholding function a thresholding function is a real valued function t defined for t and such that i t ii t for 
t iii t iv t t for t a vector version of still denoted by is defined componentwise if either t or is replaced by a vector from the definition u sup t t u must be monotonically and so its derivative is defined almost everywhere on given a critical number can be introduced such that u du for almost every u or ess inf u du u where ess inf is the essential infimum for the perhaps most popular and functions t sgn t t equals and respectively for any arbitrarily given we construct a penalty function t as follows t z u u du z sup s s u u du for any t this penalty will be used to make a proper objective function for the threshold may not equal in general for ease in notation in writing we always assume that is the threshold parameter unless otherwise specified then an important fact is that given any thresholding rule satisfies t t due to property iv from which it follows that t ph t where ph t z h u u du in particular ph t t and ph t t when has discontinuities such as t in t ambiguity may arise in definition to avoid the issue we assume the quantity to be thresholded never corresponds to any discontinuity of this assumption is mild because practically used thresholding rules have few discontinuity points and such discontinuities rarely occur in real applications we assume a model y where x is an n p design matrix y is a response vector in rn is the unknown coefficient vector and is a random vector with mean zero and scale bounded by cf definition in section for more detail then a driven by the computational procedure is defined as a solution to the x t x t where the scaling parameter does not depend on having appropriately large is crucial to guarantee the convergence of the computational procedure all popularly used penalty functions are associated with thresholdings such as the r scad fan and li mcp zhang capped zhang elastic net zou and hastie berhu owen he et she to name a few table lists some examples from a shrinkage perspective thresholding rules usually suffice in statistical learning equation can be in terms of the scaled deign and the corresponding coefficient vector t y t we will show that the in the scaled form does not have to adjust for the sample size which is advantageous in regularization parameter tuning a simple iterative procedure can be defined based on or t t y t t which is called the iterative selection procedure tisp she from theorem of she given an arbitrary tisp ensures the following descent property when f f t here the energy function objective function is constructed as p x f p where the penalty p can be as defined in or more generally p t t q t with q an arbitrary function satisfying q t r and q t if t s for some s furthermore we can show that when any limit point of t is necessarily a fixed point of and thus a see she for more detail therefore f is not necessarily unique when has example penalties like the capped t and ph are all associated with the same because of the mapping from penalty functions to thresholding functions iterating with a thresholding rule is perhaps more convenient than solving a nonconvex penalized optimization problem indeed some penalties like scad are designed from the thresholding viewpoint the following theorem shows thatpthe set of include all locally optimal solutions of theorem let be a local minimum point or a minimum point of if is continuous at x t y x t x must satisfy x t y x t the converse is not necessarily true namely may not guarantee functional local optimality let alone global optimality this raises difficulties in statistical 
analysis we will give a novel and unified treatment which can yield nearly optimal error rate for various thresholdings table some examples of thresholding functions and their associated quantities soft t t ridge t hard t if if p elastic net berhu if t t if t if if min capped t p scad a mcp if if sgn t if t if sgn t if t if t if a if sgn t t if sgn t dp ph t if dt if if lr r if r sgn t max r r otherwise the set is a singleton p if t if if main results to address the problems in arbitrary dimensions with possibly large p n we aim to establish oracle inequalities donoho and johnstone for any t define j j j recall t t ph t for convenience we use to denote when there is no ambiguity and ph are used similarly we denote by an inequality that holds up to a multiplicative constant unless otherwise specified we study scaled satisfying equation where and and so by abuse of notation we still write for and x for as mentioned previously we always assume that is continuous at x t y x t x in sections similarly section assumes that is continuous at t x t y x t t the past works on the lasso show that a certain incoherence requirement must be assumed to obtain sharp error rates in most theorems we also need to make similar assumptions to prevent the design matrix from being too collinear we will state a new type of regularity conditions which are called comparison regularity conditions under which oracle inequalities and sequential statistical error bounds can be obtained for any oracle inequalities under in this subsection we use to make a bound of the prediction error of our regularity condition is stated as follows a ssumption k given x there exist k such that the following inequality holds for any rp kx roughly means that can dominate with the help from and for some k t t theorem let p be any satisfying x y x with log ep and a a constant then for any sufficiently large a the following oracle inequality holds for rp e kx provided k is satisfied for some constants k theorem is applicable to any let s examine two specific cases first consider which indicates that is convex because ph and ph is ph t s ph t ph s due to its concavity zhang and zhang k is always satisfied for any k corollary suppose satisfies then holds for all corresponding without requiring any regularity condition in the case of or scad thresholding does not depend on the magnitude of and we can get a finite complexity rate in the oracle inequality also can be slightly relaxed by replacing with in we denote the modified version by k corollary suppose that corresponds to a bounded nonconvex penalty satisfying t r for some constant c then in the setting of theorem e kx j log ep provided k is satisfied for some constants k remark the side of the oracle inequalities involves a bias term and a complexity term letting in say the bias vanishes and we obtain a prediction error bound of the order j log ep omitting constant factors where j denotes the number of nonzero components in on the other hand the existence of the bias term ensures the applicability of our results to approximately sparse signals for example when has many small but nonzero components we can use a reference with a much smaller support than j to get a lower error bound as a benefit from the tradeoff remark when holds with the proof of theorem shows that the multiplicative constant for can be as small as the corresponding oracle inequalities are called sharp in some works koltchinskii et this also applies to theorem our proof scheme can also deliver highprobability form results 
without requiring an upper bound of remark corollary applies to all like because when t t for t it is worth mentioning that the error rate of j log ep can not be significantly improved in a minimax sense in fact under the gaussian noise contamination and some regularity conditions there exist constants c c such that inf j e kx cpo j c where denotes an arbitrary estimator of and po j j j log see lounici et al for a proof the bound in achieves the minimax optimal rate up to a mild logarithm factor for any n and oracle inequalities under this part uses instead of to make an oracle bound we will show that under another type of comparison regularity conditions all thresholdings can attain the essentially optimal error rate given in corollary we will also show that in the case of our condition is more relaxed than many other assumptions in the literature a ssumption k given x there exist k such that the following inequality holds for any rp kx j p theorem let be a and log ep with a a sufficiently large constant then e kx j holds for any rp if k is satisfied for some constants k remark some fusion thresholdings like those associated with elastic net berhu and cf table involve an additional shrinkage in the situation the complexity term in the oracle inequality should involve both j and we can modify our regularity conditions to obtain such bounds using the same proof scheme the details are however not reported in this paper in addition our results can be extended to with a stepsize parameter given and suppose is introduced such that t t for any then for any as a fixed point of t t y an analogous result can be obtained the only change is that is replaced by to give some more intuitive regularity conditions we suppose is concave on examples include r mcp scad and so on the concavity implies t s t s and so j and c j c where j c is the complement of j and is the subvector of indexed by j then is implied by below for given j j a ssumption k j given x j there exist k such that for any rp j c c or j c when is the it is easy to verify that a sufficient condition for is k j c for some and k has a simper form than in the following we give the definitions of the re and the compatibility condition bickel et van de geer and to make a comparison to a ssumption re j given j p we say that x satisfies re j if for positive numbers c or more restrictively for all rp satisfying assume re j holds when c holds trivially with otherwise indicates k with k so intuitively we have the following relationship in particular is less demanding than re next let s compare the regularity conditions required by and to achieve the nearly optimal error rate recall k and k in theorem and corollary respectively kx j kx ph k implies k indeed for c j ph c j ph ph j ph j ph on the other hand corollary studies when all have the optimal performance guarantee while practically one may initialize with a carefully chosen starting point theorem given any there exists a which minimizes such that holds without requiring any regularity condition in particular if corresponds to a bounded nonconvex penalty as described in corollary then there exists a such that holds free of regularity conditions theorem does not place any requirement on x so it seems that applying may have some further advantages in practice how to efficiently pick a estimator to completely remove all regularity conditions is however beyond the the scope of the current paper for a possible idea of relaxing the conditions see remark finally we make a discussion of the scaling 
parameter our results so far are obtained after performing x with the prediction error is invariant to the transformation but it affects the regularity conditions seen from is related to the stepsize appearing in also known as the learning rate in the machine learning literature from the computational results in section must be large enough to guarantee tisp is convergent the larger the value of is the smaller the stepsize is and so the slower the convergence is based on the machine learning literature slow learning rates are always recommended when training a nonconvex learner artificial neural networks perhaps interestingly in addition to computational efficiency reasons all our statistical analyses caution against using an extremely large scaling when for example k for an unscaled x reads kx ph j which becomes difficult to hold when is very large this makes the statistical error bound break down easily therefore a good idea is to have just appropriately large mildly greater than the sequential analysis of the iterates in the next part also supports the point sequential algorithmic analysis we perform statistical error analysis of the sequence of iterates defined by tisp t x t y x t t where and is the starting point the study is motivated from the fact that in applications are seldom computed exactly indeed why bother to run tisp till computational convergence how does the statistical accuracy improve or deteriorate at t increases lately there are some key advances on the topic for example agarwal et al showed that for convex problems not necessarily strongly convex proximal gradient algorithms can be geometrically fast to approach a globally optimal solution within the desired statistical precision under a set of conditions we however care about the statistical error between t and the genuine in this work we will introduce two comparison regularity conditions analogous to and to present both and error bounds hereinafter denote t by where a is a positive matrix a ssumption k given x there exist k such that the following inequality holds kx a ssumption k given x there exist k such that the following inequality holds kx k j and require a bit more than and respectively due to the theorem and the corollary below perform sequential analysis of the iterates and reveal the explicit roles of k which can often be treated as constants theorem suppose pk p is satisfied for some k then for log ep with a sufficiently large the following error bound holds with probability at least t x t t x k where c c are universal positive constants similarly under the same choice of regularity parameter if k t is satisfied for some k is true with probability at least t x t t x k j corollary in the setting of theorem for any initial point rp we have k k j t x t t x t x t t x under k s and k s s t tively with probability at least here k k remark we can get some sufficient conditions for and similar to the discussions made in section when is strictly less than can be relaxed to kx for some the proof in section also gives results with an additional additive term in the upper bounds similar to remark we can also study with stepsize in which case the weighting matrix in changes from i t x to x t x and the factor in and is replaced by remark theorem still applies when k and are dependent on for example if we use a varying threshold sequence t x t y x t t t then becomes t t x t t x j x this allows for much larger values of to be used in earlier iterations to attain the same accuracy it relaxes the regularity condition 
required by applying a fixed threshold level at the end we some results under to get more intuition and implications for a general x unscaled reads t t x t x k j set to be a number slightly larger than then we know that the prediction error t decays geometrically fast to o log ep with high probability when k are viewed as constants a similar conclusion is true for the estimation error this is simply due to t t x t t t x accordingly there is no need to run tisp till can terminate the algorithm earlier at say tmax log j without sacrificing much statistical accuracy the formula also reflects that the quality of the initial point affects the required iteration number there are some related results in the literature i as mentioned previously in a broad convex setting agarwal et al proved the geometric decay of the optimization error t to the desired statistical precision where is the convergent point loh and wainwright extended the conclusion to a family of nononvex optimization problems and they showed that when some regularity conditions hold every local minimum point is close to the authentic in comparison our results are derived toward the statistical error between t and directly without requiring all local minimum points to be statistically accurate ii zhang showed a similar statistical error bound for an elegant regularization procedure however the procedure carries out an expensive optimization at each step instead involves a simple and cheap thresholding and our analysis covers any acknowledgement the author would like to thank the editor the associated editor and two anonymous referees for their careful comments and useful suggestions that improve the quality of the paper the author also appreciates florentina bunea for the encouragement this work was supported in part by nsf grant proofs throughout the proofs we use c c l to denote universal constants they are not necessarily the same at each occurrence given any matrix a we use r a to denote its column space denote by pa the orthogonal projection matrix onto r a pa a at a at where stands for the moorepenrose pseudoinverse let p p given j p we use xj to denote a column submatrix of x indexed by j definition is called a random variable if there exist constants c c such that p t the scale for is defined as inf e exp rp is called a random vector with scale bounded by if all marginals are satisfying rp examples include gaussian random variables and bounded random variables such as bernoulli note that the assumption that vec is does not imply that the components of must be we begin with two basic facts because they are special cases of lemma and lemma in she respectively we state them without proofs lemma given an arbitrary thresholding rule let p be any function satisr fying p p q where sup s s u u du q is nonnegative and q t for all then y is always a globally optimal solution to ky p it is the unique optimal solution provided is continuous at lemma let ky denote by the unique minimizer of then for any proof of theorem let s u u u for u assume is a local minimum point the proof for a minimum point follows the same lines we write as f for simplicity let h denote the gateaux differential of f at with by the definition of h increment h h f p exists for any h r let l we consider the following directional vectors dj dp t with dj and dj j then for any j dj dj xtj y s sgn dj dj s if if due to the local optimality of dj when we obtain t x sgn when x and x x s to summarize when f achieves a local minimum or a minimum or more generally a local 
minimum at we have sgn xtj x y xtj y when is continuous at xtj x y implies that xtj x y hence must be a satisfying x t y x t proofs of theorem and theorem given let be any be any vector and the first result constructs a useful criterion for on basis of lemma and lemma lemma any satisfies the following inequality for any rp kx x t x i kx where to handle we introduce another lemma p lemma suppose and let log ep then there exist universal constants c c such that for any constants a the following event sup ph t a b occurs with probability at most c exp where t the lemma plays an important role in bounding the last stochastic term in its proof is based on the following results lemma suppose there exists a globally optimal solution o to ky ph such that for any j j p either or p lemma given x and j j p define r p r xj for some j j let po j j log j then for any t p p sup j c exp where l c c are universal constants let r ph o with given in lemma the starting value of j is because when j substituting it into gives kx t x i kx ph r kx ph a because p r t c exp we know e r let with a ab and set b the regularity condition k implies that ph choose a to satisfy a a combining the last two inequalities gives e kx k e e r a with the last inequality due to kx kx for any c the proof of theorem follows the lines of the proof of theorem with replaced by ph j and replaced by e kx j e j e r a the details are omitted proof of theorem from the proof of lemma there exists a which minimizes f l this means that the term x t x i can be dropped from following the lines of section holds under a modified version of k which replaces with kx using the of ph we know that any design matrix satisfies for any k proof of theorem and corollary let f l where l lemma let t x t y x t t then the following triangle inequality holds for any rp t t x t t x f f letting in the lemma we have t i t t x t t x x i moreover under k with kx t i t x combining the last two inequalities gives t x t t x t x t t x k x i p let rp j j log ep we define an event e with its complement given by e c sup ph a b by lemma there exists a universal constant l such that for any l a p e c clearly e implies t x ph o take b a l and then on e we get the desired statistical accuracy bound x i t x t t x k the bound under can be similarly proved noticing that holds for any t corollary is immediately true proofs of lemmas proof of lemma let f l with l define g l given g can be expressed as c where c depends on only let be a satisfying x t x x t y based on lemma and lemma we have g g from which it follows that f f x t x i this holds for any rp proof of lemma let lh a a ph b b and eh lh and because ph eh we prove that eh the occurrence of eh implies that lh o for any o defined by o arg min ph a b with a lemma exists at least one global minimizer states that there oo satisfying ph oo oo and thus lh oo oo this means that sup oo lh oo so eh and it suffices to prove occurs with high probability or more specifically p c exp given j p define rp j j let r we will use lemma to bound its tail probability let j j log jp we claim that p sup j c exp indeed j a p j p i j p i j p i j where the last inequality is due to inequality now follows from lemma set we write with as j noticing some basic facts that j due to stirling s p i po j cj log ep p approximation ii j j j for some c and iii j log ep log p j for any j we get p r t p q x p a sup i j t p q x p sup j t p x x p q p p sup lpo j j c exp exp j log p exp p x exp log p exp j exp p where the last inequality due to the sum of geometric 
series proof of lemma similar to the proof of lemma we set fh l ph with l and construct gh fh l l under for any gh fh t i x t x o let be a globally optimal solution to fh then o o x t o x t y gives fh o gh o o gh o o fh o with the second inequality due to lemma therefore o must also be a global minimizer of fh and by definition o demonstrates a threshold gap as desired proof of lemma by definition is a stochastic process with increments the induced metric on is euclidean d to bound the metric entropy log n d where n d is the smallest cardinality of an that covers under d we notice that is in a jdimensional in rp the number of such balls pxj bp j p is at most jp where bp denotes the unit ball in rp by a standard volume argument see vershynin p p j j log log log n j d log j j where c is a universal constant the conclusion follows from dudley s integral bound talagrand proof of lemma we use the notation in the proof of lemma with g defined in by lemma and lemma we obtain g t g t namely t i t t l to cancel the term we give two other inequalities based on secondorder bounds l l t t t i t t x l t t t i l t t x adding the three inequalities together gives the triangle inequality references agarwal negahban and wainwright j fast global convergence of gradient methods for statistical recovery ann bickel ritov and tsybakov a b simultaneous analysis of lasso and dantzig selector the annals of statistics pages bunea tsybakov a and wegkamp sparsity oracle inequalities for the lasso electronic journal of statistics donoho and johnstone i ideal spatial adaptation via wavelet shrinkages biometrika fan and li variable selection via nonconcave penalized likelihood and its oracle properties journal of the american statistical association he she and wu stationary sparse causality network learning mach learn koltchinskii lounici and tsybakov a b penalization and optimal rates for noisy matrix completion ann loh and wainwright j regularized with nonconvexity statistical and algorithmic theory for local optima mach learn lounici pontil tsybakov a and van de geer oracle inequalities and optimal inference under group sparsity annals of statistics owen a b a robust hybrid of lasso and ridge regression prediction and discovery contemporary mathematics parikh and boyd proximal algorithms foundations and trends in optimization she y iterative selection procedures for model selection and shrinkage electronic journal of statistics she y an iterative algorithm for fitting nonconvex penalized generalized linear models with grouped predictors computational statistics and data analysis she y selective factor extraction in high dimensions arxiv preprint talagrand the generic chaining upper and lower bounds of stochastic processes springer monographs in mathematics springer van de geer and on the conditions used to prove oracle results for the lasso electronic journal of statistics vershynin introduction to the analysis of random matrices compressed sensing zhang nearly unbiased variable selection under minimax concave penalty ann zhang and huang j the sparsity and bias of the lasso selection in linear regression ann statist zhang and zhang a general theory of concave regularization for high dimensional sparse estimation problems statist zhang analysis of convex relaxation for sparse regularization mach learn zou and hastie regularization and variable selection via the elastic net jrssb
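To make the sequential analysis of the iterates concrete, the following is a minimal sketch of a thresholded iterative selection procedure of the kind analysed in this part, assuming hard thresholding as the rule Theta, the scaling k0 taken as the spectral norm of X (so that the step size 1/k0^2 is small enough for descent), and a simple relative-change stopping criterion. The names tisp, hard_threshold, k0 and lam, the threshold value used in the toy call, and the stopping rule are illustrative assumptions and not the paper's exact specification.

```python
import numpy as np

def hard_threshold(z, lam):
    """One possible thresholding rule Theta(.; lam): zero out entries below lam in magnitude."""
    out = z.copy()
    out[np.abs(out) < lam] = 0.0
    return out

def tisp(X, y, lam, k0=None, max_iter=200, beta0=None):
    """Sketch of the iteration beta^{t+1} = Theta(beta^t + X^T (y - X beta^t) / k0^2; lam).

    Here 1/k0^2 plays the role of the step size (learning rate); k0 is chosen
    mildly larger than necessary (spectral norm of X) rather than extremely large,
    in line with the discussion of the scaling parameter above.
    """
    n, p = X.shape
    if k0 is None:
        k0 = np.linalg.norm(X, 2)          # spectral norm of the design matrix
    beta = np.zeros(p) if beta0 is None else beta0.copy()
    for t in range(max_iter):
        grad_step = beta + X.T @ (y - X @ beta) / k0**2
        beta_new = hard_threshold(grad_step, lam)
        # early termination: no need to run until full numerical convergence
        if np.linalg.norm(beta_new - beta) <= 1e-8 * (1 + np.linalg.norm(beta)):
            return beta_new
        beta = beta_new
    return beta

# toy call on synthetic sparse data (values chosen only for illustration)
rng = np.random.default_rng(0)
n, p, s = 100, 300, 5
X = rng.standard_normal((n, p))
beta_star = np.zeros(p); beta_star[:s] = 3.0
y = X @ beta_star + 0.5 * rng.standard_normal(n)
beta_hat = tisp(X, y, lam=0.2)
```

Consistent with the error bounds discussed above, one would stop after a number of iterations of order log of the desired accuracy rather than iterating to exact convergence; the max_iter budget and the relative-change test play that role in this sketch.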
10
sep compatible actions and tensor products valeriy bardakov and mikhail neshchadim abstract for a pair of groups g h we study pairs of actions g on h and h on g such that these pairs are compatible and tensor products g h are defined introduction brown and loday introduced the tensor product g h for a pair of groups g and h following works of miller and lue the investigation of the tensor product from a group theoretical point of view started with a paper by brown johnson and robertson the tensor product g h depends not only on the groups g and h but also on the action of g on h and on the action of h on moreover these actions must be compatible see the definition in section in the present paper we study the following question what actions are compatible the paper is organized as follows in section we recall a definition of tensor product formulate some its properties and give an answer on a question of thomas proving that there are nilpotent group g and some group h such that in g h the derivative subgroup g h is equal to in the section we study the following question let a group h acts on a group g by automorphisms is it possible to define an action of g on h such that this pair of actions are compatible some necessary conditions for compatibility of actions will be given and in some cases will be prove a formula for the second action if the first one is given in the section we construct pairs compatible actions for arbitrary groups and for nilpotent groups give a particular answer on the question from section in section we study groups of the form g and describe compatible actions preliminaries in this article we will use the following notations for elements x y in a group g the conjugation of x by y is xy y xy and the commutator of x and y is x y xy y xy we write for the derived subgroup of g g g gab for the abelianized group the second hypercenter g of g is the subgroup of g such that g g date march mathematics subject classification primary secondary key words and phrases tensor product compatible action nilpotent group bardakov and neshchadim where g z g is the center of a group recall the definition of the tensor product g h of groups g and h see it is defined for a pair of groups g and h where each one acts on the other on right g h g g h g h h g h h g hg and on itself by conjugation in such a way that for all g g and h h and h g g g in this situation we say that g and h act compatibly on each other the tensor product g h is the group generated by all symbols g h g g h h subject to the relations h h and g g for all g g h in particular as the conjugation action of a group g on itself is compatible then the tensor square of a group g may always be defined also the tensor product g h is defined if g and h are two normal subgroups of some group m and actions are conjugations in m the following proposition is well known we give a proof only for fullness proposition let g and h be abelian groups independently on the action of g on h and h on g the group g h is abelian see proposition let g and h be arbitrary groups if the actions of g on h and h on g are trivial then the group g h gab h ab is the abelian tensor product proof we have the equality g h g h where g is the action of the commutator g by conjugation on g but g is abelian and g analogously h hence g h is abelian from the previous formula and triviality actions we have g g g g g g g analogously h hence g h is abelian remind presentation of tensor product as a central extension see the derivative subgroup of g by h is called the 
following subgroup dh g g h hg g h g g h hi the map g h dh g defined by g h g gh is a homomorphism its kernel a ker is the central subgroup of g h and g acts on g h by the rule g h x gx hx x g there exists the short exact sequence a g h dh g compatible actions and tensor products in this case a can be viewed as z dh g via conjugation in g h under the action induced by setting a g ax a a x g h x the following proposition gives an answer on the following question is there nonabelian tensor product g h such that g h g which of thomas formulated in some letter to the authors proposition let g fn fn k be a free nilpotent group of rank n and h aut g is its automorphism group then dh g g h proof let fn be a free group of rank n with the basis xn g fn fn be a free k nilpotent group for k let g acts trivially on h and elements of h act by automorphisms on it is easy to see that these actions are compatible let us show that in this case g h to do it let us prove that lies in g h take h aut g which acts on the generators of g by the rules xn xn then xn xn hence the generator lies in g h analogously xn lie in g h this completes the proof what actions are compatible in this section we study question let a group h acts on a group g by automorphisms is it possible to define an action of g on h such that this pair of actions are compatible consider some examples example let us take g a h b in dependence on actions we have three cases if the action of h on g and the action of g on h are trivial then by the second part of proposition g h is abelian tensor product let h acts on g ab and the action g on h is trivial it is not difficult to check that g and h act compatibly on each other to find dh g g h we calculate a b ab a hence dh g but dg h by the definition g h is generated by elements a b b a using the defining relations h g h g g bardakov and neshchadim we find aa ab on the other side a b ba a b a b hence a and in this case we have the same result let h acts on g ab and g acts on in this case g and h act on each other indeed a a b but hence the equality a a ab b a ba does not hold ab a b a let g h be some groups actions of g on h and h on g are defined by homomorphisms g aut h h aut g and by definition gh h hg g g g h the actions are compatible if h g h and g h h g for all g g h in this case we will say that the pair is compatible rewrite these equalities in the form h and g c h where b g is the inner automorphism of g which is induced by conjugation of g gb g g g and analogously b h is the inner automorphism of h which is induced by the conjugation of h b h h h compatible actions and tensor products theorem if the pair defines compatible actions of h on g and g on h then the following inclusions hold naut g h inn g naut h g inn h here inn g and inn h are the subgroups of inner automorphisms if h aut g is an embedding and naut g h inn g then defining g aut h by the formula g h h b g h h we get the compatible actions proof the first claim immediately follows from the relations to prove the second claim it is enough to check or that is equivalent the equality g g h h using the definition rewrite the left side of h g g rewrite the right side of g g g g h b b from and h g g b b using the homomorphism b g h b g b h b g h in the last equality we used the formula hence the equality holds question are the inclusions naut g h inn g naut h g inn h sufficient for compatibility of the pare bardakov and neshchadim compatible actions for nilpotent groups at first recall the following definition definition let g and h 
be groups and e g e h are their normal subgroups we will say that g is comparable with h with respect to the pare if there are homomorphisms g h h g such that x x mod for all x g y h y y mod x y y note that if then are mutually inverse isomorphisms the following theorem holds theorem let g h be groups and there exist homomorphisms g h h g such that x x mod g y y mod h for all x g y then the action of g on h and the action of h on g by the rules xy y y y x x x x g y h are compatible the following equalities hold x y y y x y x x g y proof let us prove that the following relation holds x y y for this denote the left hand side of this relation by l and transform it l x y x y y x y c y c x c y c here c c since c g then the commutator y c lies in the center of hence l xx y denote the right hand side of this relation by r and transform it r y y y we see that l r the first relation from the definition of compatible action holds the checking of the second relation is the similar compatible actions and tensor products from this theorem we have particular answer on question for nilpotent groups corollary if g h are nilpotent groups then any pare of homomorphisms g h h g define the compatible action problem let g and h be free nilpotent groups by corollary any pair of homomorphisms where hom g h hom h g defines a tensor product m g give a classification of the groups m note that for arbitrary groups corollary does not hold indeed let g i h i be free groups of rank define the homomorphisms g h h g by the rules then the conditions of compatible actions does not hold tensor products g note that the group aut is trivial and hence any group g acts on only trivially this section is devoted to the answer on the following question question let g be a group and aut g be an automorphism of order let and aut g such that under what conditions the pare is compatible if aut g is trivial automorphism then by the second part of proposition g gab is an abelian tensor product in the general case we have proposition let g be a group be a cyclic group of order two with the generator aut g be a homomorphism g aut be the trivial homomorphism then the pare of actions is compatible if and only if for any g g holds g gc g where c g is a central element of g such that c g c g in particular if the center of g is trivial then g gab bardakov and neshchadim proof since inn g normalizes then for every g g holds b g using this equality for arbitrary element x g we get since is an arbitrary element of g then c g is a central element of applying to the equality g gc g we have g g c g gc g c g that is c g c g for an arbitrary abelian group a we know that a z a the following proposition is some analog of this property for tensor product proposition let a be an abelian group is the cyclic group of order and acts on the elements of a by the following manner a a then the tensor product a is defined and there is an isomorphism a a proof it is not difficult to check that defined actions are compatible since a acts on trivially and a is abelian then the defining relations of the tensor product h h a a h have the form h a h h h a h the relations a a a a h give only one relation a a a a which follows from since the set of relations is a full system of relations for a then there exists the natural isomorphism of a on a that is defined by the formular a a a a acknowledgement the authors gratefully acknowledge the support from the and also we thank ivanov lavrenov and thomas for the interesting discussions and useful suggestions compatible actions 
and tensor products references brown loday excision homotopique en basse dimension acad sci paris ser i math brown loday van kampen theorems for diagrams of spaces topology with an appendix by zisman brown johnson robertson some computations of tensor products of groups algebra donadze larda thomas more on the tensor product and the bogomolov multiplier preprint pp lue the ganea map for nilpotent groups london math soc miller the second homology group of a group relations among commutators proceedings ams sobolev institute of mathematics novosibirsk russia novosibirsk state university novosibirsk russia novosibirsk state agrarian university dobrolyubova street novosibirsk russia address bardakov sobolev institute of mathematics and novosibirsk state university novosibirsk russia address neshch
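Because the compatibility conditions are easy to misread in the displayed formulas, the following is a small brute-force checker, assuming the standard form of the two conditions for right actions, g^(h^{g'}) = ((g^{g'^{-1}})^h)^{g'} and h^(g^{h'}) = ((h^{h'^{-1}})^g)^{h'}. The concrete choice G = A_3 and H = S_3 acting on each other by conjugation inside S_3 only illustrates the remark that two normal subgroups of a common group always act compatibly; it is not an example taken from the paper.

```python
from itertools import permutations

# Permutations of {0,1,2} represented as tuples p with p[i] = image of i.
def compose(p, q):                 # (p*q)(i) = p(q(i)): apply q first, then p
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

def conj(x, y):                    # x^y = y^{-1} x y, conjugation in the ambient group
    return compose(inverse(y), compose(x, y))

def sign(p):                       # parity via inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return 1 if inv % 2 == 0 else -1

S3 = list(permutations(range(3)))
A3 = [p for p in S3 if sign(p) == 1]

def compatible(G, H, act_on_G, act_on_H):
    """act_on_G(g, h) = g^h in G and act_on_H(h, g) = h^g in H; check both conditions."""
    ok1 = all(act_on_G(g, act_on_H(h, g2)) == conj(act_on_G(conj(g, inverse(g2)), h), g2)
              for g in G for g2 in G for h in H)
    ok2 = all(act_on_H(h, act_on_G(g, h2)) == conj(act_on_H(conj(h, inverse(h2)), g), h2)
              for h in H for h2 in H for g in G)
    return ok1 and ok2

# Both actions given by conjugation inside S3 (A3 is normal, so the actions are well defined).
act_on_G = lambda g, h: conj(g, h)
act_on_H = lambda h, g: conj(h, g)
print(compatible(A3, S3, act_on_G, act_on_H))          # True
# Keeping the conjugation action of G on H but letting H act trivially on G
# breaks the second condition, so this pair of actions is not compatible.
print(compatible(A3, S3, lambda g, h: g, act_on_H))    # False
```

The second call shows that replacing only one of the two actions by the trivial action need not preserve compatibility, which is the kind of phenomenon the examples in this section are concerned with.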
4
spanning tree congestion and computation of generalized partition sunil chandran feb yun kuen cheung and davis department of computer science and automation indian institute of science india sunil max planck institute for informatics saarland informatics campus germany ycheung dissac abstract we study a natural problem in graph sparsification the spanning tree congestion stc problem informally the stc problem seeks a spanning tree with no routing too many of the original edges the root of this problem dates back to at least years ago motivated by applications in network design parallel computing and circuit design variants of the problem have also seen algorithmic applications as a preprocessing step of several important graph algorithms for any general connected graph with n vertices and m edges we show that its stc is at most o mn which is asymptotically optimal since we also demonstrate graphs with stc at least mn we present a time algorithm which computes a spanning tree with congestion o mn log n we also present another algorithm for computing a spanning tree with congestion o mn this algorithm runs in time when m n log n for achieving the above results an important intermediate theorem is generalized theorem for which chen et al gave a proof we give the first elementary and constructive proof by providing a local search algorithm with running time which is a key ingredient of the time algorithm we discuss a few consequences of the theorem concerning graph partitioning which might be of independent interest we also show that for any graph which satisfies certain expanding properties its stc is at most o n and a corresponding spanning tree can be computed in polynomial time we then use this to show that a random graph has stc n with high probability introduction graph generally describes a transformation of a large input graph into a graph that preserves certain feature this work was done while this author was visiting max planck institute for informatics germany supported by alexander von humboldt fellowship part of the work done while this author was a visitor at the courant institute nyu the visit was funded in part by new york university sunil chandran yun kuen cheung and davis issac distance cut congestion flow either exactly or approximately the algorithmic value is clear since the smaller graph might be used as a preprocessed input to an algorithm so as to reduce subsequent running time and memory requirement in this paper we study a natural problem in graph sparsification the spanning tree congestion stc problem informally the stc problem seeks a spanning tree with no routing too many of the original edges the problem is by network design applications where designers aim to build sparse networks that meet traffic demands while ensuring no connection edge is too congested indeed the root of this problem dates back to at least years ago under the name of load factor with natural motivations from parallel computing and circuit design applications the stc problem was formally defined by ostrovskii in and since then a number of results have been presented the probabilistic version of the stc problem coined as probabilistic capacity mapping also finds applications in several important graph algorithm problems the problem two canonical goals for graph sparsification problems are to understand the between the sparsity of the output graph s and how well the feature is preserved and to devise efficient algorithms for computing the sparser graph s these are also our goals for the stc 
problem we focus on two scenarios a general connected graphs with n vertices and m edges and b graphs which exhibit certain expanding properties for a we show that the p spanning tree congestion stc is at most o mn which is a factor of better than the trivial bound of we present a algorithm which computes a spanning tree with congestion o mn log n we also present another algorithm for computing a spanning tree with congestion o mn this algorithm runs in time when m n n for almost all ranges of average degree we also demonstrate graphs with stc at least mn for b we show that the expanding properties permit us to devise polynomialtime algorithm which computes a spanning tree with congestion o n using this result together with a separate argument we show that a random graph has n stc with high probability for achieving the results for a an important intermediate theorem is generalized theorem which was first proved by chen et al their proof uses advanced techniques in topology and homology theory and is definition in a graph g v e a is a of v into vj such that for each j k g vj is connected theorem theorems let g v e be a graph let w be a weight function w v for any u v let w u p w v given any k distinct terminal vertices tk and k positive for brevity we say for henceforth spanning tree congestion pk integers tk such that for each j k tj w tj and ti w v there exists a of v into vj such that for each j k tj vj and w vj tj w v one of our main contributions is to give the first elementary and constructive proof by providing a local search algorithm with running time theorem a there is an algorithm which given a graph computes a satisfying the conditions stated in theorem in time b if we need a instead of the input graph remains assumed to be the algorithm s running time improves to log k we make three remarks first the log k algorithm is a of our algorithm for computing a spanning tree with congestion o mn second since theorem guarantees the existence of such a partition the problem of computing such a partition is not a decision problem but a search problem our local search algorithm shows that this problem is in the complexity class pls we raise its completeness in pls as an open problem third the running times do not depend on the weights the stc problem related problems and our results given a connected graph g v e let t be a spanning tree for an edge e u v e its detour with respect to t is the unique path from u to v in t let dt e t denote the set of edges in this detour the stretch of e with respect to t is e t the length of its detour the dilation of t is e t the of an edge e t is ec e t f e e dt f t the number of edges in e whose detours contain the congestion of t is cong t ec e t the spanning tree congestion stc of the graph g is stc g mint cong t where t runs over all spanning trees of we note that there is an equivalent definition for which we will use in our proofs for each e t removing e from t results in two connected components let ue denote one of the components then ec e t ue v ue various types of congestion stretch and dilation problems are studied in computer science and discrete mathematics in these problems one typically seeks a spanning tree or some other structure with minimum congestion or dilation we mention some of the problems where minimization is done over all the spanning trees of the given graph the low stretch spanning tree lsst problem is to find a spanning tree which minimizes the total stretch of all the edges of it is easy to see that minimizing the total 
stretch is equivalent to minimizing the total of the selected spanning tree the stc problem is to find a spanning tree of minimum congestion notation hides all polynomial factors in input size sunil chandran yun kuen cheung and davis issac tree spanner problem is to find a spanning tree of minimum dilation the more general spanner problem is to find a sparser subgraph of minimum distortion there are other congestion and dilation problems which do not seek a spanning tree but some other structure the most famous among them is the bandwidth problem and the cutwidth problem see the survey for more details among the problems mentioned above several strong results were published in connection with the lsst problem alon et al had shown a lower bound of max n log n m upper bounds have been derived and many efficient algorithms have been devised the current best upper bound is m log n since total stretch is identical to total the best upper bound for the lsst problem automatically implies an m n log n upper bound on the average but in the stc problem we concern the maximum as we shall p see for some graphs the maximum has to be a factor of larger than the average in comparison there were not many strong and general results for the stc problem though it was studied extensively in the past years the problem was formally proposed by ostrovskii in prior to this simonson had studied the same parameter under a different name to approximate the cut width of graph a number of results were presented on this topic some complexity results were also presented recently but most of these results concern special classes of graphs the most general result regarding stc of general graphs is an o n n upper bound by rautenbach and regen in and a matching lower bound by ostrovskii in note that the above upper bound is not interesting when the graph is sparse since there is also a trivial upper bound of in this paper we come up with a strong improvement to these bounds after years theorem informal for a connected graph g with n vertices and m edges its spanning tree congestion is at most o mn p in terms of average degree davg we can state this upper bound as o n davg there is a matching lower bound our proof for achieving the o mn upper bound is constructive it runs in exponential time in general for graphs with m n n edges it runs in time by using an algorithm of chen et al for computing confluent flow from splittable flow we improve the running time to polynomial but with a slightly worse upper bound guarantee of o mn log n motivated by an open problem raised by ostrovskii concerning stc of random graphs we formulate a set of expanding properties and prove that for any graph satisfying these properties its stc is at most o n we devise a polynomial time algorithm for computing a spanning tree with congestion o n for such graphs this result together with a separate argument n permit us to show that for random graph g n p with p c log for some n spanning tree congestion small constant c its stc is n with high probability thus resolving the open problem raised by ostrovskii completely graph partitioning and the generalized theorem it looks clear that the powerful theorem can make an impact on graph partitioning we discuss a number of its consequences which might be of wider interest graph is a prominent topic in graph and has a wide range of popular goal is to partition the vertices into sets such that the number of edges across different sets is small while the objective minimizing the total number of edges across 
different sets is more widely studied in various applications the more natural objective is the objective minimizing the maximum number of edges leaving each set the objective is our focus here depending on applications there are additional constraints on the sets in the partition two natural constraints are i balancedness the sets are approximately balanced in sizes and ii each set induces a connected subgraph the balancedness constraint appears in the application of domain decomposition in parallel computing while the constraint is motivated by algorithms for spanning tree construction imposing both constraints simultaneously is not feasible for every graph for instance consider the star graph with more than vertices and one wants a thus it is natural to ask for which graphs do partitions satisfying both constraints exist theorem implies a simple sufficient condition for existence of such partitions by setting the weight of each vertex in g to be its degree and using the elementary fact that the maximum degree g n for any graph g on n vertices and m edges we have proposition if g is a graph with m edges then there exists a such that the total degree of vertices in each part is at most consequently the objective is also at most due to expander graphs this bound is optimal up to a small constant factor this proposition together with lemma implies the following crucial lemma for achieving some of our results lemma let g be a graph with m edges then stc g proposition can be generalized to include approximate balancedness in terms of number of vertices by setting the weight of each vertex to be plus its degree in g we have proposition given any fixed c if g is a graph with m edges and n vertices then there exists a such that the total note that the stc problem is relevant only for connected graphs since the threshold function for graph connectivity is logn n this result applies for almost all of the relevant range of values of sunil chandran yun kuen cheung and davis issac degree of vertices in each part is at most and the number of vertices n in each part is at most c further related work concerning stc problem okamoto et al gave an algorithm for computing the exact stc of a graph the probabilistic version of the stc problem coined as probabilistic capacity mapping is an important tool for several graph algorithm problems the problem showed that in the probabilistic setting distance and capacity are interchangeable which briefly says a general upper bound for one objective implies the same general upper bound for the other thus due to the results on lsst there is an upper bound of log n on the maximum average congestion s result also implies an o log n approximation algorithm to the problem improving upon the o n approximation algorithm of feige and krauthgamer however in the deterministic setting such interchanging phenomenon does not hold there is a simple tight bound n for dilation but for congestion it can be as high as n n for the precise definitions more background and key results about the concepts we have just discussed we recommend the writing of andersen and feige graph is a prominent research topic with wide applications so it comes no surprise that a lot of work has been done on various aspects of the topic we refer readers to the two extensive surveys by schaeffer and by teng kiwi spielman and teng formulated the problem and gave bounds for classes of graphs with small separators which are then improved by steurer on the algorithmic side many of the related problems are so the 
focus is on devising approximation algorithms sparkled by the seminal work of arora rao and vazirani on sparsest cut and of spielman and teng on local clustering graph algorithms with various constraints have attracted attention across theory and practice we refer readers to for a fairly recent account of the development the objective has been extensively studied the objective while striking as the more natural objective in some applications has received much less attention the only algorithmic work on this objective and its variants are svitkina and tardos and bansal et al none of the above work addresses the constraint the classical version of theorem the vertex weights are uniform was proved independently by and s proof uses homology theory and is s proof is elementary and is constructive implicitly but he did not analyze the running time polynomial time algorithms for constructing the were devised for k but no algorithm was known for general graphs with k recently hoyer and thomas provided a clean presentation of s proof in there was a paper by ma and ma in journal of computer science and technology which claimed a algorithm for all however according to a recent study ma and ma s algorithm can fall into an endless loop also said the algorithm should be wrong see spanning tree congestion by introducing their own terminology which we use for our constructive proof of theorem notation given a graph g v e an edge set f e and disjoint vertex subsets v we let f e f and technical overview to prove the generalized theorem constructively we follow the same framework of s proof and we borrow terminology from the recent presentation by hoyer and thomas but it should be emphasized that proving our generalized theorem is not since in s proof at each stage a single vertex is moved from one set to other to make progress while making sure that the former set remains connected in our setting in addition to this we also have to ensure that the weights in the partitions do not exceed the specified limit and hence any vertex that can be moved from one set to another need not be candidate for being transferred the proof is presented in section as discussed a crucial ingredient for our upper bound results is lemma which is a direct corollary of the generalized theorem the lemma takes care of the cases for other cases we provide a recursive way to construct a low congestion spanning tree see section for details for showing our lower bound for general graphs the challenge is to maintain high congestion while keeping density small to achieve this we combine three expander graphs with little overlapping between them and we further make those overlapped vertices of very high degree this will force a adjacent to the centroid of any spanning tree to have high congestion see section for details we formulate a set of expanding properties which permit constructing a spanning tree of better congestion guarantee in polynomial time the basic idea is simple start with a vertex v of high degree as the root now try to grow the tree by keep attaching new vertices to it while keeping the invariant that the subtrees rooted at each of the neighbours of v are roughly balanced in size each such subtree is called a branch but when trying to grow the tree in a balanced way we will soon realize that as the tree grow all the remaining vertices may be seen to be adjacent only to a few number of heavy branches to help the balanced growth the algorithm will identify a transferable vertex which is in a heavy branch and it and its 
descendants in the tree can be transferred to a lighter branch another technique is to use multiple rounds of matching between vertices in the tree and the remaining vertices to attach new vertices to the tree this will tend to make sure that all subtrees do not grow uncontrolled by showing that random graph satisfies the expanding properties with appropriate parameters we show that a random graph has stc of n with high probability generalized theorem we prove theorem in this section observe that the classical theorem follows from theorem by taking w v for all v v and tj nj sunil chandran yun kuen cheung and davis issac for all j k we note that a perfect generalization where one requires that w vj tj is not possible think when all vertex weights are even integers while some tj is odd let g v e be a graph on n vertices and m p edges and w v be a weight function for any subset u v w u w u let wmax w v key combinatorial notions we first highlight the key combinatorial notions used for proving theorem see figures and for illustrations of some of these notions fitted partial partition first we introduce the notion of fitted partial partition fpp an fpp a is a tuple of k subsets of v ak such that the k subsets are pairwise disjoint and for each j k tj aj g aj is connected and w aj tj wmax we say the set is fitted for satisfying this inequality we say an fpp is a strict fitted partial partition sfpp if ak is a proper subset of v we say the set aj is light if w aj tj and we say it is heavy otherwise note that there pk exists at least one light set in any sfpp for otherwise w ak tj w v which means ak v also note that by taking aj tj we have an fpp and hence at least one fpp exists configuration for a set aj in an fpp a and a vertex v aj tj we define the reservoir of v with respect to a denoted by ra v as the vertices in the same connected component as tj in g aj v note that v ra v for a heavy set aj a sequence of vertices zp for some p is called a cascade of aj if aj tj and aj ra zi for all i the cascade is called a null cascade if p if the cascade is empty note that for light set we do not need to define its cascade since we do not use it in the proof see figure a configuration ca is defined as a pair a d where a ak is an fpp and d is a set of cascades which consists of exactly one cascade possibly a null cascade for each heavy set in a a vertex that is in some cascade of the configuration is called a cascade vertex given a configuration we define rank and level inductively as follows any vertex in a light set is said to have level for i a cascade vertex is said to have rank i if it has an edge to a vertex but does not have an edge to any vertex for i a vertex u is said to have level i for i if u ra v for some cascade vertex v but u ra w for any cascade vertex w such that rank of w is less than i a vertex that is not in ra v for any cascade vertex v is said to have level a configuration is called a valid configuration if for each heavy set aj rank is defined for each of its cascade vertices and the rank is strictly increasing in the spanning tree congestion ra ra ra tj fig given a configuration a d and a heavy set aj in a the figure shows a cascade for the heavy set aj and several reservoirs of the cascade vertices for any z z ra z a cascade vertex z is a of g aj g aj z is disconnected the removal of z from aj will lead to at least two connected components in g aj z and the connected component containing tj is the reservoir of z we identify tj but we clarify that a terminal vertex is never in a 
cascade each epoch between z and z and also the epoch above is a subset of vertices b aj where b z and g b is connected note that in general it is possible that there is no vertex above the last cascade vertex cascade if zp is the cascade then rank rank zp note that by taking aj tj and taking the null cascade for each heavy set in this case aj is heavy if w tj tj we get a valid configuration see figure configuration vectors and their total ordering for any vertex we define its neighborhood level as the smallest level of any vertex adjacent to it a vertex v of level is said to satisfy maximality property if each vertex adjacent on it is either a cascade vertex has a level of at most or is one of the terminals tj for some j for any a valid configuration is called an configuration if all vertices having level at most satisfy the maximality property note that by definition any valid configuration is a configuration for a configuration ca ak d we define sa v an edge uv is said to be a bridge in ca if u sa v aj for some j k and level v sunil chandran yun kuen cheung and davis issac rank rank rank rank rank rank vertices in all light sets level fig an instance of a valid configuration every blue represent an edge from a cascade vertex to a vertex in some reservoir or light set every cascade vertex connected to a light set has rank and all vertices in the epoch immediately below a rank cascade vertex are of level inductively every cascade vertex connected to a vertex of level i has rank i and all vertices in the epoch immediately below a rank i cascade vertex are of level i all vertices above the last cascade vertex of each cascade has level a valid configuration ca is said to be if the highest rank of a cascade vertex in ca is exactly if there are no cascade vertices then we take the highest rank as ca is and all bridges uv in ca if any are such that u sa and level v note that taking aj tj and taking the null cascade for each heavy set gives a configuration for each configuration ca a d we define a configuration vector as below la nan where la is the number of light sets in a and na is the total number of all vertices in ca next we define ordering on configuration vectors let ca and cb be configurations we say ca cb if spanning tree congestion la lb or la lb and we say ca cb if la lb and we say ca cb if ca cb or ca cb we say ca cb if la lb and na nb for all for n we say ca cb if ca cb or ca cb and na nb we say ca cb if ca cb or ca cb we say ca cb ca is strictly better than cb if ca n cb proof of theorem we use two technical lemmas about configuration vectors and their orderings to prove theorem a the proof of theorem b follows closely with the proof of theorem a but makes use of an observation about the rank of a vertex in the local search algorithm to give an improved bound on the number of configuration vectors navigated by the algorithm lemma given any configuration ca a ak da that does not have a bridge we can find an configuration cb b bk db in polynomial time such that cb ca proof since ca is any vertex that is at level satisfies maximality property so for satisfying we only need to worry about the vertices that are at level let xj be the set of all vertices x aj such that x is adjacent to a vertex level x level x as the highest rank of any cascade vertex is x tj and x is not a cascade vertex of rank we claim that there exists at least one j for which xj is not empty if that is not the case then we exhibit a cut set of size at most k for each j such that aj is a heavy set with a cascade let 
yj be the highest ranked cascade vertex in aj for each j such that aj is a heavy set with a null cascade let yj be tj let y be the set of all yj such that aj is a heavy set note that k as a is an sfpp and hence has at least one light set let be the set of all vertices in v y that have level and z be the remaining vertices in v y since a is an sfpp sa and since all vertices in sa have level we have that z is not empty because there exists at least one light set in a and the vertices in a light set have level we show that there is no edge between and z in suppose there exists an edge uv such that u and v z if u sa then uv is a bridge which is a contradiction by our assumption that ca does not have a bridge hence u aj for some j k note that aj has to be a heavy set otherwise u has level we have that u is not a cascade vertex as all cascade vertices with level are in y and u tj as all tj such that level tj are in y also v is not of level as otherwise u xj but we assumed xj is empty but then v has level at most u has level and there sunil chandran yun kuen cheung and davis issac is an edge uv this means that ca was not which is a contradiction thus there exists at least one j for which xj is not empty for any j such that xj there is at least one vertex xj such that xj xj ra xj now we give the configuration cb as follows we set bj aj for all j k for each heavy set aj such that xj we take the cascade of bj as the cascade of aj appended with xj for each heavy set aj such that xj we take the cascade of bj as the cascade of aj it is easy to see that cb is as each vertex that had an edge to vertices in ca is now either a rank cascade vertex or a vertex or is tj for some j also notice that all the new cascade vertices that we introduce the xj s have their rank as and there is at least one rank cascade vertex as xj is not empty for some j since there were no bridges in ca all bridges in cb has to be from sb to a vertex having level hence cb is all vertices that had level at most in ca retained their levels in cb and at least one vertex of ca became a vertex in cb because the cascade vertex that was at rank becomes vertex now in at least one set since ca had no vertices this means that cb ca lemma given an configuration ca a ak da having a bridge we can find in polynomial time a valid configuration cb b bk db such that one of the following holds cb ca and cb is an configuration or cb ca there is a bridge v in cb such that sb and level v and cb is an configuration proof let uv be a bridge where u sa let aj be the set containing note that level v because ca is we keep bj aj for all j j but we modify aj to get bj as described below we maintain that if aj is a heavy set then bj is also a heavy set for all j and hence maintain that lb la case aj is a light set when we take bj aj u for all j such that bj is a heavy set cascade of bj is taken as the null cascade we have w aj tj because aj is a light set so w bj w aj w u tj wmax and hence bj is fitted also g bj is connected and hence bk is an fpp we have cb ca because either bj became a heavy set in which case lb la or it is a light set in which case lb la and it is easy to see that cb is case aj is a heavy set when case w aj u tj wmax we take bj aj u for each j such that bj is a heavy set aj is also heavy set for such j the cascade of bj is taken as the cascade of aj g bj is clearly connected and bj is fitted by assumption of the case that we are in hence b is indeed an fpp observe that all vertices that had level in ca still has level in cb since level v was 
in ca by of ca u also has level in cb and u had level in ca hence cb ca it is also easy to see that cb remains case w aj u tj wmax let z be the cascade vertex of rank in aj note that aj should have such a cascade vertex as v aj has level let spanning tree congestion be aj ra z z is the set of all vertices in aj z with level we initialize bj aj u now we delete vertices one by one from bj in a specific order until bj becomes fitted we choose the order of deleting vertices such that g bj remains connected consider a spanning tree of g z has at least one leaf which is not z we delete this leaf from bj and we repeat this process until is just the single vertex z or bj becomes fitted if bj is not fitted even when is the single vertex z then delete z from bj if bj is still not fitted then delete u from bj note that at this point bj aj and hence is fitted also note that g bj remains connected hence bk is an fpp bj does not become a light set because bj became fitted when the last vertex was deleted from it before this vertex was deleted it was not fitted and hence had weight at least tj wmax before this deletion since the last vertex deleted has weight at most wmax bj has weight at least tj and hence is a heavy set now we branch into two subcases for defining the cascades case z z was not deleted from bj in the process above for each j such that bj is a heavy set the cascade of bj is taken as the cascade of aj since a new level vertex u is added and all vertices that had level at most retain their level we have that cb ca it is also easy to see that cb remains case z bj z was deleted from bj for each j such that bj is a heavy set the cascade of bj is taken as the cascade of aj but with the rank cascade vertex if it has any deleted from it cb ca because all vertices that were at a level of or smaller retain their levels observe that there are no bridges in cb to vertices that are at a level at most all vertices at a level at most still maintain the maximality property and we did not introduce any cascade vertices hence cb is it only remains to prove that there is a bridge v in cb such that level v we know z sb since z was a rank cascade vertex in ca z had an edge to z such that z had level in ca observe that level of z is at most in cb as well hence taking v zz completes the proof proof of theorem a we always maintain a configuration ca a da that is for some if the fpp a is not an sfpp at any point then we are done so assume a is an sfpp we start with the configuration where aj tj and the cascades of all heavy sets are null cascades if our current configuration ca is an configuration that has no bridge then we use lemma to get a configuration cb such that cb ca and b is we take cb as the new current configuration ca if our current configuration ca is an configuration with a bridge then we get an configuration cb for some such that cb ca by repeatedly applying lemma at most times so in either case we get a strictly better configuration that is for some in polynomial time we call this an iteration of our algorithm notice that the number of iterations possible is at most the number of distinct configuration vectors possible it is easy to see that the of distinct configuration vectors with highest rank at most r is at most since rank n sunil chandran yun kuen cheung and davis issac of any is at most n the number of iterations of our algorithm is at most n k n which is at most n since each iteration runs in polynomial time as guaranteed by the two lemmas the required running time is when the algorithm 
terminates the fpp given by the current configuration is not an sfpp and this gives the required partition proof of theorem b since any graph is also connected the algorithm will give the required partition due to theorem a we only need to prove the better running time claimed by theorem b for this we show that the highest rank attained by any vertex during the algorithm is at most k since the number of distinct uration vectors with highest rank r is at most we then have that the n o log k running time is which is o as claimed hence it n only remains to prove that the highest rank is at most k for this observe that in an configuration for each i the union of all vertices having level i and the set of terminals together forms a cutset since the graph is this means that for each i the number of vertices having level i is at least the required bound on the rank easily follows upper bounds for spanning tree congestion we first state the following easy lemma which together with proposition implies lemma lemma in a graph g v e let be a vertex and let t be any neighbours of suppose that there exists a v such that for all j tj vj and the sum of degree of vertices in each vj is at most let be an arbitrary spanning tree of g vj let ej the edge s tj let be the spanning tree of g defined as ej then has congestion at most theorem for any connected graph g v e there is algorithm which o n log computes a spanning tree with congestion at most mn in o time theorem for any connected graph g v e there is a polynomial time algorithm which computes a spanning tree with congestion at most mn log the two algorithms follow the same framework depicted in algorithm it is a recursive algorithm the parameter is a global parameter which is the number of edges in the input graph g in the first level of the recursion let denote the number of vertices in this graph the only difference between the two algorithms is in line on how this step is executed with between the running time of the step t nh mh and the guarantee d nh mh for proving theorem we use theorem b spanning tree congestion p proposition yielding nh mh nh and t nh mh nh log nh for proving theorem we make p use of an algorithm in chen et al which yields d nh mh nh log nh and t nh mh poly nh mh algorithm findlcst h input a connected graph h vh eh on nh vertices and mh edges output a spanning tree of h p if mh then return an arbitrary spanning tree of h end lp m k y a global minimum vertex cut of h if k then x the smallest connected component in h vh y see figure z vh x y findlcst h x findlcst h y z h y z is connected as y is a global min cut return an arbitrary edge between x and y else an arbitrary vertex in vh pick neighbours of in the graph h denote them by let ej denote edge tj for j see figure compute a of h denoted by vj such that for each j tj vj and the total degree graph h of vertices in each vj is at most d nh mh let the time needed be t nh mh for an arbitrary spanning tree of g vj s return ej end in the rest of this section we first discuss the algorithm in chen et then we prove theorem the proof of theorem is almost identical and is deferred to appendix confluent flow and the algorithm of chen et al in a confluent flow problem the input includes a graph g v e a demand function w v and sinks t v for each v v a flow of amount w v is routed from v to one of the sinks but there is a restriction at every vertex u v the outgoing flow must leave u on at most edge the outgoing flow from u is unsplittable the problem is to seek a flow satisfying the demands 
which minimizes the node congestion the maximum sunil chandran yun kuen cheung and davis issac fig the scenario in algorithm when the graph has low connectivity the vertex set y is a global minimum vertex cut of the graph the vertex set x is the smallest connected component after the removal of y and z is the union of all the other connected components fig the scenario in algorithm when the graph has high connectivity incoming flow among all vertices since the incoming flow is maximum at one of the sinks it is equivalent to minimize the maximum flow received among all sinks here we assume that no flow entering a sink will leave splittable flow problem is almost identical to confluent flow problem except that the above restriction is dropped now the outgoing flow at u can split along multiple edges note that here the maximum incoming flow might not be at a sink it is known that splittable flow can be solved in polynomial time for brevity we drop the phrase from now on theorem section suppose that given graph g demand w and sinks there is a splittable flow with node congestion q then there exists a spanning tree congestion polynomial time algorithm which computes a confluent flow with node congestion at most ln q for the same input corollary let g be a graph with m edges then for any k and for any vertices t v there exists a polynomial time algorithm which computes an v such that for all j tj vj and the total degrees of vertices in each vj is at most ln corollary follows from theorem and proposition see appendix for details congestion analysis we view the whole recursion process as a recursion tree there is no endless loop since down every path in the recursion tree the number of vertices in the input graphs are strictly decreasing on the other hand note that the leaf of the recursion tree pis resulted by either i when the input graph h to that call satisfies mh or ii when lines are executed an internal node appears only when the of the input graph h is low and it makes two recursion calls we prove the following statement by induction from for each graph which is the input to some call in thep recursion tree the returned spanning tree of that call has congestion at most log nh we first handle the two basis cases i and ii in case i findlcst p returns an arbitrary spanning tree and the congestion is bounded by mh in case ii by and lemma pfindlcst returns a tree with congestion at most nh log nh log nh next let h be the input graph to a call which is represented by an internal node of the recursion tree recall the definitions of x y z in the algorithm let x note that x nh then by induction hypothesis the congestion of the returned spanning tree is at most max congestion of in h x congestion of in h y z p nh x log nh x x viewing x as a real variable by taking derivative it is easy to see that the above expression is maximized at x thus the congestion is at most p p p nh log nh log nh as desired by theorem runtime analysis at every internal node of the recursion tree the algorithm makes two recursive calls with two and strictly smaller vertex size inputs the dominating knitting cost is in line for computing a global minimum vertex cut which is that it can be done in polynomial time since at every leaf of the recursion tree the running time is polynomial by standard analysis on algorithms the running time of the whole algorithm is polynomial which completes the proof of theorem sunil chandran yun kuen cheung and davis issac lower bound for spanning tree congestion here we give a lower bound on 
spanning tree congestion which matches our upper bound theorem for any sufficiently large n and for any m satisfying m max log n there exists a connected graph with n o n vertices and m m edges for which the spanning tree congestion is at least mn we start with the following lemma which states that for a random graph g n p when p is sufficiently large its edge expansion is np with high probability the proof of the lemma uses only fairly standard arguments and is deferred to appendix lemma for any integer n and p logn n let g n p denote the random graph with n vertices in which each edge occurs independently with probability then with probability at least o i the random graph is connected ii the number of edges in the random graph is between and and iii for each subset of vertices s with the number of edges leaving s is at least n in particular for any sufficiently large integer n when m log n by setting p there exists a connected graph with n vertices and edges such that for each subset of vertices s with the m number of edges leaving s is at least we denote such a graph by h n m we discuss our construction here see figure before delving into the proof the vertex set v is the union of three vertex subsets such that p n and are disjoint in each of and we embed h n m the edge sets are denoted respectively up to this point the construction is similar to that of ostrovskii except that we use h n m instead of a complete graph the new component in our construction is adding the following edges for each vertex v add an edge between v and every vertex in v the set of these edges are denoted similarly for each vertex v add an edge between v and every vertex in v the set of these edges are denoted this new component is crucial without it we could only prove a lower bound of n mn nm proof of theorem p let g v e be the graph constructed as above the whole graph has vertices the number m p of edges is at least due to edges in and and is at most mn which is at most for all sufficiently large it is well known that for any tree on n vertices there exists a vertex x called a centroid of the tree such that removing x decomposes the tree into connected components each of size at most now consider any spanning tree of the spanning tree congestion h n m h n m h n m fig our construction for spanning tree congestion are three vertex subsets of the same size in each of the subsets we embed expander h n m there is a small overlap between and while are disjoint for any vertex we add edges between it and any other vertex in similarly for any vertex we add edges not shown in figure between it and any other vertex in given graph let u be a centroid of the tree without loss of generality we can assume that u otherwise we swap the roles of and the removal of u and its adjacent edges from the tree decomposes the tree into a number of connected components for any of these components which intersects it must contain pat least one vertex of thus the number of such components is at most and hence there exists one of them denoted by uj such that p p n let ej denote the that connects u to uj then there are three cases p p case n n n due to the property of h n m the congestion of ej is at least min n mn p p case n n and let w p uj note that by this case s assumption due to the edge subset the congestion of ej is at least p n w w mn p p case n n and let w uj and let z uj p note that n n suppose then a contradiction to the assumption that u is a sunil chandran yun kuen cheung and davis issac centroid thus due to the edge subset 
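The lower-bound argument above repeatedly uses a centroid of the spanning tree, i.e. a vertex whose removal leaves components of size at most n/2. A short helper that finds one is included here only for concreteness (a standard routine, networkx assumed):

```python
import networkx as nx

def tree_centroid(T):
    """Return a vertex of the tree T whose removal leaves components of size <= n/2."""
    n = T.number_of_nodes()
    root = next(iter(T))
    parent = dict(nx.bfs_predecessors(T, root))              # child -> parent
    size = {v: 1 for v in T}
    for v in nx.dfs_postorder_nodes(T, root):                # subtree sizes, leaves first
        if v in parent:
            size[parent[v]] += size[v]
    for v in T:
        pieces = [size[c] for c in T.neighbors(v) if parent.get(c) == v]   # child subtrees of v
        pieces.append(n - 1 - sum(pieces))                   # the remaining component above v
        if max(pieces) <= n // 2:
            return v
```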
the congestion of ej is at least w z w z n p p n mn graphs with expanding properties for any vertex subset u w v let nw u denote the set of vertices in w which are adjacent to a vertex in u let n u nv u definition a graph g v e on n vertices is an n s t expanding graph if the following four conditions are satisfied for each vertex subset s with s s n for each vertex subset s with s s for each vertex subset s with and for any subset s s s for each vertex subset s s v s theorem for any connected graph g which is an n s t graph there is a polynomial time algorithm which computes a spanning tree with congestion at most n t max s t where n next we present the polynomial time algorithm in theorem and its analysis algorithm let g be an n s t graph by condition every vertex has degree at least let be a vertex of degree d and let vd be its d neighbours we maintain a tree t rooted at such that t td vd where td are trees rooted at vd respectively we call the s as branches see figure we start with each branch ti vi in order to minimize congestion we grow t in a balanced way we maintain that the tin s are roughlyoof the same size a branch is saturated if it contains at least max s vertices at any point of time let vt be the set of vertices in t and vt be the vertices not in t often we will move a subtree of a saturated branch ti to an unsaturated branch tj to ensure balance for any x vt let tx denote the subtree of t rooted at x a vertex x of a saturated branch ti is called transferable to branch tj if x has a neighbour y in tj and the tree tj xy tx is unsaturated see figure spanning tree congestion vd td vt fig the tree t and its branches tj tj y x y x fig transfer of a subtree from a saturated branch to an unsaturated branch the algorithm is divided into two phases which are described below throughout the algorithm whenever a branch ti gets modified t gets modified accordingly and whenever t gets modified vt and vt gets modified accordingly phase repeatedly do one of the following two actions until n we will prove that the precondition of at least one of the actions is satisfied if n if there exists a b vt such that b has a neighbour a in some unsaturated branch ti add the vertex b and the edge ab to branch ti if there exists at least one transferable vertex see figure find the transferable vertex x such that tx is the smallest let be the sunil chandran yun kuen cheung and davis issac branch currently containing x tj be a branch to which it is transferable and y be an arbitrarily chosen neighbour of x in tj a remove the subtree tx from and add it to tj with x as a child of y b pick a b vt that has a neighbour a arbitrarily chosen if many either in or in tj we will show in the analysis that such b exists we add vertex b and edge ab to the branch containing a to or tj phase while vt repeat find a maximum matching of g vt vt the bipartite graph formed by edges of g between vt and vt let m be the matching add all edges of m to t in the analysis below we say that a tree is saturated if it contains at least a vertices we will determine its appropriate value by the end of the analysis analysis of phase we claim that during phase if n the precondition of either step or step is satisfied we also show the existence of a vertex b as specified in step whenever step is reached given these and the fact that a vertex in vt is moved to vt either in step or in step during each round of phase we have that phase runs correctly and terminates after a linear number of rounds during phase we will also maintain the 
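A sketch of the bookkeeping behind the balanced tree-growing procedure described above: branches stored as rooted (parent → child) trees, the saturation threshold a, and the subtree transfer of Phase 1. Only the mechanics of attaching a new vertex and of moving a subtree T_x under a neighbour y in another branch are shown; the choice of which transferable vertex to move (the one with the smallest T_x) and the setting of a are left to the caller, and all names and the data layout are illustrative assumptions.

```python
import networkx as nx

class BalancedBranches:
    """Branches T_1..T_d of the growing tree, rooted at the neighbours of r and
    stored as parent -> child DiGraphs; `a` is the saturation threshold."""
    def __init__(self, G, r, a):
        self.G, self.r, self.a = G, r, a
        self.trees, self.branch_of = [], {}
        for i, v in enumerate(G.neighbors(r)):
            t = nx.DiGraph()
            t.add_node(v)
            self.trees.append(t)
            self.branch_of[v] = i

    def saturated(self, i):
        return self.trees[i].number_of_nodes() >= self.a

    def subtree(self, x):
        """Vertex set of T_x, the subtree hanging below x in its branch."""
        return {x} | nx.descendants(self.trees[self.branch_of[x]], x)

    def attach(self, b, parent):
        """Step 1.1: add an outside vertex b as a child of `parent`."""
        i = self.branch_of[parent]
        self.trees[i].add_edge(parent, b)
        self.branch_of[b] = i

    def transferable_to(self, x, j):
        """If x (in a saturated branch) can be moved to branch j, return a
        neighbour y of x in T_j to hang T_x under; otherwise return None."""
        i = self.branch_of[x]
        if not self.saturated(i) or i == j:
            return None
        tx = self.subtree(x)
        for y in self.G.neighbors(x):
            if self.branch_of.get(y) == j and \
               self.trees[j].number_of_nodes() + len(tx) < self.a:
                return y
        return None

    def transfer(self, x, j, y):
        """Step 1.2(a): detach T_x from its branch and re-attach it under y in T_j."""
        i, tx = self.branch_of[x], self.subtree(x)
        kept_edges = [(u, v) for u, v in self.trees[i].edges() if u in tx and v in tx]
        self.trees[i].remove_nodes_from(tx)
        self.trees[j].add_edges_from(kept_edges)
        self.trees[j].add_edge(y, x)
        for v in tx:
            self.branch_of[v] = j
```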
invariant that each branch has at most a vertices thus each saturated branch has exactly a vertices we call this invariant the balancedness note that balancedness is not violated due to step as the new vertex is added to an unsaturated branch it is not violated during step as the branches and tj as defined in step become unsaturated at the end of the step we define the hidden vertices of t denoted by h ht as follows they are the vertices which are not adjacent to any vertices outside the tree to any vertex in vt if there is an unsaturated branch with a vertex clearly the precondition of step is satisfied so let us assume that all the vertices in all unsaturated branches are hidden in such a case we show that the precondition of step is satisfied if we argue that in this case s otherwise take a subset h h of cardinality s then by condition n h which is contained in vt has cardinality at least n a contradiction since n the number of saturated branches is at most to ensure that at least one unsaturated branch exists we set a such that let u denote the set of vertices in all unsaturated branches since all vertices in u are hidden vertices then by condition u note that the vertices in n u are all in the saturated branches by the principle there exists a saturated branch containing at least n u n vertices of n u by setting a the above calculation guarantees the existence of a saturated branch containing at least vertices of n u let ti be such a branch spanning tree congestion in ti pick a vertex x ti n u such that tx does not contain any vertex in n u except x then the size of tx is at most a u a let y u be a vertex which is adjacent to x and tj be the branch containing y since tj has at most vertices x is a transferable vertex to tj thus precondition of step is satisfied we further set a s so that in each saturated branch there is at least one unhidden vertex in particular ti has an unhidden vertex which is adjacent to some b vt the vertex b is either adjacent to a vertex in tx or a vertex in ti tx as required in step analysis of phase since g is connected m is in each iteration of phase and hence phase terminates in linear number of rounds at the end of phase since vt is empty t is clearly a spanning tree it only remains to estimate the congestion of this spanning tree towards this we state the following modified hall s theorem which is an easy corollary of the standard hall s theorem lemma in a bipartite graph l r with for any vertex w l let r w denote the neighbours of w in r then for any w l let r w r w suppose that there exist t such that for any w l we have w then the bipartite graph admits a matching of size at least recall that phase consists of multiple rounds of finding a matching between vt and vt as long as condition with s vt plus the modified hall s theorem with l vt and r vt guarantees that in each round at least t t n m l number of vertices in vt are matched thus after at most log rounds of matching after reaching condition with s vt plus the modified hall s theorem with l vt and r vt guarantees that after one more round of matching all but t vertices are left in vt by the end of phase each branch had at most a vertices after each round of matching the cardinality of each branch is doubled at most thus the maximum possible number of vertices in each branch after running the whole algorithm is at most l m log t and hence the stc is at most recall that we need a to satisfy a n l mo set a max s n and a thus we sunil chandran yun kuen cheung and davis issac random graph let g g n p 
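Phase 2, as analyzed above, simply keeps matching tree vertices to outside vertices with a maximum bipartite matching and adds the matched edges; under the expansion conditions the covered set roughly doubles each round, which gives the logarithmic number of rounds. A minimal sketch (networkx's Hopcroft–Karp matching assumed):

```python
import networkx as nx
from networkx.algorithms import bipartite

def phase2_grow(G, T):
    """Grow the tree T to a spanning tree of the connected graph G by repeated
    maximum matchings between tree and non-tree vertices."""
    while T.number_of_nodes() < G.number_of_nodes():
        inside = set(T.nodes())
        outside = set(G.nodes()) - inside
        B = nx.Graph()
        B.add_nodes_from(inside, bipartite=0)
        B.add_nodes_from(outside, bipartite=1)
        B.add_edges_from((u, v) for u, v in G.edges()
                         if (u in inside) != (v in inside))
        M = bipartite.hopcroft_karp_matching(B, top_nodes=inside)
        progress = False
        for u in inside:
            v = M.get(u)
            if v is not None and v in outside:
                T.add_edge(u, v)                 # add the matched edge to the tree
                progress = True
        if not progress:                         # cannot happen for connected G, but guard anyway
            break
    return T
```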
where p log and the following lemmas show that with high probability g is an n s t graph with s np np t and hence o the proof of the lemmas are deferred to appendix lemma for any s v g such that we have s n with probability at least where lemma for any s v g such that we have s with probability at least o where lemma for all a v g such that and for all s a with probability at least s has at least neighbors in v g a where lemma for all s v g the cut size s v g s is at most with probability at least plugging the bounds from above lemmas into theorem together with a separate lower bound argument theorem in appendix we have the following theorem in appendix we also present a proof of this theorem theorem if g g n p where p log then with probability at least o its stc is n discussion and open problems in this paper we provide thorough understanding both combinatorially and algorithmically on the spanning tree congestion of general graphs and random graphs on course of doing so we also provide the first constructive proof for the generalized theorem which might be of independent interest following are some natural open problems finding the spanning tree with minimum congestion is indeed bodlaender et al showed a for the stc problem does a constant or a factor approximation polynomial time algorithm exist we present an algorithm for computing a spanning tree achieving congestion at most o mn the algorithm runs in time when m n n is there a polynomial time algorithm for constructing such a spanning tree for a graph a connected where all parts are of size at most o log k can be found in polynomial time due to an algorithm of chen et al can we improve the sizes of parts to o is finding partition if not is it polynomial time solvable spanning tree congestion references ittai abraham yair bartal and ofer neiman nearly tight low stretch spanning trees in focs pages ittai abraham and ofer neiman using to build a low stretch spanning tree in stoc pages noga alon richard karp david peleg and douglas b west a game and its application to the problem siam j ingo gautam das david dobkin deborah joseph and soares on sparse spanners of weighted graphs discrete computational geometry reid andersen and uriel feige interchanging distance and capacity in probabilistic mappings corr sanjeev arora satish rao and umesh vazirani expander flows geometric embeddings and graph partitioning acm nikhil bansal uriel feige robert krauthgamer konstantin makarychev viswanath nagarajan joseph naor and roy schwartz graph partitioning and small set expansion siam j sandeep bhatt fan chung frank thomson leighton and arnold rosenberg optimal simulations of tree machines preliminary version in focs pages hans bodlaender fedor fomin petr golovach yota otachi and erik jan van leeuwen parameterized complexity of the spanning tree congestion problem algorithmica hans bodlaender kyohei kozawa takayoshi matsushima and yota otachi spanning tree congestion of graphs discrete mathematics random graphs cambridge university press and andrew thomason random graphs of small order northholland mathematics studies leizhen cai and derek corneil tree spanners siam discrete jiangzhuo chen robert kleinberg rajmohan rajaraman ravi sundaram and adrian vetta almost tight bounds and existence theorems for confluent flows acm michael elkin yuval emek daniel spielman and teng lowerstretch spanning trees siam j uriel feige and robert krauthgamer a polylogarithmic approximation of the minimum bisection siam j on division of graphs to connected subgraphs colloq 
math soc janos bolyai ludovic hofer and thibaud lambert study of the article an o algorithm to find a in a graph alexander hoyer and robin thomas the theorem arxiv url http david johnson christos papadimitriou and mihalis yannakakis how easy is local search comput syst marcos kiwi daniel spielman and teng domain decomposition theor comput sunil chandran yun kuen cheung and davis issac ioannis koutis gary miller and richard peng a nearly o m log n time solver for sdd linear systems in focs pages kyohei kozawa and yota otachi spanning tree congestion of rook s graphs discussiones mathematicae graph theory kyohei kozawa yota otachi and koichi yamazaki on spanning tree congestion of graphs discrete mathematics hiu fai law siu lam leung and mikhail ostrovskii spanning tree congestions of planar graphs involve a homology theory for spanning trees of a graph acta math acad sci hungaricae christian dieter rautenbach and friedrich regen on spanning tree congestion discrete nakano md saidur rahman and takao nishizeki a algorithm for planar graphs inf process yoshio okamoto yota otachi ryuhei uehara and takeaki uno hardness results and an exact exponential algorithm for the spanning tree congestion problem graph algorithms ostrovskii minimal congestion trees discrete ostrovskii minimum congestion spanning trees in planar graphs discrete ostrovskii minimum congestion spanning trees in bipartite and random graphs acta mathematica scientia harald optimal hierarchical decompositions for congestion minimization in networks in stoc pages raspaud ondrej and imrich vrto congestion and dilation similarities and differences a survey in sirocco pages satu elisa schaeffer graph clustering computer science review shai simonson a variation on the min cut linear arrangement problem mathematical systems theory daniel spielman and teng a local clustering algorithm for massive graphs and its application to nearly linear time graph partitioning siam j david steurer tight bounds for the boundary decomposition cost of weighted graphs in spaa pages hitoshi suzuki naomi takahashi and takao nishizeki a linear algorithm for bipartition of biconnected graphs inf process zoya svitkina and tardos multiway cut in pages teng scalable algorithms for data and network analysis foundations and trends in theoretical computer science koichi wada and kimio kawaguchi efficient algorithms for tripartitioning triconnected graphs and graphs in concepts in computer science international workshop wg utrecht the netherlands june proceedings pages spanning tree congestion a missing proofs in sections and proof of corollary first of all we set the demand of each vertex in the flow problem to be the the degree of the vertex in g and t as the sinks in the flow problem by proposition there exists an u such that for all j tj uj and the total degrees of vertices in each uj is at most with this by routing the demand of a vertex in uj to tj via an arbitrary path in g uj only we construct a splittable flow with node congestion at most by theorem one can construct a confluent flow with node congestion at most ln in polynomial time obviously in the confluent flow all the flow originating from one vertex goes completely into one sink set vj to be the set of vertices such that the flows originating from these vertices go into tj it is then routine to check that v is our desired proof of theorem instead of giving the full proof we point out the differences from the proof of theorem first in handling the basis case ii by theorem b proposition and lemma we 
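The proof of the corollary above first builds a splittable flow by giving every vertex a demand equal to its degree and routing it, inside its own part U_j, to the sink t_j, and only then invokes the confluent-flow rounding of Chen et al. The sketch below reproduces just that first, elementary step and measures the resulting node congestion; the (1 + ln k) rounding step is not reproduced. The path choice and names are illustrative assumptions, and each G[U_j] is assumed connected, as the proposition guarantees.

```python
import networkx as nx
from collections import defaultdict

def splittable_flow_congestion(G, sinks, parts):
    """sinks[j] = t_j, parts[j] = U_j with t_j in U_j and G[U_j] connected.
    Routes a demand of deg(v) from every v to the sink of its part along a path
    inside G[U_j] and returns the node congestion (maximum incoming flow)."""
    incoming = defaultdict(float)
    for t_j, U_j in zip(sinks, parts):
        H = G.subgraph(U_j)
        for v in U_j:
            demand = G.degree(v)
            path = nx.shortest_path(H, v, t_j)   # any v -> t_j path inside G[U_j] will do
            for w in path[1:]:                   # the flow enters every vertex after v on the path
                incoming[w] += demand
    return max(incoming.values(), default=0.0)
```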
havep an improvedp upper bound on the congestion of the returned tree which is thus can be improved to s p nh x nh again by viewing x as a real variable and taking derivative it is easy to see that the above expression is maximized at x so the above bound is at most s p nh nh p as desired concerning the running time it is clear that in the worst case it is dominated by some calls to the algorithm in theorem b note that the number of such calls is at most since each call to the algorithm is on a disjoint set of vertices there remains one concern which is the connectedness of h y z suppose the contrary that h y z is not connected let c be one of its connected components so that it contains the least number of vertices from y then c contains at most vertices from y y note that c y is a vertex cut set of the graph h thus contradicting that y is a global minimum vertex cut set sunil chandran yun kuen cheung and davis issac proof of lemma it is well known that the requirements i and ii are satisfied with probability o for each subset s with by the chernoff bound h i p p e s v s n since p logn n the above probability is at most then by a union bound the probability that iii is not satisfied is at most x b n s x ns x n spanning tree congestion of random graphs proof of theorem we first present a simple proof that random graph has stc n with high probability theorem gives the upper bound and theorem gives the lower bound the proof of theorem uses lemma and the fact that for random graphs and minimum degree are equal with high probability theorem does not give an efficient algorithm theorem if g g n p where p log then the spanning tree congestion of g is at most with probability at least o proof it is known that the threshold probability for a random graph being is same as the threshold probability for it having minimum degree at least k since p log using chernoff bound and taking union bound over all vertices gives that g has minimum degree at least with probability at least o hence g is with probability at least o we also have that the number of edges in g is at most p with probability at least o now by using lemma we have that with probability at least o the spanning tree congestion is at most theorem if g g n p where p log then the spanning tree congestion of g is n with probability o proof by using chernoff bounds and applying union bound it is easy to show that with probability o every vertex of g has degree at most np for a sufficiently large constant also by lemma with probability o properties i and iii of that lemma holds in the proof below we conditioned on the above mentioned highly probable events take a spanning tree t of g which gives the minimum congestion let u be a centroid of the tree t each connected component of t u has at most vertices if there is a connected component with number of vertices at least spanning tree congestion then define this connected component as t else all connected components have at most vertices in this case let t be the forest formed by the union of a minimum number of connected components of t u such that it is easy to see that also the number of edges in t from v t to v t v t is at most degg u which is at most np by property iii of lemma the number of edges between v t and v g t is p each of these edges in g between v t and v g t have to contribute to the congestion of at least one of the edges in t between v t and v g v t now since t sends at most np tree edges to other parts of t it follows that there exists one edge in t with congestion at least p np 
n as claimed random graph satisfies expanding properties constants for easy reference we list out the constants used proof of lemma let s v g the probability that a fixed vertex in s does not have edge to s is at most p p since n n log n the expected value of s is at least n hence using chernoff bound the probability that s n is at most since the number of such s is at most we have the lemma by applying union bound proof of lemma let s v g since log n we have for sufficiently large divide s into groups of size the probability that such a group does not have edge to s is at most p the expected number of groups having edge to s is at least thus by chernoff bound the probability that s is at most log log n the number of sets of size is at most log n hence taking union bound over all s with we get the required lemma proof of lemma first we prove that for all c d v g such that and c d there exist at least one edge between c and d with high probability the probability that there is no edge between such a fixed c and d is at most p the number of pairs of such c and d is at most hence by taking union bound the probability that for all c and d the claim holds is at least using the above claim we prove that for all s a s has at least neighbors in a v g a with high probability suppose there is an s which violates the claim note that we can assume because otherwise the claim is vacuously true let b a n s there can not be any edges between s and b also so is at least and when is at least hence using the previous claim there is an edge between s and b with probability at least hence we get a contradiction and hence our claim is true with probability at least sunil chandran yun kuen cheung and davis issac proof of lemma let c s denote s v g s for a fixed vertex subset s the expected value of c s is at most therefore probability that c s log n is at most using chernoff bounds the probability that c s for all sets s of size k is at least using union bound and using k the probability that c s for all vertex subsets s is at least using union bound over all k n
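The lemmas above are probabilistic statements about G(n, p); they cannot be verified exhaustively, but a quick empirical spot check on random subsets is easy and is sometimes useful when experimenting with these constructions. The constants below (c = 20, the ~1/4 ratio in the comment) are illustrative choices, not the ones used in the paper.

```python
import math
import random
import networkx as nx

def spot_check_expansion(n=400, c=20.0, trials=200, seed=0):
    """Sample G(n, p) with p = c * log(n) / n and, for random subsets S with
    |S| <= n/2, report the smallest observed ratio (edges leaving S) / (|S| * n * p)."""
    rng = random.Random(seed)
    p = c * math.log(n) / n
    G = nx.gnp_random_graph(n, p, seed=seed)
    assert nx.is_connected(G)                    # holds w.h.p. for p well above log(n)/n
    worst = float("inf")
    for _ in range(trials):
        k = rng.randint(1, n // 2)
        S = set(rng.sample(range(n), k))
        boundary = sum(1 for _ in nx.edge_boundary(G, S))
        worst = min(worst, boundary / (k * n * p))
    return worst                                 # expect this ratio to stay bounded away from 0 (e.g. above ~1/4)

if __name__ == "__main__":
    print(spot_check_expansion())
```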
8
nov seamless single shot object pose prediction bugra tekin epfl sudipta sinha microsoft research pascal fua epfl abstract detection pipeline made of one cnn to coarsely segment the object and another to predict the locations of the projections of the object s bounding box given the segmentation which are then used to compute the pose using a pnp algorithm the method is effective but slow due to its nature is a different pipeline that relies on the ssd architecture to predict bounding boxes and a very rough estimate of the object s orientation in a single step this is followed by an approximation to predict the object s depth from the size of its bounding box in the image to lift the detections to both and require a further pose refinement step for improved accuracy which increases their running times linearly with the number of objects being detected we propose a approach for simultaneously detecting an object in an rgb image and predicting its pose without requiring multiple stages or having to examine multiple hypotheses unlike a recently proposed technique for this task that only predicts an approximate pose that must then be refined ours is accurate enough not to require additional as a result it is much faster fps on a titan x pascal gpu and more suitable for processing the key component of our method is a new cnn architecture inspired by that directly predicts the image locations of the projected vertices of the object s bounding box the object s pose is then estimated using a pnp algorithm for single object and multiple object pose estimation on the l ine m od and o cclusion datasets our approach substantially outperforms other recent approaches when they are all used without postprocessing during a pose refinement step can be used to boost the accuracy of these two methods but at fps or less they are much slower than our method in this paper we propose a deep cnn architecture that takes the image as input and directly detects the projections of the bounding box vertices it is trainable and accurate even without any a posteriori refinement and since we do not need this refinement step we also do not need a precise and detailed textured object model that is needed by other methods we only need the bounding box of the object shape for training this can be derived from other easier to acquire and approximate shape representations we demonstrate accuracy on the l ine m od dataset which has become a de facto standard benchmark for pose estimation however we are much faster than the competing techniques by a factor of more than five when dealing with a single object furthermore we pay virtually no when handling several objects and our running time remains constant whereas that of other methods grow proportional to the number of objects which we demonstrate on the o cclusion dataset introduction object detection and pose estimation is crucial for augmented reality virtual reality and robotics currently methods relying on depth data acquired by rgbd cameras are quite robust however active depth sensors are power hungry which makes object detection methods for passive rgb images more attractive for mobile and wearable cameras there are many fast keypoint and methods that are effective for textured objects however they have difficulty handling weakly textured or untextured objects and processing video streams which are quite common when dealing with cameras on wearable devices therefore our contribution is an architecture that yields a fast and accurate pose prediction without requiring any 
it extends single shot cnn architectures for detection in a seamless and natural way to the detection task our implementation is based on yolo but the approach is amenable to other singleshot detectors such as ssd and its variants deep learning techniques have recently been used to address these limitations is a object related work we now review existing work on pose estimation ranging from classical feature and template matching methods to newer trainable methods classical methods traditional rgb object instance recognition and pose estimation works used local keypoints and feature matching local descriptors needed by such methods were designed for invariance to changes in scale rotation illumination and viewpoints such methods are often fast and robust to occlusion and scene clutter however they only reliably handle textured objects in high resolution images other related methods include registration hausdorff matching oriented chamfer matching for edges and chamfer matching for aligning models to images methods the advent of commodity depth cameras has spawned many object pose estimation methods for example hinterstoisser et al proposed template matching algorithms suitable for both color and depth images rios et al extended their work using discriminative learning and cascaded detections for higher accuracy and efficiency respectively methods were used on indoor robots for object recognition pose estimation grasping and manipulation brachmann et al proposed using regression forests to predict dense object coordinates to segment the object and recover its pose from dense correspondences they also extended their method to handle uncertainty during inference and deal with rgb images zach et al explored fast dynamic programming based algorithms for images methods in recent years research in most pose estimation tasks has been dominated by cnns techniques such as viewpoints and keypoints and render for cnn cast object categorization and pose estimation into classification tasks specifically by discretizing the pose space in contrast posenet proposes using a cnn to directly regress from a rgb image to a pose albeit for camera pose estimation a slightly different task since posenet outputs a translational and a rotational component the two associated loss terms have to be balanced carefully by tuning a during training to avoid this problem the newer posecnn architecture is trained to predict object pose from a single rgb image in multiple stages by decoupling the translation and rotation predictors a geodesic loss function more suitable for optimizing over rotations have been suggested in another way to address this issue has recently emerged in the cnns do not directly predict object pose instead they output coordinates masks or discrete orientation predictions from which the pose can be inferred because all the predictions are in the image the problem of weighting different loss terms goes away also training becomes numerically more stable resulting in better performance on the l ine m od dataset we also adopt this philosophy in our work in parallel to these developments on the object detection task there has been a progressive trend towards single shot cnn frameworks as an alternative to methods such as that first find a few candidate locations in the image and then classifies them as objects or background recently single shot architectures such as yolo and ssd have been shown to be fast and accurate ssd has been extended to predict the object s identity its bounding box in the image and a 
discrete estimate of the object s orientation in this paper we go beyond such methods by extending a architecture to directly predict a few coordinates from which the full object pose can be accurately recovered approach with our goal of designing an trainable network that predicts the pose in we were inspired by the impressive performance of single shot object detectors such as yolo this led us to design the cnn architecture shown in fig we designed our network to predict the projections of the corners of the bounding box around our objects the main insight was that yolo was originally designed to regress bounding boxes and to predict the projections of the bounding box corners in the image a few more points had to be predicted for each object instance in the image then given these coordinates and the ground control points for the bounding box corners the pose can be calculated algebraically with an efficient pnp algorithm takes a similar approach however they first find a segmentation mask around the object and present a cropped image to a second network that predicts the eight corners in the image we now describe our network architecture and explain various aspects of our approach in details model we formulate the pose estimation problem in terms of predicting the image coordinates of virtual control points associated with the models of our objects of interest given the coordinate predictions we calculate the object s pose using a pnp algorithm we parameterize the model of each object with control points for these control points we select the corners of the tight bounding box fitted to the model similar to in addition we use the centroid of the object s model as the point this parameterization is general and can be s s figure overview a the proposed cnn architecture b an example input image with four objects c the s s grid showing cells responsible for detecting the four objects d each cell predicts locations of the corners of the projected bounding boxes in the image e the output tensor from our network which represents for each cell a vector consisting of the corner locations the class probabilities and a confidence value associated with the prediction used for any rigid object with arbitrary shape and topology in addition these control points are guaranteed to be well spread out in the image and could be semantically meaningful for many objects our model takes as input a single full color image processes it with a architecture shown in figure a and divides the image into a regular grid containing s s cells as shown in figure c in our model each grid location in the output tensor will be associated with a multidimensional vector consisting of predicted image locations of the control points the class probabilities of the object and an overall confidence value at test time predictions at cells with low confidence values ie where the objects of interest are not present will be pruned the output target values for our network are stored in a tensor of size s s d visualized in fig e the target values for an object at a specific spatial cell location i s s is placed in the cell in the tensor in the form of a d dimensional vector vi when n objects are present in different cells we have n such vectors vn in the tensor we train our network to predict these target values the control points in our case are the object model s center and bounding box corners but could be defined in other ways as well to train our work we only need to know the bounding box of the object not a detailed mesh or an 
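A minimal sketch of the control-point parameterization described above: the eight corners of the tight axis-aligned 3D bounding box of the object model plus its centroid, i.e. the nine 3D points whose 2D projections the network is trained to predict. The ordering (centroid first) and the use of the vertex mean as the centroid are assumptions made for illustration.

```python
import numpy as np

def control_points_3d(vertices):
    """vertices: (N, 3) array of model points -> (9, 3) array of control points
    (centroid first, then the 8 corners of the tight axis-aligned bounding box)."""
    mins, maxs = vertices.min(axis=0), vertices.max(axis=0)
    corners = np.array([[x, y, z]
                        for x in (mins[0], maxs[0])
                        for y in (mins[1], maxs[1])
                        for z in (mins[2], maxs[2])], dtype=np.float64)
    centroid = vertices.mean(axis=0, keepdims=True)   # vertex mean as a proxy for the model centroid
    return np.vstack([centroid, corners])
```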
associated texture map as in yolo it is crucial that a trained network is able to predict not only the precise locations but also high confidence values in regions where the object is present and low confidence where it isn t present in case of object detection yolo uses for its confidence values an intersection over union iou score associated with the predicted and true rectangles in the image in our case the objects are in and to compute an equivalent iou score with two arbitrary cuboids we would need to calculate a convex hull corresponding to their intersections this would be tedious and would slow down training therefore we take a different approach we model the predicted confidence value using a confidence function shown in figure the confidence function c x returns a confidence value for a predicted point denoted by x based on its distance dt x from the ground truth target point formally we define the confidence function c x as follows c x e dt x dth if dt x dth otherwise the distance dt x is defined as the euclidean distance in the image space to achieve precise localization points we do not constrain the network s output as those points should be allowed to fall outside the cell the predicted control point gx gy is defined as confidence distance figure confidence c x as a function of the distance dt x between a predicted point and the true point with this function we choose a sharp exponential function with a value dth instead of a monotonically decreasing linear function the sharpness of the exponential function is defined by the parameter in practice we apply the confidence function to all the control points and calculate the mean value and assign it as the confidence as mentioned earlier we also predict c conditional class probabilities at each cell the class probability is conditioned on the cell containing an object overall our output tensor depicted in figure e has dimension s s d where the spatial grid corresponding to the image dimensions has s s cells and each such cell has a d dimensional vector here d because we have xi yi control points c class probabilities and one confidence value our network architecture follows the fully convolutional yolo architecture thus our network has convolutional layers and layers similar to yolo we choose s and have a spatial grid on which we make our predictions we also allow higher layers of our network to use features by adding a passthrough layer specifically we bring features from an earlier layer at resolution apply batch normalization and resize the input image during training as the network downsamples the image by a factor of we change the input resolution to a multiple of randomly chosen from the set to be robust to objects of different size training procedure our final layer outputs class probabilities x y coordinate locations for the control points and the overall confidence score during training this confidence value is computed on the fly using the function defined in eq to measure the distance between the current coordinate predictions and the dt x we predict offsets for the coordinates with respect to cx cy the corner of the associated grid cell for the centroid we constrain this offset to lie between and however for the corner gx f x cx gy f y cy where f is chosen to be a sigmoid function in case of the centroid and the identity function in case of the eight corner points this has the effect of forcing the network to first find the approximate cell location for the object and later refine its eight corner locations we 
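The confidence target defined above is a sharp exponential of the 2D distance D_T(x) between a predicted control point and its ground-truth location, cut off at a threshold d_th; the exact exponent and the hyperparameter values are not legible in this text, so the sketch below uses c(x) = exp(alpha * (1 - D_T(x)/d_th)) for D_T(x) < d_th and 0 otherwise as one plausible reading, with placeholder values for alpha and d_th. The per-cell confidence is the mean over the nine control points.

```python
import numpy as np

def confidence(pred_pts, gt_pts, d_th=30.0, alpha=2.0):
    """pred_pts, gt_pts: (9, 2) arrays of 2D control points, in pixels.
    Returns the mean per-point confidence used as the cell's confidence target.
    alpha (sharpness) and d_th (pixel cutoff) are placeholder values."""
    d = np.linalg.norm(pred_pts - gt_pts, axis=1)     # D_T(x): Euclidean distance in the image
    c = np.where(d < d_th, np.exp(alpha * (1.0 - d / d_th)), 0.0)
    return float(c.mean())
```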
minimize the following loss function to train our complete network l lpt lconf lid here the terms lpt lconf and lid denote the coordinate loss confidence loss and the classification loss respectively we use error for the coordinate and confidence losses and cross entropy for the classification loss as suggested in to improve model stability we downweight the confidence loss for cells that don t contain objects by setting to for cells that contain objects we set to when multiple objects are located close to each other in the scene they are more likely to appear close together in the images or be occluded by each other in these cases certain cells might contain multiple objects to be able to predict the pose of such multiple objects that lie in the same cell we allow up to candidates per cell and therefore predict five sets of control points per cell similarly to we precompute with five anchor boxes that define the size ie the width and height of a rectangle tightly fitted to a masked region around the object in the image during training we assign whichever anchor box has the most similar size to the current object as the responsible one to predict the coordinates for that object pose prediction we detect and estimate the pose of objects in by invoking our network only once at test time we estimate the confidence scores for each object by multiplying the class probabilities and the score returned by the confidence function each grid cell produces predictions in one network evaluation and cells with predictions with low confidence are pruned using a confidence threshold for large objects and objects whose projections lie at the intersection of two cells multiple cells are likely to predict highly confident detections to obtain a more robust and well localized pose estimate we inspect the cells in the neighborhood of the cell which has the maximum confidence score we combine the individual corner predictions of these adjacent cells by computing a weighted average of the individual detections where the weights used are the confidence scores of the associated cells at the network gives the projections of the object s centroid and corners of its bounding box along with the object identity we estimate the pose from the correspondences between the and points using a pnp pose estimation method in our case pnp uses only such control point correspondences and provides an estimate of the rotation r and translation t of the object in camera coordinates implementation details we initialize the parameters of our network by training the original network on the imagenet classification task as the pose estimates in the early stages of training are inaccurate the confidence values computed using eq are initially unreliable to remedy this we pretrain our network parameters by setting the regularization parameter for confidence to subsequently we train our network by setting to for the cells that contain an object and to otherwise to have more reliable confidence estimates in the early stages of the network in practice we set the sharpness of the confidence function to and the distance threshold to pixels we use stochastic gradient descent for optimization we start with a learning rate of and divide the learning rate by at every epochs to avoid overfitting we use extensive data augmentation by randomly changing the hue saturation and exposure of the image by up to a factor of we also randomly scale and translate the image by up to a factor of of the image size our implementation is based on pytorch we will make 
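Once the network has produced the nine 2D control points, the pose is recovered with a PnP solver from the 2D–3D correspondences. The sketch below uses OpenCV's EPnP as a stand-in for "an efficient PnP algorithm"; the camera intrinsics K are assumed known, and the function names are illustrative.

```python
import cv2
import numpy as np

def pose_from_control_points(pts_2d, pts_3d, K, dist_coeffs=None):
    """pts_2d: (9, 2) pixel coordinates, pts_3d: (9, 3) model coordinates,
    K: (3, 3) camera intrinsics.  Returns (R, t) in camera coordinates."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(
        pts_3d.astype(np.float64),
        pts_2d.astype(np.float64),
        K.astype(np.float64),
        dist_coeffs,
        flags=cv2.SOLVEPNP_EPNP,
    )
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)        # rotation vector -> 3x3 rotation matrix
    return R, tvec.reshape(3)
```

With only nine correspondences and no outliers, a direct EPnP solve suffices; a RANSAC wrapper could be added if the corner predictions were expected to contain gross errors.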
our code publicly available for the sake of reproducibility experiments we first evaluate our method for estimating the pose of single objects and then we evaluate it in the case where multiple objects are present in the image we use the same datasets and evaluation protocols as in which we review below we then present and compare our results to the state of the art methods datasets we test our approach on two datasets that were designed explicitly to benchmark object pose estimation algorithms we describe them briefly below linemod has become a de facto standard benchmark for object pose estimation of textureless objects in cluttered scenes the central object in each rgb image is assigned a rotation translation and id a full mesh representing the object is also provided method object ape benchvise cam can cat driller duck eggbox glue holepuncher iron lamp phone average refinement brachmann ours refinement brachmann table comparison of our approach with algorithms on linemod in terms of reprojection error we report percentages of correctly estimated poses bold face numbers denote the best overall methods bold italic numbers denote the best methods among those that do not use refinement as opposed to the ones that use if different note that even though we do not rely on the knowledge of a detailed object model our method consistently outperforms the baselines occlusion is a detection and pose estimation dataset that contains additional annotations for all objects in a subset of the linemod images as its name suggests several objects in the images are severely occluded due to scene clutter which makes pose estimation extremely challenging with the exception of it has primarily been used to test algorithms that require depth images evaluation metrics we use three standard metrics to evaluate pose accuracy namely reprojection error average distance of model vertices referred to as add metric and iou score as in in all cases we calculate the accuracy as the percentage of correct pose estimates for certain error thresholds when using the reprojection error we consider a pose estimate to be correct when the mean distance between the projections of the object s mesh vertices using the estimate and the ground truth pose is less than pixels this measures the closeness of the true image projection of the object to that obtained by using the estimated pose this metric is suitable for augmented reality applications when comparing poses using the add metric we take a pose estimate to be correct if the mean distance between the true coordinates of mesh vertices and those estimated given the pose is less than of the object s diameter for most objects this is approximately a threshold but for smaller objects such as ape the threshold drops to about for rotationally symmetric objects whose pose can only be computed up to one degree of rotational freedom we modify slightly the metric as in method and compute x min k rx t k m where r t are the rotation and translation the predicted ones and m the vertex set of the model we use this metric when evaluating the pose accuracy for the rotationally invariant objects eggbox and glue as in to compute the iou metric we measure the overlap between the projections of the model given the and predicted pose and accept a pose as correct if the overlap is larger than single object pose estimation we first estimate the pose of the central object in the rgb only linemod images without reference to the depth ones we compare our approach to those of which operate under similar 
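For concreteness, a sketch of the ADD pose-error metric described above, together with the closest-point variant used for rotationally symmetric objects (eggbox, glue). The accuracy thresholds (the pixel threshold for the reprojection metric and the fraction of the object diameter for ADD) are applied by the caller, since the exact numbers are not legible in this text; for large meshes a k-d tree would replace the dense pairwise distance matrix.

```python
import numpy as np

def add_metric(R_gt, t_gt, R_pred, t_pred, model_pts, symmetric=False):
    """model_pts: (N, 3) model vertices.  Returns the mean distance (in model units)
    between the vertices under the ground-truth and the estimated pose."""
    gt = model_pts @ R_gt.T + t_gt            # vertices under the ground-truth pose
    pred = model_pts @ R_pred.T + t_pred      # vertices under the estimated pose
    if not symmetric:
        return float(np.linalg.norm(gt - pred, axis=1).mean())
    # symmetric objects: for each ground-truth vertex take the closest predicted vertex
    d = np.linalg.norm(gt[:, None, :] - pred[None, :, :], axis=2)   # (N, N) pairwise distances
    return float(d.min(axis=1).mean())
```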
conditions in this dataset the training images are selected such that the relative orientation between corresponding pose annotations are larger than a threshold to avoid being influenced by the scene context we segment the training images using the segmentation masks provided with the dataset and replace the background by a random image from the pascal voc dataset we use exactly the same splits as in we report our results in terms of reprojection error in table and pose error in table we provide example pose predictions of our approach in figure refinement brachmann object ape benchvise cam can cat driller duck eggbox glue holepuncher iron lamp phone average refinement ours brachmann table comparison of our approach with algorithms on linemod in terms of add metric we report percentages of correctly estimated poses bold face numbers denote the best overall methods bold italic numbers denote the best methods among those that do not use refinement as opposed to the ones that use if different threshold object ape benchvise cam can cat driller duck eggbox glue holepuncher iron lamp phone average ours ours ours table comparison of our approach with without refinement using different thresholds for the pose metric comparative accuracy accuracy in terms of projection error in table we compare our results to those of brachmann et al and to both of these competing methods involve a pipeline that comprises a detection step followed by pose prediction and refinement since we do not have a refinement stage we show in the table their results without and with it in both cases we achieve better pose estimation accuracies in table we perform a similar comparison with whose authors report their projection accuracy in terms of the iou metric that method also requires a posteriori refinement and our results are again better in both cases even though relies on a large training set of rendered images that are sampled over a wide range of viewpoints and locations the competing methods before refinement we outperform all the methods by a significant margin of at least after refinement our pose estimates are still better than brachmann et al by assuming the additional knowledge of a full cad model and using it to further refine the pose and boost their pose estimation accuracy without any bells and whistles our approach achieves pose estimation accuracy in all the metrics without refinement when compared against methods that rely on the additional knowledge of full cad models and pose refinement it still achieves performance in projection error and iou metrics and yields comparable accuracy in the add metric our approach could be used in conjunction with such refinement strategies to further increase the accuracy however this comes at a heavy computational cost as we describe below accuracy in terms of the add metric in tables and we compare our methods against the other in terms of the average of the distances as described in section in table we give numbers before and after refinement for the authors do not report results without refinement however they provided us with the accuracy numbers reported in table the authors were not able to provide their accuracy numbers without refinement for this metric but made their code publicly available we ran their code with provided pretrained models to obtain the pose errors method object ape benchvise cam can cat duck glue holepuncher iron lamp phone average driller eggbox refinement ours refinement table comparison of our approach against on linemod using iou metric 
the authors of were able to provide us the results of our approach the refinement accuracy speed in table we report the computational efficiency of our approach for single object pose estimation in comparison to the approaches our approach runs at performance in contrast to the existing approaches which fall short of it in particular our algorithm runs at least times faster than the techniques for single object pose estimation as can be seen in table pose refinement in brachmann et al increase the accuracy significantly by at an additional of miliseconds per object also gets a substantial improvement of in accuracy at an additional of miliseconds per object even without correcting for the pose error our approach outperforms brachmann et al and yields close accuracy to while being times faster for single object pose estimation as discussed also in the unrefined poses computed from the bounding boxes of the ssd object detector are rather approximate we confirmed this by running their publicly available code with the provided pretrained models we report the accuracy numbers without the refinement using the add metric in table for different thresholds while providing a good initialization for the subsequent pose processing the pose estimates of without refinement are much less accurate than our approach the further refinement increases the pose estimation accuracy significantly however st of a computational time of miliseconds per object moreover in contrast to our approach the refinement requires the knowledge of the full object cad model in figure we show example results of our method on the l ine m od we include more visual results of our method in the supplementary material method overall speed for object refinement runtime fps fps fps fps brachmann et al rad lepetit kehl et al ours table comparison of the overall computational runtime of our approach for a single object in comparison to we further provide the computational runtime induced by the pose refinement stage of and report pose estimation accuracy as in the identity of the objects can not be assumed to be known a priori and has to be guessed to this end the method of assumes that it has access to image crops based on the bounding boxes we make no such assumptions instead we jointly detect the object in estimate its identity and predict its pose we generate our training images with the approach explained in section we further augment the linemod training data by adding into the images objects extracted from other training sequences we report our pose estimation accuracy in figure and demonstrate that even without assuming information as in the case of our method yields satisfactory pose accuracy in the case of severe occlusions for object detection purposes we consider an estimate to be correct if its detection iou is larger than note that here the detection iou corresponds to the overlap of the bounding boxes of the object rather than the overlap of the projected masks as is the case for the iou metric defined in sec in table we report a mean average precision map of which is similar to the accuracy reported by and outperforms the ones reported by method map hinterstoisser et al brachmann et al kehl et al ours table the detection experiment on the occlusion dataset left plot right multiple object pose estimation our approach provides accurate poses with performance upon one network invocation our only computational overhead is an efficient pnp algorithm which operates on just points per object furthermore we do not require full 
colored object models to further refine our initial pose estimates our approach is therefore scalable to handle multiple objects as shown in figure and has only a negligible computational overhead of pnp while the competing approaches have a linear runtime growth we use the occlusion dataset to compare our approach to brachmann et al for detection this it is not explicitly stated in but the authors confirmed this to us in private email communication figure pose estimation results of our approach note that our method can recover the pose in these challenging scenarios which involve significant amounts of clutter occlusion and orientation ambiguity in the last column we show failure cases due to motion blur severe occlusion and specularity this figure is best viewed on a computer screen tions with only decrease in accuracy we can reach to a runtime of fps and the runtime virtually remains the same for estimating the pose of multiple objects method figure percentage of correctly estimated poses as a function of the projection error for different objects of the occlusion dataset kehl et al ours runtime in miliseconds fps fps fps fps table of our method on the l ine m od dataset accuracy reported is the percentage of correctly estimated poses with respect to the projection error the same network model is used for all four input resolutions timings are on a nvidia titan x pascal gpu conclusion projection metric speed number of objects figure the runtime of our approach with increasing number of objects as compared to that of we also evaluated the accuracy and speed of our approach for different input resolutions as explained in section we adopt a training procedure and change the input resolution during training randomly as in this allows us to be able to change the input resolution at and predict from images with higher resolution this is especially useful for predicting the pose of small objects more robustly as we do not have an initial step for object detection and produce image crops which are then resized to higher resolutions for pose prediction as in our approach requires better handling of the small objects in table we compare the accuracy and computational efficiency of our approach for different input we have proposed a new cnn architecture for fast and accurate pose prediction that naturally extends the single shot object detection paradigm to object detection our network predicts locations of the projections of the objects bounding box corners which involves predicting just a few more points than for bounding box regression given the predicted corner projections the pose is computed via an efficient pnp method for high accuracy existing object detectors all refine their pose estimates during postprocessing a step that requires an accurate object model and also incurs a runtime overhead per detected object in contrast our single shot predictions are very accurate which alleviates the need for refinement due to this our method is not dependent on access to object models and there is virtually no overhead when estimating the pose of multiple objects our method is it runs at fps depending on the image resolution this makes it substantially faster than existing methods acknowledgements we would like to thank mahdi rad and vincent lepetit for fruitful discussions and providing the results of their method in table also we thank wadim kehl fabian manhardt and slobodan ilic for helpful discussions and for their help in evaluating their algorithm without postprocessing in table references 
brachmann krull michel gumhold shotton and rother learning object pose estimation using object coordinates in eccv brachmann michel krull ying yang gumhold et al pose estimation of objects and scenes from a single rgb image in cvpr choi and christensen textureless object detection and tracking an approach in iros choi and christensen object pose estimation in unstructured environments robotics and autonomous systems collet martinez and srinivasa the moped framework object recognition and pose estimation for manipulation the international journal of robotics research everingham van gool williams winn and zisserman the pascal visual object classes voc challenge ijcv hinterstoisser holzer cagniart ilic konolige navab and lepetit multimodal templates for detection of objects in heavily cluttered scenes in iccv hinterstoisser lepetit ilic holzer bradski konolige and navab model based training detection and pose estimation of objects in heavily cluttered scenes in accv huttenlocher klanderman and rucklidge comparing images using the hausdorff distance tpami kehl manhardt tombari ilic and navab making detection and pose estimation great again in iccv kehl milletari tombari ilic and navab deep learning of local patches for object detection and pose estimation in eccv kendall grimes and cipolla posenet a convolutional network for camera relocalization in iccv lai bo ren and fox a hierarchical object dataset in icra lai bo ren and fox a scalable approach for joint object and pose recognition in aaai lepetit and fua monocular tracking of rigid objects a survey foundations and trends in computer graphics and vision lepetit and fua epnp an accurate o n solution to the pnp problem ijcv li gu and kanade robustly aligning a shape model and its application to car alignment of unknown pose tpami liu tuzel veeraraghavan and chellappa fast directional chamfer matching in cvpr liu anguelov erhan szegedy reed fu and berg ssd single shot multibox detector in eccv lowe fitting parameterized models to images tpami lowe object recognition from local features in iccv mahendran ali and vidal pose regression using convolutional neural networks cvprw michel kirillov brachmann krull gumhold savchynskyy and rother global hypothesis generation for object pose estimation in cvpr poirson ammirato fu liu kosecka and berg fast single shot detection and pose estimation in rad and lepetit a scalable accurate robust to partial occlusion method for predicting the poses of challenging objects without using depth in iccv ramnath sinha szeliski and hsiao car make and model recognition using curve alignment in wacv redmon divvala girshick and farhadi you only look once unified object detection in cvpr redmon and farhadi better faster stronger cvpr ren he girshick and j sun faster towards object detection with region proposal networks in nips and tuytelaars discriminatively trained templates for object detection a real time scalable approach in iccv rothganger lazebnik schmid and ponce object modeling and recognition using local image descriptors and spatial constraints ijcv sock kasaei lopes and kim object pose estimation and camera motion planning using rgbd images in iccv su qi li and guibas render for cnn viewpoint estimation in images using cnns trained with rendered model views in iccv tulsiani and malik viewpoints and keypoints in cvpr wagner reitmayr mulloni drummond and schmalstieg pose tracking from natural features on mobile phones in ismar xiang schmidt narayanan and fox posecnn a convolutional neural network for object 
pose estimation in cluttered scenes arxiv preprint zach and pham a dynamic programming approach for fast and robust object pose recognition from range images in cvpr zhang and cao combined holistic and local patches for recovering object pose in iccv zhu derpanis yang brahmbhatt zhang phillips lecce and daniilidis single image object detection and pose estimation for grasping in icra supplemental material seamless single shot object pose prediction in the supplemental material we provide details on how the training images were prepared and on the proposed confidence weighted prediction step we also present qualitative results on o cclusion and l ine m od training images as discussed in the main paper we segment the foreground object in the images in the training set using the segmentation masks provided and paste the segmented image over a random image taken from the pascal voc dataset examples of such images which are given as input to the network at training time are shown in figure this operation of removing the actual background prevents the network from learning the scene context and is essential in order to achieve proper generalization prediction in the final step of our method we compute a weighted sum of multiple sets of predictions for the corners and the centroid using associated confidence values as weights on l ine m od this gave a improvement in accuracy with the projection metric the first step involves scanning the full grid to find the cell with the highest confidence for each potential object we then consider a neighborhood around it on the grid and prune the cells with confidence values lower than the detection threshold of on the remaining cells we compute a average of the associated predicted vectors where the eight corner points and the centroid have been stacked to form the vector the averaged coordinates are then used in the pnp method this refinement on the grid usually improves the pose of somewhat large objects that occupy several adjoining cells in the grid figure shows an example where the ape object lies between two adjoining cells and the confidence weighting improves the pose accuracy figure left the grid on a image middle confidence values for predictions of the ape object on the grid right cropped view of our pose estimate shown in blue and the ground truth shown in green here three cells next to the best cell have good predictions and their combination gives a more accurate pose than the best prediction alone best viewed in color figure top using segmentation masks given in l ine m od we extract the foreground objects in our training images and composite them over random images from pascal voc bottom we also augment the training set by combining images of multiple objects taken from different training images qualitative results we show qualitative results from the o cclusion and l ine m od datasets in figures to these examples show that our method is robust to severe occlusions rotational ambiguities in appearance reflections viewpoint change and scene clutter a b c figure results on the o cclusion dataset our method is quite robust against severe occlusions in the presence of scene clutter and rotational pose ambiguity for symmetric objects a input images b pose predictions of multiple objects c a magnified view of the individual pose estimates of six different objects is shown for clarity in each case the bounding box is rendered on the input image the following color coding is used a pe gold b enchvise green c an red c at purple d riller cyan d uck 
black g lue orange h olepuncher blue in addition to the objects from the o cclusion dataset we also visualize the pose predictions of the benchvise object from the l ine m od dataset as in we do not evaluate on the eggbox object as more than of close poses are not seen in the training sequence this image is best viewed on a computer screen a b c figure results on the o cclusion dataset our method is quite robust against severe occlusions in the presence of scene clutter and rotational pose ambiguity for symmetric objects a input images b pose predictions of multiple objects c a magnified view of the individual pose estimates of six different objects is shown for clarity in each case the bounding box is rendered on the input image the following color coding is used a pe gold b enchvise green c an red c at purple d riller cyan d uck black g lue orange h olepuncher blue in addition to the objects from the o cclusion dataset we also visualize the pose predictions of the benchvise object from the l ine m od dataset as in we do not evaluate on the eggbox object as more than of close poses are not seen in the training sequence this image is best viewed on a computer screen figure example results on the l ine m od dataset left a pe middle b enchvise right c am the projected bounding boxes are rendered over the image and they have been cropped and resized for ease of visualization the blue cuboid is rendered using our pose estimate whereas the green cuboid is rendered using the ground truth object pose note that the input image dimension is pixels and the objects are often quite small noticeable scene clutter and occlusion makes these examples challenging figure example results on the l ine m od dataset left c an middle c at right d riller the projected bounding boxes are rendered over the image and they have been cropped and resized for ease of visualization the blue cuboid is rendered using our pose estimate whereas the green cuboid is rendered using the ground truth object pose note that the input image dimension is pixels and the objects are often quite small noticeable scene clutter and occlusion makes these examples challenging figure example results on the l ine m od dataset left d uck middle e ggbox right g lue the projected bounding boxes are rendered over the image and they have been cropped and resized for ease of visualization the blue cuboid is rendered using our pose estimate whereas the green cuboid is rendered using the ground truth object pose note that the input image dimension is pixels and the objects are often quite small noticeable scene clutter and occlusion makes these examples challenging figure example results on the l ine m od dataset left h ole p uncher middle i ron right l amp and p hone the projected bounding boxes are rendered over the image and they have been cropped and resized for ease of visualization the blue cuboid is rendered using our pose estimate whereas the green cuboid is rendered using the ground truth object pose note that the input image dimension is pixels and the objects are often quite small noticeable scene clutter and occlusion makes these examples challenging
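To make the prediction pipeline described above more concrete, the following is a minimal illustrative sketch of the two post-network steps: the confidence-weighted averaging of the corner and centroid predictions over a small neighbourhood of the best grid cell (as described in the supplemental material; a 3x3 neighbourhood and a threshold of 0.5 are assumptions here, since the exact values are elided in the text), followed by pose recovery with OpenCV's EPnP solver (the paper only states that an efficient PnP method is used, so EPnP is an assumption). The grid size, camera intrinsics, object extents, corner ordering and all function names are likewise introduced purely for illustration and are not taken from the paper.

import numpy as np
import cv2

def weighted_prediction(conf, coords, threshold=0.5):
    # conf:   (H, W) confidence of each grid cell for one object
    # coords: (H, W, 9, 2) predicted 2D points per cell (8 corners + centroid)
    # threshold and the 3x3 neighbourhood size are illustrative assumptions
    H, W = conf.shape
    r, c = np.unravel_index(np.argmax(conf), conf.shape)       # best cell
    points, weights = [], []
    for dr in (-1, 0, 1):                                       # 3x3 neighbourhood
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < H and 0 <= cc < W and conf[rr, cc] >= threshold:
                points.append(coords[rr, cc].reshape(-1))       # stack the 9 points
                weights.append(conf[rr, cc])
    if not points:                                              # fall back to the best cell
        return coords[r, c]
    points, weights = np.array(points), np.array(weights)
    fused = (weights[:, None] * points).sum(axis=0) / weights.sum()
    return fused.reshape(9, 2)

def bbox_corners_3d(extents):
    # 8 corners of an axis-aligned 3D bounding box centred at the object origin,
    # plus the centroid, matching the 9 predicted 2D points
    w, h, d = extents
    corners = [[sx * w / 2, sy * h / 2, sz * d / 2]
               for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
    return np.array(corners + [[0.0, 0.0, 0.0]], dtype=np.float32)

def pose_from_points(points_2d, extents, K):
    ok, rvec, tvec = cv2.solvePnP(bbox_corners_3d(extents),
                                  points_2d.astype(np.float32), K, None,
                                  flags=cv2.SOLVEPNP_EPNP)
    R, _ = cv2.Rodrigues(rvec)                                  # 3x3 rotation matrix
    return R, tvec                                              # pose in the camera frame

# usage with dummy values: a 13x13 grid and placeholder intrinsics (illustrative only)
conf = np.random.rand(13, 13)
coords = np.random.rand(13, 13, 9, 2) * 640
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]], np.float32)
R, t = pose_from_points(weighted_prediction(conf, coords), (0.1, 0.1, 0.1), K)

Since only this lightweight averaging and a single PnP call are repeated per detected object, the overhead of handling multiple objects stays negligible, consistent with the runtime discussion above.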
1
deep residual text detection network for scene text xiangyu zhu yingying jiang shuli yang xiaobing wang wei li pei fu hua wang and zhenbo luo machine learning lab samsung r d institute of china beijing beijing china text detection is a challenging problem in computer vision in this paper we propose a novel text detection network based on prevalent object detection frameworks in order to obtain stronger semantic feature we adopt resnet as feature extraction layers and exploit feature by combining hierarchical convolutional networks a vertical proposal mechanism is utilized to avoid proposal classification while regression layer remains working to improve localization accuracy our approach evaluated on dataset achieves which outperforms previous results in scene text detection text detection deep ctpn residual networks i introduction text detection is an important part of text content analysis especially for reading natural text in the wild scene text detection is becoming increasing attractive to researchers with the development of smart phone and tremendous demands of text recognition in augmentation reality ar unlike traditional documental text detecting scene text seems to be a much more challenging task due to illuminations perspective distortion and complex background in the last few decades series of methods had been proposed to deal with this problem which achieved considerable performance those methods can be categorized into sliding window based methods and connected component cc based methods sliding window based method utilizes sliding windows to search the image densely for candidate text regions and classifies regions by traditional machine learning tools this kind of method can be quite slow as a consequence of densely search and multiscale windows comparing to the previous method cc base method draws more attention until recently it involved several steps typically three first ccs are extracted from images as character candidates second a character classifier is trained to remove the ccs finally remained ccs are going to be grouped into by clustering or rules maximally stable extremal regions mser is one of the most popular methods it had been reported outstanding performance in benchmark however the following limitations constrain its further improvement in performance words constituent of single character are ignored by grouping rules for the sake of precision characters in low color contrast can not be extracted by mser and another disadvantage is the complex convolutional neural networks cnn approach has led to a great breakthrough in object detection region proposal cnn was the first attempt to classify proposals by cnn then faster was proposed where a subnetwork named rpn was designed to generate proposals autonomously by feature maps and a few additional convolution layers faster used as baseline for feature map extraction and proposal classification until deep residual network resnet was presented resnet was reported better performance in pascal voc and ilsvrc comparing to and googlenet moreover the structure of resnet was designed fully convolutional without heavy fully connected layers the resnet version of faster was observed better performance inspired by the great progress in object detection a few cnn based methods had been proposed to address scene text detection the connectionist text proposal network ctpn is a novel framework based on faster which benefits from an additional recurrent neural network and vertical proposal mechanism in this paper we came up 
with a framework called residual text detection network rtn rtn were inspired by resnet and ctpn vertical proposal mechanism first resnet was used to generate strong semantic feature instead of traditional networks like rather than a naively layer replacement we combine features to produce hierarchy residual feature the outstanding performance was mainly contributed by this stronger semantic feature second vertical proposal mechanism was adopted and an additional regression part was used to improve localization accuracy this step was implemented by a two stage training strategy it achieved on ii related work a object detection with the success of deep convolutional network in image recognition was inspired to classify region proposal via cnn after was proposed the related object detection approaches had been developed rapidly such as sppnet fast faster faster is a mature prevalent framework that trained and tested from end to end the framework constitutes of three parts feature map generation feature maps representing semantic information were extracted by deep convolutional network was used in faster proposal generation a simple resnet vertical mechanism rpn conv blstm box coordinates regression hierarchy residual feature psroi pooling pool architecture of residual text detection network rtn convolutional network name region proposal networks rpn was designed to generate candidate regions with the input of feature maps region classification and regression by sharing features regions proposals were projected to the location in feature maps then a following fast structure outputted final results by classification and regression influenced by the latest progress in image recognition other deeper convolutional networks were transplant to this framework instead of including googlenet and resnet resnet was proved to be a superior convolutional network than googlenet and in imagenet classification task is a completely fully convolutional architecture that combines resnet and faster rcnn together fpn feature pyramid network exploits pyramid of resnet and the framework using fpn won the champion of coco detection challenge besides faster based pipeline single shot multibox detector ssd and you look only once yolo are two representative and promising works ssd is one of the first attempts to utilizing convolutional networks while yolo is extremely faster than all the methods mentioned above however they do not get a superior performance with significant margin comparing to faster rcnn pipeline cnn based text detection general object detection pipeline can be transplant to text detection realm barrier free cnn based text detection gradually becomes the most promising approach zhang proposed a fully convolutional network for text detection in arbitrary orientation instead of semantic segmentation it achieved an of on deeptext proposed a and pooling based on the framework of faster it achieved on inspired by ssd liao presented a approach called textboxes jointly predictions and word recognition were utilized ctpn is a unique network abandoned fast classification and regression which can be treated as a novel individual rpn with recurrent neural network rnn it achieved previous on as fmeasure among published papers nevertheless it was just a prototype for detection using rnn and fixed width proposal is harmful for localization accuracy iii residual text detection network the architecture of this residual text detection network rtn is shown in fig it consists of three parts hierarchy residual feature 
map for feature extraction vertical mechanism rpn for proposal prediction and bounding box regression part for higherlocalization accuracy a hierarchy residual feature map in our framework we use resnet to derive feature map from original images the feature map is a serial of features in formation similar to handcraft feature it is fed to rpn and regression part resnet consists of concatenate blocks and have the same stride as output pixels in rfcn region proposals were predicted by they believed the feature maps were semantic strong enough and comparable to feature maps differs from resnet in structure thus a simple replacement from to resnet would not work properly unlike typical resnet based detection does not share the same feature map between rpn and regression parts is utilized to generate proposals in rpn while for regression in this kind of methods rpn is unable to use a deeper semantic feature by visualizing feature maps of and we find out contains too many low level features while and are competitive to on the first glance we have carried out series of experiments on faster baseline using and respectively framework using detected edges and lines instead of objects and required much more computation due to larger feature map sizes it was a strong evidence that contained too many low level features to be used directly on the contrary baselines using and detected text correctly however framework using fails on detecting small text due to coarse resolution feature maps although represents deeper feature the resolution is half comparing to even we adopt the trous algorithm to compensate stride difference the performance is still unsatisfactory using as the only feature maps might be insufficient for text detection but abandon deeper representations seems to be an unwise choice we believe using in a proper way will contribute to proposal prediction it is rational to come up with a naive idea that predicting proposals on and respectively like previous approaches did such as ssd and textboxes in this way not only we can detect fine scale text and robust to scale invariance but also utilizing deeper feature representations nevertheless it is inconvenient to identify reliability from proposals without an additional classification as we introduced vertical mechanism to rpn it seems to be a rather complicated problem to deal with that we combine the hierarchy feature maps and together to produce a new hierarchy feature map in this way we can use both and feature maps simultaneously and the task to identify which feature maps are more reliable is assigned to convolution layers as shown in fig the input size of original images is after several convolutional layers and get feature maps in size and corresponding to pixels and pixels stride a deconvolution layer was used to upsample make sure the shapes of match exactly we attach a convolution layer with kernel which aim to work as learnable weights for combining and our experiment shows hierarchy feature lead to an improvement on both precision and recall x x deconv x hierarchy feature x hierarchy residual network architecture first was upsampled to make sure its shapes match second we attached convlutional layers with kernal to both and finally hierarchy feature was produced by element wise addition vertical mechanism rpn in faster a serial of cnn is used to classify proposals the structure is called fast however ctpn abandoned fast structure namely rpn output vertical proposals directly without classification and regression as we know rpn can 
be treated as a general object detection system if the detection task is to distinguish only one category from background two categories in total it seems that rpn is already competent for text detection depending on vertical proposal mechanism and recurrent neural network ctpn was able to detect text without that mechanism makes the final model much smaller in this approach we adopt this vertical mechanism to rpn anchors and ground truth are divided into fixed width pixels boxes shown in fig particularly spaces between ground truths are treated as negative samples this enable the method to output result in word level sequences of vertical proposals will be predicted by rpn a threshold is applied to remove vertical proposals therefore remained adjacent text proposals can be connected together to produce text line proposals yellow box ground truth of vertical proposals green box space between words which are treated as negative samples bounding box regression by connecting vertical proposals we will obtain proposals as result nevertheless fixed width proposal might lead to inaccurate localization when the beginning and the end of vertical proposals are not exactly fit text in small text case the problem becomes more serious unlike general object detection this inaccuracy will influence recognition tremendously if parts of the characters are not included in bounding box they might be omitted or wrongly recognized on the contrary a loose bounding box contains much background and that could be recognized as additional characters in conclusion a tight and exact bounding box is significant for text detection and recognition to achieve this goal we introduce bounding box regression to get exact coordinates just as faster and did in their framework in this paper we refer to fast structure as text line proposals are obtained in section b bounding box offset of every proposal were calculated however classification is not contained in this part only regression is remained a further classification is unnecessary and experiments show it is harmful for performance this is because recurrent neural networks we adopted in rpn have a tendency to connect words into text lines after we set word level as network learning goal text line level proposal might be classified as negative result the bounding box regression loss is defined as in this functions is the ground truth of bounding box is predicted coordinates and stand for x coordinate and width smooth function is used for regression this loss function is almost the same as what used in fast except that two coordinates offsets are predicted instead of four coordinates it is unnecessary to regression y coordinate and height which was done in rpn layers for every single vertical proposal we develop a two stage training strategy to implement this further regression stage one hierarchy residual feature and vertical mechanism rpn were trained the learning rates of regression parts were set to stage two regression parts were trained individually the learning rates of resnet hierarchy residual feature and rpn were set to a normal rpn presented in faster is used to generate anchors and train regression parts and will not be used in test model training and testing details our model was trained on natural images collected and labeled by ourselves these images were labeled in word level and resized to scale there is no overlap between these images or any kind of public dataset available on the internet on the condition of the extremely similarity between training set 
and testing set training set was not included to prevent testing set training ground truth is labeled in word level and then divided into vertical ground truth by a fixed width pixels in proposal layer corresponding to vertical proposals mentioned above the space between words were labeled as negative samples anchor has an iou overlap with space samples were signed as negative label about of negative samples are space by adding space sample the networks tend to output word level proposal rather than level iv experiments a evaluation of hierarchy residual feature in ctpn natural images were collected and label for training much less than ours in order to prove the improvement is a consequence of stronger semantic feature map rather than much more training data we implement our own version of ctpn and training on images all the experiments carried out below were trained on the same amount of images in this experiment we used and as backbone for feature extraction feature map generated by different layers were evaluated including of of and hierarchy residual feature map used in rtn table shows the performances on we use ctpn framework as baseline and different feature maps mentioned above are evaluated all the parameters and following processing are the same we evaluated these methods on two scales respectively namely and scale means the shortest side of images is no more than pixels and the longest side can not exceed pixels so does to scale one observation is that all these feature maps are competitive in scale however when it comes to scale margins between these methods becoming considerable we had run the open source test code provided by the author of ctpn marked as larger scale did not benefit performance on the contrary the degraded moreover our ctpn implementation with improved slightly on in conclusion larger test scale does not always helpful for detection and localization nevertheless by simply replacing to improved to from to it proves has a superior feature representation comparing to as other papers mentioned furthermore baseline with hierarchy residual feature map achieved the best performance with which improve points on recall comparing to the original ctpn the results shows baseline with hierarchy residual feature achieves the best performance on both recall and precision on scale which could be a convincing evidence for stronger semantic feature we evaluated rtn on icdar benchmarks it consists of focused text images taken in the wild the evaluation criteria are provided by the robust reading competition as previous works did first the effectiveness of hierarchy residual feature map was verified comparing to other prevalent feature extraction layers then additional regression layers were proved to be helpful for localization accuracy finally this method was compared to other published methods and it achieved performance table evaluating baseline with different feature map on method ctpn ctpn ctpn ctpn rtn rtn http https backbone scale feature map precision recall f score example detection results of our rtn on the benchmark the first row of images is the result before connection and regression the second row of images is the result after vertical proposal connection and regression table regression improvement by additional convolutional layers regression precision recall table comparison with publications on method yin faster baseline seglink deeptext textboxes cctn ctpn proposed rtn f score regression improvement proposals connected by fixed width vertical proposals are 
inaccurate on both beginning and end sides moreover the evaluation criteria are extremely strict detection bounding box can be judged as false positive sample if its boundary exceed ground truth slightly it means this inaccuracy can degrade performance on both recall and precession even if the texts are detected correctly through bounding box regression we are able to deal with this problem properly as shown in table rtn with regression improved on both recall and precision benefit from this additional regression evaluation of rtn on after proving the effectiveness of hierarchy residual feature and additional regression we compare rtn with other published methods on this single model approach did not utilize training and testing running time of each image is about with gpu shows examples of detection results on first we compared rtn with methods mentioned in recent publications cnn based text detection methods were compared including textboxes deeptext fcn cctn seglink and ctpn the prevalent object detection frameworks like faster and are also evaluated table shows rtn achieved the best performance with great margin second we submitted our results to robust reading competition website and compared rtn with other competitors on challenge this task is also evaluated on dataset rtn with single model ranked third performance with slightly margin compared to tencent youtu and precision recall f score conclusions in this paper a deep residual text detection network is proposed based on the prevalent object detection framework first stronger semantic feature is obtained by using deep residual networks and combining feature from different convolutional networks then a vertical proposal mechanism is introduced inrpn inspired by ctpn at last an additional regression system is used to improve localization accuracy table comparison with submissions on competition websites method precision recall tencent youtu rtn baidu idl f score references yin yin huang and hao robust text detection in natural scene images ieee transactions on pattern analysis and machine intelligence vol no pp sun huo jia et al a robust approach for text detection from natural scene images pattern recognitiom vol no pp yin yin et al effective text localization in natural scene images with mser grouping and adaboost international conference on pattern recognition ieee pp karatzas shafait uchida iwamura gomez i bigorda robles mestre mas fernandez mota almaz an almaz an and de las heras icdar robust reading competition international conference on document analysis and recognition icdar pp girshick ross et al convolutional networks for accurate object detection and segmentation ieee transactions on pattern analysis machine intelligence vol no pp ren he girshick and j sun faster towards realtime object detection with region proposal networks advances in neural information processing systems nips simonyan and zisserman very deep convolutional networks for image recognition in iclr he et al deep residual learning for image recognition ieee conference on computer vision and pattern recognition cvpr pp everingham van gool williams winn and pascal visual object classes voc challenge ijcv pp szegedy liu et al going deeper with convolutions ieee conference on computer vision and pattern recognition cvpr pp szegedy vanhoucke ioffe et al rethinking the inception architecture for computer computer vision and pattern recognition cvpr pp zhi weilin tong pan yu detecting text in natrual image with connectionist text proposal network eccv z feng 
deeptext a unified framework for text proposal generation and text detection in natural images z yao text detection with fully convolutional computer vision and pattern recognition cvpr m liu textboxes a fast text detector with a single deep neural aaai t w huang y qiao j yao accurate text localization in natural image with cascaded convolutional textnetwork technical report march shi baoguang bai xiang belongie serge detecting oriented text in natural images by linking segments computer vision and pattern recognition cvpr jifeng yi he k and jian object detection via fully convolutional conference on neural information processing systems nips p he feature pyramid networks for object detection computer vision and pattern recognition cvpr lin maire belongie hays perona ramanan and zitnick microsoft coco common objects in context w liu d anguelov d erhan c szegedy ssd single shot multibox detector eccv j farhadi you only look once unified object detection ieee conference on computer vision and pattern recognition cvpr mallat a wavelet tour of signal processing academic press girshick fast ieee international conference on computer vision iccv pp
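As a concrete illustration of the hierarchy residual feature map construction described above, here is a PyTorch-style sketch (the paper does not state an implementation framework): the deeper feature map is upsampled with a deconvolution so that its spatial size matches the shallower one, a convolution is attached to each branch to act as learnable combination weights, and the two branches are added element-wise. The channel counts, the 1x1 kernel size (the exact kernel size is elided in the text above), the dummy spatial sizes and the class and variable names are assumptions made for illustration only.

import torch
import torch.nn as nn

class HierarchyResidualFeature(nn.Module):
    def __init__(self, c4_channels=1024, c5_channels=2048, out_channels=1024):
        super().__init__()
        # 2x upsampling of the deeper map so its spatial size matches the shallower one
        self.upsample = nn.ConvTranspose2d(c5_channels, c5_channels,
                                           kernel_size=4, stride=2, padding=1)
        # 1x1 convolutions act as learnable weights for the element-wise combination
        self.proj4 = nn.Conv2d(c4_channels, out_channels, kernel_size=1)
        self.proj5 = nn.Conv2d(c5_channels, out_channels, kernel_size=1)

    def forward(self, conv4, conv5):
        up5 = self.upsample(conv5)
        return self.proj4(conv4) + self.proj5(up5)   # element-wise addition

# usage with dummy feature maps; spatial sizes chosen so the upsampled map matches exactly
f4 = torch.randn(1, 1024, 38, 38)   # conv4-like features (assumed 1024 channels)
f5 = torch.randn(1, 2048, 19, 19)   # conv5-like features (assumed 2048 channels, half the size)
fused = HierarchyResidualFeature()(f4, f5)           # -> (1, 1024, 38, 38)

The fused map is then fed to the vertical-proposal RPN and to the regression branch, in place of a single conv4 or conv5 feature map.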
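The vertical proposal mechanism and the (x, w)-only regression described above can be sketched as follows. This is an illustrative reconstruction: the fixed proposal width, the score threshold, the gap criterion and all function names are assumptions rather than values from the paper. Low-scoring fixed-width vertical proposals are discarded, adjacent survivors are connected into text-line proposals, and the regression branch later refines only the horizontal coordinate and width of each line with a smooth-L1 penalty.

import numpy as np

def smooth_l1(x):
    # smooth L1 penalty applied to the (x, w) regression offsets
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5)

def connect_vertical_proposals(boxes, scores, score_thresh=0.7, max_gap=16):
    # boxes: fixed-width vertical proposals as (x1, y1, x2, y2)
    # the 16-pixel gap and the 0.7 threshold are illustrative, not values from the paper;
    # a fuller implementation would also check vertical overlap between neighbours
    keep = sorted((b for b, s in zip(boxes, scores) if s >= score_thresh),
                  key=lambda b: b[0])
    lines, current = [], []
    for box in keep:
        if current and box[0] - current[-1][2] > max_gap:
            lines.append(current)            # horizontal gap too large: start a new line
            current = []
        current.append(box)
    if current:
        lines.append(current)
    # each text-line proposal is the union of its connected vertical proposals
    return [(min(b[0] for b in line), min(b[1] for b in line),
             max(b[2] for b in line), max(b[3] for b in line)) for line in lines]

# usage: three adjacent vertical proposals merge into one text line, the distant one stays alone
boxes = [(0, 10, 16, 40), (16, 11, 32, 41), (32, 9, 48, 39), (200, 50, 216, 90)]
lines = connect_vertical_proposals(boxes, [0.9, 0.8, 0.95, 0.85])
# -> [(0, 9, 48, 41), (200, 50, 216, 90)]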
1
fiducial confidence and objective bayesian posterior distributions for a multidimensional parameter dec piero veronese and eugenio melilli bocconi university milano italy abstract we propose a way to construct fiducial distributions for a multidimensional parameter using a conditional procedure related to the inferential importance of the components of the parameter for discrete models in which the nonuniqueness of the fiducial distribution is well known we propose to use the geometric mean of the extreme cases and show its good behavior with respect to the more traditional arithmetic mean connections with the generalized fiducial inference approach developed by hannig and with confidence distributions are also analyzed the suggested procedure strongly simplifies when the statistical model belongs to a subclass of the natural exponential family called conditionally reducible which includes the multinomial and the models furthermore because fiducial inference and objective bayesian analysis are both attempts to derive distributions for an unknown parameter without any prior information it is natural to discuss their relationships in particular the reference posteriors which also depend on the importance ordering of the parameters are the natural terms of comparison we show that fiducial and reference posterior distributions coincide in the models and we characterize the conditionally reducible natural exponential families for which this happens the discussion of some classical examples closes the paper keywords confidence distribution jeffreys prior parameter model multinomial model natural exponential family reference prior introduction fiducial distributions after having been introduced by fisher and widely discussed and criticized in the subsequent years have been de facto brushed aside for a long time and only recently they have obtained new vitality the original idea of fisher was to construct a distribution for a parameter which includes all the information given by the data without resorting to the bayes theorem this is obtained by transferring the randomness from the observed quantity given by the statistical model to the parameter originally fisher considered a continuous sufficient statistic s with distribution function depending on a real parameter let denote the quantile of order of and let s be a realization of if is increasing in is decreasing in the statement s is equivalent to s and thus fisher assumes s as the quantile of order of a distribution which he names fiducial the set of all quantiles s establishes the fiducial distribution function hs so that hs s and hs hs s of course hs and its density hs must be properly modified if is increasing in fisher also provides some examples of multivariate fiducial distributions obtained by a procedure but he never develops a general and rigorous theory this fact along with the problem to cover discrete models the presence of some inconsistencies of the fiducial distribution the marginalization paradox see dawid stone and the difficulties in its interpretation gave rise to a quite strong negative attitude towards fisher proposal in the renewed interest for the fiducial approach a relevant role is played by the generalized fiducial inference introduced and developed by hannig see also hannig et al for a review he provides a formal and mathematically rigorous definition which has a quite general applicability the crucial element of his definition is a equation x g u which links the unknown parameter and the observed data x through a 
random element u having a known distribution roughly speaking by shifting the randomness of u from x to inverting g with respect to after having fixed x x the distribution given by the statistical model leads to a distribution for the parameter contrary to the original idea of fisher the generalized fiducial distribution is and hannig widely discusses this point applications to different statistical models can be found for instance in hannig et al hannig iyer and wandler hannig other recent contributions to the topic of fiducial distributions are given by taraldsen lindqvist martin liu and veronese melilli henceforth v m in this last paper the authors derive fiducial distributions for a parameter in a discrete or continuous real natural exponential family nef and discuss some of their properties with particular emphasis on the frequentist coverage of the fiducial intervals in the past fiducial distributions have often been associated with confidence distributions even if these latter have a different meaning a modern definition of confidence distribution is given in schweder hjort and singh et al see the book by schweder hjort for a complete and updated review on confidence distributions and their connections with fiducial inference it is important to emphasize that a confidence distribution must be regarded as a function of the data with reasonable properties from a purely frequentist point of view a confidence distribution is conceptually similar to a point estimator as there exist several unbiased estimators several confidence distributions can be provided for the same parameter and choosing among them can be done resorting to further optimality criteria thus the confidence distribution theory allows to compare in a quite general setting formal distributions for the parameter derived by different statistical procedures in this paper we suggest a way to construct a unique distribution for a multidimensional parameter indexing discrete and continuous models following a procedure similar to that used by fisher in some examples we call it fiducial distribution but we look at it simply as a distribution on the parameter space in the spirit of the confidence distribution theory the of the construction is the procedure by conditioning the distribution of the data is factorized as a product of laws and for each of these the fiducial density for a real parameter component possibly conditional on other components is obtained the joint fiducial density for the parameter is then defined as the product of the conditional fiducial densities it is well known that fisher s fiducial argument presents several drawbacks in higher dimensions essentially because one can not recover the fiducial distribution for a function of the parameters starting from the joint fiducial distribution see schweder hjort ch and our approach when it can be applied presents the advantage to construct sequentially the fiducial distribution directly on the parameters of interest and different fiducial distributions can be obtained focusing on different parameters of interest also it should be noticed that a general definition of confidence distribution for a multidimensional parameter does not exist and more attention is given to the construction of approximate confidence curves for specific nested families of regions see schweder hjort ch and sec interestingly our joint fiducial distribution coincides in many cases with the bayesian posterior obtained using the reference prior this fact motivates the second goal of the paper to 
investigate the relationships between the objective bayesian posteriors and the suggested fiducial distributions objective bayesian analysis see berger essentially studies how to perform a good bayesian inference especially for moderate sample size when one is unwilling or unable to assess a subjective prior under this approach the prior distribution is derived directly from the model and thus it is labeled as objective the reference prior introduced by bernardo and developed by berger bernardo is the most successful default prior proposed in the literature for a multidimensional parameter the reference prior depends on the grouping and ordering of its components and in general no longer coincides with the jeffreys prior this is the reference prior only for a real parameter and it is unsatisfactory otherwise as well known lindley was the first to discuss the connections between fiducial and posterior distributions for a real parameter when a real continuous sufficient statistic exists v m extend this result to real discrete nefs characterizing all families admitting a fiducial prior a prior leading to a posterior coinciding with the fiducial distribution this prior is strictly related to the jeffreys prior we show here that when the parameter is multidimensional this relationship no longer holds and a new one is established with the reference prior in particular we prove results for parameter models and conditionally reducible nefs a subclass of nefs defined in consonni veronese the paper is structured as follows section reviews some basic facts on fiducial and confidence distributions for real nefs and on generalized fiducial distributions the proposal for constructing a multivariate fiducial distribution is presented in section which also discusses the relationships with confidence distributions section the use of the geometric mean of fiducial densities for solving the problem in discrete models section the connections with the generalized fiducial inference and the consistency with the sufficiency principle section section studies the fiducial distributions for conditionally reducible nefs and provides their explicit expression for a particular subclass which includes the multinomial and the negativemultinomial model section analyzes the relationships between the fiducial distributions and the reference posteriors in particular for parameter models section and nefs section characterizing those which admit the fiducial prior section discusses further examples in which fiducial and reference posteriors coincide section concludes the paper presenting some possible asymptotic extensions finally appendix collects some useful technical results on conditionally reducible nefs while appendix includes the proofs of all the results stated in the paper preliminary results the modern definition of confidence distribution for a real parameter of interest see schweder hjort and singh et al can be formulated as follows definition let r be a parametric model for data x x here is the parameter of interest and is a nuisance parameter a function c x r is a confidence distribution for if c x is a distribution function for each x x and c x has a uniform distribution in under where is the true parameter value the relevant requirement in the previous definition is the uniformity of the distribution which ensures the correct coverage of the confidence intervals as seen in section the confidence distribution theory must be placed in a purely frequentist context and allows to compare distributions on the 
parameter space obtained using different approaches finally the definition of confidence distribution can be generalized by requiring that the uniformity assumption holds only asymptotically strictly linked to the notion of confidence distribution is that of confidence curve defined for each observed x x as the function cc x see schweder hjort this function gives the extremes of confidence vals for any level allowing a fast and clear comparison of confidence distributions with respect to their interval length when the parameter of interest is multidimensional how to extend the definitions of confidence distribution and confidence curve is much less clear and various proposals have been made see schweder hjort and singh et al as detailed in section hannig has proposed the notion of generalized fiducial distribution which is based on a equation x g u because several functions g can generate the same statistical model and not all the resulting fiducial distributions are reasonable in terms of properties or computational tractability hannig sec gives some hints on the choice of a default function in particular if x xn is an independent identically distributed random sample from an absolutely continuous distribution function with density rd he suggests to use xi ui i n where ui are uniform random variables on and is the inverse or generalized inverse of if other regularity assumptions are satisfied the generalized fiducial distribution for can be written as r r x j x x j x where the expression of j x given in hannig formula is j x x det id id d xid qd xij in the numerator of the ratio is the determinant of the matrix whose is xij this procedure leads to the fisher definition of fiducial density when n d hannig example explicitly recognizes the advise of wilkinson that the choice of a fiducial distribution should depend on the parameter of interest and uses the well known example of d independent normal distributions n in which the p parameter of interest is he shows that the default equations xi ui i d lead to a fiducial distribution which has good frequentist properties for inference on the s but very bad ones when the interest is on as already recognized by stein thus hannig suggests an ad hoc alternative equation which leads to a better solution notice that our general procedure suggested in the next section constructs a fiducial distribution starting directly from the parameter of interest and do not required the choice a priori of a data generating function fiducial distributions and their properties with particular emphasis on the frequentist coverage of the fiducial intervals for a discrete or a continuous real regular nef are discussed in v m more specifically consider the sufficient statistic s associated with a sample of size n and denote by s its support let s be the distribution function of s and s exp nm the corresponding density with respect to a measure let a inf s b sup s and define s a b if a otherwise s a b then for s s petrone veronese have proved that inf hs s inf sup sup is a fiducial distribution function for the natural parameter it follows that the fiducial density of is hs hs s z s nm t t t it is important to underline and simple to verify that the distribution function hs is also a confidence distribution only asymptotically in the discrete case according to definition notice that for discrete nefs s s s and s s do not coincide and thus besides hs in one could define a left fiducial distribution as s s for convenience sometimes hs will be called right fiducial 
distribution a standard way to overcome this is referring to the device see schweder hjort pag which amounts to consider the mixture hsa hs s s s s whose density is the arithmetic mean of hs and instead we will suggest to average hs and using their geometric mean hg s suitably normalized and show that it presents better properties than ha s section and a more direct connection with objective bayesian inference section even if operationally the difference is usually not particularly big table provides the fiducial distributions obtained in v m for some important discrete and continuous nefs which will be used in the forthcoming examples it also establishes the abbreviations used in the paper for the standard distributions table fiducial distributions for some real nefs n sufficient fiducial statistic p hs n distributions i xi known n p p i xi hs ga s p i log xi hs ga n s p i xic hs ga n s p i xi hs p be s nm s i xi hs known ga known pa known we c c known bi m p p be s nm s m known hsg p be s nm s po p i xi hs ga s n ga s n hsg ga s n m p p i xi hs p be nm s p be nm s m known hsg p be nm s the following notations are used ga for a gamma distribution with shape and mean for an distribution if x ga then be for a beta distribution with parameters and bi m p for a binomial distribution with m trials and success probability p m p for a with m successes and success probability p po for the poisson distribuition with mean pa for a pareto distribution with density x we c for a weibull distribution with density exp x c fiducial distributions for multidimensional parameters a natural way to construct a suitable fiducial distribution for a multidimensional parameter is to follow the procedure used by fisher in some examples the of our proposal stems on the factorization of the sampling distribution as a product of conditional laws for each of these the fiducial density for a real component of the parameter possibly conditional on other components is defined it is well known that different factorizations of sampling distributions can produce different joint fiducial distributions see dempster however we do not consider this aspect a drawback of the procedure if it is linked to the inferential importance ordering of the parameter components implied by the factorization for example if a parameter is transformed in such a way that is the parameter of interest and the nuisance the obvious ordering is and a suitable factorization must be defined accordingly see example ctd in this section for an illustration the crucial role played by the ordering of the parameters accordingly to their inferential importance is widely acknowledged by objective bayesian inference in which reference priors are different for different orderings see section in order to construct a fiducial distribution we consider two basic transformations one involving the sample data x xn having a distribution parameterized by d n and one involving given x consider a statistic t tm d m n with density t which summarizes x without losing information on t can be a sufficient statistic or a transformation of x split t in t d d where t d td and d tm and suppose that d is ancillary for as a consequence t t d d p d and all the information on provided by x are included in the conditional distribution of t d given d assume now that there exists a smooth reparameterization from to with ordered with respect to their importance such that t d d d y tk d the density tk d with the corresponding distribution function tk d must be interpreted as the 
conditional distribution of tk given t t d d parameterized by assuming known in the following we will always assume that all the conditional distribution functions s involved in the analysis are monotone and differentiable in and have limits and when tends to the boundaries of its domain notice that this is always true if belongs to a nef see under these assumptions the joint fiducial density of is obtained as ht d y ht k d tk d and ht k d several applications of this procedure to well known models will be provided in section here we illustrate some interesting features of the fiducial distribution i the existence of an ancillary statistic is not necessary if there exists a sufficient statistic with the same dimension of the parameter m d an important case is m d so that formula and reduce to ht t the original formula suggested by fisher ii if one is only interested in it follows from that it is enough to consider ht td d which depending on all observations does not lose any sample information a typical choice for td is given by the maximum likelihood estimator of and thus when is not sufficient we have to consider the distribution of given the ancillary statistic d similarly if one is interested in it is enough to consider ht ht d and so on iii when an ancillary statistic d is needed the fiducial distribution is invariant with respect to any transformation of d all the sampling distributions are conditional on it and thus any transformation establishes the same constraints see section for an example iv the construction by successive conditioning makes the fiducial distribution invariant under the so called lower triangular transformation of t d for fixed d more precisely we consider a transformation d d such that gk t k d for k to see this assuming for instance gk t k d increasing in tk it is sufficient to show that d d gk t k d gk t k d d d tk tk t t d d it follows immediately that t and lead to the same fiducial distribution v if t d is sufficient for for each k then the conditional distribution of tk given t t d d does not depend on and the fiducial distribution becomes the product of the marginal fiducial distributions of the s as a consequence can be used alone to make inference on and the fiducial distribution does not depend on the inferential ordering of the parameters an important case in which this happens will be discussed in section we close this section establishing the invariance property of the fiducial distribution ht under a lower triangular transformation a transformation from to say which maintains the same decreasing ordering of importance in the components of the two vectors proposition if is a lower triangular continuously differentiable function from to then the fiducial distribution t obtained applying to the model t and the fiducial distribution t obtained applying to the model t t are such that for each measurable a z z ht t a a relationships with confidence distributions given a real nef hs in is an exact or approximate confidence distribution if the observations are continuous or discrete respectively it is possible to verify that the same is true for the marginal fiducial distribution of the main parameter of interest in the more general definition indeed the distribution function of is ht td d so that the first requirement in definition is clearly satisfied thanks to the assumption on the distribution function given after formula for what concerns the uniformity condition assuming that is decreasing in if it is increasing replace with we have for u and 
arbitrary ht d d u td d u z td d u t d u because by construction the integrand is equal to u for all fixed t d the discrete case the geometric mean of the left and right fiducial densities as mentioned in section for a discrete statistic s with distribution depending on a real parameter we suggest to use the geometric mean of the right and left fiducial where c is the normalizing constant instead of densities hg s c hs hs their arithmetic mean ha s a first justification of the use of the geometric mean of densities is suggested by berger et al who mention its property to be the density closest to hs and with respect to the the divergence as specified in the following proposition we give a simple proof of this fact without resorting to the calculus of variations recall that given two densities p and q having the same support and the same dominating measure the divergence of p from q is defined as r kl q x log q x x x proposition consider two densities and with the same support the density q which minimizes kl kl is given by q pg which is the normalized geometric mean of and furthermore krishnamoorthy lee observe that a distribution for whose aim is to give a synthesis of two fiducial distributions should stochastically lie between them in our setting the extreme distributions are hs and this property is surely satisfied by the arithmetic mean because hs hsa uniformly with respect to for each s belonging to the set for which both hs and can be defined the same inequalities are true for hsg under mild assumptions as usual here we assume that hs is defined as s proposition let r be the probability mass function of a real observation s having a continuous derivative with respect to for each s assume that the function s s s is decreasing on then hs hsg uniformly on the assumptions required in the previous proposition are satisfied by many important models for example we have the following corollary if is the probability mass function of a real nef then hs hsg uniformly on we now discuss the relationship between hsg and hsa proposition let r be the probability mass function of a real observation s satisfying the following assumptions in addition to those stated in proposition lim lim then for each s there exists depending on s such that hsg hsa for and hsg hsa for the result in proposition is important in connection with confidence intervals because it shows that hsg gives for a fixed level a confidence interval smaller than that obtained from hsa see figure graph for an example notice that the assumptions on in proposition are fulfilled by a real nef with natural parameter space r as it occurs in the binomial and poisson models however these assumptions are not necessary to ensure the stated behavior of hsg and hsa that we conjecture to be quite general as the following example shows example consider an sample of size n from a logarithmic distribution with parameter with probability mass function x the sufficient statistic t pn xi t i x log is distributed as n t n t i t t log n n where s t n is the stirling number of the first kind with arguments t and n see johnson et al the distribution of t belongs to a real nef with t decreasing in so that the fiducial distribution function ht for t n n and is ht t t x n j n j j log n for this model s s t n t log pt t j n j log it can be seen that for each t n is decreasing in and lim t x j n t lim t n j nevertheless the fiducial distributions htg and hta behave as stated in proposition see figure graph finally we justify our preference for hsg versus hsa 
showing that its confidence risk under quadratic penalty as defined in schweder hjort sec is uniformly better for all the important discrete models reported in table the confidence risk figure graph fiducial distributions for a sample from the logarithmic distribution g n t red ha t green ht yellow ht blue graph g confidence curves for ha t green and ht yellow r hs for the mean parameter and a confidence or fiducial distribution hs under quadratic penalty is r hs z dhs varhs where varhs denotes the variance of under hs the expected value with respect to the distribution of s and e hs is the mean of under hs now recalling that for the binomial and the distribution in table assuming m for simplicity we have p and p it is easy to verify that for both these models and the poisson model is the same under hsg and hsa as a consequence a g r hsa hsg varhs varhs which becomes n n and for the three models above respectively all these values are strictly positive for each n uniformly in let us now consider the fiducial distribution for a multivariate parameter defined in for each discrete component of the product starting from tk tk t d d tk d and tk tk t d d it is possible to define a right and a left fiducial distribution respectively and hence their geometric and arithmetic means notice that each nent of involves a parameter and a real observation the remaining quantities being fixed so that the propositions and can be applied multivariate fiducial distributions for discrete observations can thus be obtained combining in the various possible way these univariate distributions in particular we will consider ht obtained as the product of all the right univariate conditional fiducial distributions obtained as the product of all the left univariate conditional fiducial distributions hta defined as the product of the d mixtures hta k d ht k d k d and finally htg corresponding to the density hg t obtained as the product of the d geometric notice that hg coincides with the geometric means hg t t k d ht k d k d mean of all the fiducial densities derived as described above fiducial inference and the sufficiency principle the procedure introduced at the beginning of section gives a generalized fiducial distribution according to hannig if one considers as equation t g u with gk u k d k d tk uk k d m where u is a random vector with a completely known distribution the functions gk can be explicitly obtained iteratively as follows d u u d d d u u u u d d and so on it is interesting to observe that the generalized fiducial distribution r given in does not necessarily satisfy the sufficiency principle this can be verified immediately looking at the example in hannig in which a uniform distribution on is considered and r does not depend on the xi s only through the sufficient statistic s x x n where x i denotes the order statistic despite its simple form this model is highly irregular but the inconsistency with the sufficiency principle of the generalized fiducial distribution r can also occur for more standard models in particular if a real continuous sufficient statistic s for a real parameter exists one could derive two different fiducial distributions starting from s or from the whole sample a simple example of this issue can be easily constructed considering a beta model with parameters and another interesting example is the following example let x xn be an sample from a truncated exponential density x x r this density is not defined for but it can be completed by continuity setting x the distribution 
function of xi is xi xi so that from we have n j x where s pn xi x s e thus using we obtain r n x s r figure graph fiducial densities r red r green h s blue graph fiducial densities r red r green h s blue figure confidence curves for r red and h s blue which depends on the values of the specific xi s consider now the sufficient statistic p s xi and for simplicity assume n the density of s is s s s s e and the generalized fiducial density reduces to hs s in figure we report the fiducial densities r and hs for different values of and s for s all densities are symmetric with the mode in while the dispersion is increasing in so that the more concentrated fiducial density is obtained for however for s the densities have different modes and are shifted to the left when increases in all cases the fiducial density hs is in the middle of the various cases notice that hs has all the good properties discussed in v m and in particular it is a confidence distribution because the model belongs to a nef the confidence intervals corresponding to hs are slightly smaller than those corresponding to r as can be seen from the confidence curves reported in figure for instance when the confidence intervals are and for hs and r respectively the computation of the fiducial distribution ht defined in is greatly simplified starting with the sufficient statistic instead of the whole sample however when both the alternatives are feasible they seem to lead to the same result in particular the following proposition states that the sufficiency principle is always satisfied by ht when there exists a complete sufficient statistic for the parameter proposition consider the fiducial distribution ht defined in and with t a transformation of the data x xn if s s t is a complete and sufficient statistic of dimension d for such that s g t d d is a lower triangular transformation of t d for fixed d then the fiducial distribution hs for obtained using s instead of t in and coincides with ht notice that the completeness of s is not necessary to satisfy the sufficiency principle as the following example shows example given an sample x of size n from a uniform distribution on it is immediate to verify that the sufficient statistic s x x n is not complete because is a location parameter z x n x is an ancillary statistic and the fiducial distribution for can be obtained starting from the distribution function of x n given z which is x n x n z z x n x n z x thus hs x n z x n x n x n z x if we start directly with x we can consider the distribution function of xn given z where zi xn xi omitting tedious calculations we have xn z xn max zi min zi max zi max zi xn min zi and thus for xn min zi xn max zi hx xn min zi max zi observing that min zi zi unless xn x n and recalling that zi xn xi i it follows that xn zi x n and similarly xn zi x so that hx coincides with hs given in conditionally reducible natural exponential families consider a multivariate natural exponential family whose density with respect to a fixed positive measure is given by d x x exp xk m x xd rd a nef is reducible in the sequel if its joint density can be factorized as a product of d conditional densities each belonging to a real exponential family more precisely if d y xk d y exp xk mk x where is a function from onto furthermore it can be shown that with k d so that the s are variation independent notice that is the natural parameter of the conditional distribution for details on these families with emphasis on enriched conjugate priors and on reference bayesian 
analysis see consonni veronese and consonni et al respectively both these papers deal in particular with the families having simple quadratic variance function named which include as most interesting cases the multinomial and models see casalis and appendix example multinomial model consider a random vector x distributed according to a multinomial distribution and denote by pk the probability of the outcome xk k p p d with xk n and pk it is well know that the conditional pj xj pk tribution of xk given x x k d is bi n whereas the marginal distribution of is bi n since a binomial distribution is a real nef one can factorize the multinomial distribution as in with x pk log xj log mk x p pj for models belonging to a the construction of the fiducial distribution proposed in section drastically simplifies the existence of a sufficient statistic of the same dimension of the parameter makes the ancillary statistic not necessary while the indexing each conditional distribution with a real parameter implies the independence of the s under the fiducial distribution proposition let s be the sufficient statistic distributed according to a regular crnef on rd parameterized by with coinciding with the natural parameter space of the conditional distribution then for s s with sk k d satisfying conditions similar to those given before hs d y hs k d y sk is a fiducial distribution function on with density hs d y hs k where hs k hs k sk k the s are independent under hs and thus their importance ordering is irrelevant this fact also justifies the simplification in the index notation adopted in notice however that the definition and the interpretation of the s depend on the particular ordering considered for the xk s as seen in example as recalled in section a general definition of confidence distribution does not exist however in our context since hs is constructed as a product of marginal confidence distributions it can be considered as a multivariate possibly asymptotic confidence distribution for some of the examples of section can be reconnected with this framework but here we consider in specific the whose variance function is given in for this class with the exclusion of the secant distribution it is possible to give a simple explicit expression of the fiducial density of recalling the definition of bk given in and setting zk zkk in the specifications of zkk and q appearing in and of bk can be found in appendix proposition consider a sample of size n from a a multinomial or a family on rd if s denotes the sufficient statistic then the right fiducial distribution for has density d d y y x hs hs k sj bk exp sk zk q while for the family with an m dimensional tive multinomial component the right fiducial distribution is given by hs d y h sk m y exp sk exp exp d y sj pm sj exp sk n notice that the discrete components of a basic are with zk so that the left fiducial distribution is obtained by the previous formulas replacing the term sk by sk in and thus it follows that the geometric mean hg s has the same structure in and with sk instead of sk example for the multinomial family because q zk and bk n log k d it easily follows from formula that d d y y sk g g hs k hs pk nn sj b s nn s j k e where b denotes the beta function the fiducial distribution for not always is of particular interest in itself but it can be used as a starting point for the construction of the fiducial distribution for alternative and more relevant parameters we consider here the which is a lower triangular transformation of see so that 
its fiducial distribution can be directly obtained from that of thanks to proposition corollary the right fiducial distribution for the mean parameter relative to the ordering for the following on rd has density family with m poisson components hs m y s exp d y n n o exp sk which corresponds to the product of m densities ga sk n k m and densities n sk k m multinomial family hs d y k x where for k d and n n pd sj family with r occurrences in the d cell k d x y hs where for k d and pd sj family with an negative multinomial component with r occurrences in the m cell m k y x hs d y m x sj pm r s j sj exp pm sk exp n p where for k m and m sj notice that the pm p density of given m is an rn sj r m while the density of given k m d is a n sk depending only on example inference for the multinomial distribution is usually performed for the parameter p pd since pk the fiducial distribution hg for p is easily derived from noting that the left fiducial density can be obtained replacing sk by sk in and not in which is derived aggregating the hyperparameters it follows that the geometric mean hg s p is given by d k d y x x sk g hs p pj pk pk pk with for k d and n n dirichlet distribution clearly hg s p pd sj this is a generalized in refers to the specific order of importance pd if we change this order the fiducial distribution will change accordingly similarly for the model with r occurrences in the d th cell hg s can be easily computed from observing that zk q and bk log exp connections with objective bayesian inference as mentioned in section if we look at fiducial inference as a way to obtain a distribution on the parameter space of the model without any prior information it appears natural to compare it with objective bayesian inference recall that when a fiducial distribution coincides with a posterior the corresponding prior is called fiducial prior the construction of the fiducial distribution ht defined in is based on the inferential importance ordering of the parameter components this aspect is also crucial in the procedure adopted to construct reference priors see bernardo smith sec the reference prior r for a parameter is generated by successive conditioning established by the importance ordering of its compoq nents as r r it is widely recognized that the dependence of the reference prior on the choice of the parameter of interest is necessary to obtain good frequentist properties such as coverage and consistency for a parameter the reference prior coincides with the jeffreys prior j i where i denotes the fisher information while the jeffreys prior is invariant under a reparameterization of the model the reference prior and thus the reference posterior is generally not invariant unless the transformation from to is lower triangular see datta ghosh thus the reference posterior has the same invariance property of the fiducial distribution proved in proposition recently berger et al recognize the existence of situations in which one is interested simultaneously in all the parameter components of the model or in none of them but a prior and thus a posterior distribution is necessary to perform other inferences such as predictions in these cases an overall prior is needed its determination is an open problem but as they highlight when there exists a common reference prior for all parameters this is the natural choice for the overall prior a similar problem occurs in our context and we will comment on this aspect in the following sections notice that here the fiducial distribution suggested by 
hannig can be a good choice parameter models for parameter models the fiducial prior exists and coincides with the reference prior assume first that only one parameter is unknown in this case the model admits an ancillary statistic z and in particular we take zi xi or zi xi i n if is a location or a scale parameter respectively proposition let x xn be an sample from a density if is a location or a scale parameter then the fiducial distribution coincides with the bayesian posterior obtained with the jeffreys prior j or j respectively example let x be an sample from the uniform distribution on so that is a scale parameter first notice that s x n is a sufficient statistic for and thus we can obtain directly the fiducial distribution nsn s hs hs s however the same result can be obtained without resorting to the sufficient statistic set w max zn and consider the distribution function of given the ancillary statistic z xn w n w now because w means max xn while for w we have w max xn expression as a function of is equivalent to s appearing in and thus provides the same fiducial distribution it is immediate to verify that it coincides with the jeffreys posterior a case in which the sufficient statistic is not and thus it is necessary to use an ancillary statistic can be found in the previous example trivially hs given in coincides with the bayesian posterior obtained by j consider now a model with a location parameter and a scale parameter both unknown given an sample of size n an ancillary statistic is for example z zn with zj xj j n where is marginally ancillary for then the transformation from x to z allows to write the sampling distribution as z p z note that in specific contexts other transformations could be more appropriate for example in a normal model one p p could use xi s xi z with zj xj j n so that the factorization becomes z p z proposition let x xn be an sample from a density where and are a location and a scale parameter respectively then the fiducial distribution hx for coincides with the bayesian posterior obtained with the reference prior r notice that r is different from j obtained by the jeffreys rule which as already recalled is not suitable for multidimensional parameters furthermore while r does not depend on the ordering of and the fiducial distribution is in general not allowable if the ordering is reversed however hx coincides with the fiducial distribution obtained through other symmetric approaches see hannig and fraser thus the inferential ordering of importance seems irrelevant for this model and hx can be assumed as an overall fiducial distribution exponential families lindley was the first to study the existence of a fiducial prior analyzing in particular the case of continuous real nefs and proving that it exists only for gaussian with known variance and gamma with known shape models a full characterization of the real nefs which admit a fiducial prior is given in v m the following proposition summarizes their results proposition let f be a real nef with natural parameter i a fiducial prior exists if and only if f is an affine transformation of one of the following families normal with known variance gamma with known shape eter binomial poisson and for the three discrete families the fiducial prior exists for all hs and hsg ii when a fiducial prior exists it belongs to the family of conjugate distributions moreover it coincides with the jeffreys prior for continuous nefs and for discrete nefs too if we choose hsg as the fiducial distribution iii the fiducial 
distribution hs or hsa in the discrete case and the bayesian posterior distribution corresponding to the jeffreys prior have the same edgeworth s expansion up to the term of order the previous results establish a strong connections between jeffreys posteriors and fiducial distributions for real nefs and thus the two different approaches lead in some sense to the same objective inference a discussion about the coverage of the fiducial and the jeffreys intervals and their good frequentist properties in particular when compared with the standard wald intervals is given in v m section consider now a it is easy to verify that the fiducial distribution hs in belongs to the enriched conjugate family defined in consonni veronese section this fact is the to prove the following proposition proposition let s be a sufficient statistic distributed according to a on rd parameterized by then a fiducial prior for exists if and only if the conditional distribution of sk given s s is an affine transformation of one of the following families normal with known variance gamma with known shape parameter binomial poisson and in particular all basic with the exclusion of the hyperbolic secant admit a fiducial prior which belongs to the enriched conjugate family moreover if for the discrete components of these models we consider the geometric mean hg s k then the product of the jeffreys priors computed from the conditional distribution of sk given s s the reference prior and the fiducial prior are all equal example the multinomial distribution is a basic and thus from proposition setting sk n in hg s given in we obtain the fiducial prior d y which coincides with the reference prior and with the product of the jeffreys priors for k d computed on the distribution of xk given x x finally we observe that the fiducial distribution is always an overall fiducial distribution for however the is often not interesting in itself even if in some cases it is strictly related with a more relevant one for example following berger et al consider a multinomial model applied to directional data as it happens for outcomes from an attitude survey in this case the cells are naturally ordered so that it is meaningful to reparameterize the model in terms of the conditional probabilities exp exp k then in induces on an overall fiducial prior which is a product of independent be distributions coinciding with the overall reference prior further examples examples concerning normal models i difference of means consider two independent normal samples each of size n with known common variance and means and respectively the sufficient statistics are the sample sums and with si n i if the parameter of interest is we can reparameterize the joint density of in so that the conditional distribution of given being n depends only on from table the fiducial distribution of is n and thus is n where si because is n arguing as before the fiducial distribution of given is n so that notice that the same joint fiducial distribution is obtained if we consider the ordering or even if we compute the marginal fiducial distributions of and and obtain that of through the rule thus the ordering of the parameter is irrelevant and is an overall fiducial distribution furthermore it coincides with the reference posterior obtained with a constant prior and the marginal distribution of and are both confidence distributions ii many normal means neyman scott problem consider n samples of size two with each xij independently distributed according to a n i n p and let and w 
the aim is to make inference on the common variance with nuisance parameter this well known example is used to show that the maximum likelihood estimator of is inconsistent because n to obtain the fiducial distribution of first notice that the joint distribution of the sufficient statistics and w can be factorized as qn w for the independence of and w with w ga using table one can easily obtain from w the fiducial distribution for and hence that for which is while that of each given derived from is n as a consequence qn w hw this distribution coincides with the posterior obtained from the order invariant reference prior r and does not present the inconsistency of the likelihood estimator which instead occurs for the posterior distribution obtained from the jeffreys prior j comparison of two poisson rates the comparison of poisson rates and is a classical problem arising in many contexts see for example lehmann romano for a discussion on an unbiased uniformly most powerful test for the ratio given two samples of size n from two independent poisson distributions the sufficient statistics are the sample sums and with si po i reparameterizing the joint density of in we have that the conditional distribution of given is bi and the marginal distribution of is po thus the sampling distribution is a and we can apply using table the fiducial density for derived from the conditional distribution of given is be which implies hg s b from the marginal distribution of and using again table it follows that g g g hg is ga n and thus this joint fiducial distribution is coincides with the reference posterior according to proposition and is an overall distribution for notice that hg is a confidence distribution and that it differs from the fiducial distribution induced on by the two independent marginal fiducial densities for and bivariate binomial a bayesian analysis for the bivariate binomial model has been discussed by crowder sweeting in connection with a microbiological application consider m spores each with a probability p to germinate and denote by r the random number of germinating spores so that r is bi m p if q is the probability that one of the latter spores bends in a particular direction and s is the random number of them the probability distribution of s given r r is bi r q the joint distribution of r and s is called bivariate binomial crowder and sweeting observe that the jeffreys prior j p q p q q is not satisfactory for its asymmetry in p and p while polson wasserman show that this fact does not occur using the reference prior r p q p q q which is the product of the two independent jeffreys priors g the joint fiducial density hg r s q p can be obtained as the product of hr s q derived from the conditional model bi r q of s given r r and hg r derived from the marginal model bi m p of r which does not depend on q thus p and q are independent under hg r s so that it is an overall fiducial distribution because for the binomial model the fiducial prior is equal to the jeffreys prior see proposition it follows immediately that hg r s q p coincides with the reference posterior all previous conclusions hold even if we consider the alternative parametrization pq p q pq ratio of parameters of a trinomial distribution bernardo ramon perform the bayesian reference analysis for the ratio of two multinomial parameters presenting some applications in particular they discuss the case of distributed according to a trinomial distribution with parameters n and p and provide the joint reference prior for with 
the parameter of interest then they derive the marginal reference posterior for which is x r to find the fiducial distribution of we reparameterize the trinomial model in the conditional distribution of given t t is bi t so that by table the fiducial density for is be t and hg t coincides with from the marginal distribution of t which is bi n it is possible to derive the fiducial density hg t n t n t g g so that the joint fiducial density is hg t ht which coincides with the joint reference posterior conclusions and final remarks we have suggested a way to construct a fiducial distribution which depends on the inferential importance ordering of the parameter components our proposal appears to be quite simple to apply and even if it is not so general as the theory suggested by hannig has some advantages in connection with the modern confidence distribution theory and it is strictly related to objective bayesian analysis in complex models an exact analysis is generally not possible but approximate results can be derived working with asymptotic distributions in v m starting from the sufficient statistic an expansion up to the first order of the fiducial distribution for the mean parameter of a real nef is provided this result can be extended to arbitrary regular models starting from the maximum likelihood estimator of the parameter when the maximum likelihood estimator is not sufficient a better fiducial distribution can be obtained using an ancillary statistic as suggested in section to this aim the magic formula given by which provides an approximation of the conditional distribution of the maximum likelihood estimator given an ancillary statistic can be fruitfully adopted furthermore these asymptotic results appear to be strictly connected with the theory of matching priors priors that ensure approximate frequentist validity of posterior credible set notice that also these priors crucially depend on the inferential ordering of the parameters see tibshirani and datta mukerjee however a normal approximation of the fiducial distribution when it can be established and is enough for the analysis can be proved to be these type of results will be discussed in a forthcoming paper acknowledgements this research was supported by grants from bocconi university appendix useful results on some technical aspects related to are the following a nef is a if and only if the principal k k matrix of the variance function does not depend on for k d the fisher information matrix relative to the is diagonal with the element depending only on k the cumulant transform mk x of the conditional density is given by mk x x akj xj bk for some functions akj and bk the conditional expectation of xk given x x is linear in x because it is the gradient of the parameter depends on only through k because from x using and it can be checked that d x auk and m d x bk as a consequence of the first part of there exists a function gk such that gk of course all the previous formulas hold for k d with the understanding that components that lose meaning for a specific k are set to zero a nef has a simple quadratic variance function sqvf if the element of its matrix seen as a function of the mean parameter p k can be written as vij lij cij where q is a real constant and l k k d and c are constant d d symmetric matrices any can be obtained via a nonsingular affine transformation from one of the basic families q multinomial q n positive integer q r positive integer q and q see casalis for a detailed description of these distributions the 
element of the variance function v of a basic is vij d x zik cij where zij zji i j d are constants the values of zii for the basic nefsqvfs together with other technical details are given in the proof of corollary proofs proof of proposition by the standard rule applied to the first integral in it is enough to show that t ht where is the jacobian of the transformation from to now from we have t d y ht k d d y tk d while d y because the transformation from to is lower triangular it follows from the last two formulas and the chain rule that t d y tk d where is the distribution function of tk given t t t t in the parameterization the equality follows by applying to the model parameterized by proof of proposition let pg x x x where c constant then z r x x x is the normalizing z q x q x kl kl log q x x log q x x x x z z q x q x q x x log q x x log x x cpg x z q x log g q x x log c log p x because c does not depend on q it follows that the functional in achieves its minimum equal to log c if and only if kl q pg proof of proposition we only prove that hs hsg the other inequality can be shown in the same way using and we can write hg s hs c c p s hs hs c s hs q hs hs s hs c by hypothesis is decreasing and thus from hg s is also decreasing on this is a sufficient condition for hs hsg see shaked shanthikumar theorem proof of corollary let s exp nm be the probability mass function with respect to a measure of a real nef with the natural parameter fixing s we can write s s nm exp nm s t nm exp nm x t nm exp t s s nm the elements in the sum are continuous and increasing functions of in both intervals for which and where m thus is decreasing in these intervals moreover is equal to zero for positive for and negative for because the denominator of is hs which is positive then is decreasing over all and from proposition the result follows proof of proposition in order to prove the proposition it is sufficient to show that there exist and in g a g such that ha s hs i with hs hs for and g ha s hs otherwise see shaked shanthikumar proof of theorem g thus we analyze the sign of ha s hs we can write q g ha h h h hs s s s s c hs c g so that the sign of the difference ha s is a function of only first notice that rp by a standard property of the arithmetic and geometric means c hs r hs for all after some straightforward algebra it can be seen g that ha s hs when and only when c c or with and moreover we have g a g ha s hs for and hs hs for or by assumption is decreasing on from to so that there exist and with and satisfying the sufficient condition stated at the beginning of the proof proof of proposition first notice that if we use for constructing the fiducial distribution the sufficient statistic s which has the same dimension of the parameter we do not need an ancillary statistic furthermore t is a transformation of x and thus s is a function of t t d t d but since s is complete it is stochastically independent of t d by basu s theorem as a consequence s k is also independent of t d and thus sk sk s s sk sk s s d d from the lower triangular transformation s g t d d we have that sk gk tk t d with gk invertible with respect to tk so that assuming gk increasing becomes gk tk t t d sk t t d d tk sk t t d t t d d tk tk t t d d which proves the proposition proof of proposition because each conditional distribution of xk given x x belongs to a nef with natural parameter using we have that hs k is a distribution function for the result follows from the postulated independence among the s proof of proposition formulas 
and derive by a direct application of to the conditional distributions of the different families for a detailed description of the involved see consonni veronese proof of theorem proof of corollary first notice that the fiducial distribution hs can be more easily obtained via a double transformation namely s hs where the jacobian for and d x zk q d m det v exp see smith pag for the proportionality relationship and consonni et al prop for the equality we consider now each family family with m poisson components we have q zk log and bk exp for k m while zk and bk for k m d where is the known variance of the normal q components then from it follows that m and thus using the result follows multinomial family using the relationships in example gives n pd qd and using we obtain family we have q r zk log pd pd and bk log e for k e log log r pd q then from it follows that r k and thus using the result follows family with an m dimensional p component we have q r zk and log r p k m and r m zk and k in this case it is convenient to compute the fiducial density of directly from observing that the jacobian of the transformation from to is m m m x y x r k and using the previous expression of the density follows proof of proposition let x be an sample of size n with xi xi f xi is a location parameter and consider the transformation zi xi i n whose jacobian is one then setting z zn r hx hz zn f t r f t qn f t qn f t zi dt zi dt using now the substitution m in the previous two integrals and recalling that j we obtain qn f zi m dm f m r qn f zi m dm f m qn j f xi m m dm r qn j f w m dm z j dm the result relative to the scale parameter follows recalling that the model x f can be transformed in a model with location parameter setting y log x and log in this case a constant prior on is equivalent to a prior on proportional to proof of proposition let x be an sample of size n with xi xi f xi i n and notice that the absolute value of the jacobian of the transformation from x to z with z zn and zj xj j n is furthermore the reference prior r is and can be written as r r where r and see steel working conditionally on we can thus apply proposition to conclude that the reference posterior and the fiducial distribution for given coincide it remains to show that z r r p z r r z z f n zi y f f corresponds to the fiducial density z we have z r r z x t z dtdw wzi qn dtdw f f f z p pz z where the density of pz z does not depend on the parameters because z is ancillary assuming and using the transformation m v t v which implies t m w with jacobian the fiducial distribution z in becomes r r f v f v pz z qn f zi v dmdv taking the derivative with respect to it is immediate to see that the fiducial density for coincides with the posterior distribution given in if and applying to the integral the same transformation used in the previous case we have r r r r f f f z p z v f qn f v pz z wzi qn f dtdw zi v dmdv so that again the derivative with respect to of z leads to the following lemma will be used in the proof of proposition lemma consider a on rd with the diagonal element in the fisher information matrix given by ikk ak bk then the reference prior r for is r d y d y ak where is the jeffreys prior obtained from the conditional distribution of xk given x x proof of lemma first observe that k is a transformation of k and that the information matrix i of a is diagonal see appendix points and from and the element of i is x ikk k log xk x x j under the assumption in the proposition we can write ikk k ak from datta ghosh it follows 
that the reference prior on is and is given by the last product in consider now the jeffreys prior on obtained from xk this is tional to the square root of xk x x xj ak x where again the last equality holds by the assumption in the proposition thus the product of the d jeffreys priors is equal to and the result holds proof of proposition due to the independence of the s a fiducial prior for exists if and only if there exists a fiducial prior for each because the conditional distribution of sk given s s belongs to a real nef with natural parameter the result of the first part of the proposition follows from proposition the first statement of the second part of the proposition follows checking directly the form of the conditional distributions of the basic and using again proposition the second statement follows from the remark stated before the proposition and from lemma references o on a formula for the distribution of the maximum likelihood estimator biometrika berger o the case for objective bayesian analysis bayesian analysis berger o bernardo ordered group reference priors with application to a multinomial problem biometrika berger bernardo sun overall objective priors bayesian analysis bernardo reference posterior distributions for bayesian inference stat soc ser b bernardo ramon an introduction to bayesian reference analysis inference on the ratio of multinomial parameters the statistician bernardo smith bayesian theory wiley chichester casalis the simple quadratic natural exponential families on rd ann statist consonni veronese conditionally reducible natural exponential families and enriched conjugate priors scand stat consonni veronese reference priors for exponential families with simple quadratic variance function multivariate anal crowder sweeting bayesian inference for a bivariate binomial distribution biometrika datta ghosh some remarks on noninformative priors amer statist assoc datta ghosh on the invariance of noninformative priors ann statist datta mukerjee probability matching priors higher order asymptotics lecture notes in statistics springer new york dawid stone the basis of fiducial inference ann statist dempster further examples of inconsistencies in the fiducial argument ann statist steel j reference priors for the general model statist prob lett fisher a inverse probability proceedings of the cambridge philosophical society fisher a the fiducial argument in statistical inference ann eugenics vi fisher a statistical methods and scientific inference hafner press new york fraser on fiducial inference ann math statist smith exponential and bayesian conjugate families review and extensions with discussion test hannig j on generalized fiducial inference statist sinica hannig j generalized fiducial inference via discretization statist sinica hannig j iyer fiducial intervals for variance components in an unbalanced normal mixed linear model amer statist assoc hannig iyer wang fiducial approach to uncertainty assessment accounting for error due to instrument resolution metrologia hannig iyer lai lee generalized fiducial inference a review and new results american statist assoc johnson kemp a kotz univariate discrete distributions wiley new york krishnamoorthy lee inference for functions of parameters in discrete distributions based on fiducial approach binomial and poisson cases statist plann inference lehmann romano testing statistical hypotheses springer new york lindley fiducial distributions and bayes theorem stat soc ser b martin liu inferential models a framework for 
posterior probabilistic inference. Amer. Statist. Assoc. Petrone, Veronese. Feller operators and mixture priors in Bayesian nonparametrics. Statist. Sinica. Polson, Wasserman. Prior distributions for the bivariate binomial. Biometrika. Schweder, Hjort. Confidence and likelihood. Scand. Stat. Schweder, Hjort. Confidence, Likelihood and Probability. Cambridge University Press, London. Shaked, Shanthikumar. Stochastic Orders. Springer, New York. Singh, Xie, Strawderman. Combining information through confidence distribution. Ann. Statist. Stein. An example of wide discrepancy between fiducial and confidence intervals. Ann. Math. Statist. Taraldsen, Lindqvist. Fiducial theory and optimal inference. Ann. Statist. Tibshirani. Noninformative priors for one parameter of many. Biometrika. Veronese, Melilli. Fiducial and confidence distributions for real exponential families. Scand. Stat. Wandler, Hannig. A fiducial approach to multiple comparisons. Statist. Plann. Inference. Wilkinson. On resolving the controversy in statistical inference. Stat. Soc. Ser. B.
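As a concrete illustration of the geometric-mean fiducial density discussed for discrete models above, the following minimal Python sketch works out the binomial case. It is not taken from the paper: it assumes the standard definitions of the right and left fiducial distribution functions, H_s(p) = Pr_p(S > s) and H_s^l(p) = Pr_p(S >= s), whose derivatives in p are Beta(s+1, n-s) and Beta(s, n-s+1) densities, and the values of n and s are invented for illustration. The normalized geometric mean then coincides with the Beta(s+1/2, n-s+1/2) density, i.e. the Jeffreys posterior, consistent with the discussion of fiducial priors for discrete NEFs above.

# Minimal numerical sketch (not the paper's code) of the geometric-mean
# fiducial density for a binomial model, under the assumptions stated above.
import numpy as np
from scipy.stats import beta

n, s = 20, 6                                # illustrative values
grid = np.linspace(1e-6, 1 - 1e-6, 20001)
dx = grid[1] - grid[0]

h_right = beta.pdf(grid, s + 1, n - s)      # d/dp Pr_p(S > s)
h_left = beta.pdf(grid, s, n - s + 1)       # d/dp Pr_p(S >= s)

g = np.sqrt(h_right * h_left)               # geometric mean, up to a constant
g /= g.sum() * dx                           # normalize numerically
a = 0.5 * (h_right + h_left)                # arithmetic mean, already normalized

# The geometric mean coincides with the Jeffreys/conjugate posterior
# Beta(s+1/2, n-s+1/2); the printed discrepancy is quadrature error only.
print(np.max(np.abs(g - beta.pdf(grid, s + 0.5, n - s + 0.5))))

# Equal-tailed 95% intervals from the two fiducial distributions.
lo_g, hi_g = beta.ppf([0.025, 0.975], s + 0.5, n - s + 0.5)
cdf_a = np.cumsum(a) * dx                   # crude CDF of the arithmetic mixture
lo_a, hi_a = np.interp([0.025, 0.975], cdf_a, grid)
print("geometric width:", hi_g - lo_g, "arithmetic width:", hi_a - lo_a)

In line with the comparison of the geometric and arithmetic means above, the interval from the geometric-mean fiducial distribution is expected to be the shorter of the two printed widths.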
10
source forager a search engine for similar source code vineeth david bingham ben david and thomas grammatech jun ithaca new york usa email vkashyap melski university of usa email bingham liblit reps spend a significant amount of time searching for to understand how to complete correct or adapt their own code for a new context unfortunately the state of the art in code search has not evolved much beyond text search over tokenized source code has much richer structure and semantics than normal text and this property can be exploited to specialize the process for better querying searching and ranking of results we present a new engine named source forager given a query in the form of a function source forager searches a code database for similar functions source forager preprocesses the database to extract a variety of simple code features that capture different aspects of code a search returns the k functions in the database that are most similar to the query based on the various extracted code features we tested the usefulness of source forager using a variety of queries from two domains our experiments show that the ranked results returned by source forager are accurate and that functions can be reliably retrieved even when searching through a large code database that contains very few functions we believe that source forager is a first step towards muchneeded tools that provide a better experience index search similar code program features i introduction in this age of software proliferation it is useful to be able to search large corpora effectively for code with desired developers routinely use code search as a learning and debugging tool for tasks such as looking for existing functionality in a code base determining how to use an api or library gathering information about what code is intended to do etc search techniques are not always precise enough for code because they focus purely on strings in the code supported in part by a gift from rajiv and ritu batra by afrl under darpa muse award and by the office of the vice chancellor for research and graduate education with funding from the wisconsin alumni research foundation any opinions findings and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the sponsoring agencies reps has an ownership interest in grammatech which has licensed elements of the technology reported in this publication in this paper the term search is used in the sense of google namely to retrieve documents that are related to a specified query search is not used in the sense of finding an occurrence of a string or pattern in a given document comments complete or partial names of functions and variables and so on text search largely ignores code structure and semantics what the code does and how it does it a approach can cause searching to be imprecise relevant code fragments may be missed while many spurious matches may be returned recent search techniques allow users to specify certain aspects of code semantics in addition to the textual query some techniques allow users to specify structural requirements such as that the search target should have nested loops others specify context such as that the search target should implement a particular interface yet others specify sets of pairs additional semantic information can improve search accuracy however existing techniques share the following shortcomings the techniques do not provide a unified way of specifying semantics for the search 
query each technique has its own specification of the semantic aspects of the code that it uses each technique is closely married to its chosen semantic aspect which is deeply ingrained into the implementation of the search technique this tight coupling makes it hard to extend these techniques to model additional semantic aspects we propose a search technique for finding similar source code that addresses these shortcomings unified query specification our mechanism takes code fragments as queries various kinds of semantic information can be extracted from the query and used by the search this approach provides a unified mechanism for code search searching code using code fragments moreover the same techniques for extracting semantic information are used on both queries and elements of the corpus being searched leading to greater consistency extensibility our technique uses a vector of extracted from elements in the corpus capture various aspects of the syntax and semantics of a program each such aspect is called a and provide a unified interface for querying this approach also makes our search technique extensible it is easy to introduce more that model additional aspects of the code int binsearch int x int v int n int low high mid low high n while low high mid low high if x v mid high mid else if x v mid low mid found match else return mid no match return of various weights results weight determination neighbor search query corpus program elements feature extraction engine code database fig example program that implements a binary search over a sorted integer array fig overview of the source forager architecture in addition to being useful on its own right as a developer offline phase population of the source forager database tool search can serve as an important building in this phase source forager analyzes a given code corpus block for automated program repair and program synthesis and populates a code database with rich information about the ability to find other code similar to a query can help each of the functions in the code corpus source forager automated tools learn from the similar code and fix bugs or extracts several different kinds of information about each perform code completion tasks on the query function we refer to each of the different kinds of information the main contributions of source forager are as a describes our different in the ability to perform code searches using code detail a is some specific value observed fragments as queries the searches and answers of source for a given thus each function has one featureforager are both based on a query formalism that is close observation for each for example one of our to the concepts that developers are already familiar with is numeric literals the corresponding a architecture that uses multiple code featureobservation is the set of all the numeric constants used in classes simultaneously the architecture is extensible althe function for the implementation code given lowing easy addition of new code which in fig the numeric literals is the set enhances the dimensions along which code is searched a mechanism for automatically selecting useful code featurea feature extraction engine consists of several feature extracclasses to be employed in code search of a given query given tors which collect a given function s into no a priori domain information about the query a note that the elements of the a technique to the relative importance of different when it is known can be such as sets multisets trees maps etc that a query 
belongs to a specific domain for which suitable the number of determines the length of the training data is available organization the remainder of the paper is organized the feature extractors operate on a code corpus and popinto four sections gives an overview of our approach and ulate a code database each element of the code database algorithms describes the methods in detail presents consists of a function from the corpus along with its our experimental results discusses related work extracted if numeric literals is employed as one of the then one element of a function s ii overview is the set of numeric constants source forager is a search engine for finding similar source the code database also has access to several similarity code it takes an input query as source text then functions one for each the similarity function searches a database for similar code for a given takes any two returning a ranked list of results the units of code about belonging to that and returns a value between which source forager can reason about are called program and a higher value indicates greater similarity between elements in its current incarnation program elements are two for example the similarity function functions that is both queries and results are for numeric literals is the jaccard index given two sets functions and the jaccard index is given by fig provides an architectural overview of source forager source forager has two stages an offline phase to populate its simjacc code database and an online phase the second implementation integrates our infrastructure with which is an database implemented in the in are serialized into efficient data structures by has access to similarity functions implemented in for all it implements the search for the k functions most similar to the query by scanning all the in the database comparing each of them to the query and maintaining a priority queue of size k that keeps track of the k given a query and relative weights for different can find the functions in a code database containing functions in under seconds on a single machine with intel ghz cores and gb ram effort is underway by the developers of to make a distributed version which would allow source forager to search large code databases without taking a big performance hit a large code database can be split into p smaller units that can each be searched in parallel and the sorted k results from each of the p units can be merged using a merge algorithm int bins int key int array int min int max if max min return else int midpoint int floor if array midpoint key return bins key array midpoint max else if array midpoint key return bins key array min midpoint else return midpoint fig example source forager result for the query in fig this result is a recursive implementation of binary search online phase search for similar code in the online search phase source forager takes a query and uses the same infrastructure to obtain the that corresponds to the query this infrastructure reuse creates a consistent representation and view of code throughout the infrastructure for each featureclass in the a weight is assigned to determine the importance of that this weight determination is based on which configuration source forager is run with sections and provide an overview of the different configurations a combined similarity function is defined on any two by combining the similarity functions with the weight assignment using a weighted average that is sim wc b c c simcombined a cl w c extensible architecture 
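To make the similarity machinery just defined concrete, here is a small Python sketch, not the actual implementation: the feature-class names and toy feature-observations are invented, the plain Jaccard index is the one used for set-valued observations, the generalized Jaccard index is the multiset variant used later for type signatures and CFG subgraphs, and normalizing the weighted average by the sum of the weights is one reasonable reading of the combined-similarity formula above.

# Sketch of per-feature-class similarities and their weighted combination.
from collections import Counter
from typing import Callable, Dict

def jaccard(a: set, b: set) -> float:
    # |A intersect B| / |A union B|; the empty-vs-empty case is taken as 1.
    return 1.0 if not a and not b else len(a & b) / len(a | b)

def generalized_jaccard(a: Counter, b: Counter) -> float:
    # sum_i min(a_i, b_i) / sum_i max(a_i, b_i) for multisets.
    inter = sum((a & b).values())
    union = sum((a | b).values())
    return 1.0 if union == 0 else inter / union

def combined_similarity(fa: Dict[str, object], fb: Dict[str, object],
                        sims: Dict[str, Callable], w: Dict[str, float]) -> float:
    # Weighted average of the per-feature-class similarity scores.
    total = sum(w.values())
    return 0.0 if total == 0 else sum(
        w[c] * sims[c](fa[c], fb[c]) for c in w) / total

# Invented feature-observations for a query and one database function.
query = {"numeric_literals": {0, 1, 2},
         "type_signature": Counter(["int", "int", "int*"]),
         "modeled_library_calls": set()}
cand = {"numeric_literals": {0, 1},
        "type_signature": Counter(["int", "int*"]),
        "modeled_library_calls": {("strcpy", "string.h")}}

sims = {"numeric_literals": jaccard,
        "type_signature": generalized_jaccard,
        "modeled_library_calls": jaccard}
w = {"numeric_literals": 1.0, "type_signature": 1.0, "modeled_library_calls": 1.0}
print(round(combined_similarity(query, cand, sims, w), 3))

With equal weights, the toy query and candidate above score (2/3 + 2/3 + 0)/3, roughly 0.44.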
source forager s architecture allows for easy extension to add a new one implements a feature extractor that determines the for any given function and a corresponding similarity function we currently implement our feature extractors using however source forager is not tightly coupled with codesonar any processing tool can be used to implement a feature extractor the for all existing are represented with container data structures such as lists maps and trees all similarity functions work at the level of container data structures and thus are available to be reused with any additional feature extractors furthermore source forager is not tied to having functions as the only kind of program element the underlying architecture is also not limited to and thus source forager can be to perform code searches of programs written in other languages where and are two ncl is the total number of the length of each simc is the similarity function for c and are the for c in and b respectively and wc is the weight assigned to the of the query is compared with each of the in the code database using this combined similarity function and the k functions that is with the highest similarity scores to the query are returned as results for some configurable limit k fig shows an example source forager result when the code in fig is used as query we have two implementations of source forager the first one is a version in which the code database is implemented as a large json object and the various similarity functions and the algorithm for are implemented in python this implementation allows for easier and quicker experimentation with new ideas we use this version for the experiments reported in iii code search in this section we first describe the different and the accompanying similarity functions that are employed in source forager we then describe two configurations of source forager the first configuration selects a subset of the on a basis for performing code search this configuration is useful when no additional information is available regarding a code query the second configuration the relative importance of for a specific domain ahead of time using techniques this configuration is useful when the domain of the code query is known seq table i a brief overview of the different employed in source forager the marked all use jaccard index eq as the similarity function the similarity functions used for the remaining accompany their descriptions in negate loop seq brief description coupling types used and operations performed on the types skeleton tree structure of loops and conditionals decorated skeleton tree structure of loops conditionals and operations weighted nl terms processed natural language terms in code graph cfg bfs cfg subgraphs of size bfs used for generating subgraphs graph cfg bfs cfg subgraphs of size bfs used for generating subgraphs graph cfg dfs cfg subgraphs of size dfs used for generating subgraphs graph cfg dfs cfg subgraphs of size dfs used for generating subgraphs modeled library calls calls made to modeled libraries unmodeled library calls calls made to unmodeled libraries library calls calls made to libraries type signature input types and the return type local types types of local variables numeric literals numeric data constants used string literals string data constants used comments associated comment words seq loop seq cond seq cond seq cond seq cond a skeleton tree b decorated skeleton tree fig for the example program in fig the ast is further abstracted by retaining only the 
loops for while do while and conditionals if else switch operationally the feature extractor can be realized as a tree transducer that drops all ast nodes that are not loops or conditionals sequences of loops or conditionals are encapsulated within a sequence node and empty sequences are dropped from the the intuition behind using this for code search is that similar functions tend to have similar loop and conditional structures fig shows the skeleton tree for the example code in fig the similarity function used for skeleton tree featureobservations is based on tree edit distances let dr be a rough approximation of the distance between two trees only based on their sizes and similarity functions table i summarizes source forager s below we further describe these and their associated similarity functions coupling the for this consists of the types of variables operated on in the function coupled with the operations performed on those types the is a set of type operation pairs primitive types are paired with the builtin arithmetic logical and relational operations for example int types such as classes are paired with the operations on them including direct and indirect field accesses and method calls for example the pair bar indicates that the field foo of an aggregate data type bar is accessed the intuition behind including this is that similar functions tend to use similar pairs for the example in fig the operation coupling extracted is the set int int int int int int int int skeleton tree the for this featureclass is based on the abstract syntax tree ast of a function dr size max size size further let dt be a fixed distance threshold which we set to we obtain an approximate distance between two trees dt as follows dr ed pre pre dt max ed post post max size size if dr dt otherwise here pre t is the sequence obtained by performing a traversal of the tree t post t is the sequence obtained by performing a traversal of the tree t and ed is the word edit distance between the sequences and the similarity function used for skeleton tree is then computed as simtree dt an exact computation has quartictime complexity in the size of the trees being compared we instead use a fast of edit distance that gives our similarity function complexity overall note that we also use a further rough approximation based on just the size of the trees if one of the two trees being compared is at least twice as large as the other we found that using these approximations as opposed to the exact based similarity made no discernible difference in the quality of the final search results obtained but made a big difference in performance more than faster in our tests decorated skeleton tree this is similar to the skeleton tree except that instead of retaining just the loop and conditional structure in the most operations and are also retained from the ast we discard some common operations such as assignment and because they cause excessive bloat the intuition behind including this is that similar functions use similar operations in structurally similar locations fig shows the decorated skeleton tree featureobservation for the example code in fig the similarity function used is simtree from eq weighted nl terms the for this consist of various nl terms in source code such as function name comments local variable names and parameter names of a function such nl terms after extraction are subjected to a series of standard nl preprocessing steps such as splitting words with or camelcase stemming lemmatization and removing 
singlecharacter strings and removal discards both typical english stop words such as the and and is as well as stop words specialized for code such as fixme todo and xxx additionally we use a greedy algorithm for splitting terms into multiple words based on dictionary lookup this splitting is to handle the case where programmers choose identifiers that combine multiple words without or camelcase after nl we compute a term frequencyinverse document frequency score for each nl term we consider each function as a document and compute the per project we give terms an inflated score more than other terms because these often provide significant information about functions purposes the intuition behind including this is that similar functions tend to have similar vocabulary the for the example in fig is bin search high low found mid match the similarity function for two observations of weighted nl terms uses cosine similarity ai bi simnl a b q q b ai i a b c d a b c d a b c d fig an example and its corresponding adjacency matrix serializing the adjacency matrix entries yields binary digits or in decimal node ordering in the adjacency matrix is the traversal order of cfg we implement multiple featureclasses based on subgraphs of the control flow graph cfg of a function given the cfg of a function we begin either a bfs traversal or a search dfs traversal at a node until k nodes are traversed a subgraph of the cfg involving these k nodes is extracted if fewer than k nodes are reachable from a node including itself then such a is thrown away we repeat this process for every node in the cfg extracting at most n subgraphs of size k where n is the size of the cfg we represent a graph of size k as a k integer which is a representation of a representation of the graph obtained by concatenating each of the matrix rows in order thus from each function s cfg we extract a multiset of shapes fig shows an example of converting a into a integer in this manner we implement the following four based on the value of k and the traversal strategy chosen graph cfg bfs k traversal strategy is bfs graph cfg bfs k traversal strategy is bfs graph cfg dfs k traversal strategy is dfs graph cfg dfs k traversal strategy is dfs for the example in fig the extracted for the graph cfg bfs is the multiset the intuition behind including these is that similar functions tend to have similar structures the similarity function used for these is based on the generalized jaccard index between two multisets and min i i max here i iterates over all the unique elements in and is the number of times i appeared in the multiset calls to library functions we implement three featureclasses that extract calls to various kinds of library functions modeled library calls codesonar models a large range of library functions for performing static analysis on code for this calls made to any of these modeled library functions are extracted unmodeled library calls calls made to any unmodeled library functions are extracted for this is calls to a function not modeled by codesonar and whose definition is not available in the source code here n is the total number of words in the universe a and b are vectors with scores and the i th index ai is the value for the i th word library calls for this calls to b dynamic selection functions whose definitions are available in a directory difcombining can be beneficial for code search ferent from the caller function are extracted we use such however the that are useful for performing a functions as a heuristic for 
identifying libraries code search may vary from one query to another for example the intuition behind including the above three featureconsider a query function containing of just code classes is that similar code tends to call the same library a significant number of functions in our are functions for each of these three the featuredevoid of loops and and all such functions look values are sets of library functions called a library function identical to the query function with respect to the skeleton is represented as tuple it includes the name of the function tree thus performing a code search with this together with the file name containing the function s declaraquery by including the skeleton tree can lead to tion for example if a function calls strcpy and strncpy results on the other hand if a query function then the corresponding to modeled library has an unusual loop and conditional structure that is idiomatic calls for that function is strcpy strncpy to the computation being performed then the skeleton tree would be useful in code search other instances type signature for this the featureof the same distinctive structure from the code database would observations consist of the type signature of the function have high similarity scores to the query function the argument types and the return type of a function together thus it is useful to select automatically on a the argument types and the return type form a multiset of basis for code search this configuration of source types for the example code in fig the forager is called intuitively a for a corresponding to type signatures is int int int int given query is selected for code search if the corresponding type signatures define a function s interface for interaction is sufficiently with with the rest of the code similar code tends to have similar respect to the overall distribution for that interfaces and therefore type signatures could help with code search to prepare for the dynamic selection on a perthe generalized jaccard index eq is used as the query basis we take following steps offline similarity function for this from the code database we retrieve a random sample s local types for this the featureof random sampling gives an inexpensive observations consist of the set of types of all the local variables estimate of distributions across the entire the intuition behind using local variable types in code search code database is that similar code creates and operates on variables of similar types for the example code in fig the local types we calculate a similarity threshold for each c by computing pairwise similarity scores on the featureobservation is int observations for c in s and taking the sum of means constants we implement two that extract and standard deviations of the similarity scores two featureconstants from a function observations for c are considered similar if their similarity numeric literals this is described in score is above the similarity threshold for string literals for this a online when a query is posed we take the following steps is the set of all the literal strings used in a function the intuition behind using sets of constants in code search is for each c which can be performed in parallel we compare the query s for c with all that similar code typically uses similar constants other for c in sample s of size nsamp comments for this the featureand count the number of similar observations consist of the comments associated with a function the comments are represented as a set of words we select the c 
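A minimal sketch of the per-query feature-class selection just described, split into its offline and online steps; the function names, sample size, and class interface are illustrative. Offline, a random sample of the code database yields a per-feature-class similarity threshold (mean plus one standard deviation of pairwise scores); online, a feature class is kept for a query only if at most a fraction t_uniq of the sampled observations are similar to the query's observation.

import random
import statistics
from itertools import combinations

def offline_thresholds(code_db, feature_classes, sample_size=100, seed=0):
    # offline: sample the database and, per feature class, compute a
    # threshold = mean + one standard deviation of pairwise similarities
    rng = random.Random(seed)
    sample = rng.sample(code_db, min(sample_size, len(code_db)))
    observations, thresholds = {}, {}
    for fc in feature_classes:
        obs = [fc.extract(fn) for fn in sample]
        scores = [fc.similarity(a, b) for a, b in combinations(obs, 2)]
        observations[fc.name] = obs
        thresholds[fc.name] = statistics.mean(scores) + statistics.pstdev(scores)
    return observations, thresholds

def select_feature_classes(query, feature_classes, observations, thresholds,
                           t_uniq=0.05):
    # online: a feature class is selected (weight 1) only when the query's
    # observation is similar to at most a fraction t_uniq of the sample,
    # i.e. the observation is sufficiently distinctive
    weights = {}
    for fc in feature_classes:
        q_obs = fc.extract(query)
        sample_obs = observations[fc.name]
        similar = sum(1 for o in sample_obs
                      if fc.similarity(q_obs, o) > thresholds[fc.name])
        weights[fc.name] = 1.0 if similar / len(sample_obs) <= t_uniq else 0.0
    return weights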
for code search if it is not too tuniq here tuniq is a threshold common that is if the intuition behind using comments in code search is that samp the comments in similar pieces of code are likely to use that indicates a is sufficiently unique in a similar vocabulary for the example code in fig the the sample for example tuniq indicates that any comments is found match no that is similar to less than of the combining using several featuresample is considered distinctive enough classes in combination allows source forager to obtain good to warrant inclusion results in a fairly robust manner by using different each is assigned a weight of exactly or dimensions of the code for example consider the exactly based on whether the is selected search implementation in fig we see that variables named in the above process these weights are used for combining mid low high are used that there are two conditionals similarities for code search eq and the knested inside a single loop and that an integer division and search is carried out between the query integer operation is performed when put together these observations are hallmarks of a did a brief study of distributions for the skeleton tree over our corpus which revealed this data point implementation function and the functions in the code database as described in to obtain the k functions most similar to the query table ii task categories used for queries in similar gives the number of similar functions that were manually found for a given task category partial reports how many function pairs moss considered to be potential clones significant reports the of function pairs with at least code overlap weight generation note that does not need any additional knowledge about the query however if we know ahead of time that a query belongs to a specific domain and we have information available regarding what constitutes similar code in that domain then we can use techniques to learn good weights for eq for that domain ahead of time and use these weights for code search with all future queries in that domain given a particular data set with labeled similar code we generate weights by training a binaryclassification support vector machine svm we do not train using raw code text or even raw sets of because we use the svm training process to generate relative weights for similarity scores in eq we train the svm on these similarity scores directly the similarity scores for all between two functions are assembled into a similarity vector the svm is then trained on examples of similarity vectors for both similar and dissimilar functions each labeled accordingly this technique allows us to optimize ahead of time how these are relatively weighted in a code search by using the same similarity functions that are employed in code search of a query our svm uses a linear classifier which allows a convenient interpretation of internal weights the final step is to extract these internal weights and normalize them relative to the sum of their magnitudes truncating negative weights these normalized weights are then used directly as weights in eq provides more details about the corpus and training process of course it is not obvious that weights obtained by training for classification purposes are useful in ranking results for queries measures the effectiveness of this strategy in practice moss detected task category binary search edit distance insertion sort knapsack modular exponentiation non recursive depth first search red black tree left rotate similar partial 
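The weight-generation step described above can be sketched as follows, using scikit-learn's LinearSVC (a wrapper around LIBLINEAR) as an illustrative stand-in: a linear SVM is trained on similarity vectors labeled similar or dissimilar, and its internal coefficients become feature-class weights after truncating negative entries and normalizing by the sum of magnitudes. Whether the normalizer uses the magnitudes before or after truncation is our assumption.

import numpy as np
from sklearn.svm import LinearSVC

def learn_feature_class_weights(similarity_vectors, labels):
    # similarity_vectors: one row per function pair, one column per
    # feature class; labels: 1 if the pair is similar, 0 otherwise
    clf = LinearSVC()
    clf.fit(similarity_vectors, labels)
    coef = clf.coef_.ravel()
    weights = np.maximum(coef, 0.0)   # truncate negative weights
    total = np.abs(coef).sum()        # normalize by the sum of magnitudes
    return weights / total if total > 0 else weights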
significant a task involves searching for relevant documents from a group of documents that include both relevant and documents in the case of source forager documents are functions documents are also known as distractors which leads naturally to the following question how much does source forager s performance degrade as we increase the number of distractors in the code base being searched b experimental setup and methodology source forager uses codesonar an engine to analyze corpora and implement feature extractors codesonar handles projects with tens of millions of lines of code codesonar also exposes a wealth of information about a program through apis source forager s feature extractors are implemented as codesonar plugins that use these apis consequently source forager inherits codesonar s requirement that programs must be compilable to be analyzable tasks our experiments assess source forager s performance under various configurations tasks are set up as follows for each query function there is a set of known relevant functions that are similar to the query the relevant functions are treated as ground truth the relevant functions are then mixed with many functions as distractors and together they form the code database used in the experiment source forager then searches the code database for similar functions we compute informationretrieval statistics based on the ranking of the functions in the returned results queries we use two query sets for the tasks representing two domains one called represents algorithmic code queries for we created seven tasks outlined in table ii and manually curated a total of functions that each accomplish one of the seven tasks the functions were mostly obtained from github and were written by a variety of programmers none of whom are authors of this paper the functions that accomplish a specific task have been manually vetted to be iv experimental evaluation this section outlines the research questions we seek to answer through experiments describes the setup and methodology used in the experiments and presents the results of the experiments a research questions our experiments were designed to answer the following research questions how do the individual described in perform in tasks relative to each other does combining using dynamic selection improve source forager s performance does combining using supervised learning further improve source forager s performance when the query domain is known similar to each other we thus have a total of base queries we use these sets of functions as queries and the desired search results and consider them to be an appropriate proxy for the queries performed and search results expected by users in the algorithm domain we have made the labeled queries available for to make sure that the similar functions we found were not all clones of each other we ran them through the moss detector given a group of programs moss reports program pairs that may be clones along with an overlap percentage table ii reports moss s findings run using default settings in this table partial overlap represents any pair that moss reports as possible clones while significant overlap counts only possible clones with at least overlap observe that many function pairs marked manually as being similar are not just clones of each other thus recognizing similar function pairs in this corpus is a nontrivial challenge the second query set we use is called and represents code queries from systems programming we looked at three implementations of the 
standard c library musl libc diet libc and uclibc from these we define function categories corresponding to functions that all three implementations provide we assume that within the same function category the three libc implementations are for this domain we have queries for example musl libc s sprintf is labeled to be similar to diet libc s sprintf and uclibc s sprintf and dissimilar to everything else distractor functions the distractor functions have been taken from the openly available muse corpus and mainly consist of code from fedora source packages srpms our feature extractors currently require compilable code which fedora srpms provide due to the large size of the distractorfunction corpus over we have not manually vetted all of the distractor functions to be sure that they are irrelevant to the queries issued it is possible that some distractor functions are indeed relevant to some queries so our retrieval statistics are with the exception of the experiments reported in fig all experiments use distractors retrieval statistics we compute mean average precision map as the retrieval statistic as is common in information retrieval map is typically used to measure the quality of ranked retrieval results because map takes into account the rank of the relevant documents in the retrieved results map provides a measure of quality across all recall levels map is the mean of the average precision computed for each query the average ap for each query is given by p k r k where n is the total number of documents searched r is the number of documents marked relevant to the query p k is the precision when k documents are requested and r k is when the k th retrieved document is relevant and otherwise that is ap is the average precision at all the points when a new relevant document is retrieved in a ranked result list the best map score that can be achieved is when for each query the r relevant documents appear as the top r search results weights we applied the techniques discussed in on and to provide labeled on which to train an svm each instance in our training set is generated by comparing two functions a and b yielding a single similarity vector that consists of similarity scores for each the binary classification for each training instance is if a and b are implementations of the same function otherwise we use liblinear to train the svm to classify these function comparisons this process takes roughly twenty milliseconds using this technique we are able to achieve over accuracy under once the svm is trained we extract and normalize its internal weights for use in code search for the configuration described below within each domain the dataset is divided into multiple folds of and pairs the weights extracted from the training set are used to obtain map scores on the test set that is weights are trained on a subset of a given domain or and tested using queries from a different subset of the same domain for the configuration described below is used to train weights for queries from and source forager configurations our experiments run source forager under many configurations each configuration is defined by the weight wc assigned to each of the featureclasses c given in table i these weights are used in eq for performing the code search for each query the weight wc corresponding to c is weights corresponding to all other are set to for each query for all c wc giving equal importance to all for all queries for each query a subset of are selected and given equal weights as described in the 
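The retrieval statistic can be sketched directly from the definition above: average precision accumulates precision at every rank where a relevant document is retrieved, averaged over the R relevant documents, and MAP is the mean over queries. Function names are illustrative.

def average_precision(ranked_ids, relevant_ids):
    # precision is taken at each rank where a relevant document appears
    # and averaged over all R relevant documents
    relevant_ids = set(relevant_ids)
    if not relevant_ids:
        return 0.0
    hits, total = 0, 0.0
    for k, doc in enumerate(ranked_ids, start=1):
        if doc in relevant_ids:
            hits += 1
            total += hits / k
    return total / len(relevant_ids)

def mean_average_precision(ranked_lists, relevant_sets):
    aps = [average_precision(r, rel)
           for r, rel in zip(ranked_lists, relevant_sets)]
    return sum(aps) / len(aps)

# perfect retrieval puts all R relevant items at the top and yields MAP = 1.0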
dynamic selection of adds a small overhead to each for each query a new random configuration is used as follows a random subset of the is selected and the selected are given equal weights repeat this process times with different random selections and report mean results over these trials for each query use weights learned for the domain that the query belongs to as described in and above for each query use weights learned for the domain that the query does not belong to our naive python implementation adds an average overhead of seconds per query for dynamic selection of currently the selection decision on each is done sequentially instead of in parallel as suggested in at the url http note that unlike the other configurations the and configurations permit weights to give different importance levels to different finding when the domain of a query is known and training data is available combining multiple featureclasses using weights derived from supervised learning is the most effective strategy for code search results and discussion the configuration tests whether the weights learned from one domain are useful in a different the left side of fig shows how each individual featuredomain the rightmost two bars in fig show that it is class performs on the tasks in isolation this hard to derive a single set of relative weights that experiment addresses the solo weighted work well for queries in both domains thus in the absence of nl terms performs the best individually on both domain information about the query is preferred and thus fig shows how source forager s result quality scales with increasing sizes this experiment addresses finding if we were to drive source forager using source forager is used in the and only one weighted nl terms is the best configurations for this experiment as one would expect map option however fig shows that the performance of the scores decline as distractors proliferate however consider different varies considerably depending on that relevant sets contain just to items competing against the query set this variance suggests that distractor sets that are up to five orders of magnitude larger different are important for different kinds of queries finding resilient map scores indicate that source forager returns results even when distractors outnumber relevant items by several orders of magnitude asks whether multiple can be usefully combined and whether is a good way to do such a combination a manner in which featureclasses can be combined is the configuration which represents a baseline to compare against other configurations the configuration selects different subsets of the on a basis as a sanity check for the selections performed by we also compare it with the configuration which randomly selects subsets for every query the right side of fig shows that performs better on both and when compared to and also outperforms each of the solo configurations from the left side of fig threats to validity the issue of whether evaluation benchmarks are appropriate is a potential threat to the validity of any information retrieval system we mitigate this threat for source forager in several ways first we use benchmark queries from two different domains and second we use the moss plagiarism detector to show that our manually labeled set of relevant functions in are not trivial clones of each other third we draw the and data sets from code written by arbitrary programmers not artificial programs written by us can be combined in various ways to perform code searches we have 
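For reference, a minimal sketch of how a configuration's weights are consumed at query time, assuming the combined similarity is the weighted sum of per-feature-class similarities and that the k most similar database functions are returned; the names and interfaces are illustrative, not the paper's code.

import heapq

def combined_similarity(query, function, feature_classes, weights):
    # weighted sum of per-feature-class similarities
    return sum(weights.get(fc.name, 0.0) *
               fc.similarity(fc.extract(query), fc.extract(function))
               for fc in feature_classes)

def top_k_search(query, code_db, feature_classes, weights, k=10):
    scored = ((combined_similarity(query, fn, feature_classes, weights), fn)
              for fn in code_db)
    return heapq.nlargest(k, scored, key=lambda pair: pair[0])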
explored part of the vast space of all such combinations and our results speak only to those we have tried we find that the map scores of the configuration dynselect on both and are good we designed the experiments with and configurations to test whether the selections made by are indeed necessary and useful and find that they are finding in the absence of any additional information about the query combining multiple and dynamically selecting on a basis is the most effective strategy for code search addresses the scenario where the domain of a query is known and additional information is available regarding that domain as described in the configuration tests source forager under this scenario the relative importance of for a given domain in the form of weights wc for each also makes code search more efficient by eliminating any overhead in selection the right side of fig shows that svmweights outperforms all other configurations related work engines several popular codesearch tools grep over tokenized source code github searchcode open hub etc while these tools are useful they fall short in many use cases as they do not exploit the rich semantics of code for example the top search results for the term dfs on c code projects in github yields function declarations macro names and include directives that mention dfs but that are not actually useful the sourcerer engine combines search techniques with information about relations among programming entities like packages classes methods and fields of numbers following a configuration in this section indicates the map scores of that configuration on and respectively library calls unmodeled library calls map score comments string literals numeric literals local types type signatures modeled library calls graph cfg dfs graph cfg dfs graph cfg bfs graph cfg bfs weighted nl terms decorated skeleton tree skeleton tree coupling fig information retrieval performance with distractors the left side of the plot from coupling through comments uses the configuration with the given as the only feature the right side of the plot from through uses the various other source forager configurations that leverage multiple simultaneously although no map score is exactly zero several are below and therefore round to in the labels above map score strathcona returns relevant java code examples to developers learning to use complex frameworks it uses several heuristics based on hierarchies method calls and type uses source forager could also use the applicable heuristics from sourcerer and strathcona as featureclasses but additionally demonstrates how to search using more complex structures such as decorated skeleton trees and codegenie proposes code search in which the user supplies a set of unit tests for the code nent they want to find codegenie leverages sourcerer to with perform search test cases refine these results with source forager could be used as a replacement for sourcerer with to perform similar code search in codegenie with stollee et al perform code search based on logical characterizations of programs behaviors obtained via symbolic execution a query consists of concrete pairs for the desired code fragment while this approach precisely captures the semantics of the corpus elements it does not imnumber of distractor functions mediately handle some common programming constructs such as loops and global variables it also restricts the size of the fig impact of the number of distractor functions on map scores using dynprogram elements in the corpus because 
symbolic execution select and for all and queries the horizontal axis is on a log scale of larger elements may lead to path explosion source forager can easily be extended to use pairs as an additional featureclass in scenarios where the above restrictions are acceptable sourcerer also uses fingerprints that capture some xsnippet and parseweb are specialized structural information about the code such as depth of loop engines xsnippet looks specifically for code that instantiates nesting and presence or absence of certain language constructs objects of given type in a given context parseweb has a similar queries in sourcerer are and are powered by lucene focus on code sequences that instantiate objects codify http as opposed to the search extracts and stores a large amount of metadata for each symbol by source forager in a program and provides a user interface for querying that metadata codify aids in understanding and browsing code the goal of source forager s code search is different from the above to find source code similar to a query detection source forager s code searches differ from the typical clone detection problem in that we are interested in finding code that has both semantic and syntactic similarity therefore we use a range of that span from syntactic to semantic source forager s notion of similarity does not neatly fall into any of the definitions of standard clone types search finding similar machine code is useful in finding known vulnerabilities in code for which source code is not available the primary difference in code search at the and is that machine code has poorer syntactic semantic and structural information available compared to source code as a result while there is some overlap between techniques research on search is focused on tackling different problems such as how to do search across different cpu architectures compiler optimizations compilers operating systems etc rosenblum et al train svms with features extracted from source code in the attempt to classify programs by author source forager builds on this idea by training an svm with similarity scores derived from and then extracting internal weights from the trained svm to strengthen the combined similarity function used for code search references sadowski stollee and elbaum how developers search for code a case study in found of softw linstead bajracharya ngo rigor lopes and baldi sourcerer mining and searching software data mining and knowledge discovery vol no pp apr reiss code search in int conf on softw pp stollee elbaum and dobos solving the search for source code trans on softw engineering and methodology vol no may sahavechaphan and claypool xsnippet mining for sample code in conf on prog systems languages and applications pp begel codifier a search user interface in workshop on interaction and inf retrieval lemos bajracharya ossher morla masiero baldi and lopes codegenie using to search and reuse source code in int conf on automated softw holmes and murphy using structural context to recommend source code examples in int conf on softw crockford introducing json apr online available http jermaine the pliny database online available http zhang and shasha simple fast algorithms for the editing distance between trees and related problems siam j vol no guha jagadish koudas srivastava and yu approximate xml joins in int conf on management of data acm pp nltk project stopwords corpus mar online available http feild binkley and lawrie an empirical comparison of techniques for extracting concept 
abbreviations from identifiers in proc iasted int conf on software engineering and applications sea citeseer khoo mycroft and anderson rendezvous a search engine for binary code in proceedings of the working conference on mining software repositories pp guyon and elisseeff an introduction to variable and feature selection mach learn vol pp mar schleimer wilkerson and aiken winnowing local algorithms for document fingerprinting in int conf on management of data pp eta labs musl libc online available https diet libc contributors diet libc online available https andersen uclibc online available https leidos holdings muse corpus apr online available http fan chang hsieh wang and lin liblinear a library for large linear classification journal of machine learning research vol pp lemos bajracharya ossher masiero and lopes a approach to code search and its application to the reuse of auxiliary functionality j information and software technology vol no apr ke stollee gouse and brun repairing programs with semantic code search in int conf on automated softw pp thummalapenta and xie parseweb a programmer assistant for reusing open source code on the web in int conf on automated softw roy cordy and koschke comparison and evaluation of code clone detection techniques and tools a qualitative approach sci comput may david and yahav code search in executables in proceedings of the acm sigplan conference on programming language design and implementation ser pldi new york ny usa acm pp eschweiler yakdan and discovre efficient identification of bugs in binary code in network and dist syst security pewny garmany gawlik rossow and holz crossarchitecture bug search in binary executables in security and privacy sp ieee symposium on ieee pp rosenblum zhu and miller who wrote this code identifying the authors of program binaries in proceedings of the european conference on research in computer security pp
this is a of the conference paper accepted at the ieee winter conference on applications of computer vision wacv towards robust deep neural networks with bang andras rozsa manuel and terrance boult vision and security technology vast lab university of colorado colorado springs usa jan arozsa mgunther tboult abstract machine learning models including deep neural networks are vulnerable to small perturbations that cause unexpected classification errors this unexpected lack of robustness raises fundamental questions about their generalization properties and poses a serious concern for practical deployments as such perturbations can remain imperceptible the formed adversarial examples demonstrate an inherent inconsistency between vulnerable machine learning models and human perception some prior work casts this problem as a security issue despite the significance of the discovered instabilities and ensuing research their cause is not well understood and no effective method has been developed to address the problem in this paper we present a novel theory to explain why this unpleasant phenomenon exists in deep neural networks based on that theory we introduce a simple efficient and effective training approach batch adjusted network gradients bang which significantly improves the robustness of machine learning models while the bang technique does not rely on any form of data augmentation or the utilization of adversarial images for training the resultant classifiers are more resistant to adversarial perturbations while maintaining or even enhancing the overall classification performance a mnist samples and their distortions yielding misclassifications b samples and their distortions yielding misclassifications figure i mproving robustness via bang this figure demonstrates the enhanced robustness against perturbations generated via the adversarial generation method on mnist digits and samples displayed in top rows of a and b underneath the raw test images we show their distorted versions formed by the smallest perturbations that change the correctly classified class labels of the test samples the second rows of a and b present perturbations that we obtained on regularly trained learning models while the last rows show examples that we generated on networks trained via our batch adjusted network gradients bang approach as indicated by most of the perturbations being highly perceptible the learning models trained with bang have become more robust to adversarial perturbations introduction machine learning is broadly used in various vision applications and recent advances in deep learning have made deep neural networks the most powerful learning models that can be successfully applied to different vision problems the recent performance gain is mainly the result of improvements in two fields namely building more powerful learning models and designing better strategies to avoid overfitting these advancements are then leveraged by the use of larger datasets and massive computing although deep neural networks dnns achieve performance in a wide range of tasks the generalization properties of these learning models were questioned by szegedy et al when the existence of adversarial examples was revealed dnns are capable of learning feature embeddings that enable them to be successfully adapted to different problems they were generally considered to generalize well and hence expected to be robust to moderate distortions to their inputs surprisingly adversarial examples formed by applying imperceptible 
perturbations to otherwise correctly recognized inputs can lead machine learning models including art dnns to misclassify those samples often with high confidence this highly unexpected and intriguing property of machine learning models highlights a fundamental problem that researchers have been trying to solve to explain why adversarial examples exist several controversial explanations were proposed as hypothesized in adversarial instability exists due to dnns acting as linear classifiers that allow even imperceptibly small perturbations applied to inputs to spread among higher dimensions and radically change the outputs this belief was challenged in where by analyzing and experimenting with dnns trained to recognize objects in more unconstrained conditions it was demonstrated that those classifiers are only locally linear to changes on the recognized object otherwise dnns act nonlinearly after performing various experiments gu et al concluded that adversarial instability is rather related to intrinsic deficiencies in the training procedure and objective function than to model the problem addressed in this paper is not only about preventing attacks via adversarial examples the focus is on the overall robustness and generalizability of dnns this fundamental problem of deep learning has recently received increasing attention by researchers considering learning models applied to computer vision tasks the classification of many incorrectly or uncertainly recognized inputs can be corrected and improved by small perturbations so this is a naturally occurring problem for vision systems in this paper we introduce our theory on the instability of machine learning models and the existence of adversarial examples evolutionary stalling during training network weights are adjusted using the gradient of loss evolving to eventually classify examples correctly ideally we prefer broad flat regions around samples to achieve good generalization and adversarial robustness however after a training sample is correctly classified its contribution to the loss and thus on forming the weight updates is reduced as the evolution of the local decision surface stalls the correctly classified samples can not further flatten and extend their surroundings to improve generalization therefore as the contributions of those correctly classified training samples to boundary adjustments are highly decreased compared to other batch elements samples can end up being stuck close to decision boundaries and hence susceptible to small perturbations flipping their classifications to mitigate evolutionary stalling we propose our batch adjusted network gradients bang training algorithm we experimentally evaluate robustness using a combination of and adversarial perturbations and random distortions the paper explores the impact of bang parameters and architectural variations such as dropout on instability and adversarial ness in conclusion we validate our theory by experimentally demonstrating that bang significantly improves the robustness of deep neural networks optimized on two small datasets while the trained learning models maintain or even improve their overall classification performance related work deep neural networks dnns achieve high performance on various tasks as they are able to learn generalization priors from training data szegedy et al showed that machine learning models can misclassify samples that are formed by slightly perturbing correctly recognized inputs these adversarial examples are indistinguishable from their 
originating counterparts to human observers and their unexpected existence itself presents a problem the authors introduced the first technique that is capable of reliably finding adversarial perturbations and claimed that some adversarial examples generalize across different learning models a computationally cheaper adversarial example generation algorithm the fast gradient sign fgs method was presented by goodfellow et al while this approach also uses the inner state of dnns it is more efficient as fgs requires the gradient of loss to be calculated only once the authors demonstrated that by using adversarial examples generated with fgs implicitly in an enhanced objective function both accuracy and robustness of the trained classifiers can be improved in their paper focusing on adversarial machine learning kurakin et al proposed new algorithms extending the fgs method to target a specific class and to calculate and apply gradients iteratively instead of a single gradient calculation via fgs the authors compared the effect of different types of adversarial examples used for implicit adversarial training and found that the results vary based upon the type of the applied adversarial examples rozsa et al introduced the approach which is capable of efficiently producing multiple adversarial examples for each input they demonstrated that using samples explicitly with higher magnitudes of adversarial perturbations than the sufficient minimal can outperform regular adversarial training the authors also presented a new metric the perceptual adversarial similarity score pass to better measure the distinguishability of original and adversarial image pairs in terms of human perception as the commonly used or norms are very sensitive to small geometric distortions that can remain unnoticeable to us pass is more applicable to quantify similarity and the quality of adversarial examples although adversarial training both implicit and explicit was demonstrated to decrease the instability of learning models forming those examples is still computationally expensive which limits the application of such techniques furthermore considering the various adversarial generation techniques utilizing certain types of those samples might not lead to improved robustness to adversarial examples of other techniques alternatively zheng et al proposed their stability training as a lightweight and still effective method to stabilize dnns against naturally occurring distortions in the visual input the introduced training procedure uses an additional stability objective that makes dnns learn weights that minimize the prediction difference of original and perturbed images in order to obtain general robustness and not rely on any class of perturbations the authors applied gaussian noise to distort the training images gu et al conducted experiments with different network topologies and training procedures to improve the robustness of dnns the authors proposed the deep contractive network dcn which imposes a layerwise contractive penalty in a dnn the formulated penalty aims to minimize output variances with respect to perturbations in inputs and enable the network to explicitly learn flat invariant regions around the training data based on positive initial results they concluded that adversarial instability is rather the result of the intrinsic deficiencies in the training procedure and objective function than of model topologies luo et al proposed a technique that selects and uses only a of the image during classification as the 
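A minimal, framework-neutral sketch of the fast gradient sign method mentioned above. It assumes a helper loss_gradient(x, y) that returns the gradient of the classification loss with respect to the input (the models discussed here are Caffe networks, but the construction is independent of the framework): the input is pushed a fixed step epsilon in the direction of the sign of that gradient and clipped back to the valid pixel range.

import numpy as np

def fgs_example(x, y, loss_gradient, epsilon=0.1, lo=0.0, hi=1.0):
    # x: input image, y: its true label; a single gradient evaluation
    # suffices to produce the perturbed input
    grad = loss_gradient(x, y)
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, lo, hi)   # keep pixel values in the valid range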
authors demonstrated the negative effect of foveated perturbations to the classification scores can be significantly reduced compared to entire perturbations graese et al showed that transformations of the normal image acquisition process can also negate the effect of the carefully crafted adversarial perturbations while these preprocessing techniques can alleviate the problem posed by adversarial images they do not solve the inherent instability of dnns in other words these methods treat the symptoms and not the disease in summary a wide variety of more or less efficient approaches were proposed in the literature that all aim at improving the robustness and generalization properties of dnns but none of those proved to be effective enough approach in this section we first briefly describe our intuition about why the unexpected adversarial instability exists in machine learning models afterwards we present our simple and straightforward modification in the training procedure that aims to optimize weights in a way that the resulting dnns become more robust to distortions of their inputs intuition during training some inputs in the batch are correctly and others are incorrectly classified in general the calculated loss and thus the gradient of loss for the misclassified ones are larger than for the correctly classified inputs of the same batch therefore in each training iteration most of the weight updates go into learning those inputs that are badly predicted on the other hand the correctly classified samples do not have a significant impact on advancing decision boundaries and can remain in the positions close to what they obtained when becoming correctly classified due to this evolutionary stalling samples with low gradients can not form a flatter more invariant region around themselves consequently samples of those regions remain more susceptible to adversarial perturbations even a small perturbation can push them back into an incorrect class by increasing the contribution of the correctly classified examples in the batch on the weight updates and forcing them to continue improving decision boundaries it is reasonable to think that we can flatten the decision space around those training samples and train more robust dnns implementation the core concept of our batch adjusted network gradients bang approach is a variation of batch normalization however rather than trying to balance the inputs of the layers we seek to ensure that the contributions on the weight updates are more balanced among batch elements by scaling their gradients let us dive into the details and introduce our notations we use to formulate bang in short we scale the gradients of batch elements that will be used to compute the weight updates in each training iteration let us consider a network fw with weights w in a layered structure having layers y l where l l with their respective weights w l fw xi y l y y xi for a given input xi the partial derivatives of the loss e fw xi with respect to the output of layer y l are l l xi l for simplicity we leave out the structure of the weights w l in layers and the structure of the layer outputs which can be either for fully connected layers or threedimensional for convolutional layers with bang our goal is to balance gradients in the batch by scaling up those that have lower magnitudes in order to do so we determine the highest gradient for the batch having n inputs xi i n at given layer y l in terms of norm we use that as the basis for balancing the magnitudes of gradients in the 
batch weight updates are calculated after scaling each derivative in the batch with the learning rate l l max k i n l l k where l l l k max i n l k as a key parameter for our approach l specifies the degree of gradient balancing among batch elements while the l exponent might appear a little complex and ambiguous its sole purpose is to scale up gradients with small magnitudes more than others having larger norms assuming that the regular backward pass combines the gradients of the batch elements by calculating l n n x l l x n l n i l which is normally scaled with the learning rate and then used to update weights after combining with the previous weight update scaled with momentum bang produces l l n x l n i l where l is the second set of parameter s of our approach used for scaling in general l acts as a local learning rate that can play a more important role in future work throughout our experiments we keep bang parameters fixed for all layers l and l which will actually just modify the original learning rate note that although our approach changes the actual calculation of weight updates for the layers there is no impact on the backpropagation of the original gradient down the network finally we implemented bang by applying small modifications to the regular training procedure with negligible computational overhead experiments to evaluate our approach we conducted experiments on the slightly modified versions of lenet and quick models distributed with caffe namely after running preliminary experiments with bang we added a dropout layer to both model architectures that serves multiple purposes we observed that bang tends to cause overfitting on the trained lenet networks and the resultant models made very confident classifications even when they misclassified the test images while the additional dropout layer alleviates both problems the adjusted network architectures also result in improved classification performances with both regular and bang training after obtaining learning models with regular and bang training we assess and compare the robustness of those classifiers in two ways it is important to note that we do not select the best training models based on their performance on the validation set for these evaluations but we simply use the models obtained at the last training iteration as our primary goal is to measure the evolving robustness we believe that this decision leads to a fairer comparison however the classification performance of the selected models are not optimal finally we would like to mention that we conducted experiments to discover the effectiveness of bang used for regularly trained models and found that the robustness of the resultant networks are not even comparable to those that we trained from scratch first we evaluate the adversarial vulnerability against two adversarial example generation methods the gradientbased fast gradient sign fgs method and the approach although the latter is capable of forming multiple adversarial perturbations for each input we only target the most similar class with the approach referred to as we aim to form adversarial perturbations for every correctly classified image from the mnist or test set respectively we consider an adversarial example generation attempt successful if the direction specified by either fgs or leads to a misclassification where the only constraint is that the discrete pixel values are in range of course this limitation means that the formed perturbations may or may not be adversarial in nature as they can be 
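The weight-update balancing described above can be illustrated for a single layer and a single batch as follows. This is a sketch of the stated intent rather than the paper's exact formula: we assume the per-element scaling factor is (maximum batch norm / element norm) raised to a balancing exponent gamma, which boosts small-magnitude gradients more, and that the balanced per-example gradients are then averaged and scaled by the local learning-rate factor beta; as stated above, the gradient propagated down the network is left unscaled.

import numpy as np

def bang_combine(per_example_grads, gamma=1.0, beta=1.0, eps=1e-12):
    # per_example_grads: array of shape (n, ...) holding, per slice, one
    # batch element's gradient of the loss w.r.t. a layer's parameters
    n = per_example_grads.shape[0]
    flat = per_example_grads.reshape(n, -1)
    norms = np.linalg.norm(flat, axis=1)
    max_norm = norms.max()
    # assumed balancing factor: elements with small gradient norms are
    # scaled up more than elements whose norms are close to the maximum
    scale = (max_norm / (norms + eps)) ** gamma
    balanced = flat * scale[:, None]
    # average over the batch and apply the local learning-rate factor;
    # the unscaled gradient is still what gets backpropagated downward
    return beta * balanced.mean(axis=0).reshape(per_example_grads.shape[1:])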
highly perceptible to human observers we compare the adversarial robustness of classifiers by collecting measures to quantify the quality of the produced adversarial examples for this purpose we calculate the perceptual adversarial similarity score pass of original and adversarial image pairs and we also determine the norms of adversarial perturbations although the norm is not a good metric to quantify adversarial quality in terms of human perception it can demonstrate how far the actual perturbed image is from the original sample second we quantify how the robustness of the learning models evolve during training by applying a more general approach for a given pair of classifiers where one was regularly trained while the other was obtained by bang training we add a certain level of random noise to test images from each class that are correctly classified by both networks at all tested stages and compute the proportion of perturbed images that are classified differently than the originating one while the previously described test assessing the adversarial vulnerability explores only two directions specified by the fgs method and the approach applying random distortions to each inspected image for every noise level gives us a more general evaluation although experimenting with random noise is more universal as it does not rely on any specific adversarial generation technique small random perturbations that cause misclassifications are hard to find and hence the collected table l e n et t raining this table highlights the difference between lenet models obtained by using regular and bang training accuracy on the mnist test set the achieved success rates of fgs and adversarial example generation methods with pass scores and norms of the produced examples on the mnist test set are listed id accuracy a accuracy b fgs success rate c pass figure l e n et m odels t rained with bang these plots summarize our results on lenet models trained with bang using combinations of and we tested a grid of those two parameters where with step size and with step size we trained a single model with each combination and show a the obtained accuracy on the mnist test set b the achieved success rates by using fgs and c the mean pass score of adversarial examples on the mnist test images each solid green line represents the level of regularly trained learning models for better visual representation we applied interpolation results are qualitatively not as good as explicitly forming adversarial perturbations furthermore in order to evaluate the stability of the trained classifiers we distorted the images with gaussian noise far beyond the noise level that can be considered imperceptible or adversarial lenet on mnist we commenced our experiments by evaluating bang on the lenet model optimized on the mnist dataset mnist contains images overall used for training for validation and the remaining for testing the tested network originally has four layers two convolutional and two fully connected extended with one additional dropout layer that we optimize without changing the hyperparameters distributed with caffe the learning model is trained with a batch size of for iterations using the inverse decay learning rate policy with an initial learning rate of since our training procedure has two parameters defined in equation and introduced in equation we trained lenet models with parameter combinations from a grid and evaluated the accuracy and adversarial vulnerability of the trained classifiers the results of the conducted 
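The random-distortion robustness test described above can be sketched as follows, assuming a predict function that maps an image to a class label; the number of trials per image, the noise standard deviation, and the pixel range are parameters of the experiment rather than values taken from the paper.

import numpy as np

def flip_rate(predict, images, sigma, trials=10, lo=0.0, hi=255.0, seed=0):
    # fraction of Gaussian perturbations (standard deviation sigma) that
    # change the predicted label of an already correctly classified image
    rng = np.random.default_rng(seed)
    flipped, total = 0, 0
    for x in images:
        original = predict(x)
        for _ in range(trials):
            noisy = np.clip(x + rng.normal(0.0, sigma, size=x.shape), lo, hi)
            flipped += int(predict(noisy) != original)
            total += 1
    return flipped / total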
experiments are visualized in figure we also show accuracies and metrics indicating adversarial robustness in table for some models obtained with regular training and optimized with bang training as we can see in table fgs success rates achieved by regular training can be dramatically decreased by bang the rate drops from above to below almost every single failed adversarial example generation attempt is due to blank gradients the gradient of loss with respect to the original image and its label contains only zeros which means that methods utilizing that gradient of loss can not succeed as we increase or in other words as we balance the contributions of batch elements more by scaling up gradients with lower magnitudes the resultant classifiers become more resistant to adversarial generation methods although the success rates obtained by the method remain relatively high the a regular training b bang training c absolute improvement figure l e n et robustness to r andom d istortions these plots show the evolving robustness of lenet models a obtained with regular training from table b trained with bang from table and c displays the improvement after identifying test images per class that are correctly classified by both networks at every iterations we perturb each times by adding the level of gaussian noise specified by the standard deviation and test the networks at several stages of training the plots show the percentage of distortions yielding misclassifications for better visual representation we applied interpolation ities of examples degrade significantly on lenet models trained with bang compared to the regular training as displayed in figure c this degradation is highlighted by both decreasing pass scores and by the significantly increased norms of perturbations listed in table with respect to the achieved classification performances we find that there can be a level of degradation depending on the selected values for and this phenomenon can be seen in figure a it is partially due to random initializations and can be the result of overfitting or our decision to evaluate all networks at training iterations still we can observe that bang can yield improved classification performance over regular training paired with improved robustness as listed in table additionally we conducted experiments to quantify and compare how the robustness to random perturbations evolves during training for this general approach we selected to test two classifiers from table optimized with regular training and trained with bang we can see in figure a that the regularly trained model is initially highly susceptible to larger distortions but as the training progresses it becomes more stable and settles at approximately with respect to the strongest class of gaussian noise that we formed by using standard deviation of pixels contrarily the classifier trained with bang maintains significantly lower rates throughout the whole training as shown in figure b and after iterations only of the strongest distortions can alter the original classification the absolute improvements are displayed in figure c we also evaluated training with bang on the quick model of caffe trained on the dataset consists of images training images and images used for both validation and testing purposes the network architecture originally has five layers three convolutional and two fully connected that we extended with one dropout layer and the learning model is trained with a batch size of for iterations epochs we use a fixed learning rate of 
that we decrease by a factor of after epochs and once again after another epochs due to the different nature of training we slightly adjusted bang parameters specifically as the classification performance is significantly worse than achieved by lenet on mnist yielding proportionately more incorrectly classified samples in each we applied lower local learning rates and higher values for scaling furthermore we found that scaling incorrectly classified inputs less than correct ones has beneficial effects on robustness hence we applied of the specified values on the incorrectly classified batch elements similarly to our conducted experiments on lenet we trained classifiers on with all possible combinations of and parameters of a grid and then measured the accuracy and adversarial vulnerability of each of those networks the results are visualized in figure and for some models obtained with regular training and optimized with bang training we show accuracies and metrics indicating adversarial robustness in table as we can see in table fgs success rates achieved by regular training are significantly decreased by bang the rate drops from approximately to where again the majority of the failed adversarial example generation attempts are due to blank gradients figure b shows that as we increase the classifiers become more resistant to adversarial generation methods the higher levels of success rates in comparison to lenet might table t raining this table shows the difference between classifiers obtained using regular and bang training the accuracy on the test set the achieved success rates of fgs and adversarial example generation methods with pass scores and norms of the formed examples on the test images are listed id accuracy a accuracy b fgs success rate c pass figure bang m odels these plots summarize our results on models trained with bang using combinations of and we tested a grid of those two parameters where with step size and with step size we trained a single model with each combination and show a the obtained accuracy on the test set b the achieved success rates by fgs and the c mean pass score of adversarial examples on the test images each solid green line represents the level of regularly trained learning models for better visual representation we applied interpolation ply be due to the fact that the classifiers trained on are less accurate therefore learning the incorrect samples of the batch still has a large contribution on weight updates while the success rates achieved by remain high the quality of adversarial examples degrades significantly compared to regular training this degradation is highlighted by both decreasing pass scores shown in figure c and by the significantly increased norms of adversarial perturbations listed in table finally as shown in table we can train classifiers with bang that slightly outperform models of regular training in terms of classification accuracy of course the achieved overall performance depends on the chosen parameters as depicted in figure a finally we ran experiments to better quantify and compare how the robustness of the trained classifiers to random perturbations evolves during training similarly to our experiments on lenet we selected two classifiers from table for testing trained regularly and optimized with bang we can see in figure a that the regularly trained model is highly susceptible to larger distortions its robustness does not improve during training and finally achieves with respect to the strongest class of gaussian noise that 
we formed by using standard deviation of pixels contrarily the model trained with bang remains more robust throughout training epochs as shown in figure b and at the end of the strongest distortions change the original classification the absolute improvements are visualized in figure c we can conclude that although bang enhanced robustness to random perturbations the results are less impressive in comparison to lenet at least with respect to the strongest distortions conclusion in this paper we introduced our theory to explain an intriguing property of machine learning models namely the regular training procedure can prevent samples from forming flatter and broader regions around themselves this evolutionary stalling yields samples remaining close to a regular training b bang training c absolute improvement figure robustness to r andom d istortions these plots show the evolving robustness of models a obtained with regular training from table b trained with bang from table and c displays the improvement after identifying test images per class that are correctly classified by both networks at every second epoch we perturb each times with the level of gaussian noise specified by the standard deviation and test the networks at different stages of training the plots show the percentage of distortions yielding misclassifications for better visual representation we applied interpolation cision boundaries and hence being susceptible to imperceptibly small perturbations causing misclassifications to address this problem we proposed a novel approach to improve the robustness of deep neural networks dnns by slightly modifying the regular training procedure our approach does not require additional training data neither adversarial examples nor any sort of data augmentation to achieve improved robustness while the overall performance of the trained network is maintained or even enhanced we experimentally demonstrated that optimizing dnns with our batch adjusted network gradient bang technique leads to significantly enhanced stability in general by balancing the contributions of batch elements on forming the weight updates bang allows training samples to form flatter more invariant regions around themselves the trained classifiers become more robust to random distortions and as we demonstrated with the fast gradient sign fgs method and the approach where we targeted the closest scoring class they are also less vulnerable to adversarial example generation methods to visualize the advancement achieved by bang training in terms of improved adversarial robustness in figure correctly classified mnist and test images are presented along with adversarial examples formed via the approach on dnns trained regularly and with bang while bang helps to mitigate adversarial instability learning models can maintain or even improve their overall classification performance our proposed approach achieves these results with negligible computational overhead over the regular training procedure although we managed to achieve good results on two dnns trained on different datasets we found that bang parameters needed to be adjusted to these problems to obtain better results exploring the effect of different rameters on different layers and changing the contributions of correctly and incorrectly classified batch elements can be considered future work will focus on having a better understanding of bang enhancing the algorithm to be more and exploring its application for training dnns on datasets while some might argue that a 
similar balancing effect can be achieved by distillation carlini et al demonstrated that defensive distillation is not effective to improve adversarial robustness the effectiveness of bang to adversarial perturbations obtained via various adversarial example generation techniques likely varies as kurakin et al observed for adversarial training and further research needs to explore that in summary we can conclude that the adversarial instability of dnns is closely related to the applied training procedures as was claimed by gu et al and there is a huge potential in this research area to further advance the generalization properties of machine learning models and their overall performances as well acknowledgments this research is based upon work funded in part by nsf and in part by the office of the director of national intelligence odni intelligence advanced research projects activity iarpa via iarpa r d contract no the views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements either expressed or implied of the odni iarpa or the government the government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation thereon references carlini and wagner defensive distillation is not robust to adversarial examples arxiv preprint fawzi fawzi and frossard fundamental limits on adversarial robustness in international conference on machine learning icml workshop on deep learning fawzi and frossard robustness of classifiers from adversarial to random noise in advances in neural information processing systems goodfellow shlens and szegedy explaining and harnessing adversarial examples in international conference on learning representation iclr graese rozsa and boult assessing threat of adversarial examples on deep neural networks in ieee international conference on machine learning and applications icmla gu and rigazio towards deep neural network architectures robust to adversarial examples in international conference on learning representation iclr workshops he zhang ren and j sun deep residual learning for image recognition in ieee conference on computer vision and pattern recognition cvpr hein and andriushchenko formal guarantees on the robustness of a classifier against adversarial manipulation in advances in neural information processing systems ioffe and szegedy batch normalization accelerating deep network training by reducing internal covariate shift in international conference on machine learning icml jia shelhamer donahue karayev j long girshick guadarrama and darrell caffe convolutional architecture for fast feature embedding in international conference on multimedia acm keskar mudigere nocedal smelyanskiy and tang on training for deep learning generalization gap and sharp minima arxiv preprint krizhevsky and hinton learning multiple layers of features from tiny images kurakin goodfellow and bengio adversarial machine learning at scale in international conference on learning representation iclr lai pan liu and yan simultaneous feature learning and hash coding with deep neural networks in ieee conference on computer vision and pattern recognition cvpr lecun cortes and burges the mnist database of handwritten digits lecun jackel bottou cortes denker drucker guyon muller sackinger simard et al learning algorithms for classification a comparison on handwritten digit recognition neural networks the statistical mechanics perspective lin 
yang hsiao and chen deep learning of binary hash codes for fast image retrieval in ieee conference on computer vision and pattern recognition cvpr workshops j long shelhamer and darrell fully convolutional networks for semantic segmentation in ieee conference on computer vision and pattern recognition cvpr luo boix roig poggio and zhao foveationbased mechanisms alleviate adversarial examples arxiv preprint ouyang wang zeng qiu luo tian li yang wang loy et al deformable deep convolutional neural networks for object detection in ieee conference on computer vision and pattern recognition cvpr peck saeys goossens and roels lower bounds on the robustness to adversarial perturbations in advances in neural information processing systems rozsa rudd and boult are facial attributes adversarially robust in international conference on pattern recognition icpr rozsa rudd and boult adversarial diversity and hard positive generation in ieee conference on computer vision and pattern recognition cvpr workshops srivastava hinton krizhevsky sutskever and salakhutdinov dropout a simple way to prevent neural networks from overfitting journal of machine learning research jmlr szegedy liu jia sermanet reed anguelov erhan vanhoucke and rabinovich going deeper with convolutions in ieee conference on computer vision and pattern recognition cvpr szegedy zaremba sutskever bruna erhan goodfellow and fergus intriguing properties of neural networks in international conference on learning representation iclr vinyals toshev bengio and erhan show and tell a neural image caption generator in ieee conference on computer vision and pattern recognition cvpr yang yan lei and li convolutional channel features in ieee international conference on computer vision iccv zhang chen and saligrama efficient training of very deep neural networks for supervised hashing in ieee conference on computer vision and pattern recognition cvpr zheng y song leung and goodfellow improving the robustness of deep neural networks via stability training in ieee conference on computer vision and pattern recognition cvpr
1
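The batch-level balancing described in the BANG experiments above — scaling how strongly correctly and incorrectly classified batch elements contribute to the weight update, with the incorrectly classified elements scaled down to a fraction of the specified values — can be illustrated with a minimal sketch. The PyTorch code below is an illustrative approximation, not the authors' Caffe implementation: the balancing is expressed as a per-sample loss re-weighting rather than a direct per-element gradient rescaling, and the constants scale_correct, scale_incorrect, eps, and the [0, 1] pixel range assumed by the FGS helper are placeholders for values not given in the text.

import torch
import torch.nn.functional as F

def bang_weighted_loss(logits, targets, scale_correct=1.0, scale_incorrect=0.5):
    # Illustrative batch-adjusted loss: per-sample cross-entropy terms are
    # re-weighted so that correctly and incorrectly classified batch elements
    # contribute in a more balanced way to the weight update. The scaling
    # constants are placeholders; the text tunes them per dataset.
    per_sample = F.cross_entropy(logits, targets, reduction='none')
    with torch.no_grad():
        correct = logits.argmax(dim=1) == targets
        weights = torch.where(correct,
                              torch.full_like(per_sample, scale_correct),
                              torch.full_like(per_sample, scale_incorrect))
        # keep the overall gradient magnitude comparable to an unweighted batch
        weights = weights * (weights.numel() / weights.sum().clamp(min=1e-8))
    return (weights * per_sample).mean()

def fgs_example(model, x, y, eps=0.1):
    # One-step fast gradient sign (FGS) perturbation, as used in the
    # robustness evaluation above; inputs are assumed to lie in [0, 1].
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

In this sketch, a classifier trained with bang_weighted_loss in place of the plain batch-mean loss would then be evaluated by measuring how often fgs_example changes the predicted class, mirroring the FGS success rates reported in the tables above.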
neural domain adaptation for biomedical question answering georg dirk and mariana hasso plattner institute august bebel strasse potsdam germany language technology lab dfki berlin germany abstract jun factoid question answering qa has recently benefited from the development of deep learning dl systems neural network models outperform traditional approaches in domains where large datasets exist such as squad questions for wikipedia articles however these systems have not yet been applied to qa in more specific domains such as biomedicine because datasets are generally too small to train a dl system from scratch for example the bioasq dataset for biomedical qa comprises less then factoid single answer and list multiple answers qa instances in this work we adapt a neural qa system trained on a large dataset squad source to a biomedical dataset bioasq target by employing various transfer learning techniques our network architecture is based on a qa system extended with biomedical word embeddings and a novel mechanism to answer list questions in contrast to existing biomedical qa systems our system does not rely on ontologies parsers or entity taggers which are expensive to create despite this fact our systems achieve results on factoid questions and competitive results on list questions introduction question answering qa is the task of retrieving answers to a question given one or more contexts it has been explored both in the opendomain setting voorhees et as well as settings such as bioasq for the biomedical domain tsatsaronis et the bioasq challenge provides factoid and list questions questions with one and several answers respectively this work focuses on answering these questions for example which drugs are included in the regimen fluorouracil epirubicin and cyclophosphamide we further restrict our focus to extractive qa qa instances where the correct answers can be represented as spans in the contexts contexts are relevant documents which are provided by an information retrieval ir system traditionally a qa pipeline consists of namedentity recognition question classification and answer processing steps jurafsky these methods have been applied to biomedical datasets with moderate success zi et the creation of datasets such as squad rajpurkar et have recently enabled the development of neural qa systems wang and jiang xiong et al seo et al weissenborn et al leading to impressive performance gains over more traditional systems however creating qa datasets for more specific domains such as the biomedical would be very expensive because of the need for domain experts and therefore not desirable the recent success of deep learning based methods on qa datasets raises the question whether the capabilities of trained models are transferable to another domain via domain adaptation techniques although domain adaptation has been studied for traditional qa systems blitzer et and deep learning systems chen et ganin et bousmalis et riemer et kirkpatrick et it has to our knowledge not yet been applied for neural qa systems to bridge this gap we employ various main adaptation techniques to transfer knowledge from a trained neural qa system fastqa weissenborn et al to the biomedical domain using the much smaller bioasq dataset in order to answer list questions in addition to factoid questions we extend fastqa with a novel answering mechanism we evaluate various transfer learning techniques comprehensively for factoid questions we show that mere reaches results which can further be improved by a forgetting 
cost regularization riemer et on list questions the results are competitive to existing systems our manual analysis of a subset of the factoid questions suggests that the results are even better than the automatic evaluation states revealing that many of the incorrect answers are in fact synonyms to the answer related work traditional question answering traditional factoid and list question answering pipelines can be subdivided into recognition question classification and answer processing components jurafsky such systems have also been applied to biomedical qa such as the oaqa system by zi et al besides a number of features they incorporate a rich amount of biomedical resources including a parser entity tagger and thesaurus to retrieve concepts and synonyms a logistic regression classifier is used both for question classification and candidate answer scoring for candidate answer generation oaqa employs different strategies for general questions choice questions and quantity questions neural question answering neural qa systems differ from traditional approaches in that the algorithm is not subdivided into discrete steps instead a single model is trained to compute an answer directly for a given question and context the typical architecture of such systems wang and jiang xiong et seo et can be summarized as follows embedding layer question and context tokens are mapped to a vector space for example via glove embeddings pennington et and optionally character embeddings seo et encoding layer the token vectors are processed independently for question and context usually by a recurrent neural network rnn interaction layer this layer allows for interaction between question and context representations examples are wang and jiang and coattention xiong et answer layer this layer assigns start and end scores to all of the context tokens which can be done either statically wang and jiang seo et or by a dynamic decoding process xiong et fastqa fastqa fits into this schema but reduces the complexity of the architecture by removing the interaction layer while maintaining performance weissenborn et instead of one or several interaction layers of rnns fastqa computes two simple features for each token which are appended to the embedding vectors before the encoding layer we chose to base our work on this architecture because of its performance faster training time and reduced number of parameters unsupervised domain adaptation unsupervised domain adaptation describes the task of learning a predictor in a target domain while labeled training data only exists in a different source domain in the context of deep learning a common method is to first train an autoencoder on a large unlabeled corpus from both domains and then use the learned input representations as input features to a network trained on the actual task using the labeled source domain dataset glorot et chen et another approach is to learn the hidden representations directly on the target task for example training optimizes the network such that it computes hidden representations that both help predictions on the source domain dataset and are indistinguishable from hidden representations of the unlabeled target domain dataset ganin et these techniques can not be straightforwardly applied to the question answering task because they require a large corpus of biomedical pairs albeit no answers are required supervised domain adaptation in contrast to the unsupervised case supervised domain adaptation assumes access to a small amount of labeled 
training data in the target domain the simplest approach to supervised domain adaptation for neural models is to the network on data from the source domain and then its parameters on data from the target domain the main drawback of this approach is catastrophic forgetting which describes the phenomenon that neural networks tend to forget knowledge its performance in the source domain drops significantly when they are trained on the new dataset even though we do not directly aim for good performance in the source domain measures against catastrophic forgetting can serve as a useful regularizer to prevent progressive neural networks combat this issue by keeping the original parameters fixed and adding new units that can access previously learned features rusu et because this method adds a significant amount of new parameters which have to be trained from scratch it is not if the target domain dataset is small riemer et al use but add an additional forgetting cost term that punishes deviations from predictions with the original parameters another approach is to add an loss which punishes deviation from the original parameters kirkpatrick et al apply this loss selectively on parameters which are important in the source domain model our network architecture is based on fastqa weissenborn et a neural qa system because the network architecture itself is exchangeable we treat it as a black box with subtle changes at the input and output layer as well as to the decoding and training procedure these changes are described in the following see figure for an overview of the system input layer in a first step words are embedded into a highdimensional vector space we use three sources of embeddings which are concatenated to form a single embedding vector glove embeddings glove vectors pennington et these are start probabilities pstart end probabilitiesp p end end probabilities probabilities pend sigmoid softmax endscores scoresee s s end end scores yend start scores ystart extractive qa system biomedical embeddings glove embeddings character embeddings question type features context embeddings question embeddings figure network architecture of our system for biomedical question answering at its core it uses an extractive neural qa system as a black box we use fastqa weissenborn et the embedding layer is modified in order to include biomedical word embeddings and question type features the output layer is adjusted to add the ability to answer list questions in addition to factoid questions word vectors trained on billion tokens from web documents the vectors are not updated during training character embeddings as used in fastqa weissenborn et and proposed originally by seo et al we employ a convolutional neural network which computes word embeddings from the characters of the word biomedical embeddings vectors trained using mikolov et on about million pubmed abstracts pavlopoulos et these vectors are specific to the biomedical domain and we expect them to help on biomedical qa as an optional step we add entity tag features to the token embeddings via concatenation entity tags are provided by a entity tagger based on the umls metathesaurus the entity tag feature vector is a bit vector that for each of the umls semantic types states whether the current token is part of an entity of that type this step is only applied if explicitly noted finally a encoding of the question type factoid or list is appended to all the input vectors with these embedding vectors as input we invoke fastqa to produce start and end 
scores for each of the n context tokens we denote start i scores by ystart and end scores conditioned on a i j predicted start at position i by yend with start index i n and end index j i n output layer in our adapted output layer we convert the start and end scores to span probabilities the computation of these probabilities is independent of the question type the interpretation however depends on the question type while for factoid questions the list of answer spans is interpreted as a ranked list of answer candidates for list questions answers above a certain probability threshold are interpreted as the set of answers to the question n given the start scores ystart ystart and end i n scores yend yend we compute the start and end probabilities as follows i pistart ystart i pi end softmax yend where x is the sigmoid function as a consequence multiple tokens can be chosen as likely start tokens but the network is expected to select a single end token for a given start token hence the softmax function finally the probability that a given span i j answers the question i j i is pi j span pstart pend this extension generalizes the fastqa output layer such that multiple answer spans with different start positions can have a high probability allowing us to retrieve multiple answers for list questions decoding given a trained model start probabilities can be obtained by running a forward pass and computing the start probability as in equation for the top starts we compute the end probabilities as given by eq from the start and end probabilities we extract the top answer spans ranked by pi j span as a simple step we remove duplicate strings and retain only those with the highest probability for factoid questions we output the most likely answer spans as our ranked list of answers for list questions we learn a probability cutoff threshold t that defines the set of list answers a i j j span t we choose t to be the threshold that optimizes the list score on the respective development set domain adaptation our training procedure consists of two phases in the phase we train the model on squad using a token score as the training objective as by weissenborn et al we will refer to the resulting parameters as the base model in the phase we initialize the model parameters with the base model and then continue our optimization on the bioasq dataset with a smaller learning rate forgetting cost regularization to avoid catastrophic forgetting during as a means to regularize our model we optionally add an additional forgetting cost term lf c as proposed by riemer et al it is defined as the loss between the current predictions and the base model s predictions weight regularization we also add an loss term which penalizes deviations from the base model s parameters note that a more advanced approach would be to apply this loss selectively on weights which are particularly important in the source domain kirkpatrick et the final loss is computed as lf inal loriginal cf c lf c where cf c and are hyperparameters which are set to unless otherwise noted experimental setup datasets squad squad rajpurkar et is a dataset of questions with relevant contexts and answers that sparked research interest into the development of neural qa systems recently the contexts are excerpts of wikipedia articles for which workers generated pairs because of the large amount of training examples in squad it lends itself perfectly as our source dataset bioasq the bioasq challenge provides a biomedical qa dataset tsatsaronis et consisting of 
questions relevant contexts called snippets from pubmed abstracts and possible answers to the question it was carefully created with the help of biomedical experts in this work we focus on task b phase b of the bioasq challenge in which systems must answer questions from snippets these questions can be either questions summary questions factoid questions or list questions because we employ an extractive qa system we restrict this study to answering factoid and list questions by extracting answer spans from the provided contexts the bioasq training dataset contains questions of which are factoid and are list questions the questions have snippets on average each of which are on average tokens long we found that around of the factoid questions and around of the list questions have at least one extractable answer for questions with extractable answers answers spans are computed via a simple substring search in the provided snippets all other questions are ignored during training and treated as answered incorrectly during evaluation training we minimize the loss for the gold standard answer spans however for multiple answer spans that refer to the same answer synonyms we only minimize the loss for the span of the lowest loss we use the adam kingma and ba for optimization on squad with a learning rate starting at which is halved whenever performance drops between checkpoints during the phase we continue optimization on the bioasq dataset with a smaller learning rate starting at during both phases the model is regularized by variational dropout of rate gal and ghahramani evaluation the official evaluation measures from bioasq are mean reciprocal rank mrr for factoid questions and score for list questions for factoid questions the list of ranked answers can be at most five entries long the score is measured on the gold standard list elements for both measures the details can be found at http string matches are used to check the correctness of a given answer a list of synonyms is provided for all answers if the system s response matches one of them the answer counts as correct for evaluation we use two different finetuning datasets depending on the experiment which contains all questions of the first three bioasq challenges and which additionally contains the test questions of the fourth challenge is used as the training dataset for the fifth bioasq challenge whereas was used for training during the fourth challenge because the datasets are small we perform and report the average performance across the five folds we use the larger dataset except when evaluating the ensemble and when comparing to participating systems of previous bioasq challenges all models were implemented using tensorflow abadi et with a hidden size of because the context in bioasq usually comprises multiple snippets they are processed independently in parallel for each question answers from all snippets belonging to a question are merged and ranked according to their individual probabilities results domain adaptation in this section we evaluate various domain adaptation techniques the results of the experiments are summarized in table baseline as a baseline without transfer learning experiment trains the model on bioasq only because the bioasq dataset by itself is very small a dropout rate of was used because it worked best in preliminary experiments we observe a rather low performance which is expected when applying deep learning to such a small dataset experiments and evaluate the pure approach our base model is a system 
trained on squad only and tested on bioasq experiment for experiment we the base model on the training set we observe that performance increases significantly especially on list questions this increase is expected because the network is trained experiment factoid mrr list training on bioasq only training on squad only on bioasq on bioasq biomedical embeddings on bioasq entity features on bioasq squad on bioasq forgetting cost cf c on bioasq loss on original parameters table comparison of various transfer learning techniques in experiment the model was trained on bioasq only in experiment the model was trained on squad and tested on bioasq we refer to it as the base model in experiment the base model parameters were on the bioasq training set experiments evaluate the utility of domain dependent word vectors and features experiments address the problem of catastrophic forgetting all experiments have been conducted with the dataset and on and list questions which are not part of the squad dataset for the first time overall the performance of the model on both question types is much higher than the baseline system without transfer learning features in order to evaluate the impact of using biomedical word embeddings we repeat experiment without them experiment we see a factoid and list performance drop of and percentage points respectively showing that biomedical word embeddings help increase performance in experiment we append entity features to the word vector as described in section even though these features provide the network with knowledge we found that it actually harms performance on factoid questions because most of the entity features are only active during with the small dataset we conjecture that the performance decrease is due to catastrophic forgetting we continue our study with techniques to combat catastrophic forgetting as a means to regularize training during in experiment of table we the base model on a mixture of bioasq and squad questions bioasq questions have been upsampled accordingly this form of joint training yielded no significant performance gains experiment regularizes the model via an additional forgetting cost term as proposed by riemer et al and explained in section we generally found that this technique only increases performance for factoid questions where the performance boost was largest for cf c the fact that the forgetting loss decreases performance on list questions is not surprising as predictions are pushed more towards the predictions of the base model which has very poor performance on list questions experiment adds an loss which penalizes deviations from the base model s parameters we found that performance decreases as we increase the value of which shows that this technique does not help at all for the sake of completeness we report results for the lowest value that yielded a significant drop in performance ensemble model ensembles are a common method to tweak the performance of a machine learning system ensembles combine multiple model predictions for example by averaging in order to improve generalization and prevent we evaluate the utility of an ensemble by training five models on the dataset using crossvalidation each of the models is evaluated on the test data data which is not included in during application we run an ensemble by averaging the start and end scores of individual models before they are passed to the sigmoid softmax functions as defined in eq and in table we summarize the average performance of experiment factoid mrr list average 
best ensemble table performance of a model ensemble five models have been trained on the dataset and tested on the test questions we report the average and best single model performances as well as the ensemble performance the five models the best performance across the five models and the performance of the ensemble we observe performance gains of percentage points on factoid questions and a less than percentage point on list questions relative to the best single model this demonstrates a small performance gain that is consistent with the literature comparison to competing bioasq systems because the final results of the fifth bioasq challenge are not available at the time of writing we compare our system to the best systems in last year s challenge for comparison we use the best single model and the model ensemble trained on see section we then evaluate the model on the batches of last year s challenge using the official bioasq evaluation tool each batch contains questions of which only some are factoid and list questions note that the results underestimate our system s performance because our competing system s responses have been manually evaluated by humans while our system s responses are evaluated automatically using string matching against a potentially incomplete list of synonyms in fact our qualitative analysis in section shows that many answers are counted as incorrect but are synonyms of the answer the results are summarized in table and compared to the best systems in the challenge in each of the batches and question type categories with our system winning four out of five batches on factoid questions we consider it in biomedical factoid question answering especially when considering that our results might be higher on manual evaluation the results on list questions are slightly worse but still very last year s results are available at http competitive this is surprising given that the network never saw a list question prior to the finetuning phase due to small test set sizes the sampling error in each batch is large causing the single model to outperform the model ensemble on some batches qualitative analysis in order to get a better insight into the quality of the predictions we manually validated the predictions for the factoid questions of batch of the fourth bioasq challenge as given by the best single model see table there are in total factoid questions of which have as the gold standard answer a span in one of the contexts according to the official bioasq evaluation only questions are predicted correctly the gold standard answer is ranked highest however we identified answers which are not counted as correct but are synonyms to the gold standard answer examples include disease instead of cmt disease tafazzin instead of tafazzin taz gene and instead of beta glucocerebrosidase in total we labeled questions as correct and questions as having their correct answer in the top predictions in the following we give examples of mistakes made by the system questions are presented in italics in the context we underline predicted answers and present correct answers in boldface we identified eight questions for which the semantic type of the top answer differs from the question answer type some of these cases are completely wrong predictions however this category also includes subtle mistakes like the following in which yeast chromosome does the rdna cluster reside the rdna cluster in saccharomyces cerevisiae is located kb from the left end and kb from the right end of chromosome xii 
here it predicted a yeast species the rdna cluster is located in but ignored that the question is asking for a chromosome another type of mistakes is that the top answer is somewhat correct but is missing essential information we labeled four predictions with this category like the following example batch factoid mrr best participant single ensemble best participant list single avg ensemble table comparison to systems on last year s fourth bioasq challenge for factoid and list questions for each batch and question type we list the performance of the best competing system our single model and ensemble note that our qualitative analysis section suggests that our factoid performance on batch would be about twice as high if all synonyms were contained in the gold standard answers how early during pregnancy does cffdna testing allow sex determination of the fetus gold standard answer to week of gestation or first trimester of pregnancy given top answer in summary to our judgment of questions are answered correctly and of questions are answered correctly in one of the top answers these are surprisingly high numbers considering low mrr score of of the automatic evaluation table poor prior to which is due to the lack of list questions in squad we believe that large scale corpora for list questions would enhance performance further unsupervised domain adaptation could be an interesting direction for future work because the biomedical domain offers large amounts of textual data some of which might even contain questions and their corresponding answers we believe that leveraging these resources holds potential to further improve biomedical qa in this paper we described a deep learning approach to address the task of biomedical question answering by using domain adaptation techniques our experiments reveal that mere in combination with biomedical word embeddings yield performance on biomedical qa despite the small amount of training data and the lack of feature engineering techniques to overcome catastrophic forgetting such as a forgetting cost can further boost performance for factoid questions overall we show that employing domain adaptation on neural qa systems trained on datasets can yield good performance in domains where large datasets are not available discussion and future work the most significant result of this work is that results in biomedical question answering can be achieved even in the absence of feature engineering most competing systems require structured resources such as biomedical ontologies parsers and entity taggers while these resources are available in the biomedical domain they are not available in most domains our system on the other hand requires a large qa dataset biomedical word embeddings which are trained in an unsupervised fashion and a small biomedical qa dataset this suggests that our methodology is easily transferable to other domains as well furthermore we explored several supervised domain adaptation techniques in particular we demonstrated the usefulness of forgetting cost for factoid questions the decreased performance on list questions is not surprising because the model s performance on those questions is very conclusion acknowledgments this research was supported by the german federal ministry of education and research bmbf through software campus project genie references abadi ashish agarwal paul barham eugene brevdo zhifeng chen craig citro greg s corrado andy davis jeffrey dean matthieu devin et al tensorflow machine learning on heterogeneous distributed 
systems arxiv preprint john blitzer mark dredze fernando pereira et al biographies bollywood and blenders domain adaptation for sentiment classification in acl volume pages konstantinos bousmalis george trigeorgis nathan silberman dilip krishnan and dumitru erhan domain separation networks in advances in neural information processing systems pages minmin chen zhixiang xu kilian weinberger and fei sha marginalized denoising autoencoders for domain adaptation arxiv preprint yarin gal and zoubin ghahramani dropout as a bayesian approximation representing model uncertainty in deep learning arxiv preprint yaroslav ganin evgeniya ustinova hana ajakan pascal germain hugo larochelle laviolette mario marchand and victor lempitsky training of neural networks journal of machine learning research xavier glorot antoine bordes and yoshua bengio domain adaptation for sentiment classification a deep learning approach in proceedings of the international conference on machine learning pages dan jurafsky speech language processing pearson education india diederik kingma and jimmy ba adam a method for stochastic optimization arxiv preprint james kirkpatrick razvan pascanu neil rabinowitz joel veness guillaume desjardins andrei a rusu kieran milan john quan tiago ramalho agnieszka et al overcoming catastrophic forgetting in neural networks proceedings of the national academy of sciences page tomas mikolov ilya sutskever kai chen greg s corrado and jeff dean distributed representations of words and phrases and their compositionality in advances in neural information processing systems pages ioannis pavlopoulos aris kosmopoulos and ion androutsopoulos continuous space word vectors obtained by applying to abstracts of biomedical articles http jeffrey pennington richard socher and christopher manning glove global vectors for word representation in empirical methods in natural language processing emnlp pages http pranav rajpurkar jian zhang konstantin lopyrev and percy liang squad questions for machine comprehension of text arxiv preprint metthew riemer elham khabiri and richard goodwin representation stability as a regularizer for improved text analytics transfer learning https andrei a rusu neil c rabinowitz guillaume desjardins hubert soyer james kirkpatrick koray kavukcuoglu razvan pascanu and raia hadsell progressive neural networks arxiv preprint minjoon seo aniruddha kembhavi ali farhadi and hannaneh hajishirzi bidirectional attention flow for machine comprehension arxiv preprint george tsatsaronis georgios balikas prodromos malakasiotis ioannis partalas matthias zschunke michael r alvers dirk weissenborn anastasia krithara sergios petridis dimitris polychronopoulos et al an overview of the bioasq largescale biomedical semantic indexing and question answering competition bmc bioinformatics ellen m voorhees et al the question answering track report in trec volume pages shuohang wang and jing jiang machine comprehension using and answer pointer arxiv preprint dirk weissenborn georg wiese and laura seiffe making neural qa as simple as possible but not simpler arxiv preprint caiming xiong victor zhong and richard socher dynamic coattention networks for question answering arxiv preprint yang zi zhou yue and eric nyberg learning to answer biomedical questions oaqa at bioasq acl page
9
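The modified output layer and the regularized fine-tuning objective described above can be summarized with a short sketch. The PyTorch code below is a simplified reading of the text, not the authors' TensorFlow implementation: start scores pass through a sigmoid, end scores conditioned on a start position pass through a softmax, span probabilities are their product, factoid questions keep the five highest-probability spans while list questions keep every span above a tuned cutoff, and the fine-tuning loss adds the forgetting-cost and L2 terms, L_final = L_original + c_fc * L_fc + lambda_l2 * L_l2. The default coefficient and threshold values are placeholders, and the deduplication of identical answer strings is omitted.

import torch
import torch.nn.functional as F

def span_probabilities(start_scores, end_scores):
    # start_scores: [n]; end_scores: [n, n], where row i holds the end scores
    # conditioned on start position i. The sigmoid lets several starts be
    # likely at once, while the softmax selects one end per chosen start.
    p_start = torch.sigmoid(start_scores)        # [n]
    p_end = F.softmax(end_scores, dim=-1)        # [n, n]
    return p_start.unsqueeze(1) * p_end          # p_span[i, j]

def decode_answers(p_span, question_type, top_k=5, list_threshold=0.5):
    # Rank spans by probability; keep the top five for factoid questions and
    # every span above the (development-set tuned) cutoff for list questions.
    n = p_span.size(0)
    flat = p_span.flatten()
    order = torch.argsort(flat, descending=True)
    spans = [(int(i) // n, int(i) % n, float(flat[i])) for i in order]
    spans = [(i, j, p) for i, j, p in spans if j >= i]   # end not before start
    if question_type == 'factoid':
        return spans[:top_k]
    return [(i, j, p) for i, j, p in spans if p >= list_threshold]

def finetuning_loss(span_loss, current_logits, base_logits,
                    current_params, base_params, c_fc=1.0, l2_coeff=0.0):
    # Forgetting cost: cross-entropy between the current predictions and the
    # frozen base model's soft predictions, plus an L2 penalty on deviation
    # from the base model's parameters. Coefficient values are placeholders.
    base_probs = F.softmax(base_logits, dim=-1).detach()
    l_fc = -(base_probs * F.log_softmax(current_logits, dim=-1)).sum(-1).mean()
    l_l2 = sum(((p - p0.detach()) ** 2).sum()
               for p, p0 in zip(current_params, base_params))
    return span_loss + c_fc * l_fc + l2_coeff * l_l2

During fine-tuning on BioASQ, span_loss would be the cross-entropy over gold answer spans (keeping, for synonymous spans, only the one of lowest loss, as described above), while base_logits and base_params come from the SQuAD-trained base model held fixed.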
uncertainty marginal price transmission reserve and market clearing with robust unit commitment aug hongxing ye member ieee yinyin ge mohammad shahidehpour fellow ieee zuyi li senior member ieee increasing penetration of renewable energy in recent years has led to more uncertainties in power systems these uncertainties have to be accommodated by flexible resources upward and downward generation reserves in this paper a novel concept uncertainty marginal price ump is proposed to price both the uncertainty and reserve at the same time the energy is priced at locational marginal price lmp a novel market clearing mechanism is proposed to credit the generation and reserve and to charge the load and uncertainty within the robust unit commitment ruc in the market we derive the umps and lmps in the robust optimization framework ump helps allocate the cost of generation reserves to uncertainty sources we prove that the proposed market clearing mechanism leads to partial market equilibrium we find that transmission reserves must be kept explicitly in addition to generation reserves for uncertainty accommodation we prove that transmission reserves for ramping delivery may lead to financial transmission right ftr underfunding in existing markets the ftr underfunding can be covered by congestion fund collected from uncertainty payment in the proposed market clearing mechanism simulations on a system and the ieee system are performed to illustrate the new concepts and the market clearing mechanism index marginal price cost causation robust unit commitment financial transmission right generation reserve transmission reserve n omenclature indices i l t m n mi k indices for generators lines and time intervals index for buses index of bus where unit i is located index of the worst point for uncertainty functions and sets f u cip cii l g m k symbol for the optimal value of a variable feasible set for uc and dispatch uncertainty set cost related to dispatch and uc for unit i lagrangian function set of units located at bus m set of the indices for up down km t km t set of indices k for upward and downward umps at bus m time t constants nd nt number of buses and time intervals dm t aggregated equivalent load fl transmission line flow limit m shift factor for line l with respect to bus m pimin pimax minimum and maximum generation outputs riu rid limits between sequential intervals riu rid limits for uncertainty accommodation um t bound for uncertainty is the k th worst uncertainty vector in k rnd nt t r ftr amount from bus m to n variables ii t unit status indicators yi t zi t unit and indicators pi t generation dispatch inj pm t net power injection t uncertainty at bus m time t z optimal value of problem sp given x y z i p t generation pos t transmission capacity reserve in positive direction neg t transmission capacity reserve in negative direction inj t net power injection change lagrangian multipliers lagrangian multipliers u k e t marginal prices t for energy price t is the u up ump for kth uncertainty point t for upward u down ump t for downward ump up qi t qdown i t upward and downward generation reserves t charge for uncertainty source t i t t credits to generation reserve for unit i and transmission reserve for line l at time t this work is supported by the national science foundation grant i ntroduction the early version of this work was available on arxiv july titled market clearing for uncertainty generation reserve and n modern power systems uncertainties grow significantly transmission reserve the 
authors are with the galvin center for electricity with the increasing penetration of renewable energy innovation at illinois institute of technology chicago il usa email ms lizu source res such as wind power generation they pose c ieee http citation information doi ieee transactions on power systems i new challenges for the operation of electricity markets in the market dam the unit commitment uc and economic dispatch ed problems considering uncertainties have become a focus of research in recent years the objective of the uc problem is to find the least cost uc solution for the second day while respecting both and constraints by fixing the uc variables the ed problem is established the locational marginal price lmp and reserve price are then obtained as byproducts of the ed problem when considering the uncertainties the generation from uncontrollable res are uncertain parameters in the optimization problem recently robust uc ruc is proposed to address the issues of uncertainty the largest merit is that the uc solution can be immunized against all the uncertainties in predefined set the key idea of the ruc is to determine the optimal uc in the first stage which leads to the least cost for the worst scenario in the second stage however this approach is conservative and the robust ed red is absent authors in combined the stochastic and robust approach using a weight factor in the objective function to address the conservativeness issue employed the affine policy ap to formulate and solve the red problem a ruc is proposed to incorporate the latest information in each stage where ap is also used to overcome the computational challenge recently we reported a new approach which tries to bridge the gap of ruc and red in dam the main difficulty for pricing is that red is absent in the traditional ruc on the other hand a large number of works on pricing reserves exists within the uc framework considering contingencies and stochastic security they are normally modeled as problem in the reserve is cleared on zonal levels instead of countable contingency scenarios or single additional scenario for reserve the infinite continuous uncertainties are considered in the ruc and the reserves are fully deliverable in infinite scenarios in this paper we propose a novel mechanism to price the energy uncertainty and flexibility simultaneously based on the ruc in an explicit price signal is derived for pricing the uncertainty as the ed solution obtained is robust both marginal impacts of the uncertainty and flexibility are reflected in these prices in the proposed mechanism reserve costs are allocated to uncertainty sources generation reserves also called flexibilities in this paper are the key factor for the robust optimization approaches they are entitled to proper credits based on their contribution to uncertainty management according to the market equilibrium analysis market participants price takers can get the maximal profit by following the s dispatch instruction the generation reserve and its deliverability are the main focus in the definition of lmp in are employed to derive the energy price the new concept uncertainty marginal price ump is proposed to define the marginal cost of immunizing the next increment of uncertainty at a specific location load and generation are a pair and they are priced at lmp uncertainty and flexibility generation reserve are another pair and they are priced at ump both lmps and umps may vary with the locations due to transmission congestions limited by the transmission 
capacity and power flow equations sometimes the uncertainties at certain buses can not be mitigated by the cheapest generation reserve and expensive generation reserve which is deliverable has to be kept in the system therefore uncertainty sources are charged and generation reserves are credited based on umps at the corresponding buses as the transmission reserve is kept within the ruc framework the congestion component may exist in both the energy price and reserve price even if the physical limit of the line is not reached yet in the base case scenario lmp congestion costs are distributed to financial transmission right ftr holders in the existing market according to the lmp difference and the ftr amount the revenue inadequacy occurs when the lmp congestion cost collected is smaller than the credit distributed to ftr holders which is also called ftr underfunding this has been a serious issue in recent years in the industry we reveal that transmission reserve will be another reason for ftr underfunding when physical transmission limit is adopted in simultaneous feasibility test sft for ftr market this conclusion is applicable to any robust uc framework for dam the main contributions of this paper are listed as follows the novel ump for uncertainties and generation reserves as well as lmp for energy are derived within a robust uc framework the derivation is for uncertainties set with interval and budget constraints the general concepts still apply when other uncertainty sets are modeled it is revealed that transmission capacities have to be reserved for uncertainty accommodation and the transmission reserves may cause ftr underfunding because of the deficiency of energy congestion revenues based on existing market rules a new market clearing mechanism is proposed to credit the generation and reserve and to charge the load and uncertainty the payment collected from uncertainty sources can exactly cover the credits to generation reserves and transmission reserves effectively resolving the ftr underfunding issue the rest of this paper is organized as follows derivation of the lmp and ump is presented in section ii so is the market clearing mechanism for charge and credit based on lmp and ump case studies are presented in section iii section iv concludes this paper ii ruc and m arket c learing one motivation of this work is to price the uncertainty and allocate the cost of uncertainty accommodation to the uncertainty source as the uncertainty source is charged the uncertainty payment it has the incentive to reduce the uncertainty with ump we can follow the cost causation principle which is normally required in the market design to charge the uncertainty sources cost causation principle is described as require that all approved rates reflect to some degree the costs actually caused by the customer who must pay them in kn energy ferc cir another important motivation is to provide a theory that supports the application of the ruc in the dam clearing although the are studied extensively the only application of the ruc now is for the reliability assessment commitment rac in the dam there are several reasons why they are not applied in the dam clearing first the computation burden of ruc is much larger than the standard uc second as the objective is the cost of the scenario the solution is criticized on over conservatism third no economic dispatch and prices are available within the ruc framework recently with the new achievements in the algorithms models and computing application the first two 
obstacles are being addressed with great promises this paper tries to clear the last obstacle with the new model adopting ruc in the market clearing can give clear price signals for the uncertainties and reserves on the other side it is also easier for the solution to pass the robustness test which is a ruc in rac to our best knowledge this is the first work on pricing energy uncertainties and reserves within the robust optimization framework in dam hence we focus on illustrating the concept with the following assumptions network loss is ignored shift factor matrix is constant uncertainty is from load and res contingency is ignored the uncertainty budget set can be truly formulated by the ruc and red desire to get the optimal uc and ed solution in the scenario they can the flexible resources such as adjustable load demands and generators with fast ramping capabilities to follow the load when deviation occurs or uncertainty is revealed consistent with the robust literature the uncertainty set is modeled as u rnd nt t t um t t x t t u m t m ruc min c i x c p p ax bp b n f x p u such that o cx dp d the basic idea of the above model is to find a robust uc and ed for the scenario the uc x and dispatch p are immunized against any uncertainty u when uncertainty occurs it is accommodated by the generation adjustment please refer to appendix a for the detailed formulation x p ax bp b cx dp d k and z max min s s n r s s sp o s d cx dp where k is the index set for uncertainty points which are dynamically generated in sp with iterations please refer to appendix b for the detailed formulation it should be noted that is the extreme point of u variable is associated with the objective function in sp is to find the worst point in u given x p the procedure is k k z define feasibility tolerance while z do solve mp obtain optimal solve sp with x p get solution z k k k k k end while once the procedure is converged we also get the optimal uc and ed solution by solving mp similar to traditional lmp calculation we fix the binary variables as then a convex linear programming problem red can be formed as xx cip pi t red min p x t t t t k t k t t k t t i x dm t m pi t t pimax t t t pimin t pi t pi riu t pimin t t t pi rid t pimin t t x inj m pm t fl t m x inj m pm t fl t m x x k t t k m i k pi t t t pimax t k k t t t pimin t k k t riu t t k k t rid t k pi t i x p min c i x c p p mp t t is the budget parameter and assumed as an integer it is noted that all the flexible resources are modeled as generators in this paper the ruc is formulated according to the model in t column and constraint generation ccg based method is used to solve the above model problem mp and sp are established as follows x inj inj k k t m pm t t fl t k m k t x m inj inj k m pm t t fl t k where are the constraints for the base ed and are constraints for different extreme points in u denotes the load balance after the generation adjustments respects capacity limits and ramping limits network constraints are denoted by inj inj k the pm t and t are defined as inj pm t x m and inj k t x m pi t dm t t k t t t k respectively b marginal prices in this section marginal prices for the energy uncertainty and generation reserve are derived based on the lagrangian function denote the lagrangian function for red as l p which is shown in appendix according to the definition of marginal price the lmp for energy at bus m is p t xx x k k m t t m t t l l e t it is observed that the impact of the uncertainty is also reflected in the lmp the new concept ump for dam 
is defined as the marginal cost of immunizing the next unit increment of uncertainty for an extreme point of u the ump is x p k k m t t k t l u k both the uncertainty and generation reserve are priced at t u k in the derivation of t the worst point is the only concern therefore the general principles in this paper still work when u is replaced with other sets it should be noted that is intermediate price signals in order to get the aggregated umps the following new sets are u k defined based on the sign of t u k t up u k km t k t u k down km t k t the aggregated upward and downward umps are defined as u up t x up t u k t u down t x u k t down t respectively in the following context we will show how the aggregated umps are used market clearing mechanism with lmp and ump the charges and credits for the market participants become clear and fair in the dam energy clearing is straightforward the basic principle related to uncertainty and flexibility is that those who cause uncertainties uncertainty sources such as res pay based on ump and those who contribute to the management of uncertainties uncertainty mitigators such as generators or storage with ramping capabilities get paid energy payment and credit lses pay based on the amount of the load and lmp the energy payment from the e lse at bus m at t is t dm t it should be noted that res is entitled to the credit due to the negative load modeled in ruc generator i located at bus mi is entitled to the credit e p for energy production i t i t charge to uncertainty source the uncertainty source can be charged as x u k t t t the uncertainty source pays based on the marginal price and the worst point the uncertainty source is charged only u k when t is and it may have to pay more when the uncertainty becomes larger the uncertainty point t can be upward t or downward t we have the following lemma regarding the relation between the u k signs of t and t u k lemma if t then t if t then u k t please check appendix for the proof when the budget set is adopted the extreme point t t um t so the uncertainty charge in can also be written as according to lemma and u up u down t t um t t t thus upward and downward uncertainties are charged separately it should be noted that we still need to use when other uncertainty sets are used credit to generation reserve only resources that can provide deliverable generation reserve are entitled to credits if i g m then the credits can be formulated as x u k k t t i t in other words generation reserve is paid the ump at the bus u k where it is located if t then the associated credit k is zero no matter what the value of t is similar to the uncertainties the generation reserves can be in either upward or downward direction denote the upward generation reserve down as qup i t and the downward generation reserve as qi t n o t pimax pi t riu t qup min i i t n o down qi t max t pimin pi t we also have the following lemma regarding the relation down k between qup and t i t qi t k to lemma if i g m then the optimal solution t problem red is u k qup if t k i t t u k qdown if m t i t and u k k k k k t t t t t please check appendix for the proof the credit to generation reserve i located at bus m can be rewritten as according to lemma and u up up u down down i t t qi t t qi t shows that the upward and downward generation reserves are credited separately flexible resources may receive credits for both the upward and downward generation reserves simultaneously always holds even if other uncertainty sets are modeled in ruc transmission 
reserve and revenue adequacy some transmission capacities are reserved according to the solution to red these transmission reserves are used to ensure the ramping deliverability when the uncertainty is revealed as shown in and it is noted that they are determined automatically in red and kept explicitly without explicit transmission reserve requirement constraints just like the scheduled generation reserve the scheduled transmission reserves in positive direction and negative direction are x pos inj t fl m pm t neg t fl m x inj m pm t m respectively they are always an important issue related to the transmission reserve is the credit entitled to the financial transmission right ftr holders ftr is a financial instrument used to hedge congestion cost in the electricity market where participants are charged or credited due to the transmission congestion within the robust framework the effective transmission capacity for scenario is different from the physical limit which is used in the simultaneous feasibility test sft for ftr market in the existing market the ftr credit is funded by the energy congestion cost which is the net payment of energy however the energy congestion cost may not be sufficient to fund the ftr credit we argue that the transmission reserve becomes a new reason for ftr underfunding in any framework to guarantee the ramping deliverability pos neg theorem if transmission reserve t and t are kept for line l at time t in dam then the maximum ftr underfunding associated with line l at time t is pos neg k k t t t t due to the deficiency of energy congestion cost uncertainty payment res credit energy payment trans res credit lmp cong cost energy credit ftr credit fig money flow of the proposed market clearing mechanism where uncertainty sources make the uncertainty payment and lses make the energy payment please check appendix for the proof from the ftr holder s point of view is the credit due to the transmission reserve therefore we also call transmission reserve credit and denote it as pos neg k k t t t t t k k at most one of t and t is for transmission the credit to positiveptransmission reserve is zero for line l at pos k time t when either t or t theorem if red is feasible then uncertainty payment can exactly cover generation reserve credit and transmission reserve credit and the revenue adequacy is always guaranteed in the proposed market clearing mechanism please check appendix for the proof theorem reveals that ftr underfunding issue can occur within the existing market structures as long as the transmission reserve is even if the lmps are calculated based on other approaches theorem shows that the new market clearing mechanism overcomes the ftr underfunding issue the money flow of the proposed market clearing mechanism is depicted in energy payment collected based on lmp is distributed to ftr holders as lmp congestion cost and generators as energy credit on the other hand the payment collected based on ump is distributed to ftr holders as transmission reserve credit and flexible resources as generation reserve credit the lmp congestion cost and transmission reserve credit can exactly cover the ftr credit which is calculated based on the lmp difference and ftr amount market equilibrium in this section we characterize the competitive market equilibrium model in the electricity industry the partial market equilibrium model is often employed where market participants are price takers the energy is cleared according to uncertainty and generation reserve are cleared according 
to without loss of generality consider unit i located at bus its profit maximization problem can be formulated as u up up u down down e x pi t t t qi t t qi t pmpi max pi t pi t t where the decision variable is pi t given the price signal u up u down e t t t as proved in appendix unit i is not inclined to change its power output level as it can obtain the maximum profit by following the iso s dispatch e instruction t price signal t provides the incentives for u k unit i to dispatch power output to t and price t gives incentives for unit i to maintain the generation reserve for uncertainty hence the dispatch instruction t and price u up u down e signal t t t constitute a competitive partial equilibrium discussions down k as the pi t and t or qup i t qi t are coupled by k k and the opportunity cost t t is enough to provide the incentives for i to keep the generation level at t k k including t t in the generation reserve price has several benefits firstly generation reserves provided by different units are priced fairly generation reserve prices are the same for the units at the same bus and they may vary with locations if line congestions exist secondly higher generation reserve price attracts investment for flexible resources thirdly it is consistent with the existing reserve pricing practice in fact generation reserve price is consistent with the ump therefore the uncertainties and flexibilities are also treated fairly at the same bus the upward and downward umps are obtained according to respectively the uncertainty sources are charged according to the generation reserves are credited according to u k k defined in and t the price signal t are intermediate variables for market clearing the proposed ump may be even if the uncertainty at a bus is zero this is similar to the lmp which may also be for the bus without load the market clearing mechanism proposed in this paper follows the cost causation principle for the cost allocation in reality it may be controversial to allocate the reserve cost to uncertainty sources however we argue that it would be fair and must be done when the res penetration level is high an extreme case is when the loads are all supplied by res there has been study showing it is possible that the increasing res penetration can cause higher system operation cost this issue can not be handled by the existing market clearing mechanism in which loads pay for the additional system reserve that is required to accommodate the uncertainty from res in other words loads are actually providing subsidies to res when the res penetration level is low the subsidies can help the growth of the res however when the res penetration level is high these growing subsidies will cause serious fairness issue on the other hand with ump as the stimulating price signals res will have the incentives to improve its forecast techniques and reduce its uncertainty in the ideal case when its uncertainty approaches zero res will no longer pay following the existing practice the uc variables are fixed during the marginal price derivation hence the uplift issue which exists in the real market still remains in the proposed market clearing mechanism although the uc variables are fixed the lmp and reserve price in the real market can provide effective signals for the investment of generation and transmission as well as consumption strategy of electricity similarly the uncertainty impact is not only reflected in uc but also in the ed within the ruc model in this paper hence the proposed lmp and ump can also 
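To make the price-taker argument concrete, the following sketch evaluates a simplified single-hour version of the unit's profit-maximization problem, with the upward and downward reserves taken to be the ramp- and capacity-limited headroom implied by the chosen output; the prices and unit parameters are invented, and the closing comment only restates the equilibrium claim made above.

```python
# One-hour profit of a price-taking unit (hypothetical prices and parameters).
# Reserves are the headroom implied by the chosen output, capped by ramp rates.

def profit(p, lmp, mu_up, mu_dn, a, b, pmin, pmax, ru, rd):
    q_up = min(pmax - p, ru)                 # upward reserve the unit can offer
    q_dn = min(p - pmin, rd)                 # downward reserve the unit can offer
    return lmp * p + mu_up * q_up + mu_dn * q_dn - (a * p * p + b * p)

lmp, mu_up, mu_dn = 35.0, 6.0, 1.0
a, b = 0.02, 20.0                            # quadratic fuel-cost coefficients
pmin, pmax, ru, rd = 50.0, 200.0, 40.0, 40.0

grid = [pmin + i * (pmax - pmin) / 1000 for i in range(1001)]
best = max(grid, key=lambda p: profit(p, lmp, mu_up, mu_dn, a, b, pmin, pmax, ru, rd))
print(round(best, 1))   # with prices taken from RED's duals this should match the ISO dispatch
```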
provide signals for the investment of flexibilities generation transmission and demand the pricing for uncertainties proposed in this paper is not in conflict with the pricing for traditional reserves which are mainly prepared for the contingencies the traditional reserve prices can be derived in the framework by adding extra traditional reserve constraints and the corresponding reserve costs can still be allocated to lses it is observed that the credit in is the sum of credits for all extreme points that is because the related constraints may be binding for multiple extreme points and the dual variables shadow prices for these constraints work together in the dual problem the traditional price for energy and reserve also has similar form when multiple contingencies are modeled although only one scenario will happen in reality we still need to consider the worst scenario defined in uncertainty set and keep enough reserves in dam that is because dam is a financial market and the lmp and ump are the financially binding prices this is similar to the existing market model considering contingencies even if the contingency seldom occurs they are still modeled for market clearing and the contingencies are reflected in lmp and reserve price the issue of price multiplicity still exists in the proposed model because problem red is a linear programming lp problem however the price is unique with the nondegeneracy assumption for simplicity we have considered a auction in the proposed model by introducing the demand bids we can formulate a auction and the general principles in this paper will still apply iii c ase s tudy a system and the ieee system are simulated to illustrate the proposed market clearing mechanism in the system the basic ideas of ump are presented within the robust optimization framework ftr underfunding issue is illustrated and a comparison between the ump and traditional reserve price are presented in the ieee system the ump related products are presented for different uncertainty levels the behaviors and impacts of flexible sources are analyzed by an energy storage example system a system is studied in this section the diagram is shown in fig the unit data and line data are shown in table i and table ii respectively table iii presents the load and uncertainty information column base load shows the hourly forecasted load assume that the load distributions are and for bus bus and bus respectively t and t in table iii are the bounds of the uncertainties at bus and bus respectively the uncertainty bounds at other buses are table iv m arginal c osts at d ifferent g eneration l evels h fig diagram for system table i u nit data for the bus s ystem p min p max a b c p min p max generation level mw fuel cost ap bp c ru rd ramping rate cu cd cost t on t off min time h it is assumed that the relative forecasting errors increase with hours uncertainty t and t also respect t t t m x t t m where denotes the uncertainty interval at a single bus and represents the uncertainty the and are the budget parameters for the single bus and system respectively lmp and ump consider the case where the ccg based approach converges after iterations table ii l ine data for the bus s ystem from to x capacity mw table iii l oad and u ncertainty data for the bus s ystem mw time h base load t t time h base load mar cost mar cost mar cost table v g eneration and r eserve mw ru rd cu cd t on t off t t t up up up qdown qdown qdown hence k given the uc solutions the problem red can be solved by commercial lp solver the 
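The budget constraints on the uncertainty amount to a simple membership test. The sketch below uses placeholder bounds and budget parameters rather than the case-study values.

```python
# Membership test for a budgeted uncertainty set (placeholder parameters).
# Each bus deviation is limited by gamma_bus times its bound, and the total
# absolute deviation is limited by gamma_sys times the sum of the bounds.

def in_uncertainty_set(u, u_bar, gamma_bus, gamma_sys):
    """u: dict bus -> deviation (MW) at one hour; u_bar: dict of hourly bounds."""
    bus_ok = all(abs(du) <= gamma_bus * u_bar[m] for m, du in u.items())
    sys_ok = sum(abs(du) for du in u.values()) <= gamma_sys * sum(u_bar.values())
    return bus_ok and sys_ok

u_bar = {2: 10.0, 5: 8.0}                    # hourly bounds at the two uncertain buses
print(in_uncertainty_set({2: 9.0, 5: -4.0}, u_bar, gamma_bus=1.0, gamma_sys=0.8))  # True
```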
marginal prices are then obtained as byproducts the generation outputs are presented in table v at hours it can be observed that supplies most of the loads at hour which is mw according to the bid information in table iv is much more expensive than and hence the output of is relatively small and at the low level of its capacity the upward and downward generation reserves provided by the three units are also listed in table these data can be obtained directly from eqs and given the generation output pi t although the remaining generation capacity of is mw the upward reserve is limited by its upward ramping rate mw in the meantime the upward reserve provided by is limited by its generation capacity although it has more remaining ramping capacity min mw table vi shows the extreme points obtained in the ccgbased approach the intermediate price signals for these points u k t are also presented it can be observed that the worst point is always obtained at the extreme point of the uncertainty set for example at hour the is mw it is exactly the upper bound of the uncertainty at hour at bus the data in table vi also verifies lemma the intermediate umps u k t have the same sign as the uncertainties t at the same bus the lmps aggregated upward umps and aggregated downward umps at hour are shown in table vii it is noted that umps still exist at buses without uncertainties buses this is similar to lmps which also exist at buses where net power injections are the lmps vary with locations which indicates that the line congestion exists table vi e xtreme p oints of u ncertainty s et t t t t t t t t t table vii lmp and ump at h our u up u down price price price bus the load at bus has to pay the highest lmp the umps are also different at various locations the highest upward ump at hour is also located at bus with these prices the market participants can be paid and credited the lmp paid to is on bus which is larger than its marginal cost in the same time the upward ump is on bus which is exactly the difference between the lmp and s marginal cost hence is the ump setter on bus the umps provide important price signals on the planning of renewable energy sources and storages for example the ump at bus is relatively small so it is an ideal location for renewable energy sources in terms of payment for uncertainties in contrast the ump at bus is large which may attract the investment for storages or generation plants with large ramping rates comparison with existing lmps and reserve prices the motivation of this part is to compare the proposed clearing scheme with the existing one however as the reserve is not robust in the traditional scheme we can not compare them fairly with the observation that the transmission constraints are the most challenging one in the robust uc framework we drop these constraints in this subsection and add reserve constraints as follows ii t pimin qdown i t pi t max qup t i t pi t ii t pi u ii t qdown qup i t i t ri ii t t x x up qdown qi t i t rt i i down where qup are the largest upward and downward i t and qi t reserves respectively rt and are reserve requirements refer to for more details on the reserve formulations in the experiment is set to and the reserve requirements rt and are set to the lower and upper bounds of the uncertainty in respectively the results are as expected the optimal solutions of the ruc and the standard uc with explicit reserve constraints are the same lmps calculated in the ruc and uc also have the same values the umps calculated in the proposed mechanism 
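For the comparison with explicit reserve requirements, the zonal constraints added above reduce to a headroom check of the following form; the unit data and requirement levels in this sketch are hypothetical.

```python
# Zonal reserve-requirement check corresponding to the extra reserve constraints
# used in the comparison (hypothetical unit data and requirements).

def zonal_reserve_ok(units, r_up, r_dn):
    """units: list of dicts with p, pmin, pmax, ru, rd for one hour."""
    up = sum(min(u["pmax"] - u["p"], u["ru"]) for u in units)
    dn = sum(min(u["p"] - u["pmin"], u["rd"]) for u in units)
    return up >= r_up and dn >= r_dn

units = [
    {"p": 120.0, "pmin": 50.0, "pmax": 200.0, "ru": 40.0, "rd": 40.0},
    {"p": 60.0,  "pmin": 20.0, "pmax": 100.0, "ru": 25.0, "rd": 25.0},
]
print(zonal_reserve_ok(units, r_up=50.0, r_dn=50.0))   # True for these numbers
```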
are also exactly the same as the reserve prices in standard uc two things are verified with these results first without transmission constraints the solution to standard uc can easily be robust by adding reserve constraints second the proposed lmps and umps are consistent with lmps and reserve prices in the existing market when the transmission constraints are dropped when considering transmission constraints the generation reserve can not be guaranteed at bus levels in the traditional a hour bus b hour fig upward ump blue bar and reserve price red bar with network constraint uc model for simplicity we assume that the buses are in a zone consider the case where the upward ump and reserve price at hour are depicted in fig it is observed umps at bus and bus are lower than the traditional reserve prices in the same time the umps at bus and bus is higher than the traditional reserve prices the differences are caused by the congestion of line for reserve delivery it is worth mentioning that the lmp differences in two models are within at hour the prices illustrated in fig reveals another trends that the ump may be higher than the traditional reserve prices at hour the zonal reserve price is while the umps are nonzeros at bus and because the constraint related with reserves in the ruc is stronger than the one in traditional uc model consequently more expensive resources are used in ruc which also generally leads to higher umps ftr underfunding when the generation schedules at hour are and the power flow of line is which is smaller than its physical limit of the transmission reserve mw is kept to guarantee the delivery of the generation reserve the binding constraint for line causes lmp differences hence the ftr holder gets credits consider a set of ftr amounts it can be verified that the ftr amounts satisfy the sft in the ftr market then the total credit for the ftr holders is however the congestion cost in the dam is it means that the lmp congestion cost collected is not enough to cover the ftr credit the ftr underfunding value is the revenue residues after ump settlement is it exactly covers the ftr underfunding in this scenario therefore the revenue is adequate at hour ieee system the simulations are performed for the ieee system with thermal units and branches in this section the peak load is the detailed data including generator parameters line reactance and ratings and load profiles can be found at http two cases are studied in this section the uncertainty levels and load levels are changed to analyze the simulation results in the system level the impact of transmission line capacity on prices is also studied table viii o peration c ost and ump payment up grc oc op cost un payment res credit rev res oc up and grc an energy storage is installed at a specified bus with high ump to show the potential application of umps load level fig uncertainty payment up generation reserve credit grc and operation cost oc with different load levels upward ump lmp price price case we assume that the uncertainty sources are located at buses the budget parameter is set to in this section the buslevel uncertainty budget parameter changes from to and the bound of the uncertainty is the base load the simulation results are shown in table viii it can be observed that the total operation cost increases with increasing it indicates that a larger uncertainty level may increase the operation cost the columns un payment and gen res credit denote the total payment from uncertainty sources and credit to generation 
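The simultaneous feasibility test mentioned above can be sketched as a flow check on the FTR injections; the PTDFs, line limits and FTR amounts below are hypothetical, not the 6-bus values.

```python
# Simultaneous feasibility test (SFT) sketch: the flows implied by the FTR
# injections must respect the physical line limits (hypothetical data).

def sft_ok(ptdf, limits, ftr_injections):
    """ptdf[l][m]: sensitivity of line l's flow to an injection at bus m."""
    for line, row in ptdf.items():
        flow = sum(row.get(m, 0.0) * p for m, p in ftr_injections.items())
        if abs(flow) > limits[line]:
            return False
    return True

ptdf = {"L12": {1: 0.6, 3: -0.2}, "L23": {1: 0.3, 3: 0.5}}
limits = {"L12": 60.0, "L23": 40.0}
print(sft_ok(ptdf, limits, {1: 60.0, 3: -60.0}))   # True for these numbers
```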
reserves respectively the lowest payment is and the highest one is on the other hand the credit entitled to the generation reserves is also a monotonically increasing function of when the generation reserves have the highest credit the last column rev shows that the revenue residues related to umps it can be observed that the residue is always positive fig in the next page depicts the heat map for the upward umps from bus to bus in hours the xaxis represents time intervals and the represents bus numbers the color bar on the right shows different colors for various ump values for example the is denoted by the blue color at the bottom and the is represented by the dark red color on the top of the color bar it can be observed that the uncertainty sources have unique umps at some intervals such as hours and so on it indicates that there is no transmission reserve in these hours on the other hand the umps at hour vary dramatically with different locations the highest upward ump is around and the lowest one is around according to the data shown in fig the high ump at bus may attract investment of flexible resources such as energy storages in terms of generation reserve credit and bus is an attractive location for the investment of renewable energy sources in terms of uncertainty payments fig shows the uncertainty payment and generation reserve credit with respect to load levels the base load level is set at higher loads in general lead to more uncertainty payments and generation reserve credits it is also consistent with the heat map of umps in fig where umps at peak load hours are high it suggests that the generation reserves also become scarce resources when load levels are high the transmission line capacity plays an important role in the price calculation fig shows the lmps and upward umps at hour with respect to increasing capacity of line the prices at buses and are depicted when the line capacity increases from to lmp at line capacity line capacity bus bus bus fig lmp left and upward ump right at hour with respect to increasing capacity of line bus decreases from to and that at bus also drops to from the upward umps at bus and bus also drop by and respectively in contrast the lmp and upward ump at bus which is connected to line remain at and respectively it shows that the change of line capacity may only have impacts on the prices at some buses when the line capacity further increases to from the changes of lmps and umps at bus and bus are within and there is still no change at bus it means that the additional can not help deliver cheaper energy and reserves to bus and bus these results are also consistent with the analysis of the traditional lmps case as discussed in case the upward ump on bus is high at hour assume that an energy storage is installed at bus a simple model for the energy storage is formulated as follows et ptd ptc et e max itd rd ptc itc rc itd itc ent where et denotes the energy level ptd and ptc represent the discharging and charging rates and itd and itc are the indicators of discharging and charging as the ump is the major concern in this section we use simplified parameters for bus bus bus bus bus bus bus bus bus bus bus bus bus bus bus bus bus bus bus hour hour hour hour bus hour hour hour hour b with storage at bus a without storage at bus fig heat map for upward umps different colors represent various umps figure a depicts the umps from bus to bus in hours without the energy storage at bus figure b depicts the new umps after the energy storage is sited at bus 
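The storage constraints just listed can be exercised with a small simulation; the efficiencies, capacity and rate limits below are placeholders rather than the values used in the case study.

```python
# A sketch of the simple storage model (energy balance, capacity limit, rate
# limits, no simultaneous charge and discharge); parameters are hypothetical.

def simulate_storage(p_c, p_d, e0, e_max, r_c, r_d, eta_c=1.0, eta_d=1.0):
    """Return the hourly energy levels, or raise if a limit is violated."""
    e, levels = e0, [e0]
    for pc, pd in zip(p_c, p_d):
        assert not (pc > 0 and pd > 0), "cannot charge and discharge simultaneously"
        assert 0 <= pc <= r_c and 0 <= pd <= r_d, "rate limit violated"
        e = e + eta_c * pc - pd / eta_d
        assert 0 <= e <= e_max, "energy limit violated"
        levels.append(e)
    return levels

print(simulate_storage(p_c=[20, 20, 0, 0], p_d=[0, 0, 15, 15],
                       e0=10, e_max=60, r_c=20, r_d=20))
```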
storage the discharging efficiency and charging efficiency are set to the capacity e max and initial energy level are set to mwh and mwh respectively the maximal charging rate rd and discharging rate rc are set to by siting the energy storage we can lower the new operation cost to from the payment collected from the uncertainty sources becomes and the credit to generation reserves decreases to compared to the data in table viii the energy storage also helps to reduce the payment related to umps the storage is entitled to generation reserve credit fig depicts the new upward umps after the installation of the energy storage compared to that in fig the upward ump for hour at bus decreases a lot the umps for hour and are also lower it suggests that sitting the energy storage at bus effectively lower the generation reserve price the simulation results demonstrate that flexible resources can lower the umps and umps provide the investment signal at locations where generation reserves are scarce resources prices within the new market scheme as the reserve fees are paid by uncertainty sources many potential applications on umps are open as umps are unified prices of uncertainties and reserves it is interesting to investigate the optimal strategy for the one who is the uncertainty source as well as reserve provider in the market the wind generation company with energy storages the umps derived in this paper also provide an important price signal for the investment of flexible resources when the upward ump or downward ump at a bus is high the investor can get more return in terms of generation reserves another potential future research on ump is to study how to determine the budget uncertainty set in the market modeling the traditional spinning reserve for the contingency is also our future work in this paper the demand bids are not considered we have forecasted load forecasted res and uncertainty of load and res for market clearing with a singlesided model in an extended model we can have demand bids forecasted res and uncertainty of load and res for market clearing the forecasted load forecasted res uncertainty of load and res can be used in rac iv c onclusions a novel market model in this paper clears uncertainty energy and generation reserve simultaneously within the ruc framework in dam the uncertainty sources are charged and the generator reserve providers are credited based on the proposed ump the ump formulation is derived within a robust optimization framework we also characterize the market equilibrium for the new market clearing mechanism as the market clearing mechanism is established within the robust optimization framework the robustness of the dispatch is guaranteed the optimal reserves for uncertainty accommodation are obtained in the model the ump proposed in this paper can effectively address the issue on how to charge and credit the uncertainties and generation reserve fairly in the market with res our study also shows that traditional pricing mechanism within ruc framework may lead to ftr underfunding the proposed market clearing mechanism can address this issue our study shows load serving entities can have lower energy a ppendix a d etailed f ormulation for p roblem ruc ruc min x y z i p x pi t x ii t pimin m t x m i xx i cip pi t cii ii t dm t m x m pi t dm t fl t pi t ii t pimax t pi t pi riu yi t pimin yi t t t pi rid zi t pimin zi t t minimum time limit inj t and n f x y z i p u such that x x t t m m i ii t pimin pi t t ii t pimax t zi t riu yi t t x inj t t t t m o inj inj 
m pm t j fl t x m the basic idea of the above model is to find a robust uc and dispatch for the scenario in the scenario denotes the load balance constraint represents the transmission line constraint denotes the unit capacity limit constraint denote the unit ramping limits ii t yi t and zi t are the indicators of the unit being on and shutdown respectively units also respect the minimum time constraints which are related to these binary variables the uc and dispatch solution are immunized against any uncertainty u when uncertainty occurs it is accommodated by the generation adjustment t generation dispatch is also enforced by the capacity limits models the ramping rate limits of generation adjustment t in fact the right and left hand sides of can correspond to a response time which is similar to the or reserves in the literatures stands for the network constraint after uncertainty accommodation a ppendix b d etailed f ormulation for p roblem mp and sp mp x i min x y z i p t cip pi t cii ii t i minimum time limit x k t t k m k ii t pimin pi t t ii t pimax t k k t k t riu yi t t k rid x m inj k t zi t k k t t t k m and inj inj k m pm t t fl k t x sp max min xx m t sm t m t n r s s x x t t m t sm t t i x m m x inj inj t fl t m pm t t t m t sm t m t sm t t o where k is the index set for uncertainty points which are dynamically generated in sp with iterations it should be k noted that is the extreme point of u variable t is k associated with the objective function in sp is the summation of slack variables m t and sm t which evaluates the violation associated with the solution x y z i p from mp m t and sm t are also explained as uncertainties generation or load shedding due to system limitations a ppendix c l agrangian f unction for p roblem red please check equation in the next page a ppendix d p roofs for lemmas and theorems a proof of lemma u k with a small perturbation proof consider t t to t we replace t with t in red u k as the t then the optimal value to problem red increases it means that there are violations for the original optimal solution pi t to problem red with t hence the optimal solution pi t to problem red can not be immunized against the uncertainty t it contradicts with the robustness of the solution pi t therefore if t then u k u k t similarly if t then t b proof of lemma proof assume i g m according to the kkt condition p k t at the optimal point we have k k k k t t t t x l k k t t m u k k k then holds if t then t t as k k k k t t t and t are according to the complementary conditions for and at least one of and is binding hence n o k t min t pimax pi t riu t u k holds similarly the other equation holds when t l p xx x x cip pi t dm t pi t t pi t t pimax t t pimin pi t t t m t i i i min min t pi t pi riu t pi t t t pi pi t rid t pi t t t i x x x x inj inj k t m pm t fl t m pm t fl t t t m m m i l t x k k max k min k t pi t t ii t pi t ii t pi pi t t i t x k k k k t rid t t riu t t t i t x xx x inj inj k inj inj k k k t m pm t t fl t m pm t t fl m m t l proof of theorem proof the energy congestion cost at t is x e e t dm t t pi t m x m m xx x x k t k t t m l x x pos k t t fl t m l t x x l x k t t x neg t fl x k k t t t l pos neg k k t t t t l pos neg t t t t l x t x k k t t t l pos neg k k t t t t l t x inj pm t fl fl the first equality holds following the definition of net power injection the second equality holds according to and p inj p following and m m t the third equality pholds k the sign change of t and l t in the third equality direction according is because of the 
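The master/subproblem alternation of Appendix B follows the usual column-and-constraint generation pattern. The skeleton below is a sketch in which solve_mp and solve_sp stand in for the MP and SP solvers, which are not reproduced here; a toy instance follows.

```python
# Skeleton of the column-and-constraint generation (CCG) loop behind Appendix B.

def ccg(solve_mp, solve_sp, tol=1e-4, max_iter=20):
    """Alternate between the master problem (MP) and subproblem (SP) until the
    worst-case violation returned by SP is numerically zero."""
    extreme_points = []                          # the set K of generated uncertainty points
    for _ in range(max_iter):
        x, base_cost = solve_mp(extreme_points)  # solution robust to the current K
        violation, u_star = solve_sp(x)          # worst-case uncertainty for x
        if violation <= tol:
            return x, base_cost, extreme_points  # robust feasible solution found
        extreme_points.append(u_star)            # add the new extreme point to MP
    raise RuntimeError("CCG did not converge within max_iter iterations")

def toy_mp(points):        # choose capacity x covering all seen demands, cost = x
    x = max([0.0] + points)
    return x, x

def toy_sp(x):             # worst demand in [0, 10]; violation = unmet demand
    return max(0.0, 10.0 - x), 10.0

print(ccg(toy_mp, toy_sp))   # -> (10.0, 10.0, [10.0])
```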
of power flow to the complementary conditions the third term in the fourth equality must be zero based on the following three cases pos t t and t the second term in the last equality corresponds to the credits to ftr holders can be written as x e e t t inj e t pm t if t then and t neg if t and t then t x m t t l xx k k m t t x l x l n l t l t l xx k k n t t l x k n m t t x x k n m t t l x x x k k t t t t fl l the first equality holds according to the inequality is true as the amount of respects x m n fl according to the sft for ftr market the of the inequality is the first term in the last equality of based on and the maximum difference between the ftr credit and the energy congestion cost is equal to the transmission reserve credit that is the maximum ftr underfunding is proof of theorem proof according to theorem the ftr underfunding value is due to the deficiency of the energy congestion cost therefore we need to prove that the money collected from uncertainty sources can cover the ftr underfunding and credits to generation reserve without loss of generality we consider the payment collected from uncertainty sources at time t for x u k t t m m l xx m x k t l m l k k m t t m xx m m k m t l x p p x k t x m l inj fl m pm t pos k t t l k k m t t can be reformulated similarly hence the second equality holds the third equality holds from therefore xx xx xx t t i t m l m t i t l t holds that is the uncertainty payment covers the generation reserve credit and transmission reserve credit then following the energy congestion cost shown in xx xx e t dm t t m t x x i e p i t i t t x x t e t m t x x i t t i e t holds that is the total payments collected from loads and uncertainty sources can cover the total credits to energy generation reserve and ftr holders so the revenue adequacy of the proposed market clearing mechanism is guaranteed proof of competitive equilibrium down proof pi t and qup i t qi t are coupled by constraints and according to we can rewrite generation reserve credit as x u k u up up u down down k t qi t t qi t t t k k k k k t t t t t k k t ii t pimax pi t t pi t t pimin d k t riu t t ri substituting into problem pmpi we can decouple pi t down and qup i t qi t in fact we also get all terms related to pi t in lagrangian l p for problem red since the problem red is a linear programming problem the saddle point t which is the optimal solution to red is also the optimal solution to pmpi consequently unit i is not inclined to deviate its output level as it can obtain the maximum profit by following the iso s dispatch instruction u k e t therefore dispatch t and price signal t t constitute a competitive partial equilibrium x the first equalitypholds to according to p according k k and the m l m t t in the second line can be rewritten as x x xx k k k m t t pi t dm t t fl x x k k k m t t t m l x xx x k k k k k t t m t t m i l m pos neg k k t t t t l x x u k pos neg k k k t t t t t t m m l x u k pos neg k k k t t t t t t i l r eferences shahidehpour yamin and li market operations in electric power systems forecasting scheduling and risk management ed press zheng and litvinov zonal reserve modeling and pricing in a energy and reserve market ieee trans power vol no pp may jiang wang and guan robust unit commitment with wind power and pumped storage hydro ieee trans power vol no pp jiang zhang li and guan network constrained robust unit commitment problem eur oper vol no pp bertsimas litvinov x sun zhao and zheng adaptive robust optimization for the security constrained unit commitment 
problem ieee trans power vol no pp zeng and zhao solving robust optimization problems using a generation method operations research letters vol no pp sep ye and li robust unit commitment and dispatch with recourse cost requirement ieee trans power doi early access zhao and guan unified stochastic and robust unit commitment ieee trans power vol no pp warrington goulart mariethoz and morari reserves for power systems ieee trans power vol no pp jabr adjustable robust opf with renewable energy sources ieee trans power vol no pp lorca a sun litvinov and zheng multistage adaptive robust optimization for the unit commitment problem operations research vol no pp ye and li robust unit commitment with recourse cost requirement in proc ieee power energy soc general meeting july pp wang shahidehpour and li reserve requirements in joint energy and ancillary services auction ieee trans power vol no pp arroyo and galiana energy and reserve pricing in security and electricity markets ieee trans power vol no pp bouffard galiana and conejo with stochastic ii case studies ieee trans power vol no pp aganagic and waight spot pricing of capacities for generation and transmission of reserve in an extended poolco model ieee trans power vol no pp aug schweppe tabors caraminis and bohn spot pricing of electricity kluwer academic publishers norwell ma pjm manual on financial transmission rights pjm tech access march online available http pjm options to address ftr underfunding pjm tech access may online available https hogan financial transmission right formulations tech online available http formulations hogan contract networks for electric power transmission journal of regulatory economics vol no pp ye wang and li mip reformulation for problems in robust scuc ieee trans power early access wang and fu fully parallel stochastic securityconstrained unit commitment ieee trans power early access papavasiliou oren and rountree applying high performance computing to stochastic unit commitment for renewable energy integration ieee trans power vol no pp may chao peck oren and wilson transmission rights and congestion management the electricity journal vol no pp zheng and litvinov on ex post pricing in the electricity market ieee trans power vol no pp whinston green et microeconomic theory oxford university press new york vol ellison tesfatsion loose and byrne project report a survey of operating reserve markets in us electric energy regions sandia natl labs publications hogan multiple prices electricity market design and price manipulation the electricity journal vol no pp li and shahidehpour unit commitment for simultaneous clearing of energy and ancillary services markets ieee trans power vol no pp ye ge liu and li transmission line rating attack in twosettlement electricity markets ieee trans smart grid vol no pp may hongxing ye s received his degree in information engineering in and degree in systems engineering in both from xi an jiaotong university china and the degree in electrical engineering from the illinois institute of technology chicago in his research interests include optimization in power systems electricity market renewable integration and system security in smart grid he is outstanding reviewer for ieee transactions on power systems and ieee transactions on sustainable energy in he received sigma xi research excellence award at illinois institute of technology in yinyin ge s received the degree in automation and degree in systems engineering from xian jiaotong university china she also received degree in 
electrical engineering at illinois institute of technology chicago her research interests are power system optimization and modeling pmu applications in smart grid monitoring visualization and state estimation for distribution systems mohammad shahidehpour f received his degree from the university of missouri in in electrical engineering he is currently the bodine chair professor and director of the robert galvin center for electricity innovation at the illinois institute of technology chicago he is the founding of ieee transactions on smart grid he is a member of us national academy of engineering nae zuyi li sm received the degree from shanghai jiaotong university shanghai china in the degree from tsinghua university beijing china in and the degree from the illinois institute of technology iit chicago in all in electrical engineering presently he is a professor in the electrical and computer engineering department at iit his research interests include economic and secure operation of electric power systems cyber security in smart grid renewable energy integration electric demand management of data centers and power system protection

quiver mutations and boolean reflection monoids feb bing duan li and luo abstract in everitt and fountain introduced the concept of reflection monoids the boolean reflection monoids form a family of reflection monoids symmetric inverse semigroups are boolean reflection monoids of type a in this paper we give a family of presentations of boolean reflection monoids and show how these presentations are compatible with mutations of certain quivers a feature of the quivers in this paper corresponding to presentations of boolean reflection monoids is that the quivers have frozen vertices our results recover the presentations of boolean reflection monoids given by everitt and fountain and the presentations of symmetric inverse semigroups given by popova surprisingly inner by diagram automorphisms of irreducible weyl groups or boolean reflection monoids can be constructed by sequences of mutations preserving the same underlying diagrams as an application we study the cellularity of semigroup algebras of boolean reflection monoids and construct new cellular bases of such cellular algebras using presentations we obtained and inner by diagram automorphisms of boolean reflection monoids key words boolean reflection monoids presentations mutations of quivers inner by diagram automorphisms cellular semigroups cellular basis mathematics subject classification introduction in their influential work on cluster algebras fomin and zelevinsky associated mutations of matrices definition with mutations of quivers proposition the quivers whose underlying graphs are dynkin diagrams play an important role in the cluster algebra theory as they appear in the finite type classification it is well known that a finite irreducible crystallographic reflection group w or a finite irreducible weyl group w can be classified by dynkin diagrams whose vertex set is in correspondence with a family s of simple reflections and for which there is an edge labeled respectively between vertices i and j if and only if si sj e respectively si sj e si sj e where si sj s e is the identity element of w see let be a dynkin diagram and a quiver be a quiver whose underlying diagram is in barot and marsh gave presentations of the reflection group determined by and showed that these presentations are compatible with mutation of quivers more precisely barot and marsh introduced some additional relations cycle relations corresponding to chordless cycles arising in quivers of finite type for each quiver q mutation equivalent to a quiver they first defined an abstract group w q by generators corresponding to vertices of q and relations and then proved that w q motivated bing duan li and luo by barot and marsh s work the similar presentations of affine coxeter groups braid groups artin groups and weyl groups of algebras have been considered in respectively let v be a euclidean space with standard orthonormal basis vn and v an irreducible crystallographic root system which is in turn classified by dynkin diagrams in everitt and fountain introduced the concept of reflection monoids the boolean reflection monoid m b of type formed from the weyl group w for classical root system and the boolean system b is a family of reflection monoids symmetric inverse semigroups are boolean reflection monoids of type a note that the root systems of types bn and cn give rise to the same weyl group so we only concern the classical weyl group w bn in everitt and fountain provided a presentation of the boolean reflection monoid m b for bn or dn one of the aims in 
present paper is to obtain new presentations of boolean reflection monoids and show how these presentations are compatible with mutation of certain quivers let respectively be the dynkin diagram with n respectively n n vertices where the first n respectively n n vertices are mutable vertices and the respectively n n vertex is a frozen vertex which is shown in the column of table in practice the label is left on an edge only if its weight is greater than and the edge is left unlabelled if its weight is let and q any quiver mutation equivalent to a quiver we define an inverse monoid m q from q see section and then we show that m q m b see theorem and proposition this implies that boolean reflection monoids can also be classified by see table in the diagrams corresponding to generators of irreducible weyl groups affine coxeter groups braid groups artin groups have no frozen vertices in present paper the diagrams corresponding to generators of boolean reflection monoids have frozen vertices type of boolean reflection monoids generators n m b bn n m bn b dn n m dn b table boolean reflection monoids and dynkin diagrams in proposition of everitt and fountain proved that the symmetric inverse semigroup in is isomorphic to the boolean reflection monoid of type so we recover the presentation of the symmetric inverse semigroup in defined in the presentation corresponds exactly to the presentation determined by dynkin diagram quiver mutations and boolean reflection monoids moreover we also recover everitt and fountain s presentations of boolean reflection monoids defined in section of these presentations can be obtained from any quiver by a finite sequence of mutations we show in theorem that the inner automorphism group of boolean reflection monoid m b is naturally isomorphic to w w we further study the actions of inward mutations surprisingly inner by diagram automorphisms of finite irreducible weyl groups and boolean reflection monoids can be constructed by a sequence of mutations preserving the same underlying diagrams see theorem and theorem respectively as an application we study the cellularity of semigroup algebras of boolean reflection monoids it is well known that hecke algebras of finite type algebras the brauer algebra the algebras and partition algebras are cellular see recently the cellularity of semigroup algebras is investigated by east wilox guo and xi and ji and luo respectively by applying geck s and east s results we show that semigroup algebras of boolean reflection monoids are cellular algebras see proposition moreover we construct new cellular bases of such cellular algebras by presentations we obtained and inner by diagram automorphisms of boolean reflection monoids the results and methods of this paper have applications in several lines of research which will be studied in future work including automorphisms of boolean reflection monoids hecke algebras of boolean reflection monoids coxeter arrangement monoids braid inverse monoids and algebraic monoids the paper is organized as follows in section we recall some notations and background knowledge which will be useful to us in section building on barot and marsh s work we further study inner by diagram automorphisms of irreducible weyl groups theorem and cellular basis of group algebras of irreducible weyl groups in section we state our main results theorem and proposition which show that presentations of boolean reflection monoids are compatible with mutations of quivers we recover the presentations of boolean reflection 
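As a concrete check of the type-A case, partial permutations give a faithful model of the symmetric inverse monoid, and relations of the kind appearing in these presentations can be verified directly; the generator names and the small set of relations checked below are ours, chosen to be ones that visibly hold in the symmetric inverse monoid on three points.

```python
# Partial permutations on {1,...,n} as Python dicts: a concrete model of the
# boolean reflection monoid of type A (the symmetric inverse monoid).
# The relations checked are Popova-style ones; the exact relation set of the
# paper's presentation is not reproduced here.

def compose(a, b):
    """Product acting as x -> a(b(x)), defined where both steps are defined."""
    return {x: a[b[x]] for x in b if b[x] in a}

def word(*gens):
    out = gens[0]
    for g in gens[1:]:
        out = compose(out, g)
    return out

s1 = {1: 2, 2: 1, 3: 3}          # transposition (1 2)
s2 = {1: 1, 2: 3, 3: 2}          # transposition (2 3)
e  = {1: 1, 2: 2}                # idempotent: identity restricted to {1, 2}

assert word(s1, s1) == {1: 1, 2: 2, 3: 3}          # s_i^2 = identity
assert word(s1, s2, s1) == word(s2, s1, s2)        # braid relation
assert word(e, e) == e                             # e^2 = e
assert word(e, s2, e, s2) == word(s2, e, s2, e) == word(e, s2, e)
print("all relations hold")
```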
monoids given by everitt and fountain and the presentations of symmetric inverse semigroups given by popova moreover we characterize inner by diagram automorphisms of boolean reflection monoids by the method of mutations theorem furthermore we study the cellularity of semigroup algebras of boolean reflection monoids and give new cellular bases of such cellular algebras in section we consider the way of mutations of quivers and the oriented cycles appearing in them in section we find an efficient subset of the relations sufficient to define the inverse monoid m q the last section section we prove our main result theorem preliminaries mutation of quivers let q be a quiver with finitely many vertices and finitely many arrows that have no loops or oriented given a quiver q let i be the set of its vertices and qop its opposite quiver with the same set of vertices but with the reversed orientation for all the arrows if there are q arrows pointing from a vertex i to a vertex j then we draw an arrow from i to j with a weight wij q we will frequently draw an arrow with no label if wij bing duan li and luo for each mutable vertex k of q one can define a mutation of q at k due to fomin and zelevinsky this produces a new quiver denoted by q which can be obtained from q in the following way i the orientations of all edges incident to k are reversed and their weights intact ii for any vertices i and j which are connected in q via a oriented path going through k the quiver mutation affects the edge connecting i and j in the way shown in figure where the weights c and are related by c ab where the sign before c is if i j k form an oriented cycle in q in q and is otherwise here either c or may be equal to which means no arrows between i and i k a c k j i k a j figure quiver mutation iii the rest of the edges and their weights in q remain unchanged two quivers and are said to be mutation equivalent if there exists a finite sequence of mutations taking one to the other we write to indicate that is mutation equivalent to the underlying diagram of a quiver q is a undirected diagram obtained from q by forgetting the orientation of all the arrows we call a quiver q connected if its underlying diagram is connected every node is reachable it is obvious that dynkin quivers are connected quivers it was shown in theorem that there are only finitely many quivers in the mutation classes of dynkin quivers we call a cycle in the underlying diagram of a quiver a chordless cycle if no two vertices of the cycle are connected by an edge that does not itself as shown in proposition or see proposition all chordless cycles are oriented in the mutation classes of dynkin quivers cellular algebras and cellular semigroups let us first recall the basic definition of cellular algebras introduced by graham and lehrer let r be a commutative ring with identity definition an associative a is called a cellular algebra with cell datum m c i if the following conditions are satisfied is a finite partially ordered set associated with each there is a finite set s t m of a m of indices and there exists an cs t i is an of a with i i which sends cs t to ct s for each s t m and each a a x acs t ra s s t mod a s quiver mutations and boolean reflection monoids where ra s s r is independent of t and a is the of a generated by t s t m cellular algebras provide a general framework to studying the representation theory of many important classes of algebras including hecke algebras of finite type algebras the brauer algebra the algebras and partition 
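The mutation rule recalled above has an equivalent formulation on skew-symmetric exchange matrices, which is convenient to compute with; the following sketch implements that matrix rule (the encoding and the example quiver are ours).

```python
# Fomin-Zelevinsky matrix mutation (a sketch).  A quiver without loops or
# 2-cycles is encoded by its skew-symmetric exchange matrix B, with B[i][j] > 0
# meaning B[i][j] arrows from i to j; mutation at k reverses the arrows at k
# and updates the remaining entries.

def mutate(B, k):
    n = len(B)
    return [[-B[i][j] if k in (i, j)
             else B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
             for j in range(n)] for i in range(n)]

# linearly oriented A_3 quiver: 0 -> 1 -> 2
B = [[0, 1, 0], [-1, 0, 1], [0, -1, 0]]
B1 = mutate(B, 1)
print(B1)                    # mutation at the middle vertex produces an oriented 3-cycle
print(mutate(B1, 1) == B)    # mutating twice at the same vertex is the identity
```

Mutating twice at the same vertex returns the original matrix, which matches the involutivity used repeatedly in the arguments below.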
algebras see recently the cellularity of semigroup algebras is investigated by east wilox guo and xi and ji and luo respectively in the following we shall recall some basic notions and facts from the theory of semigroups let s be a semigroup for any a b s define a l b s a s b a r b as bs a j b s as s bs h l r and d l r l r r l where s is the monoid obtained from s by adding an identity if necessary a semigroup s is said to be inverse if for each element s s there exists a unique inverse s such that s s and if s is a finite inverse semigroup then d j for any s t s define ds dt if and only if s s ts let s be an inverse semigroup with the set e s of idempotents let d be a of suppose that ek d e s choose ak such that aj r ej for each j then d rak put hd and by green s lemma for each x d we have x ai j for unique i j k and g hd using east s symbol in let ei ej g d be the element x then ej ei d for more detail knowledge of semigroups the reader is referred to a semigroup s is said to be cellular if its semigroup algebra r s is a cellular algebra in east proved the following theorem theorem theorems let s be a finite inverse semigroup with the set e s of idempotents if s satisfies the following conditions for each d the subgroup hd is cellular with cell datum md cd id the map i r s r s sending e f g d to f e id g d is an antihomomorphism then s is a cellular semigroup with cell datum m c i where d d with partial order defined by d d if d d or d d and m d e s e e s d s md for d and d d e s f t m d for d and c c e s f t e f cs t d e s f t m d by the definition of cellular algebras we have the following corollary corollary suppose that a is a cellular algebra over r with cell datum m c i for any s t and is an automorphism of a let c s t cs t m m and i then m c i is a cellular basis of proof since cs t s t m m is an of a and is an automorphism of a c s t s t m m is also an of a it follows from the definition of i that i i and i c s t c t s bing duan li and luo for each s t m and each a a x acs t ra s s t mod a s where ra s s r is independent of t and a is the of a generated by t s t m for each a a there exists a b a such that a b then x bcs t rb s s t mod a ac s t b cs t s x rb s s c s t mod a s where rb s s r is independent of t therefore m c i is a cellular basis of a as required some new results of irreducible weyl groups let v be a euclidean space with a standard orthonormal basis vn let v be a root system and the set of simple roots in for each the associated simple reflection is si then the finite irreducible weyl group w can be generated by s si and the number of reflections in w is equal to the number of positive roots in we refer the reader to for more information about weyl groups root systems and reflection groups barot and marsh s results it is well known that the finite irreducible crystallographic reflection groups or the irreducible weyl groups have been classified by dynkin diagrams see let be a dynkin diagram and i the set of its vertices let be the finite irreducible weyl group determined by we say that a quiver is a quiver whose underlying diagram is in barot and marsh gave presentations of the construction works as follows let q be a quiver mutation equivalent to a quiver barot and marsh defined an inward mutation at vertex k as follows sk si sk if there is an arrow i k in q possibly weighted ti si otherwise for two vertices i j of q one defines i and j are not connected i and j are connected by an edge with weight mij i and j are connected by an edge with weight i and j are 
connected by an edge with weight definition let w q be the group with generators si i i subjecting to the following relations e for all i si sj mij e for all i j quiver mutations and boolean reflection monoids for any chordless cycle c in q where either all of the weights are or we have e where e is the identity element of w q one of barot and marsh s main results in is stated as follows theorem theorem a the group w q does not depend on the choice of a quiver in the mutation class of q in particular w q for each quiver q mutation equivalent to a quiver inner by diagram automorphisms of irreducible weyl groups let w be a coxeter group defined by a set s of generators and relations and of definition we call the pair w s a coxeter system in what follows given any two coxeter systems w and w we say that there exists an automorphism of w we mean that there is an automorphism aut w such that if such an automorphism can always be chosen from inn w the group of inner automorphisms of w then w is called strongly rigid in case w is strongly rigid the group aut w has a very simple structure see corollary of aut w inn w diag w where diag w consists of diagram automorphisms of the unique coxeter diagram corresponding to w the following lemma is well known lemma let w be a finite group generated by a finite set s of simple reflections then the set of all reflections in w is w w s s in table i bannai computed the center z w of an irreducible weyl group w the longest element in w is a central element of w except for an n k the following important notation was introduced by franzsen in definition definition an inner by diagram automorphism is an automorphism generated by some inner and diagram automorphisms in aut w the subgroup of inner automorphisms is a normal subgroup of aut w therefore any inner by diagram automorphism can be written as the product of an inner and a diagram automorphism the following two lemmas collect together some facts from which will be useful later lemma proposition if w is weyl group of type an then aut w an w an if n while aut w w so for n any automorphism of weyl group of type an maps reflections to reflections furthermore any automorphism of that does preserve reflection is inner lemma proposition propositions proposition let w be weyl group of type bing duan li and luo a any automorphism of w bn for n that does preserve reflections must be inner b for k aut w w that is all automorphisms of w map reflections to reflections c all automorphisms of w or w are inner d all automorphisms of weyl groups that preserve reflections are inner by diagram automorphisms the following theorem reveals a connection between inner by diagram automorphisms of irreducible weyl groups and quiver mutations theorem let q be a quiver and w q the corresponding weyl group generated by a set s of simple reflections then is an inner by diagram automorphism of w q if and only if there exists a sequence of mutations preserving the underlying diagram such that s can be obtained from q by mutations in particular all the reflections in w q can be obtained from q by mutations proof through observation every variable obtained by mutations must be some reflection of the corresponding weyl group the sufficiency follows from the fact that all automorphisms of weyl groups that preserve reflections are inner by diagram automorphisms see lemmas and to prove necessity assume without loss of generality that the vertex set of q is n and is an inner by diagram automorphism of w q note that diagram automorphisms of any 
dynkin diagram keep the underlying dynkin diagram then relabelling the vertices of q if necessary there exists an inner automorphism of w q such that s s it is sufficient to prove that s can be obtained from q by mutations and the sequence of mutations preserves the underlying diagram let g sir w q be a reduced expression for g where sik s k we assume that s gsg in the following we shall use induction to prove that can be obtained from q by a sequence of mutations preserving the underlying diagram step we mutate firstly q at the vertex twice then we get a quiver which has the same underlying diagram with q moreover the set s becomes sn we keep vertices of having the same label as vertices of q step we then mutate t r at the vertex it twice note that the variable corresponding to the vertex it of is sit then we get a quiver qt which has the same underlying diagram with q moreover the set of generators in becomes sit sit sit ssit we keep vertices of qt having the same label as vertices of step we repeat step until we get the quiver qr by induction it is not difficult to show that the set of generators in qr is sir ssir gsg s quiver mutations and boolean reflection monoids finally every reflection in w is conjugate to a simple reflection by lemma so we assume that a reflection is of the form gsi where g sik w is a reduced expression for some subset ik n by the same arguments as before we mutate the sequence ik ik starting from q we can obtain the reflection gsi remark in types an bn n and all inner by diagram automorphisms of the corresponding weyl groups are inner automorphisms w is strongly rigid for an n if w s is a coxeter system of w then for any inner by diagram automorphism of w w s is also a coxeter system of w suppose that q is a quiver mutation equivalent to a quiver let w q be the corresponding weyl group with a set s of generators then theorem holds for q cellular basis of group algebras of irreducible weyl groups in geck proved hecke algebras of finite type are cellular algebras let be any inner by diagram automorphism of irreducible weyl groups see theorem by lemma we can obtain new cellular basis of group algebras of irreducible weyl groups main results of boolean reflection monoids let respectively be the dynkin diagram with n respectively n n vertices where the first n respectively n n vertices are mutable vertices and the respectively n n vertex is a frozen vertex which is shown in figure the label is left on an edge only if its weight is greater than and the edge is left unlabelled if its weight is we shall always assume that is one of and a quiver is said to be a quiver if the underlying diagram of such quiver is figure classical dynkin diagrams and with a frozen vertex bing duan li and luo boolean reflection monoids in everitt and fountain introduced reflection monoids see the boolean reflection monoids are a family of reflection monoids symmetric inverse semigroups are boolean reflection monoids of type a let v be a euclidean space with standard orthonormal basis vn and let v be a root system and w the associated weyl group of type a partial linear isomorphism of v is a vector space isomorphism y z for two vector subspaces y z of v any partial linear isomorphism of v can be realized by restricting a full isomorphism to some subspace we will write gy for the partial isomorphism with domain y and effect that of restricting g to y we denote by m v respectively gl v the general linear monoid respectively general linear group on v consisting of partial linear isomorphisms 
respectively linear isomorphisms of v in m v gy hz if and only if y z and is in the isotropy group gy g gl v v for all v y moreover gy hz gh y z and gy g gy let us recall the notation system in v for a group g gl v introduced in definition definition let v be a real vector space and g gl v a group a collection s of subspaces of v is called a system in v for g if and only if v s gs s that is gx s for any g g and x s if x y s then x y for j x n let x j m rvj v and b x j j x with x then b is a boolean system in v for w where bn or dn for example the weyl group w on the subspaces x j b is just g x j x where g induces an isomorphism between w and the symmetric group sn on the set x note that b is not a system for any of the exceptional w definition definition let g gl v be a group and s the system in v for the monoid of partial linear isomorphisms given by g and s is the submonoid of m v defined by m g s gx g g x s if g is a reflection group then m g s is called a reflection monoid let g be the reflection group w for bn or dn and s the boolean system in v for g then m g s is called a boolean reflection monoid in general we write m b instead of m w b and call m b the boolean reflection monoid of type recall that everitt and fountain gave a presentation of the boolean reflection monoid m b for bn or dn in section of quiver mutations and boolean reflection monoids lemma everitt and fountain s presentations of boolean reflection monoids are shown as follows m b si sj mij e for i j n si si for i i m bn b si sj mij e for i j n si si for i m dn b si sj mij e for i j n si si for i i here mij is defined in inverse monoids determined by quivers let i be the set of vertices of a quiver q with a frozen vertex for any i j i and define if i and j are not connected if i and j are connected by an edge of weight mij if i and j are connected by an edge of weight if i and j are connected by an edge of weight if and j are not connected if and j are connected by an edge of weight if and j are connected by an edge of weight if and j are not connected if and j are connected by an edge of weight if and j are connected by an edge of weight let mii for any i i then mij i is a coxeter matrix and mij i is a generalized coxeter matrix see to illustrate generalized coxeter matrices corresponding to a quiver and a quiver are respectively and let be an ordered tuple such that the subquiver of q on the vertices contains only one underlying subdiagram or and does not contain one bing duan li and luo such an ordered tuple is called a shortest path underlying subdiagram tuple if it is the shortest path from to in q for any shortest path tuple we denote by p the word denote by e the identity element of an inverse monoid and denote by aba m an alternating product of m terms definition let q be any quiver mutation equivalent to a quiver define an inverse monoid m q with generators si i i and relations e for i i si sj mij e for i j i and sj sj sj sj for any j i i for every chordless oriented cycle c in q w w w where id i for d either all of the weights are or we have ii for every chordless oriented cycle c in q where id i for d d we have iii for every chordless oriented cycle c in q w w where i if and we have if and we have path relations for every underlying subdiagram of q of the form shown in the first column of table we take path relations listed in the second column of table remark in if then in this case the equation sj sj sj sj can be reduced to be sj sj for ii relation though in this paper we only use the case d the defined 
relation for arbitrary d is still meaningful see our unpublished paper the following lemma is well known and easily verified lemma if two quivers and both have the same underlying diagram g and g is a tree then it follows from the connectivity and finiteness of that any two quivers are quiver mutations and boolean reflection monoids subdiagrams of q path relations p p p p c p p p p sid for d id p p p p p p p p p table path relations of underlying subdiagrams of q where c stands for a chordless cycle now we are ready for our main results in this section theorem let and be a quiver if q then m q m we will prove theorem in section up to the above isomorphism we denote by m the inverse monoid determined by any quiver appearing in the mutation class of quivers when we say that we mutate a sequence n n of vertices of a quiver we mean that we first mutate the vertex of the quiver then we mutate the n vertex and so on until the first vertex the following proposition shows that everitt and fountain s presentations of boolean reflection monoids can be obtained from any quiver by mutations proposition let bn or dn then m b m proof all quivers are so any quiver can be viewed as an initial quiver we mutate bing duan li and luo we mutate the sequence n n of vertices of the following quiver we obtain the quiver then by definition m si sj mij e for i j n si si for i i where if i j mij if otherwise by lemma we deduce that m m b mutating a sequence n n of vertices of the following quiver we get obtain quiver then by definition m si sj mij e for i j n si si for i i where mij if if if if i j i and j are not connected i and j are connected by an edge with weight i and j are connected by an edge with weight by lemma we have m m bn b quiver mutations and boolean reflection monoids by mutating a sequence n n of vertices of the following quiver o we obtain the quiver then by definition m si sj mij e for i j n si si for i sj sj sj sj sj for j i where mij if i j if i and j are not connected if i and j are connected by an edge with weight we claim that m m dn b which follows from lemma suppose that a quiver q is mutation equivalent to a quiver then by theorem and proposition m q gives a presentation of the boolean reflection monoid m in everitt and fountain proved that the boolean reflection monoid m b respectively m bn b is isomorphic to the symmetric inverse semigroup in respectively the monoid of partial signed permutations hence our results recover the presentation of the symmetric inverse semigroup in defined in that is such presentation is exactly the presentation of m q for a quiver q the following example is given to explain theorem example we start with a quiver which is shown in figure a let be the quiver obtained from by a mutation at bing duan li and luo a b o figure a a quiver b the quiver q it follows from definition that m e i m e i then m m is an inverse monoid isomorphism defined by ti if i si ti otherwise inner by diagram automorphisms of boolean reflection monoids we first consider inner automorphisms of boolean reflection monoids it is well known that for any group g inn g g where inn g is the inner automorphism group of g z g is the center of it has been shown in that automorphisms of the boolean reflection monoid m b are inner for every automorphism of m b there exists a uniquely determined element g w of the weyl group w such that t gtg for all t m b in other words the automorphism group of m b is naturally isomorphic to w w for n and the automorphism group of m b is naturally isomorphic to w 
as a generalization of the above result we have the following theorem theorem the inner automorphism group of m b is naturally isomorphic to w w where bn n dn proof let be an inner automorphism of m b since w m b is the unique unit group of m b inn w w w in the following we prove the cases of bn n dn let be one of and shown in figure suppose that is a set of generators of m b for any element g w we claim that the set is still a set of generators of m b firstly it is obvious that gsi e and nextly we will prove that satisfies in definition case there is no edge between i and j then gsi gsj gsi sj gsj si gsj g gsi case there is an edge labeled by between i and j then gsi gsj gsi gsi sj si gsj si sj gsj gsi g gsj case there is an edge labeled by between i and j then gsi gsj gsi gsj gsi sj si sj gsj si sj si g gsj gsi gsj gsi quiver mutations and boolean reflection monoids case there is no edge between i and then gsi gsi si gsi g case there is an edge labeled by between i and then gsi gsi gsi si si si si g gsi gsi g gsi case in type bn g g case in type dn g g g g finally we shall show that for any z w it suffices to prove that where is the longest element in w and is an involution in section of we have wn where wi in type bn and wi for i and in type dn then by and of definition wn wn wn wn in type bn in type dn therefore the inner automorphism group of m b is isomorphic to w w for bn n dn let be one of and shown in figure let q be a quiver let i be the set of vertices of q and q the quiver obtained by a mutation of q at a mutable vertex following barot and marsh s work one can define variables bing duan li and luo ti for i i and in m as follows sk si sk if there is an arrow i k in q possibly weighted ti si otherwise sk sk if there is an arrow k in q possibly weighted otherwise from lemma and equation it follows that new elements ti i i appearing in the procedure of mutations of quivers must be some reflections of weyl groups by our theorem and proposition up to isomorphism boolean reflection monoids are encoded by their generalized coxeter diagrams see figure in the following theorem we show that inner by diagram automorphisms of boolean reflection monoids can be constructed by a sequence of mutations preserving the same underlying diagrams theorem let q be a quiver and m q the corresponding boolean reflection monoid generated by a set s consisting of simple reflections and then is an inner by diagram automorphism of m q if and only if there exists a sequence of mutations preserving the underlying diagram such that s can be obtained from q by mutations in particular all reflections in w and g for g w can be obtained from q by mutations proof let be one of and shown in figure suppose that s for or s for in the case of type an all automorphisms of m b are inner see a sequence of mutations preserving the underlying diagram of q induces to an inner automorphism of m b all automorphisms of weyl groups that preserve reflections are inner by diagram automorphisms see lemmas and so we assume without loss of generality that m q i where ti gsi for i n g w bn respectively g w dn we claim that g firstly if then is a set of generators of m q and the generalized coxeter diagram corresponding to preserves the underlying diagram since ti ti for i n we have z w respectively z w where w i respectively w i the variable must be of the form for some w bn respectively w dn so is not the longest word in w respectively w therefore must be the unique identity element in w respectively w and hence conversely for each 
inner automorphism of m q by theorem there exists an element g w of the weyl group w m q such that t gtg for all t m q the remainder proof of the necessity is similar to the proof of the necessity of theorem every reflection in w is of the form gsi where g sik w is a reduced expression for by the same arguments as before we mutate the sequence ik ik starting from q we get gsi g and quiver mutations and boolean reflection monoids cellularity of semigroup algebras of boolean reflection monoids in this section we show that semigroup algebras of boolean reflection monoids are cellular algebras we use these presentations we obtained to construct new cellular bases of such cellular algebras let r be a commutative ring with identity recall that a semigroup s is said to be cellular if its semigroup algebra r s is a cellular algebra proposition the boolean reflection monoid m b for bn or dn is a cellular semigroup proof all maximal subgroups of the boolean reflection monoid m b are finite reflection groups it has been shown in that any finite reflection group w is cellular with respect to which the is inversion therefore for each d of m b the subgroup hd m b is cellular with cell datum md cd id which satisfies east s first assumption see theorem in or theorem we define a map i r m b r m b x x rj gj rj j j where rj r gj m b the map i is an and i e f g d e f g d f e g d f e id g d for any g hd e d f in m b from theorem in or theorem it follows that the boolean reflection monoid m b is a cellular semigroup as required remark the case that a finite inverse semigroup whose maximal subgroups are direct products of symmetric groups has been considered by east see theorem of the boolean reflection monoid of type is isomorphic to the symmetric group sn of degree maximal subgroups of the boolean reflection monoid of type bn are finite reflection groups of type br r n which are isomorphic to sr let be one of and shown in figure for two quivers with the same underlying diagrams appearing in the mutation class of quivers we always use their presentations to construct inner by diagram automorphisms of boolean reflection monoids see theorem and then we extend it an automorphism of semigroup algebras of boolean reflection monoids by corollary we obtain new cellular bases of semigroup algebras of boolean reflection monoids an example let in be the symmetric inverse semigroup on n n let w be a partial permutation on a set a n and denote the image of i a under the map w by wi and the image of i a under the map w by wi we denote w by the sequence wn for example is the partial permutation with domain and range under which the following example gives new cellular bases of r by the method of quiver mutations example let be the quiver in example and by the results of preceding sections m a boolean reflection monoid we have m bing duan li and luo where each di is the set of all elements of m of rank i and idempotents in each di are the partial identity permutation on of let a be an of as shown in example of the containing the idempotent ida is the subgroup x im x dom x a it is well known that the group algebra r sn has cellular bases with respect to which the is inversion indeed the bases and the murphy basis both have this property see example of example of or section of take and by mutating we obtain the following isomorphic quivers theorems and a b c f d e from theorem and proposition it follows that the inverse monoids determined by quivers a f are isomorphic to the symmetric inverse semigroup respectively the 
presentation of determined by the quiver a admits an initial cellular bases by theorem of or theorem we can construct an automorphism of r using these presentations corresponding to quivers a f and then by corollary we obtain new cellular bases of r mutations of quivers of finite type throughout this section let as before be one of and in figure we consider the way of mutations of quivers and the oriented cycles appearing in them refer to a quiver without no loops and no is said to be of finite type if it is mutation equivalent to a dynkin quiver a chordless cycle is a cycle such that no two vertices of the cycle are connected by an edge that does not itself one can show proposition or proposition that all chordless cycles are oriented in the mutation classes of dynkin quivers we extend the results in and corollary to the case of quivers lemma let q be a quiver and k a mutable vertex of q suppose that k has two neighbouring vertices then the induced subquivers of q containing vertex k and its neighbours are shown in figure the effect of the mutation of q at k is shown in each case quiver mutations and boolean reflection monoids k a i b i i i j k a b c i k e k g k i i d k f i i j k k k i k j k i i k b i i j j k i k i j b k i k i k i k j i j k i i j f k k i d k j k b k j i k e j k c k k k o i k i figure subquivers of mutations of q in a diagram a vertex is said to be connected to another if there is an edge between them let q be a quiver mutation equivalent to a dynkin quiver in lemma of barot and marsh have described the way vertices in q can be connected to a chordless cycle a vertex is connected to at most two vertices of a chordless cycle and if it is connected to two vertices then the two vertices must be adjacent in the cycle the following lemma is a generalization of barot and marsh s results lemma lemma let q be the mutation of q at vertex we list various types of induced subquivers in q and corresponding cycles in then every chordless cycle in arises in such a way bing duan li and luo k a k a k k k e b k d h o k k k o k k o d i i j o k k d k v j d i i j o o k o i k k f o i j k i o i k j k i k i k k k i g o e k i k k k i i k k d o i c k k k b k c k k v i o d j the vertex k does not connect to an oriented chordless cycle c in q then c is the corresponding cycle in quiver mutations and boolean reflection monoids k the vertex k connects to one vertex of an oriented chordless cycle c in q via an edge of unspecified weight then c is the corresponding cycle in by lemmas and we have the following corollary corollary let q be a quiver in the mutation class of a quiver then the frozen vertex in q has one neighbour or two neighbours and if it has two neighbours then must be in an oriented cycle cycle relations and path relations in this section we find an efficient subset of the relations sufficient to define boolean reflection monoids which generalizes barot and marsh s results lemmas and proposition in lemma lemmas and let q be a dynkin quiver and w q the reflection group determined by q see section if q contains a chordless cycle cd see figure d then the following are equivalent a sa e with subscripts modulo d for a single fixed value of a a d b sa e with subscripts modulo d for any a d if q contains a chordless see figure then the following are equivalent a e b furthermore if one of the above holds then the following holds c if q contains a chordless see figure then the following are equivalent a e b furthermore if one of the above holds then the following holds c e d o o figure a 
chordless cd a chordless and a chordless bing duan li and luo let be one of and in figure suppose that q is any quiver mutation equivalent to a quiver the following lemma shows an efficient subset of relations and in definition which generalizes the above lemma lemma let m q be an inverse monoid with generators subjecting to relations in definition if q contains a chordless cycle see figure then the following statements are equivalent a b furthermore if one of the above holds then the following statements are equivalent c d if q contains a chordless cycle see figure then if q contains a chordless cycle see figure then the following statement holds a b furthermore if one of the above holds then the following statements are equivalent c d if q contains a subquiver see figure then the following statements are equivalent a sa sd p p sd sa p p for a single fixed value of a a d b sa sd p p sd sa p p for any a o o d figure a chordless a chordless a chordless and a subquiver see lemma quiver mutations and boolean reflection monoids proof for the equivalence of a and b follows from using suppose that a and b hold then by a and b the equivalence of c and d follows from for the equivalence of a and b follows from using first and then suppose that a and b hold using first and then a we have by where in the last equation we used that and commute using a and b by a similar argument we have therefore c and d are equivalent for using it is obvious at an end of this section we show that m q could be defined using only the underlying unoriented weighted diagram of q by taking relations corresponding to both q and qop as the defining relations our result can be viewed as a generalization of proposition of proposition let m b be a boolean reflection monoid with generators si i i then the generators satisfy with respect to q if and only if they satisfy with respect to qop bing duan li and luo proof we assume that generators si i i satisfy relations with respect to q and show that these generators satisfy relations with respect to qop the converse follows by replacing q with qop since and do not depend on the orientation of q generators si i i satisfy relation and with respect to qop the cases of chordless cycles appearing in quivers of finite type have been proved in proposition of the remaining needed to check the cases are and shown in figure case in we have case in we have case in note that we have case in it follows from lemma that do not depend on the orientation of chordless cycles in since every chordless cylce in qop corresponds to a chordless cycle in q the result holds the proof of theorem in this section we give the proof of theorem let be one of and in figure we fix a quiver q let q be the mutation of q at vertex k k i throughout the section we will write si and ri for the generators corresponding to vertex i i of m q and m respectively similar to we define elements ti i i and in m q as follows sk si sk if there is an arrow i k in q possibly weighted ti si otherwise sk sk if there is an arrow k in q possibly weighted otherwise then sk si sk sk si sk e if there is an arrow i k in q possibly weighted e otherwise sk sk sk sk if there is an arrow k in q possibly weighted otherwise quiver mutations and boolean reflection monoids in order to prove theorem we need the following proposition which we will prove in section proposition for each i i the map m m q ri ti is an inverse monoid homomorphism proof theorem for each vertex i i of q define the elements in m as follows rk ri rk if there is an arrow k i 
in ri otherwise rk rk if there is an arrow k in otherwise we claim that these elements for each vertex i i satisfy the relations defining m q this follows from proposition by interchanging q and and using the fact that the definition of m q is unchanged under reversing the orientation of all the arrows in q see lemma therefore there is an inverse monoid homomorphism m q m such that si for each i if there is no arrow i k in q then there is also no arrow k i in and consequently ri si ri if there is an arrow i k in q then there is an arrow k i in and therefore ri sk si sk sk si sk rk rk ri rk rk ri so idm and similarly idm q and hence and are isomorphisms the proof of proposition we will prove proposition by showing that the elements ti i i satisfy the relations in m we denote by the value of mij for by equation is obvious in the sequel the proof that the elements ti i i satisfy in m follows from lemma and the rest of proof is completed case by case lemma the elements ti for i a vertex of q satisfy the following relations if i k or j k and i j then ti tj mij if at most one of i j is connected to k in q or equivalently in and i j then ti tj mij let i be in i then if i are not connected in ti ti ti ti ti ti ti if i are connected by an edge with weight in ti ti if i are connected by an edge with weight in proof in lemma of barot and marsh proved the parts and we only need to prove the part bing duan li and luo suppose without loss of generality that i the only nontrivial case is when there is an arrow k i with a weight q in q if q then tk tk sk sk sk sk sk sk sk sk sk sk sk sk sk sk sk sk tk tk sk sk sk sk sk sk sk sk sk tk if q note that sk sk we have tk sk sk sk sk sk tk in the following suppose that i we divide this proof into three cases case there are no arrows from i to k then ti si hold case there are arrows from one of i to k and there are no arrows from the other of i to k in q then we assume that there are arrows from to k and there are no arrows from i to k in q if i are not connected in q we have ti si sk sk sk sk si ti if i are connected by an edge with weight in q then ti sk sk si sk sk sk si sk sk si sk si sk sk si sk sk si ti ti si sk si sk si sk sk si sk sk ti ti that i are connected by an edge with weight and there are no arrows from i to k in q is impossible because of the fact that there is only cycle in the mutation class of quivers and corollary case there are arrows from i to the possibilities for the subquivers induced by i and k are enumerated in a g of figure we show that ti and satisfy by checking each case within each case subcase i is when the subquiver of q is the diagram on the left and subcase ii is when the subquiver of q is the diagram on the right i we have ti sk si sk sk sk sk si sk sk si sk sk sk sk si sk ti ii we have ti si si ti i we have ti sk si sk si sk si si sk si s i s s k s s k s i s s i s k s i s s i s k s i s s k s i s k s s k s i s k ti ti si sk sk si si sk si si sk si sk si sk sk si sk ti ti ii we have ti si sk sk sk sk si sk sk sk sk si sk sk sk sk si ti i we have ti sk sk si sk sk sk si sk si sk sk si sk si sk sk si sk sk si sk sk si sk si sk si sk sk si sk sk si ti ti sk si sk sk si sk si sk si sk si sk si sk sk si sk sk ti ti ii we have ti sk si sk si sk si si sk si sk si sk ti i we have ti sk si sk sk sk sk si sk sk si sk sk sk sk si sk ti quiver mutations and boolean reflection monoids ii we have ti si si ti i note that si sk sk and sk si sk we have ti sk si sk sk sk ti sk si sk sk sk ii we have ti si sk sk si sk sk si sk sk si si sk sk si 
sk sk si sk sk si ti f i note that si sk sk and sk si sk we have ti si sk sk sk sk ti sk sk si sk sk f ii we have ti sk si sk sk si sk sk sk sk sk sk si sk sk si sk ti i note that sk sk we have s i s s i s ti ti ti sk sk si sk sk si s s i s s i ti ti ii note that sk sk we have si sk si si sk sk si sk sk si sk ti ti ti sk si sk sk si sk si si sk si sk sk si sk ti ti the possibilities for chordless cycles in mutation classes of quivers are enumerated in lemma for barot and marsh proved in that i holds for a e we show that ii and iii hold by checking in each case we need to check that the corresponding cycle relations hold within each case subcase i is when the subquiver of q is the diagram on the left and subcase ii is when the subquiver of q is the diagram on the right in the sequel we frequently use and without comment i we have tk ti tk sk sk si sk sk si si sk sk si sk sk tk ti tk i we have ti tk ti sk sk si sk si sk si sk sk si sk si sk si sk sk ti tk ti i we have tk ti tk sk sk si sk sk si si sk sk si sk sk tk ti tk i we have ti tk tk si sk sk sk sk si si sk sk sk sk si tk tk ti i note that sk sk and sk si si si si sk we have ti tk ti sk sk si sk si si sk si si sk sk si si sk si si sk si si sk sk si si sk si si sk si sk sk ti tk ti bing duan li and luo ii note that sk sk and si sk si si sk si we have tk ti ti sk sk si sk sk si sk si si sk si si sk si si si si sk si si sk si si sk si sk sk si sk sk ti ti tk f i note that sj sj and si sj sk sj si si sj sk sj si we have tk tj tk sk sk sj sk sk sj sj sk sk sj sk sk tk tj tk ti tj ti si sk sj sk si si sj sk sj si si sj sk sj si si sk sj sk si ti tj ti f ii note that sk si si sk and si sj si si sj si have ti tj tk tj ti sk sk si sj sk sj si sk si sk sj sk sj si sk si sj si sk sk si sj si sk si sj sk sj si sk sk ti tj tk tj ti i note that sj sj and sk sj si sj sk sk sj si sj sk we have tj tk tj sk sk sj sk sj sk sj sk sk sj sk sj sk sj sk sk tj tk tj tj ti tj sk sk sj si sj sj si sj sk sk tj ti tj ii note that sk si si sk and sj si sj sj si sj we have tk tj ti tj tk sk sk sj sk si sk sj sk sk sj sk si sk sj sj si sj sj si sj sk sk sj sk si sk sj sk sk tk tj ti tj tk i note that si sj sj si sk sk and sj sk si sk sj sj sk si sk sj we have ti tk tj tk si sk sk sj sk sk si sj sj si sk sk sj sk sk si tk tj tk ti tj ti tj sk sj sk si sk sj sk sk sj sk si sk sj sk sk sj sk si sk sj sk sk sj sk si sk sj sk tj ti tj ii note that sj si sj sj si sj we have tj tk ti tk tj sj sk sk si sk sk sj sj si sj sj si sj sj sk sk si sk sk sj tj tk ti tk tj case follows from either barot and marsh s result or or or case is trivial and case follows from the commutative property of tk and ti for each vertex i in for by lemma we prove the following several cases where in each case we number the vertices d of these subquivers for convenience within each case subcase i is when the subquiver of q is the diagram on the left and subcase ii is when the subquiver of q is the diagram on the right in the sequel we frequently use and without comment j k k k quiver mutations and boolean reflection monoids o o k d k v h k d i note that sk sk sk and we have p p p sk p sk p sk p sk p p p sk p sk p sk p sk p p ii we have p p tk p sk sk sk p sk p sk sk sk p p sk sk sk p sk p sk sk sk p p bing duan li and luo i we have p p p p p p p p p p p p p p p p ii we have p p p p p p p p p p p p p p p p i we have p p p p p p p p ii we have p p p p p p p p p p p p p p i we have td p p td sk sk sd p p sd sk sk sk sd p p sd sk sk sd p p sd sk p p p p quiver mutations and boolean reflection monoids ii 
we have tk td p p td tk sk sk sk sd p p sd sk sk sk sk sd p p sd sk sd p p sd p p p p acknowledgements duan would like to express his gratitude to everitt franzsen schiffler xi for helpful discussions duan was supported by china scholarship council to visit uconn department of mathematics and he would like to thank uconn department of mathematics for hospitality during his visit this work was partially supported by the national natural science foundation of china no the research of li on this project is supported by the minerva foundation with funding from the federal german ministry for education and research references bannai automorphisms of irreducible weyl groups fac sci univ tokyo sect i barot and marsh reflection group presentations arising from cluster algebras trans amer math soc no bourbaki lie groups and lie algebras chapters berlin coxeter the complete enumeration of finite groups of the form ri rj kij london math soc no charney and davis when is a coxeter system determined by its coxeter group london math soc no davis the geometry and topology of coxeter groups london mathematical society monographs series vol princeton university press princeton nj duan presentations of monoids of uniform block permutations ready j east cellular algebras and inverse semigroups algebra no braids and partial permutations adv math no generators and relations for partition monoids and algebras algebra everitt and fountain partial symmetry reflection monoids and coxeter groups adv math no partial mirror symmetry lattice presentations and algebraic monoids proc lond math soc no easdown and lavers the inverse braid monoid adv math no fitzgerald a presentation for the monoid of uniform block permutations bull austral math soc no fitzgerald and leech dual symmetric inverse monoids and representation theory austral math soc ser a no franzsen automorphisms of coxeter groups phd thesis university of sydney australia felikson and tumarkin coxeter groups and their quotients arising from cluster algebras int math res not imrn no coxeter groups quiver mutations and geometric manifolds lond math soc no bing duan li and luo fomin and reading root systems and generalized associahedra geometric combinatorics city math vol amer math providence ri pp fomin and zelevinsky cluster algebras i foundations amer math soc no cluster algebras ii finite type classification invent math no geck relative cells represent theory hecke algebras of finite type are cellular invent math no graham and lehrer cellular algebras invent math no j grant and marsh braid groups and quiver mutation pacific journal of mathematics no guo and xi cellularity of twisted semigroup algebras j pure appl algebra no tom halverson representations of the monoid algebra no humphreys reflection groups and coxeter groups cambridge studies in advanced mathematics cambridge university press cambridge howie fundamental of semigroup theory oxford university press new york haley hemminger landesman and peck artin group presentations arising from cluster algebras algebr represent theory no tom halverson and arun ram monoid algebras hecke algebras and duality j math sci no ji and luo cellularity of some semigroup algebras bull malays math sci soc no liber on symmetric generalized groups russian mat sbornik no popova defining relations in some semigroups of partial transformations of a finite set uchenye zap leningrad gos ped inst marsh lecture notes on cluster algebras zurich lectures in advanced mathematics european mathematical society ems mathas algebras and 
schur algebras of the symmetric group university lecture series american mathematical society providence ri murphy the representations of hecke algebras of type an algebra no i seven reflection group relations arising from cluster algebras proc amer math soc no schein and teclezghi endomorphisms of finite symmetric inverse semigroups algebra no tsaranov representation and classification of coxeter monoids european combin no wilcox cellularity of diagram algebras as twisted semigroup algebras algebra no xi partition algebras are cellular compositio math no cellular algebras available at https bing duan school of mathematics and statistics lanzhou university lanzhou china address li dept of mathematics the weizmann institute of science rehovot israel school of mathematics and statistics lanzhou university lanzhou china address quiver mutations and boolean reflection monoids luo school of mathematics and statistics lanzhou university lanzhou china address luoyf
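The quiver mutations used throughout the paper above can be made concrete with the Fomin-Zelevinsky exchange-matrix rule. The following Python sketch is purely illustrative (it is not code from the paper, and the linearly oriented A_3 quiver chosen as input is an arbitrary example): a quiver is encoded by an integer matrix B with B[i][j] > 0 meaning B[i][j] arrows from vertex i to vertex j, mutation at a vertex is applied, and the check that mutating twice at the same vertex returns the original exchange matrix is the quiver-level fact behind the repeated "mutate at a vertex twice" steps in the proofs above. For weighted (non simply laced) diagrams the same rule applies to a skew-symmetrizable exchange matrix.

    # Illustrative sketch of Fomin-Zelevinsky quiver mutation via the
    # exchange matrix B (B[i][j] > 0 means arrows from vertex i to vertex j).
    # Not code from the paper; the A_3 example below is an assumption.

    def mutate(B, k):
        """Mutation of the exchange matrix B at vertex k."""
        n = len(B)
        Bp = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i == k or j == k:
                    Bp[i][j] = -B[i][j]          # reverse arrows through k
                else:
                    # add/cancel compositions of paths i -> k -> j
                    Bp[i][j] = B[i][j] + (abs(B[i][k]) * B[k][j]
                                          + B[i][k] * abs(B[k][j])) // 2
        return Bp

    # Linearly oriented quiver of type A_3:  0 -> 1 -> 2
    B = [[0, 1, 0],
         [-1, 0, 1],
         [0, -1, 0]]

    B1 = mutate(B, 1)
    print(B1)                    # an oriented 3-cycle 1 -> 0 -> 2 -> 1 appears
    print(mutate(B1, 1) == B)    # True: mutation at a vertex is an involution

Mutating the linear A_3 quiver at its middle vertex produces the oriented chordless 3-cycle that features in the cycle relations above, and mutating again undoes it.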
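Since the Boolean reflection monoid of type A_{n-1} is identified above with the symmetric inverse semigroup I_n of partial permutations, small cases can also be examined by brute force. The sketch below is illustrative only; the generating set used (the adjacent transpositions together with the partial identity defined everywhere except at 0) is one standard choice and an assumption here. It generates I_4 and confirms the classical count |I_n| = sum_k C(n,k)^2 k!.

    # Brute-force sketch of the symmetric inverse monoid I_n of partial
    # permutations of {0,...,n-1}.  Illustrative only; the generating set
    # (adjacent transpositions + one rank n-1 partial identity) is a
    # standard choice assumed for this example.
    from math import comb, factorial

    def compose(f, g):
        """Partial-map composition: apply g first, then f."""
        fd = dict(f)
        return frozenset((x, fd[y]) for x, y in g if y in fd)

    def transposition(n, k):
        m = {i: i for i in range(n)}
        m[k], m[k + 1] = k + 1, k
        return frozenset(m.items())

    def partial_identity(n):
        """Identity map restricted to {1,...,n-1} (undefined at 0)."""
        return frozenset((i, i) for i in range(1, n))

    def generated_monoid(gens, identity):
        elements, frontier = {identity}, {identity}
        while frontier:
            new = {compose(f, g) for f in frontier for g in gens} - elements
            elements |= new
            frontier = new
        return elements

    n = 4
    gens = [transposition(n, k) for k in range(n - 1)] + [partial_identity(n)]
    I_n = generated_monoid(gens, frozenset((i, i) for i in range(n)))
    print(len(I_n), sum(comb(n, k) ** 2 * factorial(k) for k in range(n + 1)))  # 209 209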
4
sep the conjugacy ratio of groups laura ciobanu charles garnet cox and armando martino abstract in this paper we introduce and study the conjugacy ratio of a finitely generated group which is the limit at infinity of the quotient of the conjugacy and standard growth functions we conjecture that the conjugacy ratio is for all groups except the virtually abelian ones and confirm this conjecture for certain residually finite groups of subexponential growth hyperbolic groups artin groups and the lamplighter group introduction in this paper we introduce and study the conjugacy ratio of a group which is the limit of the quotient of two functions naturally associated to any finitely generated group conjugacy growth and standard growth more precisely if g is generated by the finite set x let bg x n denote the ball of radius n with respect to x and let cg x n denote the set of conjugacy classes of g which have a representative in bg x n then the conjugacy ratio of g with respect to x is crx g lim sup x n x n the motivation of this paper is twofold on one hand the conjugacy ratio of a finite group h is equal to the degree of commutativity dc h of h which measures the probability that two elements of the group commute and is defined as x y h h xy yx dc h the degree of commutativity of a group has received a lot of attention recently as its definition was extended to finitely generated infinite groups in to be dcx g lim sup x y bg x n ab ba x n as raised in it is natural to explore whether the degree of commutativity and the conjugacy ratio are related for infinite groups as well our second motivation comes from the fact that very few quantitative results comparing standard and conjugacy growth in groups exist in the literature while in any group there are fewer conjugacy classes than elements the gap between these two functions has not been been explored in detail and it is worth investigating for example the standard and conjugacy growth rates taking the limit of the nth root of the function at n are equal in some of the most frequently encountered families of infinite groups hyperbolic groups graph products date march mathematics subject classification key words and phrases conjugacy growth degree of commutativity polynomial growth raags hyperbolic groups wreath products laura ciobanu charles garnet cox and armando martino many wreath products thus in these examples the quotient of the two functions as a function of n must be at most subexponential and if the conjugacy ratio is the convergence to will not be very fast our starting point is the following conjecture inspired by conj conjecture let g be a group generated by a finite set x then crx g if and only if g is virtually abelian our results on the conjugacy ratio in several families of groups support conjecture in section we investigate groups of stable subexponential growth definition we first show that any virtually abelian group has crx g for any finite generating set x we then show that if n is a normal finite index subgroup of g then for any finite generating set x of g crx g dc this allows us to apply a technique from to show that any residually finite group g of stable subexponential growth which is not virtually abelian has crx g for any finite generating set x we also show in theorem that if g is a finitely generated virtually abelian group with finite generating sets x and y then crx g cry g we say that a group g with generating set x has stable subexponential growth x definition this includes all finitely generated if g x n groups 
since all finitely generated groups are residually finite the theorem below means that conjecture is true for all groups of polynomial growth theorem the conjugacy ratio for all finitely generated residually finite groups of stable subexponential growth that are not virtually abelian is zero with respect to all finite generating sets the proof of theorem can not be generalised to groups of exponential growth but we provide independent arguments for several important classes of groups of exponential growth theorem let g be a hyperbolic group then crx g for any finite generating set theorem let g be the lamplighter group that is the wreath product z then crx g for the standard generating set x defined in theorem let g gv xv be a artin group raag based on a graph v e with generating set xv then crxv g unless g is free abelian in which case crxv g we may also consider the strict or spherical conjugacy ratio where the counting is done in the sphere of radius n rather than the ball of radius n that is we may take the ratio of the strict conjugacy growth function over the spherical growth function more precisely let sg x n be the sphere of radius n in the group g s with respect to finite generating set x and let cg x n be the conjugacy classes that intersect sg x n but not bg x n that is those conjugacy classes with a minimal length representative in sg x n the spherical conjugacy ratio is then crsx g lim sup s x n x n the conjugacy ratio of groups remark by the theorem anytime the spherical conjugacy ratio turns out to be a limit the conjugacy ratio will be equal to this limit in particular if the spherical conjugacy ratio is then the conjugacy ratio is preliminaries recall that for a finitely generated group g with generating set x the exponential growth rate of g with respect to x is q expx g lim n x n definition a group g with finite generating set x is said to have exponential growth if expx g and subexponential growth if expx g this does not depend on the generating set additionally for any if expx g then for sufficiently large n x n n moreover if we replace balls with spheres we get the same limit and inequality we collect below a few results on convergence of series that will be relevant later theorem let an bn n be two sequences with bn strictly increasing and divergent if the lefthandside limit exists an an lim l lim bn bn proposition is a partial converse to the theorem it implies that for groups of exponential growth if the conjugacy ratio is a limit and the ratio of sizes of consecutive balls has a limit then the spherical conjugacy ratio is equal to the conjugacy ratio proposition let an bn n be two sequences with bn strictly increasing and divergent such that the lefthandside limit exists and bn then an an l lim bn bn lim proposition let an bn cn dn n be monotonically increasing sequences of positive integers define the sequences b cn and dbn as b and cn cn and dbn dn for n b suppose that i an bn and b cn dbn for all n an cn ii bn and dn as n then pn ai b lim n bi proof given fix an n such that abnn for all n n next choose an m n such that dcnn for all n m then for n m n n n n x x x bi ai b bi b laura ciobanu charles garnet cox and armando martino thus for n m pn pn pn pn ai b ai b ai b ai b pn pn pn b b b bi bi bi bi now we obtain the result by using the fact that for n m pn pn b cn ai b an an pn dn bi proposition let an bn cn dn n be sequences of positive integers satisfying the following properties i an bn are monotone sequences ii an bn and cn dn for all n iii abnn as n iv dbnn 
n for all sufficiently large n and for some then pn ai lim n bi proof given fix an n such that n x ai an bn n x for all n n then for n n bi n x bi thus for n n pn pn pn pn ai ai ai ai pn pn pn b d b d b d i i i bi and so it suffices to show that pn pn ai ai lim lim n bn bi now pn ai bn pn n n x x ai an an bn b b using hypothesis iv there is a sufficiently large n such that n n x x n n an an an n an n b results for groups of stable subexponential growth definition a group g with finite generating set x is said to be of stable subexponential growth if g x n the conjugacy ratio of groups note that being of stable subexponential growth implies that expx g and hence that the group has subexponential growth by the celebrated result of gromov every finitely generated group of polynomial growth where bg x n is bounded above by a polynomial function is virtually nilpotent all these groups are of stable subexponential growth since by a result of bass if g is a finitely generated virtually nilpotent group and x is any finite generating set then for some exponent d and constants a b and x n bnd the exponent d is calculated explicitly in for a virtually abelian group it is equal to the rank of a finite index free abelian subgroup from we get that for any positive integer k lim x n k x n the main result which we require for this class is the following proposition let g be a finitely generated group with stable subexponential growth and finite generating set x for every finite index subgroup h g and every g g we have bg x n bg x n lim lim x n x n g h furthermore if h is an infinite index subgroup of g then both limits are zero for any coset of remark the last statement does not appear explicitly in but follows easily from their arguments alternatively one could prove this via the construction of an invariant mean which requires the choice of an ultrafilter the stable subexponential condition ensures that any ultrafilter will do and hence that all limit points of the sequences above are equal from now on whenever there is no ambiguity concerning the group and its generating set we will write c n instead of cg x n and b n instead of bg x n proposition suppose that g is a finitely generated virtually abelian group then for any finite generating set x of g we have that crx g more precisely if g a m where a is abelian then crx g proof let g a m where a is abelian we note that g acts by multiplication on the right cosets of a if g and h lie in the same right coset then h for some a so for any a a ah a g ag since a is abelian thus there are at most m conjugates of each element a a and so for all n n now we have that n n m n n n n n n n n n n m which tends to by proposition lemma let g be a group of stable subexponential growth with finite generating set x let g g and let h be a finite index subgroup of for d n we have bg x n d lim x n g h laura ciobanu charles garnet cox and armando martino proof this follows from writing lim n d b n d b n d lim n n n d together with proposition and proposition let g be a finitely generated group of stable subexponential growth and n a subgroup of finite index in then crx g dc for any finite generating set x of proof let g n m so that g n n n for some gm let d max i m now consider if xn yn in then yn g xgn g xgg n g g xn g for some g moreover since x and y are conjugate in we may choose g from gm and so now let b n we know there must exist some xn such that g g but then g and so b n hence for every n n each element in b n yn is conjugate to some element in b n xn let xk gm be 
the representatives of the conjugacy classes in for every i n and every j zk we will assume that there are n xj n conjugacy classes in b n xj n hence pk n b n n n n which tends to by the previous lemma theorem conjecture is true for all finitely generated residually finite groups of stable subexponetial growth proof proposition states that if a finitely generated group g is virtually abelian then for any finite generating set x crx g for the other direction we apply the method of proof of thm by using proposition for completeness we will describe their argument it requires the following result from if f is a finite group and n e f then dc f dc dc n our hypotheses are that g is finitely generated residually finite of stable subexponential growth and not virtually abelian we wish to show that crx g for any finite generating set x we will work with finite quotients and will build a chain of normal subgroups since g is finitely generated we may choose these subgroups to be characteristic and will do this because being characteristic is transitive since g is not virtually abelian choose g that do not commute and using the residually finite assumption let where is a characteristic and finite index subgroup of hence is and by gustafson s result we have that dc now since the properties of g which we have used also apply to finite index subgroups this argument also applies to hence we may construct a descending chain of characteristic finite index subgroups ki g the conjugacy ratio of groups where for every i n dc moreover and so from dc dc dc dc by induction dc i and so by proposition for any finite generating set x of g we have that crx g dc i since this holds for every i n we obtain that crx g corollary conjecture is true for all finitely generated virtually nilpotent groups or equivalently all groups of polynomial growth virtually abelian groups the goal of this section is to prove theorem let g be a finitely generated virtually abelian group and x y be finite generating sets for then crx g cry g it will be useful to have the following shorthand definition let g be generated by the finite set x a subset s of g is x n generic if lim x n and negligible if the limit is given a group g with finite generating set x a finitely generated subgroup h of g is said to be undistorted if any word metric on h is equivalent to any word metric on g when restricted to this makes sense since any two finite generating sets on a group induce equivalent word metrics it is easy to see that a finite index subgroup is always undistorted and that a subgroup h is undistorted if and only if it has an undistorted subgroup of finite index retracts are also undistorted recall that a retract of g is the image of an endomorphism g g such that we now collect the following facts proposition suppose that g is a finitely generated virtually abelian group with finite generating set x having a subgroup of finite index isomorphic to zd i every subgroup of g is both finitely generated and undistorted ii let h be an infinite subgroup of let t n th x n for transversal be the number of cosets of h that have a representative in bg x n then lim t n x n proof i let h it is well known that h is finitely generated as this fact is true in the case where g is virtually polycyclic which includes the finitely generated virtually nilpotent and abelian case however the fact that h is undistorted is not true more generally and follows from the fact that every subgroup of a finitely generated free abelian group has finite index in a direct summand 
in our case h has a finite index subgroup which is a retract of a finite index subgroup of g and is therefore undistorted in ii from above h is finitely generated and undistorted since h is infinite it must contain an element of infinite order so there exists an such that bg x n more precisely bg x n will have polynomial bounds of degree e d e laura ciobanu charles garnet cox and armando martino let a b d be the constants in then hence nd x t n bg x n t n t n lim x n lim from now on we let g be an infinite finitely generated virtually abelian group let a be a normal finite index free abelian subgroup and b be the centraliser of a in note that a is a subgroup of b which therefore has finite index proposition let g be a finitely generated virtually abelian group and x any finite generating set for let a be a normal finite index free abelian subgroup and b be the centraliser of a in then the set of minimal length representatives in g b is negligible proof let y b be an element of g and denote by cya n the number of conjugacy classes which have a representative in bg x n ya then we claim that lim cya n x n for each conjugacy class with a representative in bg x n ya choose a shortest such representative and denote this set of representatives z yai ai a from these extract the set u ai rewriting the ai as geodesics if required note that for some fixed k the length of y we have cya n bg x n bg x n k now let my denote the automorphism of a induced by conjugation with y which we think of as a matrix for any a we have that y i my if we switch to an additive notation in a let h be the image of i my in a that is h h a y a ai since y b we can conclude that h is a subgroup of a and is therefore infinite moreover the elements of u are all in distinct cosets of hence by proposition part ii we may conclude that x n x n x x n th x x x x n proposition shows that the only elements of g that contribute to the conjugacy ratio are the elements of b the representative of a conjugacy class might not have a shortest representative in our particular coset ya but varying y we see that we have an overcount of the number of conjugacy classes in the complement of b which nonetheless gives thus the strategy for proving theorem is the following first note that each element of b has finite conjugacy class in we split the elements in b into those which centralise elements from outside of b and those whose centraliser is completely in b proposition shows the former ones form a negligible set and the conjugacy ratio of groups the latter ones a generic set of b corollary moreover for the latter ones the size of the class is the index of the which is constant for elements in the same therefore each coset or rather conjugacy class of cosets of a contributes a fixed amount to the conjugacy ratio which is algebraically determined we use the notation zk g for the of g g that is zk g k k k gk g lemma let x then zb x zb xa for any a a moreover g zb x if x b s proposition the set zb y is a finite union of infinite index subgroups of hence this set is negligible with respect to any finite generating set proof since zb y zb ya for any a a this is a finite union so it is enough to show that each zb y has infinite index in fact it is sufficient to show that za y zb y a is an infinite index subgroup of a however za y is a pure subgroup of a that is if am za y and m then a za y this implies that za y is a direct summand of a but since y b this direct summand can not be the whole of a and is therefore an infinite index subgroup of a as 
required corollary there is a generic set of elements of b with respect to any generating set whose centraliser lies entirely in b proof if for some b b there exists t b such that t b then b zb t s zb y which is negligible by proposition proof of theorem for each r let ar be the elements b b for which zb b has index r in g and therefore conjugacy class size r in g and let n b b zb b b that is s n is the set of elements of b whose centraliser does not fully lie in b then n zb y and so by corollary it is a negligible set since a zb b g for any b b and a has finite index in g there are only finitely many values for the index of zb b in g and thus finitely many r for which ar is moreover since zb y zb ya for any y b a and a a if y ar then ya ar so each ar is a union of and thus bg x n x n lim where is g a times the number of in ar so is independent of x it is easy to see that there is an integer k such that if two elements of b are conjugate in g then they are conjugate by an element of length at most k the same holds for ar as ar b moreover since b is normal in g it is easy to see that g acts on ar by conjugation g acts by conjugation on n and hence on ar n as well let cn be the number of conjugacy classes of g which meet bg x n and are contained in ar n then ar n bg x n rcn ar n bg x n the first inequality comes from the fact that each element of ar n has r conjugates in g and the second from the fact that each of the conjugates can be obtained from a conjugator of length at most laura ciobanu charles garnet cox and armando martino now cn x n ar cn bg x n x n x n x n x n and by and corollary cn lim lim x n bg x n cn x n x n r so we get lim x n ar x n r hence the number of conjugacy classes of g that meet ar is independent of the generating set summing over the finitely many r gives the result remark the same ideas as those just presented can be used to show that if g is a finitely generated virtually abelian group and x is any finite generating set then crx g inf n ef g cr that is the conjugacy ratio is equal to the infimum of conjugacy ratios of the finite quotients hence if one were to measure the conjugacy ratio using invariant means one would get the same numerical value unpublished results indicate that this is the same as the degree of commutativity for similar reasons the same is true whenever g is a finitely generated virtually nilpotent group the virtually abelian case being the key one results for other families of groups hyperbolic groups in this section we prove conjecture for hyperbolic groups we will write f n g n to mean f n n as n theorem let g be a hyperbolic group then crx g for any finite generating set x proof let g be a hyperbolic group with finite generating set x then by a result of coornaert see there are positive constants and integer such that for all n enh x n enh where h expx g by theorem in there are positive constants and such that enh enh x n n n for all n thus from and we get x n x n n for all n max and by taking the limit we obtain that crx g the conjugacy ratio of groups the lamplighter group we follow the notation in let i be a l set for g we write i for the ith component of and if l moreover i is a group and x i we define x g by x i i and say that x is the left translate of by x definition consider groups h and l with symmetric generating sets a and b and neutral elements e and respectively the wreath product of g by l written g l is defined as m h l h l where for m n h l m n mn l for h h let h be such that h and i e for i then x a a a b b generates h for 
the lamplighter group g z we let a a where a is the element of and let b be the standard generating set of theorem let g be the lamplighter group that is the wreath product z then crx g for the standard generating set x proof the statement immediately from example where it s is shown that x n n and the fact that x n by artin groups let v e be a simple graph a graph without loops or multiple edges with vertex set v and edge set for each vertex v of let gv be a group the graph product of the groups gv with respect to is defined to be the quotient of their free product by the normal closure of the relators gv gw for all gv gv gw gw for which v w is an edge of here we consider artin groups raags which are graph products with all gv z and denote by gv xv the raag based on the graph with generating set xv in bijection to v conjugacy representatives in a raag come to a large extent from taking one word out of each cyclic permutation class so we first establish the asymptotics of the language of cyclic representatives in a rather general setting example in a free group on the free generating basis counting the conjugacy classes with a minimal representative of length n is equivalent to counting the number of cyclically reduced words of length n up to cyclic permutation cyclic representatives of languages we follow the notation in section let l be a language over a finite alphabet x that is l x and let l n denote the set of length n in for n n n let ln wn w l and n l v v n l define prim l w l v l such that v k w to be the language of primitive words in suppose l is closed under cyclic permutations then we construct a language cycrep l of cyclic representatives of l out of the words wc where wc the word that is least lexicographically among all cyclic permutations of w for w l cycrep l wc w l laura ciobanu charles garnet cox and armando martino proposition see also lemma let l be an exponential growth language closed under cyclic permutations furthermore assume that lk l k and l l for all k then lim l n n proof for simplicity of notation let a n n p n l s n and c n l s n that is we consider the numbers of words of length exactly n in each write l as l primk l and notice that the number of cyclic representatives of length n in prim l is p n and the number of cyclic representatives of p p length nk in primk l is also p n thus a n p d and c n p d d let n and n be the standardpnumber theoretic and euler functions then by inversion p n nd a n and so p x x l l a d a d c n d n p p n which follows from d n and d d n since a n is exponential only the last term in the sum above is of the same magnitude as a n so c n l s n a n lim n n by we obtain the result conjugacy representatives in raags we first establish a result about the conjugacy ratio of direct products lemma let h and k be two groups with finite generating sets x and y respectively if either i crx h cry k or ii crx h and expx h expy k then h k proof we calculate the conjugacy ratio with respect to balls in h to do this we use balls in h and spheres in let an x n bn x n s tn y n and sn y n then pn ai h k lim sup bi if crx h cry k then by proposition putting tn cbn sn dc n we get that h k similarly if crx h and expx h expy k then proposition putting cn tn dn sn states that this limit is zero so h k since raags interpolate between free and free abelian groups the presence of commutativity does not allow us to simply consider cyclically reduced words up to permutation as in free groups we need to single out the words for which taking cyclic 
representatives produces conjugacy representatives and use crisp godelle and wiest s approach from cgw which was further developed in the conjugacy ratio of groups definition def cgw let v an and set the total order a cyclically reduced word w is in cyclic normal form if it is in the shortlex language sl gv xv of gv with respect to xv and all its cyclic conjugates are in sl gv xv as well not all elements posses a cyclic normal form for example if the word is in sl gv xv but its cyclic permutation is not to deal with this situation cgw divides the words over xv into split and definition definition cgw let w be a cyclically reduced word over xv and denote by w the full subgraph spanned by supp w let w be the graph complement of w i the word w is split if w is disconnected which amounts to being able to write w as a product of commuting subwords or blocks ii the word w is if w is connected iii let cycsl gv xv denote the set of all cyclic normal forms corresponding to words in gv we say that a group element is split if it can be represented by a cyclically reduced word which is split proposition prop cgw two cyclic normal forms represent conjugate elements if and only if they are equal up to a cyclic permutation proposition remark cgw let w and v be two cyclically reduced split words then they are conjugate if and only if w v and the words corresponding to the commuting blocks are conjugate respectively lemma let cycsl gv xv be the set of cyclic normal forms in gv the following hold cycslk gv xv cycsl gv xv for all k and cycsl gv xv is closed under cyclic permutations theorem let g gv xv be a artin group raag based on a graph v e with generating set xv then crxv g unless g is free abelian in which case crxv g proof we use induction on the number of vertices let n the result is trivial for n if g is a direct product then we get cr g if at least one of the factors has cr this follows from lemma i if both factors have cr and from lemma ii if say the first factor has cr as the second is by induction free abelian and of strictly smaller growth rate than the first we get cr g when each factor is free abelian so suppose g is not a direct product we split the conjugacy classes cgv xv of g into two types those which have a shortest length representative with support xu where u v and denote these by cgv and those which have a shortest length representative with support exactly xv and denote these by cgv by propositions and this is well defined moreover by propositions and two cyclically reduced words with support xu are conjugate in gv if and only if they are conjugate in gu note that if a word w cycsl gv xv where u v then w cycsl gu xu laura ciobanu charles garnet cox and armando martino thus we can write cgv cgv xv then implies that xv n xv n now for u v so s cgu xu and express the above as cgv cgu xu u v u v xu u v xu n n xv n xu n xu n xu n xv n xu n xv n xu n xu n crxu gu lim sup n gv xv gv xv n the right hand side is equal to since either i crxu gu by induction or ii gu is free abelian so of polynomial growth if ii since g itself if not a direct product by assumption it is of exponential growth and the last fraction is n v it remains to find lim n the second part of the right hand v xv side of since g is not a direct product all conjugacy representatives with support exactly xv are so it suffices to consider cyclic normal forms up to cyclic permutations that is lim sup n cycsl gv xv n xv n g xv n cycsl gv xv n gv xv n g xv n g xv n and by proposition applied to the language cycsl gv xv which 
satisfies the hypothesis of proposition by lemma lim this proves the result cycsl gv xv n g xv n reflections and open questions our results on the conjugacy ratio values are essentially identical to those on the degree of commutativity in that is the two quantities are equal for all the classes of groups we studied however we could not establish a direct general link between them question is the limsup in the definition of the conjugacy ratio a limit question what are the groups for which dcx g crx g or vice versa they are equal in the virtually nilpotent case in the hyperbolic group case and in many more as is the case for the degree of commutativity we do not know whether the conjugacy ratio might be influenced by a change of generators question does there exist a group g with finite generating sets x and y such that crx g cry g the conjugacy ratio of groups finally it would be interesting to unify the proofs confirming our conjecture for larger classes of groups such as all groups of exponential growth for example references and ciobanu formal conjugacy growth in acylindrically hyperbolic groups int math res notices martino and ventura degree of commutativity of infinite groups proceedings of the american mathematical society bass the degree of polynomial growth of finitely generated nilpotent groups proc london math soc burillo and ventura counting primitive elements in free groups geom dedicata ciobanu hermiller and mercier conjugacy growth in graph products preprint coornaert mesures de dans les espaces hyperboliques au sens de gromov pacific j math coornaert asymptotic growth of conjugacy classes in free groups ijac vol cox the degree of commutativity and lamplighter groups preprint https cgw crisp godelle and wiest linear time solution to the conjugacy problem in artin groups and their subgroups journal of topology vol and on some problems of a statistical iv acta math acad sci hungar mr gallagher the number of conjugacy classes in a finite group math z gustafson what is the probability that two group elements commute amer math monthly mr mercier conjugacy growth series in some wreath products preprint https parry growth series of some wreath products trans amer math valiunas degree of commutativity for artin groups preprint https university edinburgh uk address url http university of bath uk address cpgcox mathematical sciences university of southampton uk address url http
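A quick numerical illustration of the counting argument above (the proposition on cyclic representatives and the free-group example): writing p(d) for the number of cyclic classes of primitive words of length d, the garbled display appears to assert a(n) = sum_{d|n} d*p(d) and c(n) = sum_{d|n} p(d), so that p(n) = (1/n) sum_{d|n} mu(n/d) a(d) by Mobius inversion and c(n) behaves like a(n)/n. The sketch below is my own code, not the authors'; the generator names and helper functions are illustrative. It brute-forces the statement in the free group of rank 2 by enumerating cyclically reduced words and their cyclic-permutation classes.

```python
# Illustrative sketch (not from the paper): in F_2, compare the number of
# cyclic-permutation classes c(n) of cyclically reduced words of length n
# with a(n)/n, where a(n) is the number of such words.
from itertools import product

GENS = ["a", "b"]                                   # free generators; inverses are "A", "B"
ALPHABET = GENS + [g.upper() for g in GENS]

def is_inverse_pair(x, y):
    """True if y is the formal inverse of x (a <-> A, b <-> B)."""
    return x != y and x.lower() == y.lower()

def cyclically_reduced(n):
    """Yield all cyclically reduced words of length n over ALPHABET."""
    for w in product(ALPHABET, repeat=n):
        if any(is_inverse_pair(w[i], w[i + 1]) for i in range(n - 1)):
            continue                                # not freely reduced
        if n > 1 and is_inverse_pair(w[-1], w[0]):
            continue                                # reduced but not cyclically reduced
        yield w

def counts(n):
    """Return (a(n), c(n)): word count and cyclic-class count at length n."""
    words = list(cyclically_reduced(n))
    classes = {min(w[i:] + w[:i] for i in range(n)) for w in words}
    return len(words), len(classes)

for n in range(2, 9):
    a_n, c_n = counts(n)
    print(n, a_n, c_n, round(n * c_n / a_n, 3))     # last column drifts toward 1
```

Running this prints, for each n, the counts a(n) and c(n) together with the ratio n*c(n)/a(n), which approaches 1 as n grows, consistent with the proposition.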
4
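The split/non-split dichotomy used in the RAAG argument above is purely graph-theoretic: a cyclically reduced word is split exactly when the complement of the subgraph spanned by its support is disconnected. A minimal sketch of that test follows; it is my own illustration, not code from the paper, the graph encoding and the lower/upper-case convention for inverses are assumptions, and no shortlex normal forms are computed.

```python
# Minimal sketch (not from the paper): decide split vs non-split for a
# cyclically reduced word in a RAAG by checking connectivity of the
# complement of the subgraph spanned by the word's support.
from collections import deque

def support(word):
    """Generators occurring in the word; 'A' is treated as the inverse of 'a'."""
    return {ch.lower() for ch in word}

def complement_is_connected(vertices, edges, word):
    """True iff the complement of the induced subgraph on supp(word) is connected,
    i.e. the word is non-split; False means the word is split."""
    supp = support(word)
    # adjacency of the COMPLEMENT graph restricted to the support
    adj = {v: {u for u in supp if u != v and frozenset((u, v)) not in edges}
           for v in supp}
    start = next(iter(supp))
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                queue.append(u)
    return seen == supp

# Example defining graph: the path a - b - c (edges mean the generators commute).
V = {"a", "b", "c"}
E = {frozenset(("a", "b")), frozenset(("b", "c"))}
print(complement_is_connected(V, E, "ab"))   # a, b commute -> complement disconnected -> split (False)
print(complement_is_connected(V, E, "ac"))   # a, c do not commute -> non-split (True)
```

On the path graph a-b-c, "ab" is a product of two commuting blocks and hence split, while "ac" has connected support complement and is non-split.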
complex systems science meets and iot nicola marchetti irene macaluso nicholas kaminski merim dzaferagic majid butt marco ruffini saul friedner julie bradford andrea zanella oct michele zorzi and linda doyle abstract we propose a new paradigm for telecommunications and develop a framework drawing on concepts from information different metrics of complexity and computational agent based modeling theory adapted from complex system science we proceed in a systematic fashion by dividing network complexity understanding and analysis into different layers modelling layer forms the foundation of the proposed framework supporting analysis and tuning layers the modelling layer aims at capturing the significant attributes of networks and the interactions that shape them through the application of tools such as modelling and graph theoretical abstractions to derive new metrics that holistically describe a network the analysis phase completes the core functionality of the framework by linking our new metrics to the overall network performance the tuning layer augments this core with algorithms that aim at automatically guiding networks toward desired conditions in order to maximize the impact of our ideas the proposed approach is rooted in relevant architectures and use cases in networks internet of things iot and cellular networks index terms complex systems science modelling internet of things nicola marchetti irene macaluso nicholas kaminski merim dzaferagic majid butt marco ruffini and linda doyle are with connect the centre for future networks and communications trinity college the university of dublin ireland saul friedner and julie bradford are with real wireless uk andrea zanella and michele zorzi are with the university of padova italy this material is based upon works supported by the science foundation ireland under grants no and i ntroduction the transition of humanity into the information age has precipitated the need for new paradigms to comprehend and overcome a new set of challenges specifically the telecommunication networks that underpin modern societies represent some of the largest scale construction and deployment efforts ever attempted by humanity with renovations occurring nearly continuously over the course of decades this results in networks that consist of numerous subsections each following its own trajectory of development commingled into a cacophony a few emerging trends confirm the picture just drawn mobile and wireless networks are getting denser and more heterogeneous in nature nodes in the network vary hugely in form and functionality ranging from tiny simple sensors to sophisticated cognitive entities there is a wider range of node and parameters to set many of which are interdependent and which impact heavily on network performance networks are becoming more and more adaptive and dynamic and many parameters are set during in response to changing contexts as networks evolve all of the above issues become more exaggerated networks will see more antennas more base stations and devices more modes of operation more variability and more dynamism in a world like that there is no way to systematically capture network behaviour there is no straightforward network theory or information theoretic approach that can be used to describe the overall network or the interplay between the different networks we propose to tackle this by studying wireless networks from the perspective of complex systems science css developing complexity metrics and relating them to more traditional 
measures of network performance one of the key questions in css relates to the degree of with the term complexity we refer to a specific set of complex systems science quantities related to the interactions between network entities rather than to entities themselves and between networks as the current and future trend is towards more diverse networks coexisting and more entities within iot or ultra dense small cell networks the amount of interactions will increase leading to an increase in complexity in the meaning given to the word by complex systems science organization of a system in terms of both the difficulty in describing its organizational structure a and the amount of information shared between the parts of the system as a result of the organizational structure b for example the measure of excess entropy type a can be used to describe the behaviour of a collection of networks while the signalling complexity associated with future network resource management can be analyzed through a type b measure functional complexity introduced in the above conceptual structure based on complexity informs an modelling abm paradigm to examine the interactions between the different entities that shape a network abm provides a method of modelling complex systems from the ground up which allows for a deeper investigation of the interactions that shape the ultimate system performance abm provides powerful modelling of entities in a variety of areas and contexts the attributes of abm can be applied to inform communication networks decision making in particular abm can be used to investigate the impact of several medium access control mac component technologies on the key performance indicators kpi of both telecom networks and applications for example in the case of a wireless sensor network aiding an internet of things iot system in summary we propose a new paradigm for telecommunications drawing on concepts of a complex systems science nature to understand and model the behaviour of highly heterogeneous networks and systems of networks we also employ our framework to create new technologies for supporting network operation ii m otivation we propose the development of a conceptual framework as a means of exploring a broad range of possibilities in wireless networks including a vast array of technological possibilities this framework for thought applies concepts from complex systems science to provide a means to understand wireless networks holistically on a variety of scales specifically we consider the communication patterns that enable network functions by capturing all nodes necessary to perform a given function then by drawing connections between these nodes we highlight their functional dependencies we call a graph obtained in this way functional topology this approach allows us to analyze the communication patterns on multiple scales the lowest scale models the communication between individual in other words the lowest scale focuses on the communication between a node and all the immediate neighbors of this node in the functional topology the second scale models the communication between a node all its immediate neighbors and all neighbors of its neighbors the increasing scale size moves the focus away from the communication between individual nodes and allows us to analyze communication patterns between groups of nodes functional considering the high degree of heterogeneity and dense interplay of network elements in proposed and iot systems achieving a holistic understanding of network 
operation is poised to become an even more challenging prospect in the near future to address these challenges we demonstrate the power of our framework for the modeling and analysis of relevant scenarios cellular and iot networks while our framework supports innovation beyond these concepts we feel these scenarios adequately represent the applications of our work the development of our concept is organized in a layered fashion with a modelling layer forming the foundation of the framework and supporting analysis and tuning layers the main aspects of our framework are represented in fig and will be discussed in detail in the remainder of the paper as compared to the css literature addressing communication systems we study wireless networks from the infrastructure perspective as a simple example in excess entropy is used to measure complexity and in combination with entropy leads to an understanding of the structure emerging in a lattice of networks the systems modelling analysis communication metrics css metrics functional topology graphs links between communication css metrics constraints technological behaviours abm parameters and range values tuning guidelines for tuning network local rules global fitness adaptive resource allocation fig our complex systems science based layered approach to networks functional topology graphs are abstracted from the network and are then used to compute complexity and telecom metrics and find their relations the understanding of such relations will then feed an abm approach to network tuning studied in exhibit a complex behaviour and this relates to robustness against changes in the environment in particular exploring frequency planning from a complex systems perspective leads to conclude that future networks shall eschew any current frequency planning approaches and instead determine frequency of operation on the fly this has enormous implications for design and of networks deployment of small cells and network operation iii m ethodology significant impacts have been made by css in a wide range of areas including physics biology economics social sciences computer sciences and various engineering domains we claim that the css perspective provides the necessary means to redefine the general understanding of telecommunication networks we draw on concepts from information theory and abm each concept augmenting and developing the understanding of wireless networks we will now briefly review some of the most important tools and concepts we use in our studies in order to specify and analyse the complexity of a network function introduced a framework representing an abstraction of a telecommunication network by modelling its operation and capturing all elements nodes and connections necessary to perform a given function our framework includes functional topologies graphs created based on the functional connectivity between system entities see fig a node in our topology represents a functional entity of a network node or any information source that is part of the given network function the links indicate dependencies between nodes the definition of functional topologies allows us to visualise the relationships between system entities and enables the systematic study of interactions between them based on these topologies one can define css inspired metrics such as functional complexity which quantifies the variety of structural patterns and roles of nodes in the functional topology or other information metrics modelling abm is a useful method to model networks in 
abm was used to investigate the impact of several mac component technologies in terms of both telecom and iot application s key performance indicators kpi this is key for our framework s analysis and tuning layers our framework enables modelling analysis and tuning of wireless networks in which changes in the networks domain can be analysed and assessed indeed in order to maximize the impact of our framework our proposed approach is rooted in relevant architectures and use cases in networks such as cellular and iot networks the use cases define the expected parameters types of and environments a general set of possible scenarios we could investigate using our framework is shown in table i table i possible use cases parameters type of users environments low latency high throughput high reliability extensive coverage energy efficiency typical mobile broadband healthcare automotive automation wearable devices busy train station location busy office large plant a solution approach our framework is based around the idea of using concepts tools and measures of a complex systems science nature the framework is based on a modelling layer which supports the analysis and tuning layers see fig modeling layer the modelling phase focuses on developing techniques to capture the significant attributes of networks and the interactions that shape them along with the traditional attributes used to characterize networks coverage and throughput the modelling phase develops new complexity metrics and investigates their relation to telecom kpis these metrics shall be developed distinctly for each application based on existing and new concepts we draw from css the modelling component of the framework develops appropriate abstractions and formalisms to enable metric calculation to this end we produce a abstraction for networks the first level or device level of this abstraction focuses on individual elements within a network targeting the interplay that results from information being collected and used locally by a single entity interference and stability of the connection as a function of power available at the node between nodes in a network are two examples of notions studied at the device scale available local information may as in the case of the interference perceived at a certain network node or may not battery level result from the actions of other nodes that is the device scale typically models the implicit exchange of information where nodes infer information for each other s actions without directly exchanging messages such as the paradigm of a distributed time division multiple access tdma system the higher scales model the explicit exchange of information between groups of nodes in the network at this level interaction scale the nodes act on the basis of information provided by some other node directly as occurs for example when assigning a slot in a centralized tdma system the interactions that shape the network formation and operation are directly modelled using abm our model considers the interactions between the interests of different network operators these agents operate in a hierarchical fashion see fig with network operator agents who in turn contain that determine specific aspects of the network based on technical behaviours anything that makes decisions in a network can be viewed as an agent and abm is applied to model interactions between agents for example iot agents may attempt to use the infrastructure provided by operator agents as shown in fig to capture the range of possibilities we 
can use nested subagents in which major agents might represent a whole network with subagents representing individual cells abm allows conversion of experience with detailed processes behaviours into knowledge about complete systems macrolevel outcomes in general we can consider several radio resources in our abm model resources belonging to frequency power and space domains several alternative techniques and technologies can be applied within each domain which entails a wide set of resources and related modes of utilisation analysis layer in the analysis layer the models are reviewed to determine the representative power and meaning of the metrics developed by linking i the operator behaviours with our new css metrics ii the operator behaviours with network kpis fitness iii our new css metrics and kpis as an example we could analyse the relationship between operator decisions on the amount of shared resources infrastructure spectrum and the resulting network characteristics for each scenario measures of network performance can be identified including standard network operator agent cellular network agent cell agent access point agent iot agent network operator agent cellular network agent network operator agent wifi network agent wifi network agent wifi network agent cellular network agent iot agent iot agent fig agent organization our agent model is hierarchical with major agents representing a whole network and subagents representing iot agents or individual cells and access points agents operator kpis such as cell edge peak and mean throughput spectrum utilisation relative to available bandwidth network reliability and coverage for each type of the above mentioned relations i ii iii we can determine the most promising pairing of elements operator behaviour and css metric or css metric and kpi within each scale and between scales for determining connections in particular we can identify which behaviours correlate to specific network performance measures on each scale and to what extent and how our css metrics describe these relationships further we can investigate how a certain css relation at a certain scale affects another css relation at a different scale a strategy leading to throughput maximisation at the device level might compromise the fairness objective of the resource allocation scheduler at the interaction level this process involves assessing the ability of the css metrics to describe the impact of operator behaviours analysing the effect of these behaviours on the network kpis and finally describing the network kpis in terms of the css metrics determining the link between network css metrics and kpis would allow us to attempt to answer fundamental questions such as whether one needs a minimum complexity for achieving a given level of kpi fitness and what excess complexity implies in terms of adaptivity and robustness cost in summary the analysis layer completes the development of the core of our framework by establishing a compact representation of the networks by linking complexity metrics to network performance tuning layer the tuning layer augments the framework with algorithms that automatically guide the operation and management behaviours of relevant agents to achieve desired network properties this tuning approach utilizes the holistic information encoded into the complexity based quantities to select appropriate parameters and constraints for the behaviours of the agents the developed tuning approach can be based on the application of optimization techniques the 
algorithms to be developed within this paradigm might apply optimization algorithms pgen successive pareto optimization to determine the pareto fronts for the state spaces of the agent behaviours on the basis of achieving desirable css metrics values these pareto fronts provide the parameters and constraints of the operator behaviours allowing operators to further optimize for specific differentiations while maintaining desired holistic properties a particular solution may be selected from the pareto front on the basis of agent preferences such as a preference for high adaptivity and robustness or low complexity without compromising the overall quality of the solution iv a pplications of the p roposed f ramework a modeling layer modelling of the internet of things we employ an instance of our framework concept to investigate the tightened coupling between operative reality and information transfer precipitated by iot as such this investigation resides primarily in the modelling phase with some extension into the analysis phase within this work we apply the tool of abm to study the impact of communications technologies within the scope of iot an automatic traffic management system is considered where for the purposes of illustrating dm fig single intersection diagram sensors are deployed alongside the roads and are represented as dots inactive sensors are depicted as black dots sensors detecting moving and static cars are shown as orange and purple dots respectively the nature of our abm approach a single intersection is assumed depicted in fig controlled with traffic lights in which the avenue of the is observed by sensor nodes a processing unit here denoted as the decision maker dm serves as the sink of sensor information and the source of light control commands sensor nodes mark the advancement of cars here portrayed as yellow squares and proceeding on the left side of the roadway toward the intersection two mac protocols csma and aloha are investigated for communication between the sensors and the dm the dm applies the resultant information from this process to govern vehicular progress through the coloration of traffic signals notably the semantics of communications greatly impact the operation of the physical system fig exemplifies this notion through a depiction of the difference between the actual number of cars waiting at a traffic light and the perceived number of cars known to the dm component of the system as revealed by abm the minor difference of a channel csma or not aloha causes either an or an of the actual number of vehicles by the controlling element in the system as such the application of abm techniques allows the development of an understanding of the various that direct the behavior of a complete telecommunication system fig impact of mac on perception of situation on a scenario vehicles always travel in a straight line at constant speed unless they need to stop due to traffic lights or other cars at each iteration the probability of a new car arriving at one of the four edges of the grid and travelling in the corresponding direction is functional complexity as another example of work at the modelling layer we have developed a metric to capture the amount of information shared between elements of a network as a result of the organization of the network in support of a network function this analytical approach to quantify the complexity of a functional topology provides us with the means to capture the signaling complexity of functional operations within a network such 
as handover or frequency assignment that is our complexity metric provides a new method of describing the functional operation of telecommunication networks our complexity metric is built upon the concept of shannon entropy hr xn we employ the bernoulli random variable xn to model the potential of a node to interact with other nodes the probability of interaction pr xn is defined as the reachability of a node n pr xn inr where inr is the number of nodes that can reach node n and j is the number of nodes for the given subgraph the definition of reachability in terms of the number of hops allowed between two nodes in the functional topology enables the analysis of complexity on multiple scales r the one hop reachability represents the lowest possible scale r where each node interacts only with its immediate neighbors the increasing number of allowed hops between the nodes brings the nodes closer to each other in terms of interactions and moves the focus from interactions among nodes to interactions among groups of nodes analysis of higher scales the total amount of information of the k th subgraph with j nodes for scale r is calculated as ir x hr xn where is the k th subgraph with j nodes the total amount of information represents the total uncertainty which is related to the actual roles of nodes that appear within a subgraph and different subgraph patterns our complexity metric which is calculated with eq quantifies the amount of order and structure in a system that is seemingly disordered n x x cf i ir r where r is the maximum scale size which is defined as the diameter of the functional topology n is the number of nodes in the functional topology is the whole functional graph and hir i is the average amount of information for a given subgraph size j we call the metric in eq functional complexity our approach holistically gauges the functional organization of a network by first describing the interactions necessary to perform a given function topologically within this representation we capture the network elements involved in performing some function and the interactions that support the operation of the function our quantification of networks in terms of their functional relationships provides a wholly new approach to understanding the operation of networks as corroborated by fig more typical metrics for network topology do not capture the notions represented by our complexity metric in fact the correlation of our complexity metric with other traditional metrics is lower than average path length clustering coefficient complexity average degree complexity average path length complexity average path length average degree complexity clustering coefficient average degree complexity clustering coefficient complexity fig correlation between the proposed complexity metric and the three most used measures of network topology average path length average degree distribution clustering coefficient in all the cases we consider this complexity metric thus provides an alternative method of describing network operation the above functional topology and complexity framework can be applied for instance to understand the underlying mechanisms that lead to certain network properties scalability energy efficiency in wireless sensor networks wsn as the result of different clustering algorithms b analysis layer in the context of the analysis layer of our framework we focus on a cellular network that selforganises from a frequency perspective to understand the collective behaviour of the network we calculate 
the excess entropy ec x h m h m to measure complexity and the entropy h lim h m m where h m is the entropy of the target cell x conditioned on m surrounding cells by measuring ec and h we gain an understanding of the structure emerging in the lattice for a network based on eqs and in one shows that a cellular network can exhibit a complex behaviour and that it can be robust against changes in the environment in more detail a and a centralised channel allocation are analyzed with respect to their robustness to local changes in the environment in order to compare the stability of the two types of channel allocation instances of the frequency allocation algorithm are run using lattices then for each resulting channel allocation all possible cells n are considered and for each cell all possible frequencies are in turn considered then the optimal minimum distance c to an channel allocation is computed we define the distance between two channel allocations as the number of changes that are necessary to move from one configuration to the other we found that the locally perturbed channel allocation matrices resulting from are more stable than those resulting from a centralized frequency planner what we know so far is that there is a relation between some complexity metrics and some telecom kpis between excess entropy and robustness to changes and between functional complexity and the efficiency the complexity metrics we introduced have shed some new light on very relevant telecom in the context of networks excess entropy can measure capabilities in the frequency allocation context and functional complexity can measure scalability in wsn as widely acknowledged and scalability are very important properties of systems for iot and dense small cell deployments in the future we plan to improve and expand such understanding to all the most prominent network technologies and kpis tuning layer abm rules will choose the technological behaviour options that maximize the targeted communication network kpi subject to constraints defined by the correlation between css metrics set of available parameters complexity robustness complexity energy efficiency complexity resilience fitness functions waveforms mimo frequency reuse duplexing tuning layer mimo scheme ofdma full duplex frequency assignment algorithm network configuration parameters fig the adaptation of network configuration parameters in the tuning layer the set of available parameters represents a virtual pool of all the available network resources the fitness functions depict a relationship between different network kpis and complexity metrics which are calculated upon the set of available parameters and other telecom kpis local decisions will be based on only a few css metrics and will lead to desired global of the network the local decisions are made according to abm rules by exploring and selecting the fittest behaviours where by behaviour we mean some algorithm or policy acting on some radio resources our goal for different services mobile broadband is to choose behaviours that allow the network to achieve satisfactory kpis in terms of delay throughput coverage energy efficiency emission etc the question is whether we can keep achieving globally satisfactory kpis just by changing abm rules in a distributed fashion at different nodes such adaptation will act within a certain resource allocation domain picking among different massive mimo schemes or between allocations using resources from different domains spectrum or infrastructure the main ideas 
behind the tuning layer of our framework are exemplified in fig although our own work on the tuning layer is still in the initial phase from a substantial amount of literature we can gather evidence that different physical layer phy and radio resource management rrm techniques in the domain should be chosen depending on environmental conditions and network requirements we are potentially in a situation where our tuning layer is relevant and beneficial we give a brief account of such evidence next in it is shown that for a massive mimo system the has a linear or sublinear behaviour with respect to the number of base station antennas depending on the spatial richness of the environment related work on adaptive precoding for distributed mimo is explored in several works investigate the coexistence of various waveforms in terms of leakage interference and possible implications for the waveform selection the fraction of cells that have full duplex base stations can be used as a design parameter to target an optimal between area spectral efficiency and outage in a mixed duplex cellular system in it is shown that increasing the frequency reuse can improve the for small cell deployments while a lower frequency reuse should be favoured if the target is maximizing throughput given a certain bs density in summary we plan to use the above understanding of the benefit of adaptation at phy and mac layers in networks and extend it as needed in terms of technology components kpis and adaptation criteria to inform our framework and show its immediate benefit in understanding operating and designing systems o pen c hallenges several more component technologies in addition to those considered in this paper can enrich the set of possible choices used to model analyse and tune the network including massive or distributed multiple antenna arrays different waveforms and multiple access schemes different duplexing schemes novel spectrum sharing schemes such as license assisted access laa and different frequency reuse schemes including probabilistic ones for networks what we know so far is that there is a relation between some complexity metrics and some telecom kpis between excess entropy and robustness to changes and between functional complexity and the efficiency in the future the aim is to improve and expand such understanding to all the most prominent technologies and kpis for networks in particular it is still an open question how to achieve the desired network tuning properties within a large optimization space encompassing many different network resources kpi objectives and constraints many different heterogeneous networks and a very large number of nodes and decision points we conjecture abm can help us achieve such ambitious goal as a key tool to engineer desired emergent properties in such future challenging networks as the network graph representations discussed in the proposed framework might dynamically change according to the different radio resource domains and related techniques used one open area of investigation is to study how the complexity metrics can be calculated and how they evolve over time for such dynamic resource allocation and then use such metrics to analyse and tune the network behaviour taking into account robustness resilience network utilization and other network characteristics vi c onclusion current complex systems science literature focusing on communication systems draws on network science studying applications and traffic modelling but lacks considerations of architecture 
infrastructure and technology we instead apply complex systems science to wireless networks from the functional perspective drawing on concepts from information different metrics of complexity and computational agent based modeling theory adapted from complex system science since complex systems science metrics are currently absent from the quantities considered when operating and designing communication networks by introducing our proposed framework we initiate a completely new way to model analyse and engineer networks founding a new theory and practice of telecommunications not previously anticipated as a simple example our work on exploring frequency planning from a complex systems perspective leads us to conclude that future networks shall eschew any current frequency planning approaches and instead determine frequency of operation on the fly with enormous implications for design rollout and operation of networks we believe such distributed decision making paradigm is likely going to be the way forward for many of the future and iot resource allocation problems in particular we have reasons to believe that complex systems science provides the key to unlock the full potential of in telecom systems r eferences lloyd measures of complexity a nonexhaustive list ieee control systems magazine vol no pp feldman crutchfield structural information in patterns entropy convergence and excess entropy physical review e macaluso cornean marchetti doyle complex communication systems achieving frequency allocation in ieee icc pp macaluso galiotto marchetti doyle a complex systems science perspective on cognitive networks journal of systems science and complexity vol no pp january dzaferagic kaminski mcbride macaluso marchetti functional complexity framework for the analysis of telecommunication networks journal of systems science and complexity under review available online on arxiv https dzaferagic kaminski macaluso marchetti relation between functional complexity scalability and energy efficiency in wsns in international wireless communications and mobile computing conference iwcmc jun under review available online on arxiv https niazi hussain tools for modeling and simulation of in ad hoc and other complex networks ieee communications magazine vol no pp mar cirillo et evaluating the potential impact of transmission constraints on the operation of a competitive electricity market in illinois report anl tonmukayakul weiss an model for secondary use of radio spectrum in ieee international symposium on dynamic spectrum access networks dyspan pp kaminski murphy marchetti modelling of an iot network in ieee international symposium on systems engineering isse whitacre degeneracy a link between evolvability robustness and complexity in biological systems theoretical biology and medical modelling hooker philosophy of complex systems elsevier candia et uncovering individual and collective human dynamics from mobile phone records journal of physics a mathematical and theoretical vol no pp may deville inard martin gilbert stevens gaughan blondel and tatem dynamic population mapping using mobile phone data proceedings of the national academy of sciences hidalgo and the dynamics of a mobile phone network physica a statistical mechanics and its applications vol no pp may onnela et structure and tie strengths in mobile communication networks proceedings of the national academy of sciences vol no pp may wang gonzalez hidalgo and barabasi understanding the spreading patterns of mobile phone viruses science vol no 
pp march bentosela cornean farhang marchetti on the sublinear behavior of massive multi user mimo sum rate for deterministic channel models ieee transactions on communications ryu jung and song adaptive precoding scheme with efficient joint processing for downlink coordinated transmission system electronics letters xing renfors investigation of filter bank based communication integrated into ofdma cellular system in international symposium on wireless communications systems iswcs bodinier bader and palicot modeling interference between and limitations of the model in international conference on telecommunications ict may sexton bodinier farhang marchetti bader dasilva coexistence of ofdm and fbmc for underlay communication in networks in ieee global telecommunications conference globecom goyal galiotto marchetti panwar throughput and coverage for a mixed full and half duplex small cell network in ieee international conference on communications icc may cirik rikkinen joint subcarrier and power allocation for maximization in ofdma systems in ieee vehicular technology conference vtc may galiotto pratas doyle marchetti effect of propagation on networks computer networks under review available on arxiv https
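The functional-complexity metric discussed above is built from r-hop reachability and Shannon entropy. The snippet below is an illustrative sketch rather than the authors' implementation: the normalisation of the reachability probability and the toy topology are my assumptions, and the paper's full metric additionally aggregates these quantities over subgraphs of every size before combining scales. The sketch only computes the per-scale total information I_r of one small functional topology.

```python
# Illustrative sketch (normalisation and aggregation details are assumptions,
# not the paper's exact equation): per-scale "total information" of a
# functional topology, from r-hop reachability and binary Shannon entropy.
from collections import deque
from math import log2

def r_hop_counts(adj, r):
    """For each node, count how many other nodes are reachable within r hops."""
    counts = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            if dist[v] == r:
                continue                       # do not expand beyond scale r
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
        counts[s] = len(dist) - 1              # exclude the node itself
    return counts

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def total_information(adj, r):
    """I_r: sum over nodes of H(X_n), with Pr(X_n) = reach_r(n)/(N-1) (assumed)."""
    n = len(adj)
    reach = r_hop_counts(adj, r)
    return sum(binary_entropy(reach[v] / (n - 1)) for v in adj)

# Toy functional topology: a decision node linked to three sensing nodes,
# two of which also exchange information directly.
topo = {"dm": {"s1", "s2", "s3"},
        "s1": {"dm", "s2"},
        "s2": {"dm", "s1"},
        "s3": {"dm"}}
for r in (1, 2):
    print(r, round(total_information(topo, r), 3))
```

In this toy case the uncertainty about who interacts with whom is largest at scale r = 1; once r reaches the diameter every node can reach every other node, the interaction probabilities saturate at one, and the per-scale total information drops to zero.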
3
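For the analysis-layer discussion above, excess entropy is estimated from conditional block entropies, EC = sum_m (h(m) - h) with h = lim_m h(m). The paper conditions a target cell on m surrounding cells of a lattice; the sketch below is a deliberately simplified one-dimensional stand-in (my own toy estimator, not the authors' code) that still shows the qualitative point: a structured, periodic allocation pattern carries positive excess entropy, while an i.i.d. pattern carries essentially none.

```python
# Simplified 1-D sketch (the paper works on a 2-D lattice of cells; this is an
# assumed toy estimator): excess entropy from empirical block entropies,
# EC ~ sum_m (h(m) - h), with h(m) = H(m) - H(m-1) and h estimated by h(max_m).
from collections import Counter
from math import log2
import random

def block_entropy(seq, m):
    """Empirical Shannon entropy of the length-m blocks of a symbol sequence."""
    blocks = Counter(tuple(seq[i:i + m]) for i in range(len(seq) - m + 1))
    total = sum(blocks.values())
    return -sum(c / total * log2(c / total) for c in blocks.values())

def excess_entropy(seq, max_m=4):
    H = [0.0] + [block_entropy(seq, m) for m in range(1, max_m + 1)]
    h = [H[m] - H[m - 1] for m in range(1, max_m + 1)]   # conditional entropies
    h_inf = h[-1]                       # crude stand-in for the entropy-rate limit
    return sum(hm - h_inf for hm in h)  # estimates are biased for short sequences

# A periodic "channel allocation" pattern over 3 channels vs an i.i.d. one.
periodic = [0, 1, 2] * 1000
random.seed(0)
iid = [random.randrange(3) for _ in range(3000)]
print(round(excess_entropy(periodic), 3))   # ~ log2(3): structure is present
print(round(excess_entropy(iid), 3))        # ~ 0: no structure beyond the symbols
```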
analysis of unprotected intersection conflicts based on naturalistic driving data apr xinpeng ding huei and david analyzing and reconstructing driving scenarios is crucial for testing and evaluating highly automated vehicles havs this research analyzed conflicts at unprotected intersections by extracting actual vehicle motion data from a naturalistic driving database collected by the university of michigan nearly left turn across path opposite direction events involving heavy trucks and light vehicles were extracted and used to build a stochastic model of such scenario which is among the top priority scenarios identified by national highway traffic safety administration nhtsa statistical analysis showed that vehicle type is a significant factor whereas the change of season seems to have limited influence on the statistical nature of the conflict the results can be used to build testing environments for havs to simulate the crash cases in a stochastic manner i introduction before highly automated vehicles havs can be released to the general public a process for testing and evaluating them must be established the google car project experienced its first crash in february moreover tesla autopilot failed to detect a in its first fatal crash happened in may and was criticized for using the consumers as beta testers fig briefly demonstrates how this crash happened with the red sedan representing the tesla the national highway traffic safety administration nhtsa is now considering the possibility of putting a approval process into place in addition to a rigorous process still anticipated from the vehicle manufacturers a key factor in hav testing is the test scenarios and behaviors of other road users particularly those of other vehicles the test conditions need to be not only realistic but also feasible for repeated safety tests test scenario models can be divided into two types the first type has fixed scenarios such as the tests of lane support systems lss and autonomous emergency braking aeb launched by the european new car assessment programme euro ncap a major advantage of this type is that it is repeatable however it is hard to use this type of models to this work is funded by the mobility transformation center denso tailor project at the university of michigan with grant no wang is with the department of automation tsinghua university beijing china and now he is a visiting scholar at the university of michigan ann arbor mi zhao corresponding author zhaoding and leblanc are with the university of michigan transportation research institute ann arbor mi peng is with department of mechanical engineering at the university of michigan transportation research institute ann arbor mi fig a brief description of the tesla accident represent the highly complex and variable nature of the human driving environment moreover havs could be adjusted to pass certain fixed scenarios while their performance under broad conditions might not be well assessed to overcome these drawbacks we proposed a second type of models in our previous works we proposed a stochastic test method and built a test environment for and scenarios in this paper we will focus on the intersection scenario the intersection has been one of the most challenging scenarios for havs due to the variety of road users complexity of traffic flow and the unpredictability of vehicles and pedestrians according to crashes at intersections took up a major portion about of all the traffic crashes in the us among all kinds of scenarios with potential 
risks at an intersection unprotected left turn across path opposite direction is a typical one this scenario is ranked second among priority precrash scenarios in an scenario two vehicles are considered the turning vehicle tv and the straightdriving vehicle sdv although a lot of research such as has been conducted on traffic conflict analysis of scenario the factor of vehicle type has not been widely investigated now that the crash of the tesla with autopilot system has been attributed to its failure to detect the truck turning ahead it is crucial that more attention should be paid to scenarios involving heavy trucks moreover there has been insufficient research on the influence of season change on driving behaviors at intersections as extreme weather such as storm and fog has a strong impact on the driving behaviors of human drivers we propose that they are possibly influential to havs as well this research focused on two major tasks first it built a stochastic model of traffic conflicts in the scenario table i introduction to ivbss database vehicle type distance time trips vehicles drivers type of front radar light vehicle mi apr apr sedans drivers bosch heavy truck mi tractors drivers trw a light vehicle fig based on naturalistic driving data events involving both light vehicles lvs and heavy trucks hts as the sdv were extracted from the database reconstructed into realistic trajectories of tvs and sdvs and finally described with several key variables secondly the influence of vehicle type of the sdv and the season factor to the driving behavior of the tv was analyzed by comparing the distribution of these key variables between lvs and hts as well as between summer and winter ii data s ource the data source for this research is the integrated vehiclebased safety systems ivbss database which was collected from to and maintained by the university of michigan transportation research institute umtri the database consists of two parts lv platform and ht platform it comes from a naturalistic field operational test which is to assess the potential safety benefits and driver acceptance associated with a prototype integrated crash warning system as the system incorporates forward crash warning fcw lateral drift warning ldw warning lcm and curve speed warning csw non of the functions are designed to deal with scenario thus it is assumed in this research that whether the warning system is enabled will not affect driver behavior at scenario for lv platform identical prototype vehicles were driven by drivers for their own personal use for over six weeks on each test vehicle there is one radar that looks forward and six radars that cover the adjacent lanes as well as the area behind the vehicle in addition there is a vision system an automotivegrade global positioning system gps and an digital map around different channels of signals have been collected for the ht platform male commercial truck drivers from freight drove equipped class tractors for months there are eight radars three exterior cameras and several interior cameras on the test truck recording over channels of data including the driving environment drivers activity system behaviors and vehicle kinematics basic information about the lv and ht platforms in the ivbss database is listed in table i the configuration of sensors on each platform is shown in fig the test area covered by ivbss is primarily in b heavy truck sensor configuration of ivbss test vehicles the detroit area of the most of the ht trips took place in the lower peninsula of 
michigan and ohio most of the lv trips fell within a similar region the database provides adequate information for this research for event extraction data from the gps sensor is used to locate the instrumented vehicle data from the front radar is used to reconstruct the trajectory of target vehicles video recordings from vision cameras around the vehicles are used as a supplemental tool for event screening in addition as the ivbss test lasted approximately one year driving data under a variety of weather conditions throughout the year were covered enabling us to uncover the influence of season factors iii e xtraction of l eft t urn s cenario in order to extract eligible events from the database for both the lv and the hv platform three major tasks were performed first we processed data from radar for further use second we searched the database for all events that meet our criteria finally data points in each event were interpreted into trajectories of the tv and the sdv a target association of truck data for radar data from the ht platform we need to associate and mark data points together that belong to the same target in order to screen out unfit targets and create a trajectory for every eligible tv to cluster points of interests we apply the following criteria to processing ht data only objects tvs that move in the opposite direction are detected vt v only points with small azimuth angle are considered as the effective detecting range of the radar is when the cluster with point i is expended only data points within a small time slot t i t i s are considered only neighbor points that satisfy the following rules are grouped strong correspondence between range range rate and time difference t j t i where r j r i rr j rr i a range fig b transversal fig configuration of the instrumented vehicle and the target vehicle for event extraction in scenario an exemplary result of target association here r i is the range of point i and rr i is the range rate of point i reasonable difference in transversal tr j tr i t j t i here tr i is the transversal of data point i and t i is the time fig shows an example of data points that are associated and divided into different groups in one event the dots with the same color show trajectories of targets while red dots do not belong to any group and are seen as noise in such a typical scenario a vehicle is turning in front of the instrumented truck fig a shows how the range of target points change over time as target vehicles cross the intersection when the instrumented truck is moving forward at a steady speed the ranges to different targets are decreasing linearly moreover fig b shows that the transversal of multiple targets is increasing from negative to positive indicating that they cross from left to right in the view of the instrumented truck once data points from each target are clustered the ht platform can be used for further event extraction for eligible scenarios b event screening an unprotected scenario can be recorded by either the sdv or the tv in this paper we use only the scenarios recorded by sdvs fig demonstrates the configuration of the instrumented vehicle the sdv and the target vehicle the tv for event extraction in scenarios for both the lv and ht platforms eligible leftturn events are queried based on the following criteria the intersection has a stop sign or a set of signal lights although there will be protected events retrieved with this criterion they can be screened out by the following conditions such as the constraint on the 
velocity of the tv and the sdv the instrumented vehicle is moving straight speed larger than change of heading angle smaller than the target vehicle is moving towards the instrumented vehicle the longitudinal projection of speed smaller than and moving from left to right due to the difference in radars transversal goes from positive to negative for lvs and from negative to positive for hts fig procedure and interim results for event extraction time duration of the event is adequate more than s the maximum of time difference between two consecutive points defined as in an event should be small enough to be seen as points of the same target max s event extraction follows a similar procedure for lv and ht for the lv platform we first select all occurrence at intersections we then extract those with leftturn objects from the opposite direction these tasks are completed in the microsoft sql server management studio ssms afterwards the extracted events are exported to matlab for the last round of screening which guarantees reasonable speed targets and time duration for the ht platform the only difference is that after retrieving all the occurrences of at intersections we export data from the database server directly into matlab for target association and the following extraction tasks the diagram in fig illustrates the procedure and interim results of each phase for event extraction finally ht has eligible events whereas lv has the location of these events is shown in fig trajectory reconstruction once all eligible events have been selected the trajectories of both tv and sdv in each event are then reconstructed the exact position of sdv comes from the gps sensor the data from the front radar are used to extract the relative position of tv in the coordinate of sdv after synchronization on gps and radar data the trajectories of tv and sdv are generated fig shows the reconstructed trajectories for sdv and tv in one event here dots with the same color represent the position of tv and sdv at the same moment the tv crossed intersection before the sdv in this example a light vehicle fig b heavy truck fig the location of extracted events time to the conflict point of the sdv in a scenario vsdv speed of the sdv at tx vt v speed of the tv at tx first we demonstrate an example of conflict analysis on a single event here we use the aforementioned occurrence where the tv crossed the intersection before the sdv did fig uses tcp to demonstrate how the sdv and the tv interacted in one real event the vertical axis indicates predicted time to the point of conflict of the sdv whereas the horizontal axis shows the real elapsed time relative to the moment when the tv crosses the intersection that is tx in this scenario time to the conflict point decreased linearly over time indicating the margin between the tv and the sdv was large enough for the sdv to maintain a nearly constant speed when the tv was crossing when the tv reached the conflict point there was a margin for sdv that is tcp which is demonstrated by the red dot here tcp described the essence of this interaction between the sdv and the tv then the following modeling and analysis will ignore the detailed interaction of the tv and the sdv paying attention only to the four aforementioned variables in each event we use all events we retrieved in the previous section from both the ht and lv platforms as the source for modeling fig example of reconstructed trajectory iv c onflict a nalysis and c omparison a definition and metrics of conflicts in this section 
conflict is used to describe risky events in traffic according to conflict is defined as an observational situation in which two or more road users approach each other in space and time to such an extent that a collision is imminent if their movements remain unchanged many conflict metrics have been used for measuring the level of safety for an event including time pet leading buffer lb and trailing buffer tb used by and gap time gt used by for this paper as the goal is to construct a stochastic model we choose only the most representative time slice in each event to model conflicts the heading angle of the sdv is taken as constant during each event with any small deviation being ignored thus a conflict point is naturally defined as the location of the tv when its transversal in the radar of sdv crosses zero and this exact moment is regarded as the representative moment of this conflict defined as tx consequently four variables at tx are chosen to model the conflict including two modified conflict metrics time to the conflict point tcp and distance to the conflict point dcp dcp distance to the conflict point for the sdv at tx dcp dist psdv tx pt v tx in this section the effect of vehicle type on traffic conflict in scenarios is discussed distributions of variables for lvs and hts are compared as events with smaller dcp and tcp are more dangerous we generated the distributions of the reciprocal of dcp and tcp to put these risky but rare events in the tail as shown in here psdv and pt v are the positions of sdv and tv tcp time to the conflict point for the sdv at tx tcp dcp b comparison between light vehicles and heavy trucks a distribution of dcp b distribution of tcp fig comparison of dcp and dcp between heavy trucks and light vehicles light vehicle heavy truck a speed distribution of b speed distribution of turning driving heavy trucks and heavy trucks and turning light vehidriving light vehicles cles fig comparison of the speed between heavy trucks and light vehicles fig the dots and bars at the top of figures show the mean value and the standard deviation of each empirical distribution from fig we can see that when dcp or tcp increases there are fewer points of data giving rise to a shape with a long tail moreover events with an ht as sdv tend to have both smaller dcp and smaller tcp than with an lv indicating less severe conflicts fig shows the distributions of vsdv and vt v the distribution of vsdv for ht and lv platforms both have a triangular shape most vsdv ranges from to whereas most vt v is less than at the conflict point though there is no obvious difference with the distribution of vsdv between events with lv and ht as the sdv the vt v tends to be significantly higher when the sdv is an lv than is an ht combined with the previous results of dcp and tcp we can conclude that for conflicts where hts are sdvs conflict metrics have significantly higher value and tvs tend to turn with less aggressive speed this means that when the tv chooses the time of turning and commences turning action it behaves more conservatively when confronted by an ht coming from the opposite direction than an lv the difference in vehicle type does influence the driving behavior of the tv and the severity of the conflict analysis of season factor in this section we uncover the influence of season factor on behaviors of sdvs and tvs in scenarios during the test of driving for hts and for lvs took place in freezing temperature the months with events that took place in freezing temperatures are defined as 
winter which includes december through march of the following year this period also coincides with the time when the average snowfall in ann arbor is over inches on the other hand summer is defined as being from june to august we have retrieved events in summer and events in winter for lvs whereas the numbers for hts are and respectively tcp dcp vsdv and vt v are compared for summer and winter driving mww test is a nonparametric hypothesis test of the null hypothesis that two populations are the same against an alternative hypothesis here we used it to determine whether the conflict metrics differ between summer and winter fig comparison between events in summer and winter fig shows the result of the comparison it can be concluded that for both lv and ht platforms the mean values for summer and winter of all four variables that describe the conflict at for both sdvs and tvs are very close as the from the mww test is large for all the eight distributions we are not able to distinguish between the pattern in summer and vt v and vsdv this result in winter in terms of dcp tcp indicates that despite a large difference in climate there is no significant difference between the way people drive in winter and in summer at scenarios in the great lakes area this conclusion has its significance for designing and testing of havs conclusion in this research traffic conflicts of tvs and sdvs in scenarios are modeled and analyzed based on nearly events extracted and reconstructed from the naturalistic database the two modified conflict metrics tcp and dcp are used to model turning behavior of the tv this stochastic model can be further used for developing simulation tools for evaluating havs the significance of vehicle type and season are also addressed in the research in general when the sdv is an ht the driver of the tv tends to turn in a more conservative fashion with a wider margin surprisingly despite prevailing snow and freezing weather in the winter of michigan driver behavior at scenarios during the test did not differ significantly between summer and winter these two conclusions can be useful for designing automated driving algorithms and for establishing regulations and policies for havs in the following research we will improve the accuracy of trajectory reconstruction by conducting sensor fusion to the gps and yaw rate sensor and by data from different channels moreover we will further investigate the reasons behind the conclusion on the similarity of driver behavior between summer and winter possible causes could be snow on the road was shoveled promptly in winter thus normal driving was almost unaffected the trips avoided extreme weather in winter so that the data was biased besides we will also facilitate the model to build a stochastic simulation environment for the testing and evaluation of havs d isclaimers this work was funded in part by the university of michigan mobility transformation center denso pool project the findings and conclusions in the report are those of the authors and do not necessarily represent the views of the mtc or denso r eferences google car project monthly report february pp solon should tesla be beta testing autopilot if there is a chance someone might die online available https department of transportation and national highway traffic safety administration federal automated vehicles policy tech september online available https european new car assessment programme test protocol lane support systems online available http european new car assessment programme 
test protocol, AEB systems, online available: http
Zhao, Huang, Peng, Lam, and LeBlanc, "Accelerated evaluation of automated vehicles in maneuvers," submitted to IEEE Transactions on Intelligent Transportation Systems, online available: http
Zhao, Lam, Peng, Bao, LeBlanc, and Pan, "Accelerated evaluation of automated vehicles safety in lane change scenarios based on importance sampling techniques," IEEE Transactions on Intelligent Transportation Systems.
Huang, Zhao, Lam, and LeBlanc, "Accelerated evaluation of automated vehicles using piecewise mixture models," submitted to IEEE Transactions on Intelligent Transportation Systems, online available: http
Chan, "Defining safety performance measures of driver assistance systems for intersection conflicts," IEEE Intelligent Vehicles, pp.
Wassim, Toma, and J. Brewer, "Depiction of priority scenarios for safety applications based on communications," tech., April.
Chan, "Characterization of driving behaviors based on field observation of intersection left turn across path scenarios," IEEE Transactions on Intelligent Transportation Systems, vol., no., pp.
Nobukawa, Barnes, Goodsell, Gordon, and Arbor, "Reconstruction of vehicle trajectories for intersection conflict analysis using sensors," Conflict, no., July, pp.
Preliminary report, online available: http
LeBlanc, Sayer, Bao, Bogard, Buonarosa, Blankespoor, and Funkhouser, "Driver acceptance and behavioral changes with an integrated warning system: key findings from the IVBSS FOT," tech., online available: http
Sayer, Buonarosa, Bao, Bogard, LeBlanc, Blankespoor, Funkhouser, and Winkler, "Integrated safety systems field operational test methodology and results report," no., December.
Sayer, Bogard, Funkhouser, LaBlance, Bao, Blankespoor, Buonarosa, and Mary Lynn Winkler, "Integrated safety systems field operational test key findings report," tech., August.
Zhao, Peng, Nobukawa, Bao, LeBlanc, and Pan, "Analysis of mandatory and discretionary lane change behaviors for heavy trucks," in AVEC, no. DLC, pp.
Tarko, "Use of crash surrogates and exceedance statistics to estimate road safety," Accident Analysis and Prevention, vol., pp.
Nobukawa, "A model based approach to the analysis of intersection conflicts and collision avoidance systems," dissertation, The University of Michigan.
Misener, "California intersection decision support: a systems approach to achieve nationally interoperable solutions II," California PATH research report.
Mann and Whitney, "On a test of whether one of two random variables is stochastically larger than the other," The Annals of Mathematical Statistics, vol., no., pp.
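As a supplement to the conflict analysis above, the following minimal sketch shows how the four per-event variables (tx, DCP, TCP, and the two speeds) could be computed from one reconstructed event, and how two groups of events are compared with the Mann-Whitney U test. It is an illustration only: the array layout, the zero-crossing detection of the TV's transversal offset, and the function names are assumptions rather than the authors' code, and the scipy mannwhitneyu call merely stands in for the MWW test used in the vehicle-type and seasonal comparisons.

```python
# Sketch (not the authors' implementation) of the per-event conflict metrics and
# the nonparametric group comparison described in the preceding sections.
import numpy as np
from scipy.stats import mannwhitneyu

def conflict_metrics(t, p_sdv, p_tv, v_sdv, v_tv, lateral_tv):
    """Compute the four per-event variables at the representative moment tx.

    t            : (N,) timestamps of the synchronized GPS/radar samples
    p_sdv, p_tv  : (N, 2) positions of the SDV and of the turning target vehicle
    v_sdv, v_tv  : (N,) speeds of the SDV and the TV
    lateral_tv   : (N,) transversal offset of the TV in the SDV radar frame
    """
    # tx = moment when the TV's transversal offset crosses zero, i.e. when the TV
    # passes through the SDV's projected path (the conflict point).
    crossings = np.where(np.diff(np.sign(lateral_tv)) != 0)[0]
    if crossings.size == 0:
        return None                                   # the TV never crosses the SDV's path
    ix = crossings[0]
    dcp = np.linalg.norm(p_sdv[ix] - p_tv[ix])        # distance to the conflict point
    tcp = dcp / max(float(v_sdv[ix]), 1e-6)           # time to the conflict point
    return {"tx": t[ix], "dcp": dcp, "tcp": tcp,
            "v_sdv": float(v_sdv[ix]), "v_tv": float(v_tv[ix])}

def compare_groups(metric_group_a, metric_group_b):
    """Mann-Whitney U comparison of two groups of events (LV vs HT, summer vs winter).

    A large p-value means the two empirical distributions cannot be distinguished,
    which is how the seasonal result above is stated.
    """
    _, p_value = mannwhitneyu(metric_group_a, metric_group_b, alternative="two-sided")
    return p_value
```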
3
a revised incremental conductance mppt algorithm for solar pv generation systems meng yue and xiaoyu wang sustainable energy technologies department brookhaven national laboratory upton ny usa yuemeng xywang revised incremental conductance inccond maximum power point tracking mppt algorithm for pv generation systems is proposed in this paper the commonly adopted traditional inccond method uses a constant step size for voltage adjustment and is difficult to achieve both a good tracking performance and quick elimination of the oscillations especially under the dramatic changes of the environment conditions for the revised algorithm the incremental voltage change step size is adaptively adjusted based on the slope of the curve an accelerating factor and a decelerating factor are further applied to adjust the voltage step change considering whether the sign of the curve slope remains the same or not in a subsequent tracking step in addition the upper bound of the maximum voltage step change is also updated considering the information of sign changes the revised mppt algorithm can quickly track the maximum power points mpps and remove the oscillation of the actual operation points around the real mpps the effectiveness of the revised algorithm is demonstrated using a simulation index mppt algorithm fractional mppt algorithm p o mptt algorithm solar pv generation i introduction as one of the most promising renewable energy technologies the installed capacity of the solar photovoltaic pv generation has increased dramatically in recent years although the cost of pv generation continues to drop the economic competitiveness of solar pv energy is still low compared to the traditional energy sources even with various local and federal policy instruments while it is desirable to further lower the cost and increase the efficiency of solar energy systems including both the solar panels and the power electronic devices increasing the efficiency of the installed pv energy systems by simply improving the existing control algorithms should also be pursued one way of achieving this is to modify the existing mppt algorithms to extract more solar energy under various environmental conditions many different types of mppt algorithms have been proposed in the literature as summarized in different algorithms have their own pros and cons in terms of complexity accuracy and convergence speed etc among the commonly used algorithms the and observation p o method is easy to implement using either analog or digital circuits it periodically perturbs either the duty ratio of the converter or the the pv array operating voltage even when the mpp is achieved further the true mppt can not be achieved using the p o method since the operation point of the pv system is oscillating around the mpp under the conditions of continuous fast changing irradiance the operating point might continuously deviate from the mpps and eventually the optimal operation points can not be achieved at all these issues degrade the performance of the solar generation system the fractional voltage or current method needs only to sense one voltage or current parameter to approximate the mpp by using empirical parameters the major issue related to this method is that the pv circuit has to be periodically operated in or conditions and may have significant impact on the grid operation other types of algorithms such as those based on fuzzy logic control and neural network may accurately track the mpps under different environmental conditions the mppt 
performance however is not guaranteed since they both rely heavily on the algorithm developers a significant volume of field data under all kind of conditions for the design and implementation of such algorithms the inccond method see appears to be the most popular one in practice due to its medium complexity and the relatively good tracking performance one of the major difficulties implementing the inccond method is the selection of the fixed voltage change step size for simultaneously satisfying the tracking speed and maintaining the mpp a large step size of voltage change helps the system rapidly approach the mpps on the other hand this large value generally induces persisting oscillations around the mpp if no other special countermeasures were taken the issues with using a small step size of voltage change are the opposite a simple and effective revised inccond algorithm is proposed in this paper an adaptive voltage step change scheme is first adopted based on the slope where the operation point locates on the curve an accelerating factor and a decelerating factor are then applied to further adjust the voltage step change considering whether the sign of the curve slope remains the same or changes in a subsequent tracking step the same information of sign changes is also used to update the upper bound of the maximum voltage step change the adaptive voltage step change enables the pv system to quickly track the environment condition variations reach and stay at the mpps in this way more solar energy generation can be harvested from the pv energy systems these improvements enable the quick response to the environment condition changes and rapid landing on the mpp the revised method is easy to implement since it does not require knowledge of the characteristics of specific pv panels and the parameters are easy to tune the revised inccond algorithm is described in detail in section ii with an overview of various modified inccond methods modeling of generic pv generation systems is presented in section iii for simulation purposes simulation results using the proposed mppt algorithm will be shown in section iv and concluding remarks are given in section ii a revised inccond mppt method the mpp is achieved by adjusting the terminal output voltage of a solar array through controlling the converter duty ratio while the cell temperature can be easily measured the irradiance is difficult to measure accurately and the desired voltage at the mpp is hard to know exactly therefore a test condition needs to be developed in order to determine whether the current operating point is the mpp or not without measuring the temperature and irradiance for a solar panel there is only one maximum power point for a given irradiance level and cell temperature note the presence of a partial shading condition of a panel may cause multiple local maxima and is not considered in this paper although the revised algorithm can be used together with the methods proposed in the inccond method uses the information of the solar curve at the left hand side of the mpp the slope is greater than zero at the right hand side of the mpp the slope is less than zero and the slope is zero exactly at the mpp therefore the solar array terminal voltage needs to be increased when the slope is positive and decreased when the slope is negative the slope can be calculated as dp d iv di i dv dv dv with the incremental conductance in an implementation under the mpp condition the following relationship holds i v the major difficulty with the 
inccond method is the selection of the incremental step size of the duty ratio for adjusting the solar terminal output voltage a fixed incremental step size of the duty ratio in general will not bring the array to the mpp because the operating point will oscillate around the mpp either on the left or the right of the mpp ref divided the entire curves into two domains using square root functions with all of the mpps contained in only one of them therefore the first step of performing mppt is to bring the operating point to the domain that contains all of the mpps this method however requires a good understanding of the pv panel characteristics that are panel specific in a van allen oscillator was added between the solar panel and the inverter for a purpose of balancing the power source and the load that continues changing a simple proportional integral pi controller was developed to track the mpps based on this configuration it is intuitively easy to avoid a fixed voltage change step size by adjusting the increment proportionally to the steepness of the slope and eventually the increment of the duty ratio will become zero at the mpp where the slope is zero similar to the pi controller proposed in the implementation however appears to be very difficult because the curve steepness around the mpp can be very different for different operating conditions the curve for a lower irradiance level can be more flat and a sudden change in the operating condition of the solar array may produce a very large numerical difference in calculating the slope when the change occurs since the duty ratio is between and this may cause unacceptable change in the solar terminal output voltage and make it very difficult to bring the voltage back to normal note refs and proposed twostage methods mainly to avoid the local maxima caused by the insolation experienced by the solar panels the traditional inccond method was still used after bringing the operating point close to the global mpp by using monitoring cells in in this section a simple and effective modified inccond method is proposed based on observations that in two consecutive tracking steps a changing in sign of the slope from positive to negative or from negative to positive indicates that the increment step size is too large otherwise it may land on the mpp or the same side on the curve of the duty ratio and the same sign of the slope in two consecutive tracking steps indicates that the increment step size is too small otherwise the operating point may land on the mpp or the other side on the curve based on these observations the strategy proposed here is to adjust the incremental step size considering the steepness of the slope and further adjust the incremental step size comparing the sign of slopes in two consecutive tracking steps decrease the incremental size in the former case by multiplying the incremental size by a factor deacc such as and to increase the incremental size in the latter case by multiplying the increment by a factor acc such as by applying this improved strategy the solar array will approach the mpp in an accelerating manner after a change in the operating condition s and the magnitude of oscillation around the mpp may rapidly decrease until the test condition is considered to be satisfied after landing onto the mpp the duty ratio will not be adjusted until the operating condition changes again in the implementation the and for the incremental step size need to be defined to avoid extremely drastic changes in the duty ratio however the 
is generally fixed and needs to be large enough to permit the rapid tracking of the mpp for a sudden change of the operation condition the issue with a fixed upper bound is when the array starts tracking the new mpp and it quickly approaches the mpp and lands on the other side of the mpp the duty ratio needs to be adjusted in the reverse direction using the incremental step which could be large due to the factor acc and remain large for some time although the factor deacc has been applied at this point having a large incremental step does not help because it may cause very large fluctuations or overshoot of the voltage before the mpp is achieved therefore the second proposed improvement is when the sign of the slope changes the is also decreased together with the incremental size it is also preferred to maintain the small nearby the mpp until the mpp is reached note in the implementation of the algorithm a nominal incremental step size is after the test condition of the slope is considered to be satisfied the duty ratio will not be changed but the incremental step size might become very small now which if not corrected will cause the very slow response in the beginning of attempting to track the next mpp under a different operating condition a simple solution is to reset the step size to the nominal value without adjusting the duty ratio when the mpp is considered to be reached i i or k k v v if the slope is not small enough greater than a preselected constant and equation is still not satisfied and the duty ratio d k d k k indicates the initial incremental size of the duty ratio max the initial upper bound of the incremental size k and max k are the updated based on conditions discussed above as indicated in fig increment size and the upper boundary of the incremental size iii modeling of pv energy systems a solar array in general a solar array consists of many solar modules connected in series parallel each module being manufactured by serially connecting a certain number of solar cells a solar cell is essentially represented by an equivalent electrical circuit as shown in fig for illustration purposes modeling of a solar cell is briefly summarized in this part interested readers can find details in other references note also the pv terminal output voltage may be very sensitive to the duty ratio especially for duty ratio close to or the converter input and output voltages should thus be selected such that the duty ratio is in the middle of the duty ratio range fig an equivalent electrical circuit of a solar cell for the solar cell model in fig the following equation can be derived i pv i ph i d vpv i pv rs rp where ipv and vpv represent the solar cell terminal output current and voltage respectively iph is the photon current source id the diode current the series resistance rs and shunt resistance rp are used to represent the power losses while the latter can generally be neglected it is noted in equation that the photon current and the diode current are temperature and irradiance dependent for given panel temperature t kelvin and irradiance level g iph and id can be calculated using the following equations g g i ph g i sc gref t tref ph gref gref gref g i g exp q v i rs nkt id t n exp i t g i g tref nk t tref fig flowchart of the revised inccond mppt algorithm i g i sc gref exp qvocref nktref based on the above discussions the flow chart of the proposed modified inccond algorithm is shown in fig in fig v and i are the terminal output voltage and current and g represent the reference 
cell in equation t ref ref of the solar array which will be adjusted according to the slope calculated by the converter k k temperature tref k and reference irradiance gref under the standard condition is the dv can be obtained from the manufacturers data di ocref sheet after substituting the above parameters into equation the characteristics of the solar cell can be numerically computed for any given cell temperature and irradiance level and then used to represent a solar array consisting of interconnected modules and cells where a converter is used to step up the output dc voltage of a solar array such that a bulky transformer can be avoided and perform the mppt by controlling the duty ratio of the converter see for more details iv simulation results the revised inccond algorithm was implemented in an integrated power system simulation software eptool that was developed based on the power system toolbox eptool can be used to perform a transient analysis of the grid under faulted conditions and the solar irradiance and temperature changes only mppt simulation results are presented here to validate the effectiveness of the revised algorithm using a hypothetical solar irradiance profile as the input to solar plant as tabulated in table i the other input the panel temperature is assumed constant of during the cloud transients a centralized solar pv plant consists of solar bp sx panels the capacity of the pv plant is mw or pu base mva and under standard environmental conditions a system was used as an example for carrying out the simulation purpose only table i variation of irradiance at a solar plant during a cloud transient time s irradiance w m time s irradiance w m time s irradiance w m time s irradiance w m the conventional inccond algorithm with a fixed incremental step size of duty ratio is first applied and the mppt is performed every other parameters are selected as the following and pv output voltage kv qvocref dv rs i g q exp ref di nktref simulation results for the solar array terminal output voltage and the deviation of the actual dc output power from the calculated maximum power points are shown in fig from which one can observe the persisting oscillations around the mpps in most of the time the mpps were not truly achieved the reason for the oscillations is that as implied in the algorithm description of section ii the terminal voltage continues to be adjusted fig also indicates that the conventional inccond algorithm is not able to track the mpp for a rapid variation of the irradiance since the output power deviations from the actual solar power generation are significant for the large change in irradiance at s although for slow variations it can provide acceptable performance this significant power deficiency around s is caused by an inability to adjust the panel voltage rapidly enough to compensate for the large decrement of irradiance level as can be seen by comparing the top curves in fig with those given in fig this highlights the inefficiencies of selecting control parameters such as the incremental step size and the in accordance with conventional mppt algorithms these inefficiencies related to the conventional inccond method can be addressed by the modified algorithm proposed in this paper pv dc power deviation temperature coefficient and isc is the current of the solar cell these are both constants that can be obtained from manufacturers data sheets g is the reverse saturation current of the diode n the diode ideality factor q c the coulomb constant k the boltzmann 
constant eg is gap ev and is given as voc is the voltage of the solar cell the series resistance can be solved using parameters at reference temperature and irradiance time s fig terminal voltage variation and deviated power output of the pv plant using conventional inccond algorithm for the modified mppt algorithm proposed in section ii the deceleration and acceleration factors are applied in the modified inccond algorithm with deacc and acc while other parameters remain the same in the first scenario the upper bound of the incremental step size of the duty ratio is fixed k k as shown in fig the oscillations are eliminated quickly as the irradiance changes fig also shows that the revised algorithm with the fixed of the step size can quickly make the operating point reach the mpp as the voltage level becomes quickly stable even after the sudden change of irradiance at time however relatively large terminal voltage overshoot at changing points of irradiance is now introduced and must be addressed fig shows further improvement of the tracking performance for a second scenario where an adaptive upperbound of the incremental step is used the overshoot at the change points of the irradiance have been significantly decreased the output power deviation is also reduced simulation also shows that the tracking performance is not sensitive to the associated parameters which makes parameter tuning very easy and the modified inccond algorithm very robust pv dc power deviation pv output voltage kv time s fig terminal voltage and deviated power output of the array using the modified inccond algorithm fixed upper bound of the incremental step size of duty ratio pv dc power deviation pv output voltage kv time s fig terminal voltage and deviated power output of the array using the modified inccond algorithm with adaptive upper bound of the incremental step size of duty ratio conclusions a revised inccond algorithm was presented in this paper for pv generation systems compared with traditional inccond methods the voltage step change is adaptively determined based on the slope of the curve and the location of the operating points in two consecutive tracking steps such that the pv system can track the rapid change in environmental conditions while the oscillation of the pv system operating points around the mpp can be avoided in addition the upper bound of the voltage step change is assigned a factor deacc less than to constrain the step change when a change in the sign of slope is detected the simulation results demonstrate the effectiveness of the proposed algorithm the robustness of the mppt algorithm is also enhanced due to fact that the parameters can be easily tuned regardless of the pv systems and it does not require knowledge of the characteristics of specific pv panels references trends in photovoltaic applications iea report esram and chapman comparison of photovoltaic array maximum power point tracking techniques ieee transactions on energy conversion vol pp femia petrone spagnuolo and vitelli optimization of perturb and observe maximum power point tracking method power electronics ieee transactions on vol pp wasynezuk dynamic behavior of a class of photovoltaic power systems power apparatus and systems ieee transactions on vol pp lopes and xuejun an intelligent maximum power point tracker using peak current control in power electronics specialists conference pesc ieee kasa iida and chen flyback inverter controlled by sensorless current mppt for photovoltaic power system industrial electronics ieee 
transactions on, vol., pp.
Schoeman and Wyk, "A simplified maximal power controller for terrestrial photovoltaic panel arrays," in Annu. IEEE Power Electron. Spec., pp.
Kobayashi, Matsuo, and Sekine, "A novel optimum operating point tracker of the solar cell power supply system," in Power Electronics Specialists Conference (PESC), IEEE Annual, pp.
Mutoh, Matuo, Okada, and Sakai, "Method for photovoltaic power generation systems," in Power Electronics Specialists Conference (PESC), IEEE Annual, pp.
Hussein, Muta, Hoshino, and Osakada, "Maximum photovoltaic power tracking: an algorithm for rapidly changing atmospheric conditions," Generation, Transmission and Distribution, IEE, vol., pp.
Seung Kyu and, "A novel maximum power point tracking control for photovoltaic power system under rapidly changing solar radiation," in Industrial Electronics Proceedings (ISIE), IEEE International Symposium on, pp.
and, "Novel controller for photovoltaic energy conversion system," Industrial Electronics, IEEE Transactions on, vol., pp.
Wenkai, Pongratananukul, Weihong, Rustom, Kasparis, and Batarseh, "Multiple peak power tracking for expandable power system," in Applied Power Electronics Conference and Exposition (APEC), Eighteenth Annual IEEE, pp.
Koizumi and Kurokawa, "A novel maximum power point tracking method for PV module integrated converter," in Power Electronics Specialists Conference (PESC), IEEE, pp.
Harada and Zhao, "Controlled power interface between solar cells and AC source," Power Electronics, IEEE Transactions on, vol., pp.
Irisawa, Saito, Takano, and Sawada, "Maximum power point tracking control of photovoltaic generation system under nonuniform insolation by means of monitoring cells," in Photovoltaic Specialists Conference, Conference Record of the Twenty-Eighth IEEE, pp.
Kobayashi, Takano, and Sawada, "A study on a two stage maximum power point tracking control of a photovoltaic system under partially shaded insolation conditions," in Power Engineering Society General Meeting, IEEE, vol.
Kim, Jeon, Cho, Kim, and Ahn, "Modeling and simulation of a PV generation system for electromagnetic transient analysis," Solar Energy, vol., pp.
Mohan, Undeland, and Robbins, Power Electronics: Converters, Applications, and Design, John Wiley & Sons.
Power System Toolbox webpage: http
BP SX: http
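As a supplement to the revised IncCond description above, here is a minimal sketch of one tracking step: the duty-ratio increment is made proportional to the steepness of the P-V slope, accelerated by a factor acc while the slope keeps its sign, and decelerated by a factor deacc (applied to both the increment and its upper bound) when the sign flips, with everything reset once the MPP test condition is met. The parameter values, the boost-converter sign convention, and the exact way the two adjustments are combined are assumptions for illustration, not the authors' implementation.

```python
# One possible reading of the revised IncCond tracking rules described above.
class RevisedIncCond:
    def __init__(self, gain=0.01, step_max=0.05, step_min=1e-4,
                 acc=1.5, deacc=0.5, eps=1e-3, duty0=0.5):
        self.gain = gain              # assumed proportionality between |dP/dV| and the step
        self.step_max0 = step_max     # initial upper bound of the duty-ratio increment
        self.step_max = step_max      # adaptive upper bound
        self.step_min = step_min
        self.acc, self.deacc, self.eps = acc, deacc, eps
        self.boost = 1.0              # cumulative acc/deacc factor
        self.prev_v = self.prev_p = None
        self.prev_sign = 0
        self.duty = duty0             # start near the middle of the duty-ratio range

    def update(self, v, i):
        """Return the new converter duty ratio from measured PV voltage and current."""
        p = v * i
        if self.prev_v is None or abs(v - self.prev_v) < 1e-9:
            self.prev_v, self.prev_p = v, p
            return self.duty
        slope = (p - self.prev_p) / (v - self.prev_v)      # dP/dV = I + V*dI/dV
        self.prev_v, self.prev_p = v, p
        if abs(slope) < self.eps:
            # MPP test condition met: hold the duty ratio, and reset the boost and the
            # upper bound so the next irradiance change can be tracked quickly.
            self.boost, self.step_max = 1.0, self.step_max0
            self.prev_sign = 0
            return self.duty
        sign = 1 if slope > 0 else -1
        if sign == self.prev_sign:
            self.boost *= self.acc                         # same side of the MPP: accelerate
        elif self.prev_sign != 0:
            self.boost *= self.deacc                       # overshot the MPP: decelerate ...
            self.step_max = max(self.step_min, self.step_max * self.deacc)  # ... and shrink the bound
        self.prev_sign = sign
        # Step proportional to the steepness of the P-V slope, scaled by the cumulative
        # acc/deacc factor and clamped to the adaptive bounds.
        step = min(self.step_max, max(self.step_min, self.gain * abs(slope) * self.boost))
        # Sign convention: assume a boost stage where a larger duty ratio lowers the PV
        # terminal voltage, so a positive slope (left of the MPP) lowers the duty ratio.
        self.duty = min(0.95, max(0.05, self.duty - sign * step))
        return self.duty
```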
5
focus querying large video datasets with low latency and low cost kevin ganesh peter paramvir matthai phillip onur carnegie mellon university microsoft eth large volumes of videos are continuously recorded from cameras deployed for traffic control and surveillance with the goal of answering after the fact queries identify video frames with objects of certain classes cars bags from many days of recorded video while advancements in convolutional neural networks cnns have enabled answering such queries with high accuracy they are too expensive and slow we build focus a system for lowlatency and querying on large video datasets focus uses cheap ingestion techniques to index the videos by the objects occurring in them at it uses compression and specialization of cnns focus handles the lower accuracy of the cheap cnns by judiciously leveraging expensive cnns at to reduce query time latency we cluster similar objects and hence avoid redundant processing using experiments on video streams from traffic surveillance and news channels we see that focus uses fewer gpu cycles than running expensive ingest processors and is faster than processing all the video at query time normalized query latency jan abstract normalized ingest cost normalized ingest cost figure effectiveness of focus at reducing both ingest cost and query latency for an example traffic video we compare against two baselines that runs on all video frames during ingest and that runs on all the video frames at query time by zooming in we see that focus the point is simultaneously cheaper than in its gpu consumption and faster than in query latency all the while achieving at least precision and recall also shown are two alternatives offering slightly different using them for video analytics queries is both expensive and slow using the classifier at to identify video frames with cars on a traffic video requires gpu hours and costs in the azure cloud the latency for running queries is also high to achieve a query latency of one minute on gpu hours of work would require tens of thousands of gpus classifying the frames of the video in parallel which is many orders of magnitude more than what is typically provisioned few tens or hundreds by traffic jurisdictions or retail stores note that the above cost and latency values are after using motion detection techniques to exclude frames with no moving objects we believe that enabling and querying over large video datasets will make video analytics more useful and open up many new opportunities a natural approach to enabling low latency querying is doing all classifications with at on the live videos and store the results in an index of object classes to video frames any queries for specific classes cars will thus involve only a simple index lookup at there are however at least two problems with this approach first the cost to index all the video at in the introduction cameras are ubiquitous with millions of them deployed by government and private entities at traffic intersections enterprise offices and retail stores videos from these cameras are continuously recorded one of the main purposes for recording the videos is answering queries identify video frames with objects of certain classes like cars or bags over many days of recorded video as results from these queries are used by analysts and investigators achieving low query latencies is crucial advances in convolutional neural networks cnns backed by copious training data and hardware accelerators gpus have led to high accuracy in the computer 
vision tasks like object detection and object classification for instance the object classifier cnn won the imagenet challenge that evaluates classification accuracy on classes using a public image dataset with labeled ground truths for each image these classifiers return a ranked list of classes in decreasing order of confidence despite the accuracy of image classifier cnns like above example is prohibitively high second most of this cost is wasteful because typically only a small fraction of recorded videos get queried following a theft the police would query a few days of video from a handful of surveillance cameras but not all the videos we present focus a system to support querying on large video datasets to address the above drawbacks focus has the following goals low cost indexing of video at providing high accuracy and low latency for queries and allowing trade offs between the cost at against the latency at as input the user specifies the cnn or the classifier and the desired accuracy of results that focus needs to achieve relative to the focus uses four key techniques cheap cnns for ingest using results from the cnn clustering similar objects and judicious selection of system and model parameters first to make video ingestion cheap focus uses compressed and specialized versions of cnns to create an index of object classes to frames cnn compression creates new cnns with fewer convolutional layers and smaller input images specialization trains those cnns on a smaller set of object classes specific to each video stream so that those cheaper cnns can classify these objects more accurately together these techniques result in highly efficient cnns for video indexing second the cheap ingest cnns however are also less accurate than the expensive like measured in terms of recall and precision recall is the fraction of frames in the video that contained objects of the queried class that were actually returned in the query s results precision on the other hand is the fraction of frames in the query s results that contained objects of the queried class to increase recall focus relies on an empirical observation while the most confident classification results of the cheap and expensive cnns may not always match the result of the expensive cnn falls within the results of the cheap cnn therefore at focus indexes each object with the results of the cheap cnn instead of just the to increase precision at we first filter the objects from the index and then classify the filtered objects with the expensive third to reduce the latency of using the expensive focus relies on the significant similarity between objects in videos for example a car moving across an intersection will look very similar in consecutive frames focus leverages this similarity by clustering the objects at classifying only the cluster centroids with the expensive at and assigning the same class to all objects in the cluster thus considerably reducing query latency in a nutshell focus s and operations are as follows at it classifies the detected objects using a cheap cnn clusters similar objects and indexes each cluster centroid using the classification results at when the user queries for class x focus looks up the ingest index for centroids that match class x and classifies them using the for centroids that were classified as class x it returns all objects from the corresponding clusters to the user finally focus smartly chooses the cnn and its parameters to meet targets on precision and recall among the choices that meet the 
accuracy targets it allows the user to trade off between the ingest cost and query latency for example using a cheaper ingest cnn reduces the ingest cost but increases the query latency as focus needs to use a larger k for the index to retain the accuracy targets focus identifies the sweet spot in parameters that sharply improve one of ingest cost or query latency for a small worsening of the other we built focus and evaluated it on thirteen videos from three domains traffic cameras surveillance cameras and news channels we compare against two baselines that runs on all video frames during ingest and that runs on all the video frames at query time we use as and augment both baselines with motion detection to remove frames with no objects which is one of the core techniques in a recent prior work noscope figure shows a representative result for a traffic video from a commercial intersection on average focus is up to cheaper than and up to faster than this leads to the cost of ingestion coming down from to and the latency to query a hour video dropping from hour to under minutes see for the full details we make the following contributions we formulate the problem of querying video datasets by showing the between query latency ingest cost and accuracy precision and recall of results we propose techniques to ingest videos with low cost by leveraging compressed and specialization of cnns while retaining high accuracy targets by creating approximate indexes we identify and leverage similarity between objects in a video to cluster them using cnn features and significantly speeding up queries we propose and build a new system to support querying on large video datasets we show that our system offers new options between ingestion cost and query latency as it is significantly cheaper than analyzing all videos frames at ingest time and significantly faster than analyzing queried video frames at query time background and motivation can only process even with a gpu nvidia this makes querying on large video datasets using these cnns to be slow and costly there are at least two recent techniques designed to reduce the cost of cnns first compression is a set of techniques aiming to reduce the cost of cnn inference classification at the expense of reduced accuracy such techniques include removing some expensive convolutional layers matrix pruning and others and can dramatically reduce the classification cost of a cnn for example which is a variant with only layers is cheaper second a more recent technique is cnn specialization where the cnns are trained on a subset of a dataset specific to a particular context also making them much cheaper using the combination of cheap and expensive cnns is a key facet of our solution described in we first provide a brief overview of convolutional neural networks cnn the approach to detecting and classifying objects in images we then discuss new observations we made about videos which motivate the design of our techniques convolutional neural networks a convolution neural network cnn is a specific class of neural networks that works by extracting the visual features in images during image classification or inference a cnn takes an input image and outputs the probability of each class dog flower or car cnns are the method used for many computer vision tasks such as image classification and face recognition input image pooling layers convolutional rectification layers layer prob apple prob car prob orange prob cat prob flower prob dog extracted features characterizing 
videos we aim to support queries of the form find all frames in the video that contain objects of class x we identify some key characteristics of videos towards supporting these queries large portions of videos can be excluded only a limited set of object classes occur in each video and objects of the same class have similar feature vectors the design of focus is based on these characteristics we have analyzed hours of video from six video streams each the six video stream span across traffic cameras surveillance cameras and news channels provides the details we detect the objects in each frame of these videos using background subtraction and classify each object with the cnn among the supported object classes in this paper we use results from the costly cnn as ground truth excluding large portions of videos we find considerable potential to avoid processing large portions of videos at significant portions of video streams either have no objects at all as in a garage camera at night or the objects are stationary like parked cars we find that in our video sets to of the frames fall in these categories therefore queries to any object class would benefit from filters applied to exclude these portions of the videos even among the frames that do contain objects not all of them are relevant to a query because each query only looks for a specific class of objects in our video sets an object class occurs in only of the frames on average and even the most frequent object classes occur in no more than of the frames in the different videos this is because while there are usually some dominant classes cars in a traffic camera people in a news channel most other classes are rare since queries are for specific object classes there is considerable figure architecture of an image classification cnn figure illustrates the architecture of an image classification cnn broadly almost all cnns consist of three key types of network layers convolutional and rectification layers which detect visual features from input pixels pooling layers which the input by merging neighboring pixel values and layers which provide the reasoning to classify the input object based on the outputs from previous layers the outputs of an image classification cnn are the the probabilities of all object classes and the class with the highest probability is the predicted class for the input image the output of the penultimate layer can be considered as representative features of the input image the features are a vector with lengths between and in classifier cnns it has been shown that images with similar feature vectors small euclidean distances are visually similar the high accuracy of cnns comes at a cost inferring or classifying using cnns to classify objects in images requires significant computational resources this is because the higher accuracy of cnns comes from using deeper architectures more layers to obtain better visual features for instance the winner of the imagenet competition in has been trained to classify across classes from the imagenet dataset using layers but cdf number of objects since they are specifically trained to extract visual features for classification we verify the robustness of feature vectors using the following analysis in each video for each object i we find its nearest neighbor j using feature vectors from the cheap cnn and compute the fraction of object pairs that belong to the same class this fraction is over in each of our videos which shows using feature vectors from cheap cnns can potentially help 
identify duplicate objects of objects auburn lausanne cnn jackson hole sittard msnbc percentage of classes figure cdf of frequency of object classes the is the fraction of classes out of the recognized by truncated to overview of focus the goal of focus is to index live video streams by the object classes occurring in them and enable answering queries later on the stored videos of the form find all frames that contain objects of class optionally the query can be restricted to a subset of cameras and a time range such a query formulation is the basis for many widespread applications and could be used either on its own such as for detecting all cars or bicycles in the video or used as a basis for further processing finding all collisions between cars and bicycles focus is designed to work with a wide variety of current and future cnns at system configuration time the user system administrator provides a cnn which serves as the accuracy baseline for focus but is far too costly to run on every video frame through a sequence of techniques focus provides nearlycomparable accuracy but at greatly reduced cost by default and throughout this paper we use the image classifier as the because the acceptable target accuracy is applicationdependent focus permits the user to specify the target while providing reasonable defaults accuracy is specified in terms of precision fraction of frames output by the query that actually contain an object of class x according to and recall fraction of frames that contain objects of class x according to that were actually returned by the query the lower the target the greater the provided by focus even for high targets such as focus is able to achieve or more cost savings figure presents the design of focus at left part of figure focus classifies objects in the incoming video frames and extracts their feature vectors to make this step cheap it uses a highly compressed and specialized version of the model in figure focus then clusters objects based on their feature vectors and assign to each cluster the top k most likely classes these objects belong to based on classification confidence of the ingest cnn it creates a index which maps each class to the set of object clusters the index is the output of focus processing of videos at right part of figure when the user tial in indexing frames by the classes of objects limited set of object classes in each video we next focus on the classes of objects that occur in each of the videos and the disparity in frequency among them most video streams have a limited set of objects because each video has its own context traffic cameras can have automobiles pedestrians or bikes but not airplanes it is rare that a video stream contains objects of all the classes recognized by classifier cnns figure shows the cumulative distribution function cdf of the frequency of object classes in our videos as classified by we make two observations first objects of only not graphed of the object classes occur in the less busy videos auburn jackson hole lausanne and sittard even in the busier videos cnn and msnbc objects of only of the classes appear also there is little overlap between the classes of objects among the different videos on average the jaccard indexes intersection over union between the videos based on their object classes is only second even among the object classes that do occur a small fraction of classes disproportionately dominate figure shows that of the most frequent object classes cover of the objects in each video stream this 
suggests that for each video stream we can automatically i determine its most frequently occurring classes and ii train efficient cnns specialized for classifying these classes feature vectors for finding duplicate objects objects moving in the video often stay in the frame for several seconds for example a pedestrian might take a minute to cross a street instead of classifying each instance of the same object across the frames we would like to inexpensively find duplicate objects and only classify one of them using a cnn and apply the same label to all duplicates thus given n duplicate objects this requires only one cnn classification operation instead of comparing pixel values across frames is an obvious choice to identify duplicate objects however they turn out to be highly sensitive to even small changes in the camera s view of an object instead feature vectors extracted from the cnns are much more robust frames frames frames object feature vectors objects specialized compressed cnn cnn specialization object clusters object classes frames with objects of class x querying for class x matching clusters for x centroid objects index figure overview of focus queries for a certain class x focus retrieves the matching clusters from the index runs the centroids of the clusters through and returns all frames from the clusters whose centroids were classified by as class x the ingest index is a mapping between the object class to the clusters specifically object class hcluster idi cluster id centroid object hobjectsi in cluster hframe idsi of objects we next explain how focus key techniques keep ingest cost and query latency low while also meeting the userspecified accuracy targets cheap cnn focus makes indexing at cheap by compressing and specializing the model for each video stream i compression of cnn models uses fewer convolutional layers and other approximation techniques ii specialization of cnns uses the observation that a specific video stream contains only a small number of object classes and their appearance is more constrained than in a generic video both techniques are done automatically and together result in cnn models that are up to cheaper than ingest index the cheap cnns are less accurate their results do not often match the classifications of therefore to keep the recall high focus associates each object with the classification results of the cheap cnn instead of just its result increasing the k increases recall because the result of often falls within the cnn s results at querytime focus uses the to remove objects in this larger set that do not match the class to regain precision lost by including all the clustering similar objects a high value of k at increases the work to do at query time thereby increasing query latency to reduce this overhead focus clusters similar objects at using feature vectors from the cnn in each cluster at querytime we run only the cluster centroid through and apply the classified result from the to all objects in the cluster thus if the objects are not tightly clustered clustering can reduce precision and recall trading off ingest query costs focus automatically chooses the cheap cnn its k and specialization and clustering parameters to achieve the desired precision and recall targets these choices also help focus trade off between the work done at and for instance to save ingest work focus can select a cheaper cnn and then counteract the resultant loss in accuracy by running the expensive on more objects at query time focus chooses its parameters so 
as to offer a sharp improvement in one of the two costs for a small degradation in the other cost because the desired point is focus provides users with a choice of three options and balanced the default note that while our explanation is anchored on image classification cnns the architecture of focus is generally applicable to all existing cnns face recognition techniques that we use for cnn compression and specialization and feature extraction from the cnns are all broadly applicable to all cnns video ingest querying techniques in this section we describe the main techniques used in focus using cheap cnn models at identifying similar objects and frames to save on redundant cnn processing and specializing the cnns to the specific videos that are being analyzed describes setting parameters in focus cheap ingestion focus indexes the live videos at to reduce the latency we perform object detection on each frame typically an inexpensive operation and then will classify the extracted objects using cnns that are far cheaper than the we use these classifications to index objects by class cheap cnn as noted earlier the user provides focus with a optionally the user can also provide other classifier architectures to be used in focus search for cheap cnns such as alexnet and cheapcnn cheapcnn cheapcnn class the selection of the cheap cnn model cheapcnni and the k value for the results has a significant influence on the recall of the outputs produced lower values of k reduce recall focus will miss returning frames that contain the queried objects at the same time higher values of k increase the number of objects to classify with at query time to keep precision high and hence adds to the latency we defer to on how focus sets these parameters as they have to be jointly set with other parameters in and recall number of selected results k figure effect of k on recall for three cheap cnns the number within the parenthesis indicates how much cheaper the model is compared to our vgg which vary in their resource costs and accuracies starting from these cnns focus applies various levels of compression such as removing convolutional layers and reducing the input image resolution this results in a large set of cnn options for ingestion cheapcnnn with a wide range of costs and accuracies ingest index to keep recall high focus indexes each object using the top k object classes from cheapcnni s output instead of using just the class recall from that the output of the cnn is a list of object classes in descending order of confidence we empirically observe that the output of the expensive is often contained within the classes output by the cheap cnn for a small value of k relative to the classes recognized by the cnns figure plots the effect of k on recall on one of our video streams lausanne see the three models in the figure are and with and layers removed additionally the input images were rescaled to and pixels respectively all models were retrained on their original training data imagenet we make two observations first we observe steady increase in recall with increasing k for all three cheapcnns as the figure shows and reach recall when k k and k respectively note that all these models recognize classes so even k represents only of the possible classes second there is a between different models the cheaper they are the lower their recall with the same overall we conclude that by selecting the appropriate k focus can achieve the target recall focus creates the index of an object s classes output by cheapcnni 
at while filtering for objects of the queried class x using the index with the appropriate k will have a high recall it will have very low precision since we associate each object with k classes while it has only one true class the average precision is only thus at query time to keep the precision high focus determines the actual class of objects from the index using the expensive and only return objects that match the queried redundancy elimination at query time focus retrieves the objects likely matching the class from the index and infers their actual class using the this would ensure precision of but could cause significant latency at query time even if this inference is parallelized across many gpus it would still incur a large cost focus uses the following observation to reduce this cost if two objects are visually similar their feature vectors would be closely aligned and they would likely be classified as the same class cars by the model focus clusters objects that are similar invokes the expensive only on the cluster centroids and assigns the centroid s label to all objects in each cluster doing so dramatically reduces the work done by the gtcnn classifier at query time focus uses the feature vector output by the layer of the cheap ingest cnn see for clustering note that focus clusters the objects in the frames and not the frames as a whole the key questions regarding clustering are how do we cluster algorithm and when do we cluster system we discuss both these key questions below clustering heuristic we require two properties in our clustering technique first given the high volume of video data it should be a algorithm to keep the overhead low as the complexities of most clustering algorithms are quadratic second it should make no assumption on the number of clusters and adapt to outliers in data points on the fly given these requirements we use the following simple approach for incremental clustering which has been in the literature we put the first object into the first cluster to cluster a new object i with a feature vector fi we assign it to the closest cluster c j if c j is at most distance t away from fi however if none of the clusters are within a distance t we create a new cluster with centroid at fi where t is a distance threshold we measure distance as the norm between cluster centroid and object feature vector we keep the number of clusters at a constant m by removing the smallest ones and storing their data in the index using this algorithm we can keep growing the popular clusters such as similar cars while keeping the complexity as o mn which is linear to n the total number of objects clustering can reduce both precision and recall depending on parameter t if the centroid is classified by as the queried class x but the cluster contains another object of a different class it reduces precision if the centroid is classified as a class different than x but the cluster has an object of class x it reduces recall we discuss setting t in clustering at ingest query time focus clusters the objects at rather than at clustering at would involve storing all feature vectors loading them for objects filtered from the ingest index and then clustering them instead clustering at ingest time creates clusters right when the feature vectors are created and only stores the cluster centroids in the index this makes the latency much lower and also reduces the size of the index we observe that the ordering of indexing and clustering operations is mostly commutative in practice and has little 
impact on result accuracy we do not present these results due to space constraints we therefore use clustering due to its latency and storage benefits pixel differencing of objects while clustering primarily reduces work done at number of objects to be classified by the focus also employs pixel differencing among objects in adjacent incoming frames to reduce ingest cost specifically if two objects have very similar pixel values it only runs the cheap cnn on one of them and assign them both to the same cluster in our index curacy on video streams while removing of the convolutional layers and making the input image smaller in resolution this leads to the specialized cheapcnni being cheaper than even the generic cheapcnni since the specialized cnn classifies across fewer classes they are more accurate which allows focus to select a much smaller k for the ingest index to meet the desired recall we find that specialized models can use k or much smaller than the typical k for the generic cheap cnns figure smaller k directly translates to fewer objects that have to be classified by at query time thus reducing latency model retraining on each video stream focus periodically obtains a small sample of video frames and classifies their objects using to estimate the ground truth of distribution of object classes for the video similar to figure from this distribution focus selects the most frequently occurring ls object classes to retrain new specialized models as we saw in there is usually a power law in the distribution of classes just a handful of classes account for a dominant majority of the objects thus low values of ls usually specialization is also based off a family of cnn architectures such as resnet alexnet and vgg with different number of convolution layers similar to specialization adds to the set of options available for ingest cnns cheapcnnn in and focus picks the best model cheapcnni and the corresponding k for the index other class while focus specializes the cnn towards the most frequently occurring ls classes we also want to support querying of the less frequent classes for this purpose focus includes an additional class called other in the specialized being classified as other simply means not being one of the ls classes at query time if the queried class is among the other classes of the ingest cnn s index focus extracts all the clusters that match the other class and classifies their centroids through the model the parameter ls for each stream exposes the following using a small ls allows us to train a simpler model with cheaper ingest cost and lower latency for the popular classes however it also leads to a larger fraction of objects falling in the other class querying for them will be expensive because all those objects will have to be classified by the using a larger value of ls on the other hand leads to a more expensive ingest and models but cheaper querying for the other classes we select ls next in specialization of cnns recall from that focus uses a cheap cnn cheapcnni to index object classes focus further reduces its cost by specializing the cnn model to each video stream model specialization benefits from two properties of objects in each video stream first while object classification models are trained to differentiate between thousands of object classes many video streams contain only a small number of classes second objects in a specific stream are often visually more constrained than objects in general say compared to the imagenet dataset the cars and buses that 
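The class-selection and rebalancing step of per-stream specialization described above can be sketched as follows. The function name, the OTHER label, and sampling with replacement for rebalancing are illustrative assumptions; the text only states that the Ls most frequent classes (estimated from a small ground-truth-labelled sample) are retained, the remaining classes collapse into a single other class, and the retraining data is adjusted to contain an equal number of objects per class.

import random
from collections import Counter, defaultdict

def build_specialized_classes(sampled_objects, ls, other_label="OTHER"):
    """sampled_objects: list of (object_image, gt_class) pairs labelled by the
    expensive ground-truth CNN on a small sample of frames. Returns a rebalanced
    retraining set and the set of retained classes. Sketch only."""
    counts = Counter(cls for _, cls in sampled_objects)
    retained = {cls for cls, _ in counts.most_common(ls)}

    by_class = defaultdict(list)
    for img, cls in sampled_objects:
        by_class[cls if cls in retained else other_label].append(img)

    # rebalance: draw the same number of objects per class (with replacement),
    # since the OTHER class would otherwise be heavily under-represented
    quota = max(len(imgs) for imgs in by_class.values())
    train_set = [(random.choice(imgs), cls)
                 for cls, imgs in by_class.items()
                 for _ in range(quota)]
    return train_set, retained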
occur in a specific traffic camera have much less variability they have very similar angle distortion and size than a generic set of vehicles instead of trying to differentiate among thousands of object classes differentiating among just say fifty classes and in a specific camera s video is a much simpler task requiring simpler image features and smaller image resolutions as a result the specialized models are smaller and more accurate for example by retraining a cheapcnni we can achieve similar specialized cnns can be retrained quickly on a small dataset retraining is relatively infrequent and done once every few days since there will be considerably fewer objects in the video belonging to the other class we proportionally the training data to contain equal number of objects of all the classes normalized query latency balancing accuracy latency and cost focus goals of high accuracy low ingest cost and low query latency are impacted by the parameters in focus techniques k the number of top results from the ingesttime cnn to index an object ls the number of popular object classes we use to create a specialized model cheapcnni the specialized cheap cnn and t the distance threshold for clustering objects the effect of these four parameters is intertwined all the four parameters impact ingest cost query latency and recall but only t impacts precision this is because we apply the cluster centroid s classification by to all the objects in its cluster thus if the clustering is not tight high value of t we lose precision parameter selection focus selects parameter values per video stream it samples a representative fraction of frames of the video stream and classifies them using for the ground truth for each combination of parameter values focus computes the expected precision and recall using the ground truths generated by that would be achieved for each of the object classes to navigate the combinatorial space of options for these parameters we adopt a approach in the first step focus chooses cheapcnni ls and k using only the recall target in the next step focus iterates through the values of t the clustering distance threshold and only select values that meet the precision target trading off ingest cost and query latency among the combination of values that meet the precision and recall targets the selection is based on balancing the and costs for example picking a model cheapcnni that is more accurate will have higher ingest cost but lower query cost because we can use a lower using a less accurate cheapcnni will have the opposite effect focus identifies intelligent defaults that sharply improve one of the two costs for a small worsening of the other cost figure illustrates the parameter selection based on the ingest cost and query latency for one of our video streams the figure plots all the viable configurations set of parameters that meet the precision and recall target based on their ingest cost cost of cheapcnni and query latency the number of clusters according to k ls t we first draw the pareto boundary which is the set of configurations that can not improve one metric without worsening the other focus can discard all the other configurations because at least one point on the pareto boundary is better than them in both metrics focus balances between the ingest cost and query latency balance in figure by selecting the configuration that minimizes the sum of ingest and query cost measured in total gpu cycles focus also allows for other configurations based on the application s preferences 
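A sketch of the Pareto-boundary computation over viable configurations and the final selection described above. The dictionary keys and the policy labels ("balance", "opt-ingest", "opt-query") are illustrative names; the text specifies only that the default choice minimizes the sum of ingest and query cost measured in total GPU cycles, with ingest-minimizing and latency-minimizing selections as alternatives.

def pareto_boundary(configs):
    """configs: parameter settings (cheap CNN, k, Ls, T) that already meet the
    precision and recall targets, each annotated with 'ingest_cost' and
    'query_latency' in GPU cycles. Keeps the settings that no other setting
    beats on both metrics. Quadratic scan; fine for a small configuration space."""
    def dominated(c):
        return any(o is not c and
                   o['ingest_cost'] <= c['ingest_cost'] and
                   o['query_latency'] <= c['query_latency'] and
                   (o['ingest_cost'] < c['ingest_cost'] or
                    o['query_latency'] < c['query_latency'])
                   for o in configs)
    return [c for c in configs if not dominated(c)]

def choose_config(configs, policy="balance"):
    frontier = pareto_boundary(configs)
    if policy == "opt-ingest":      # cheapest ingest, for streams that are rarely queried
        key = lambda c: c['ingest_cost']
    elif policy == "opt-query":     # lowest query latency, even at a heavy ingest cost
        key = lambda c: c['query_latency']
    else:                           # balance: minimize total GPU cycles spent at ingest and query
        key = lambda c: c['ingest_cost'] + c['query_latency']
    return min(frontier, key=key)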
and query rates balance normalized ingest cost figure parameter selection based on trading off ingest cost and query latency the ingest cost is normalized to ingesting all objects with while the query latency is normalized to the time for querying all objects with the dashed line is the pareto boundary minimizes the ingest cost and is applicable when the application expects most of the video streams to not get queried such as a surveillance cameras as this policy also minimizes the amount of wasted ingest work on the other hand minimizes query latency even if it incurs a heavy ingest cost such flexibility allows focus to fit different applications implementation details we describe the key aspects in focus s implementation worker processes focus s work is distributed across many machines with each machine running one worker process for each video stream s ingestion the ingest worker receives the live video stream and extracts the moving objects using background subtraction it is extensible to plug in any other object detector the detected objects are sent to the cnn to infer the classes and the feature vectors the ingest worker uses the features to cluster objects in its video stream and stores the index in mongodb for efficient retrieval at worker processes also serve queries by fetching the relevant frames off the index database and classifying the objects with we parallelize a query s work across many worker processes if resources are idle gpus for cnn classification the cheap cnns and gtcnn execute on gpus or other hardware accelerators for cnns which could either be local on the same machine as the worker processes or disaggregated on a remote cluster this detail is abstracted away from our worker process and it seamlessly works with both designs dynamically adjusting k at as an enhanced technique we can select a new kx k at query time and only extract clusters where class x appears among the classes this will result in fewer clusters and thus also lower latency this technique is useful in two scenarios some classes might be very accurately classified by the cheap cnn using a lower kx will still meet the accuracy yet will result in much lower latency if we want to retrieve only some objects of class x we can use very low kx to quickly retrieve them if more objects are required we can increase table video dataset characteristics kx to extract a new batch of results type description a commercial area intersection in the city of auburn a residential area intersection al usa in the city of auburn a downtown intersection in usa city traffic a residential area intersection usa in city a camera in the city bend or usa of bend a busy intersection town jacksonh wy usa square in jackson hole a video stream rotates among vt usa cameras in a shopping mall church street marketplace a pedestrian plazalatency place de surveillance lausanne switzerland la palud in lausanne a bookshop street in the oxford england university of oxford sittard netherlands a market square in sittard news channel cnn usa news news channel foxnews usa news channel msnbc usa evaluation we evaluate the focus prototype with more than hours of videos from real video streams that span across traffic cameras surveillance cameras and news channels our highlights are on average focus is simultaneously up to cheaper than the baseline in its gpu consumption and up to faster than the baseline in query latency all the while achieving at least precision and recall focus provides a rich space between ingest cost and query latency among 
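A sketch of the query-time path described above: retrieve candidate clusters from the ingest index, optionally tighten the match to the top-kx ingest-time classes, and confirm each cluster centroid with the expensive ground-truth CNN before returning the cluster's members. The index layout (centroid, member object ids, ranked ingest-time classes per cluster) and the function names are assumptions made for illustration.

def query_class(index, gt_cnn, queried_class, kx=None):
    """index: maps a cluster id to (centroid, member_object_ids, topk_classes),
    where topk_classes are the ranked classes assigned by the cheap ingest CNN.
    gt_cnn: the expensive ground-truth model, applied only to centroids.
    kx (<= k) optionally restricts matching to the top-kx classes, trading some
    recall for lower latency as described above. Sketch only."""
    results = []
    for centroid, members, topk in index.values():
        ranked = topk if kx is None else topk[:kx]
        if queried_class not in ranked:
            continue                      # filtered out by the cheap ingest-time index
        if gt_cnn(centroid) == queried_class:
            results.extend(members)       # the centroid's label is applied to the whole cluster
    return results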
the video streams the ingest cost is up to cheaper than the ingestall baseline and reduces query latency by if optimizing for ingest the query latency is reduced by up to with cheaper ingest if optimizing for query latency focus is effective under broad conditions such as high accuracy targets one savings even for accuracy target and various frame sampling rates fps name location al usa in a segment of video if the reports such class in of the frames in that segment we use this criteria as our ground truth because our sometimes gives different answers to the exact same object in consecutive frames and this criteria can effectively eliminate these random erroneous results we set our default accuracy target as recall and precision we also evaluate the results with other accuracy targets such as and note that in most practical cases only one of the two metrics recall or accuracy needs to be high for example an investigator cares about high recall and looking through some irrelevant results is an acceptable by setting both targets high we are lower bounding the performance improvements that focus can achieve setup software tools we use opencv to decode the videos into frames and then use the background subtraction algorithm in opencv to extract moving objects from video frames we use background subtraction instead of object detector cnns or faster to detect objects because running background subtraction is orders of magnitude faster than running these cnns and background subtraction can detect moving objects more reliably while object detector cnns usually have difficulties on small objects nonetheless our system can seamlessly use object detector cnns as well we run and train cnns with microsoft cognitive toolkit an deep learning system video datasets we evaluate live video streams that span across traffic cameras surveillance cameras and news channels we evaluate each video stream for hours which evenly cover day time and night time table summarizes the video characteristics by default we evaluate each video at fps and also evaluate the sensitivity to other frame rates in some figures we only show a representative sample of cameras to improve legibility accuracy target we use a cnn as our cnn we evaluate all extracted objects with the and use the results as the correct answers we define a class present baselines we use two baselines for comparisons the baseline system that uses to analyze all objects at ingest time and stores the inverted index for query and the baseline system that simply extracts objects at ingest time and uses to analyze all the objects that fall into the query interval at query time note that we strengthen both baselines with basic motion detection background subtraction therefore the baselines do not run any on the frames that have no moving objects note that not running gtcnn on frames with no moving objects is one of the core techniques in the recent noscope work metrics we use two performance metrics the first metric is ingest cost which is the gpu time to ingest each video the second metric is query latency which is the latency for an object class query specifically for each video stream we evaluate all dominant object classes and take the average of their latencies querying for other classes is much cheaper than querying popular classes and would skew the results because there are far more such classes thus we focus on the video streams are obtained from real and operational traffic cameras in a city we mask the city name for anonymity traffic surveillance news avg 
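A sketch of the ingest-time object extraction described above, using OpenCV background subtraction. The MOG2 mixture-of-Gaussians subtractor, the OpenCV 4.x findContours signature, the shadow threshold, and the minimum blob area are illustrative choices rather than the paper's exact settings.

import cv2

def extract_moving_objects(video_path, min_area=500):
    """Yield cropped moving objects from a video using background subtraction.
    Only these crops are passed to the cheap ingest-time CNN; frames with no
    moving objects trigger no CNN work at all."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # drop shadow pixels (value 127 in MOG2 masks) and noise, keep foreground
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) >= min_area:
                x, y, w, h = cv2.boundingRect(c)
                yield frame[y:y + h, x:x + w]   # cropped moving object
    cap.release()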
surveillance msnbc cnn foxnews sittard oxford and jacksonh normal intersections or roads and bend rotating cameras busy plazas lausanne and sittard a university street oxford and different news channels cnn foxnews and msnbc among these videos the gains in query latency are smaller for relatively less busy videos bend lausanne and oxford this is because these videos are dominated by fewer object classes and focus has more work analysis using to do at query time for these classes we conclude that the core techniques of focus are general and effective on a variety of videos msnbc lausanne jacksonh bend cnn sittard oxford lausanne jacksonh faster than by factor traffic bend foxnews cheaper than by factor news effect of different focus components figure shows the breakdown of cost and query latency across different design points of focus compressed model which applies a generic compressed model for indexing at ingest time compressed specialized model which uses a specialized and compressed model for indexing and compressed specialized model clustering which adds clustering at ingest time to reduce redundant work at query time all of the above include the index and using at and achieve the same accuracy of three main observations are in order first generic compressed models provide benefits for both ingest cost and query latency but they are not the major source of improvement this is because the accuracy of a generic compressed model degrades significantly when we remove convolutional layers in order to retain the accuracy target we need to choose relatively expensive compressed models cheapcnni and a larger k which incur higher ingest cost and query latency second specializing the model in addition to compressing it greatly reduces ingest cost and query latency because of fewer convolutional layers and smaller input resolution our specialized models are to cheaper than the while retaining the accuracy target for each video streams running a specialized model at ingest time speeds up query latency by to figure third clustering is a very effective technique to further reduce query latency with unnoticeable costs at ingest time as figure shows using clustering on top of a specialized compressed model reduces the query latency by up to significantly better than just running a specialized model at ingest time this gain comes with a negligible cost figure because we run our clustering algorithm on the cpus of the ingest machine which is fully pipelined with the gpus that run the specialized cnn model avg figure top focus ingest cost compared to bottom focus query latency compared to the popular ones both metrics include only gpu time spent classifying images and exclude other cpu time spent decoding video frames detecting moving objects recording and loading video and reading and writing to the index we focus solely on gpu time because when the gpu is involved it is the bottleneck resource the query latency of is and the ingest cost of is experiment platform we run the experiments on our local cluster each machine in the cluster is equipped with a gpu nvidia gtx titan x intel xeon cpu gb ram a gbe nic and runs ubuntu lts performance we first show the performance of focus by showing its ingest cost and query latency when focus aims to balance these two metrics figure compares the ingest cost of focus with and the query latency of focus with we make two main observations first focus significantly improves query latency with a very small ingest cost focus makes queries by an average of faster than 
with a very small cost at ingest time an average of cheaper than with a cluster the query latency on a video goes down from one hour to less than two minutes the processing cost of each video stream also goes down from to this shows that focus can strike a very good balance between these two competing goals very effectively second focus is effective across different video streams with various characteristics it makes queries to faster with a very small ingest time cost to cheaper across busy intersections ingest cost query latency one of the interesting features of focus is the flexibility to tune its system parameters to achieve different application a ingest cost cnn jacksonh lausanne sittard ingest cheaper by query faster by improvements factor compressed model specialized model clustering jacksonh lausanne sittard cnn foxnews msnbc avg jacksonh lausanne sittard cnn foxnews msnbc avg faster than by factor cheaper than by factor compressed model specialized model clustering foxnews msnbc figure between ingest cost and query latency three higher targets and as the figures show with higher accuracy targets the ingest costs are about the same and the improvement of query latency decreases focus keeps the ingest cost similar to cheaper than the baseline because it still runs the specialized and compressed cnn at ingest time however when the accuracy targets are higher focus needs to select more classification results which increases the work at query time on average the query latency of focus is faster than by and with respect to and accuracy targets we conclude that the techniques of focus can achieve higher accuracy targets with significant improvements on both ingest cost and query latency b query latency figure effect of different focus components ingest cheaper by factor goals figure from depicted three alternative settings for focus that illustrate the space between ingest cost and query latency using the video stream which optimizes for query latency by increasing ingest cost which is the default option that balances these two metrics and which is the opposite of the results are shown relative to the two baselines the chart at the right of the figure is the region that covers the three settings of focus and each data label i q indicates its ingest cost is cheaper than while its query latency is faster than as figure shows focus offers very good options in the space between ingest cost and query latency achieves cheaper cost than ingestall to ingest the video stream and makes the query faster than doing nothing at ingest on the other hand reduces query latency by with a relatively higher ingest cost but it is still cheaper than as they are all good options compared to the baselines such flexibility allows a user to tailor focus for different contexts for example a traffic camera that requires fast turnaround time for queries can use while a surveillance video stream that will be queried very rarely would choose to reduce the amount of wasted ingest cost figure shows the i q values for both and for the representative videos as the figure show the flexibility exists among all the other videos on average spends only cheaper ingest cost to provide query latency reduction on the other hand makes queries faster with a higher ingest cost cheaper than we conclude that focus provides good flexibility between ingest cost and query latency and makes it a better fit in different contexts query faster by factor figure ingest cost sensitivity to accuracy target figure query latency sensitivity to 
accuracy target sensitivity to frame sampling a common approach to reduce the video processing time is to use frame sampling periodically select a frame to process however not all applications can use frame sampling because it can miss objects that show up and disappear within a frame sampling window as the frame sampling rate is an application dependent choice we study the sensitivity of focus s performance to different frame rates figures and show the ingest cost and query latency of focus at different frame rates fps fps fps and fps compared to and respectively we make two observations first the ingest cost reduction is roughly the same across the different frame rates on average the ingest sensitivity to accuracy target figures and illustrate the improvements of ingest cost and query latency of focus compared to the baselines under different accuracy targets other than the default accuracy target recall and precision we evaluate ingest cheaper by factor related work to our best knowledge focus is the first system that offers and video queries by balancing between cost and query latency we now discuss work related to our key techniques cascaded classification various works in vision research propose speeding up classification by cascading a series of classifiers viola et al is the earliest work which cascades a series of classifiers from the simplest to the most complicated to quickly disregard regions in an image many improvements followed cnns are also cascaded to reduce object detection latency our work is different in two major ways first we decouple the compressed cnn from the which allows us to choose from a wider range for cnns and allows for better between ingest cost and query latency a key aspect of our work second we cluster similar objects using cnn features to eliminate redundant work which is a new and effective technique for video streams neural network compression recent work proposes various techniques to reduce the running time of cnns these techniques include shallow models predicting weights matrix pruning model quantization and others our work is largely orthogonal to these in that our system is not tied to a specific model compression technique and we can employ any of these techniques model specialization contextspecific specialization of models can improve accuracy or reduce running time among these the closest to our work is kang et s proposal noscope which aims to optimize video queries a few key differences stand out first noscope applies all the optimizations at while focus adopts a different architecture by splitting work between and thus focus trades off higher ingest cost for even lower query latency second noscope optimizes cnns for a single class while we optimize ingest cnns for all frequent classes in the stream and allow queries even for the rare other classes finally we use the object feature vectors to cluster similar objects and create an index to map classes to clusters this allows us to efficiently query across all classes while noscope has to redo all work including training specialized cnns for each query stream processing systems systems for general stream data processing and specific to video analytics mainly focus on the general stream processing challenges such as load shedding fault tolerance distributed execution or limited network bandwidth in contrast our work is specific for querying on recorded video data with ingest and query thus it is mostly orthogonal to these query faster by factor figure ingest cost sensitivity to frame sampling 
figure query latency sensitivity to frame sampling cost of focus is cheaper than at fps and it is to cheaper at lower frame rates this is because the major ingest cost saving comes from the specialized and compressed cnn models which are orthogonal to frame sampling rates second the query latency improvement of focus degrades with lower frame rates this is expected because one of our key techniques to reduce query latency is redundancy elimination especially clustering similar objects using cnn feature vectors at lower frame rates the benefit of this technique reduces because there are fewer redundancies nonetheless on average focus is still one order of magnitude faster than at a very low frame rate fps applicability with different query rate there are two factors that can affect the applicability of focus the number of classes that get queried over time and the fraction of videos that get queried in the first extreme case where all the classes and all the videos are queried could be a good option because its cost is amortized among all the queries in our study even in such an extreme case the overall cost of focus is still cheaper than on average up to cheaper because we run a very cheap cnn at ingest time and we run per object cluster only once so the overall cost is still cheaper than the second extreme case is only a tiny fraction of videos gets queried while focus can save the ingest cost by up to it can be more costly than if the fraction of videos gets queried is less than in such a case we can choose to do nothing at ingest time and run all the techniques of focus only at query time when we know the fraction of videos that get queried while this approach increases query latency it still reduces the query latency by a average of up to than in our evaluation we conclude that focus is still better than both baselines even under extreme query rates we can integrate focus with one of these general stream processing system to build a more fault tolerable system video indexing and retrieval a large body of works in multimedia and information retrieval research propose various video indexing and retrieval techniques to facilitate queries on videos among them most works focus on indexing videos for different types of queries such as shot boundary detection semantic video search video classification or video retrieval some works focus on the query interface to enable query by keywords concepts or examples these works are largely orthogonal to our work because we focus on the cost and latency of video queries not query types or interfaces we believe our idea of splitting and work is generic for videos queries and can be extended to different types of queries city of auburn north ross st and east magnolia online available https cjuskmmylla city of auburn toomer s corner online available https fozami genetec https greenwood avenue bend online available https jackson hole wyoming usa town online available https online available http lausanne place de la online available https online available https nvidia tesla online available http opencv online available http oxford martin school webcam broad street online available https top video surveillance trends for online available https wikipedia pareto online available https abadi ahmad balazinska cherniack hwang lindner maskey rasin ryvkina tatbul xing and zdonik the design of the borealis stream processing engine in cidr amini andrade bhagwan eskesen king selo y park and venkatramani spc a distributed scalable platform for data mining in anwar 
hwang and sung fixed point optimization of deep convolutional neural networks for object recognition in icassp ba and caruana do deep nets really need to be deep in nips babenko and lempitsky aggregating deep convolutional features for image retrieval in iccv babenko slesarev chigorin and lempitsky neural codes for image retrieval in eccv bailis gan madden narayanan rong and suri macrobase prioritizing attention in fast data in sigmod brezeale and cook automatic video classification a survey of the literature ieee trans systems man and cybernetics part cai saberian and vasconcelos learning cascades for deep pedestrian detection in iccv cao ester qian and zhou clustering over an evolving data stream with noise in siam international conference on data mining carney cherniack convey lee seidman stonebraker tatbul and b conclusion answering queries of the form find me frames that contain objects of class x is an important workload on recorded video datasets such queries are used by analysts and investigators and it is crucial to answer them with low latency and low cost we present focus a system that performs low cost analytics on live video that later facilitates queries on the recorded videos focus uses compressed and specialized cnns at that substantially reduces cost it also clusters similar objects to reduce the work done at and hence the latency focus selects the cnn and its parameters to smartly between the ingesttime cost and latency our evaluations using hours of video from traffic surveillance and news domains show that focus reduces gpu consumption by and makes queries faster compared to current baselines we conclude that focus is a promising approach to querying large video datasets we hope that focus will enable future works on better determining the and in video querying systems our next steps include training a specialized and highly accurate cnn for each stream and object to further reduce query latency references apache online available http avigilon http church street market online available https city cam webcamsittard town square sittard nl online available https zdonik monitoring streams a new class of data management applications in vldb chandrasekaran cooper deshpande franklin hellerstein hong krishnamurthy madden reiss and shah telegraphcq continuous dataflow processing in sigmod chang ma and smeulders recent advances and challenges of semantic search in icassp chen wilson tyree weinberger and chen compressing neural networks with the hashing trick corr vol christel and conescu mining novice user activity with trecvid interactive retrieval tasks in civr denil shakibi dinh ranzato and de freitas predicting parameters in deep learning in nips denton zaremba bruna lecun and fergus exploiting linear structure within convolutional networks for efficient evaluation in nips han shen philipose agarwal wolman and krishnamurthy mcdnn an approximationbased execution framework for deep stream processing under resource constraints in mobisys han mao and dally deep compression compressing deep neural network with pruning trained quantization and huffman coding in iclr han j pool tran and dally learning both weights and connections for efficient neural network in nips he zhang ren and j sun deep residual learning for image recognition in cvpr hinton vinyals and j dean distilling the knowledge in a neural network corr vol hu xie li zeng and maybank a survey on visual video indexing and retrieval ieee trans systems man and cybernetics part hwang and sung feedforward deep neural network 
design using weights and in sips jaderberg vedaldi and zisserman speeding up convolutional neural networks with low rank expansions corr vol kaewtrakulpong and bowden an improved adaptive background mixture model for tracking with shadow detection in avss kang emmons abuzaid bailis and zaharia noscope optimizing deep queries over video streams at scale pvldb krizhevsky sutskever and hinton imagenet classification with deep convolutional neural networks in nips lawrence giles tsoi and back face recognition a convolutional approach ieee trans neural networks lecun boser denker henderson howard hubbard and jackel backpropagation applied to handwritten zip code recognition neural computation lew sebe djeraba and jain contentbased multimedia information retrieval state of the art and challenges tomccap li lin shen brandt and hua a convolutional neural network cascade for face detection in cvpr lienhart and maydt an extended set of features for rapid object detection in icip lin fan qian xu yang zhou and zhou streamscope continuous reliable distributed processing of big data streams in nsdi liu anguelov erhan szegedy reed fu and berg ssd single shot multibox detector in eccv mhalla chateau gazzah and amara faster scene specialization with a sequential framework in dicta microsoft microsoft cognitive online available https o callaghan mishra meyerson and guha algorithms for clustering in icde rabkin arye sen pai and freedman aggregation and degradation in jetstream streaming analytics in the wide area in nsdi rastegari ordonez redmon and farhadi imagenet classification using binary convolutional neural networks in eccv razavian azizpour sullivan and carlsson cnn features an astounding baseline for recognition in cvpr workshops redmon and farhadi better faster stronger corr vol ren he girshick and j sun faster towards object detection with region proposal networks in nips ren singh singh and zhu on video retrieval pattern recognition romero ballas kahou chassang gatta and bengio fitnets hints for thin deep nets corr vol russakovsky deng su krause satheesh ma huang karpathy khosla bernstein berg and imagenet large scale visual recognition challenge ijcv schroff kalenichenko and philbin facenet a unified embedding for face recognition and clustering in cvpr shen han philipose and krishnamurthy fast video classification via adaptive cascading of deep models in cvpr simonyan and zisserman very deep convolutional networks for image recognition in iclr snoek van de sande de rooij huurnink gavves odijk de rijke gevers worring koelma and smeulders the mediamill trecvid semantic video search engine in trecvid workshop participants notebook papers snoek and worring multimodal video indexing a review of the multimedia tools appl snoek and worring video retrieval foundations and trends in information retrieval y sun wang and tang deep convolutional network cascade for facial point detection in cvpr szegedy liu jia sermanet reed anguelov erhan vanhoucke and rabinovich going deeper with convolutions in cvpr tan steinbach and kumar introduction to data mining first edition boston ma usa longman publishing tatbul and zdonik staying fit efficient load shedding techniques for distributed stream processing in vldb tu liu prabhakar and yao load shedding in stream databases a approach in vldb viola and jones rapid object detection using a boosted cascade of simple features in cvpr xu kusner weinberger and chen tree of classifiers in icml yang ling chai and pan sensitive classification on data with missing values ieee 
trans knowl data yuan wang xiao zheng li lin and zhang a formal study of shot boundary detection ieee trans circuits syst video techn zaharia das li hunter shenker and stoica discretized streams streaming computation at scale in sosp zhang ananthanarayanan philipose bahl and freedman live video analytics at scale with approximation and in nsdi zivkovic improved adaptive gaussian mixture model for background subtraction in icpr
1
jan subspace perspective on canonical correlation analysis dimension reduction and minimax rates zhuang ma and xiaodong li abstract canonical correlation analysis cca is a fundamental statistical tool for exploring the correlation structure between two sets of random variables in this paper motivated by the recent success of applying cca to learn low dimensional representations of high dimensional objects we propose two losses based on the principal angles between the model spaces spanned by the sample canonical variates and their population correspondents respectively we further characterize the error bounds for the estimation risks under the proposed error metrics which reveal how the performance of sample cca depends adaptively on key quantities including the dimensions the sample size the condition number of the covariance matrices and particularly the population canonical correlation coefficients the optimality of our uniform upper bounds is also justified by analysis based on stringent and localized parameter spaces to the best of our knowledge for the first time our paper separates and for the first order term in the upper bounds without assuming the residual correlations are zeros more significantly our paper derives q for the first time in the nonasymptotic cca estimation convergence rates which is essential to understand the behavior of cca when the leading canonical correlation coefficients are close to introduction canonical correlation analysis cca first introduced by hotelling is a fundamental statistical tool to characterize the relationship between two groups of random variables and finds a wide range of applications across many different fields for example in association study gwas cca is used to discover the genetic associations between the genotype data of single nucleotide polymorphisms snps and the phenotype data of gene expression levels witten et chen et in information retrieval cca is used to embed both the search space images and the query space text into a shared low dimensional latent space such that the similarity between the queries and the candidates can be quantified rasiwasia et gong et in natural language processing cca is applied to the word matrix and learns vector representations of the words which capture the semantics dhillon et faruqui and dyer other applications to name a few include fmri data analysis friman et computer vision kim et and speech recognition arora and livescu wang et the enormous empirical success motivates us to revisit the estimation problem of canonical correlation analysis two theoretical questions are naturally posed what are proper error metrics to quantify the discrepancy between population cca and its sample estimates and under such metrics what are the quantities that characterize the fundamental statistical limits the justification of loss functions in the context of cca has seldom appeared in the literature from first principles that the proper metric to quantify the estimation loss should depend on the specific purpose of using cca we find that the applications discussed above mainly fall into two categories identifying variables of interest and dimension reduction the first category mostly in genomic research witten et chen et treats one group of variables as responses and the other group of variables as covariates the goal is to discover the specific subset of the covariates that are most correlated with the responses such applications are featured by low ratio and the interpretability of the results is the major concern 
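The spectral characterization above — canonical correlations as the singular values of Sigma_x^{-1/2} Sigma_xy Sigma_y^{-1/2}, and canonical loadings as the correspondingly whitened singular vectors — can be sketched numerically as follows; the same routine yields the sample estimates discussed later once the empirical covariance blocks are plugged in. Function names are illustrative, and invertibility of both covariance blocks is assumed, as in the text.

import numpy as np

def _inv_sqrt(mat):
    """Symmetric inverse square root of a positive-definite matrix."""
    vals, vecs = np.linalg.eigh(mat)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

def cca_from_covariances(sigma_x, sigma_y, sigma_xy, k):
    """Leading-k canonical correlations and loadings from covariance blocks,
    via the SVD of T = Sigma_x^{-1/2} Sigma_xy Sigma_y^{-1/2}. Sketch only."""
    wx, wy = _inv_sqrt(sigma_x), _inv_sqrt(sigma_y)
    u, lam, vt = np.linalg.svd(wx @ sigma_xy @ wy)
    return lam[:k], wx @ u[:, :k], wy @ vt.T[:, :k]   # correlations, Phi_k, Psi_k

def sample_cca(x, y, k):
    """Sample CCA: the same formula applied to empirical covariance blocks.
    x is n-by-p and y is n-by-q, with observations in the rows."""
    xc, yc = x - x.mean(0), y - y.mean(0)
    n = x.shape[0]
    return cca_from_covariances(xc.T @ xc / n, yc.T @ yc / n, xc.T @ yc / n, k)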
in contrast the second category is investigated extensively in statistical machine learning and engineering community where cca is used to learn low dimensional latent representations of complex objects such as images rasiwasia et text dhillon et and speeches arora and livescu these scenarios are usually accompanied with relatively high ratio and the prediction accuracy using the learned low dimensional embeddings as the new set of predictors is of primary interest in recent years there has been a series of publications establishing fundamental theoretical guarantees for cca to achieve sufficient dimension reduction kakade and foster foster et al sridharan and kakade fukumizu et al chaudhuri et al and many others in this paper we aim to address the problems raised above by treating cca as a tool for dimension reduction population and sample cca suppose x sj p and y sj p are two sets of variates with the joint covariance matrix x cov y xy for simplicity we assume epxi q i epyj q j on the population level cca is designed to extract the most correlated linear combinations between two sets of random variables sequentially the ith pair of canonical variables ui i x and vi y maximizes corrpui vi q such that ui and vi have unit variances and they are uncorrelated to all previous pairs of canonical variables here q is called the ith pair of canonical loadings and is the ith canonical correlation it is well known in multivariate statistical analysis that the canonical loadings can be found recursively by the following criterion q arg max subject to j j j j i although this criterion is a nonconvex optimization it can be obtained easily by spectral methods define s s and q then are singular values of and are actually left and right singular vectors of respectively canonical variables versus canonical loadings pi pi quk the for any given estimates of the leading k canonical loadings denoted by i corresponding estimates for the canonical variables can be represented by pj x pi u i pj y vpi i i to quantify the estimation loss generally speaking we can either focus on measuring the difference pi pi quk or measuring the difference between between the canonical loadings quki and i pi vpi quk here x y in the definition of tpui vi quk the canonical variables tpui vi quki and tpu i i pi pi quk are constructed pi vpi quk are independent of the samples based on which and tpu i i therefore for the discrepancy between the canonical variables there is an extra layer of randomness as discussed above in modern machine learning applications such as natural language processing and information retrieval the leading sample canonical loadings are used for dimension reduction for a new observation q ideally we hope to use the corresponding values of k j k the canonical variables pui i qi and pvi qi to represent the observation in a low pj qk dimension space empirically the actual low dimensional representations are i i pj qk therefore the discrepancy between the ideal dimension reduction and and i i pi vpi quk approximate tpui vi quk actual dimension reduction should be explained by how well tpu i i consequently we choose to quantify the difference between the sample and population canonical variables instead of the canonical loadings linear span however there are still many options to quantify how well the sample canonical variables approximate their population correspondents to choose suitable losses it is convenient to come back to specific applications to get some inspiration motivated by applications in natural 
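Given the loadings, the two subspace losses above can be evaluated through the principal angles between span(Sigma_x^{1/2} U_hat_k) and span(Sigma_x^{1/2} U_k): the cosines of those angles are the singular values of the product of orthonormal bases of the two subspaces. The sketch below assumes L_ave is the average and L_max the largest squared sine of the principal angles, up to the normalization in the theorem; the helper names are illustrative.

import numpy as np

def principal_angle_losses(sigma_x, u_hat_k, u_k):
    """Average and largest squared sine of the principal angles between the two
    whitened k-dimensional model spaces. Sketch only."""
    def orthonormal_basis(m):
        q, _ = np.linalg.qr(m)          # reduced QR gives an orthonormal basis of the column span
        return q
    vals, vecs = np.linalg.eigh(sigma_x)
    sqrt_sigma = vecs @ np.diag(np.sqrt(vals)) @ vecs.T   # Sigma_x^{1/2}
    q_hat = orthonormal_basis(sqrt_sigma @ u_hat_k)
    q = orthonormal_basis(sqrt_sigma @ u_k)
    # cosines of the principal angles are the singular values of q_hat^T q
    cosines = np.clip(np.linalg.svd(q_hat.T @ q, compute_uv=False), 0.0, 1.0)
    sin2 = 1.0 - cosines ** 2
    # equivalently, ||P_hat - P||_F^2 = 2 * sin2.sum() for equal-dimensional
    # subspaces, which matches the projector form of the loss in the theorem
    return sin2.mean(), sin2.max()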
language processing and information retrieval the model of sufficient dimension reduction has been studied in foster et al roughly speaking a statistical model was proposed by foster et al to study how to predict z using two sets of predictors denoted by x sj and y sj where the joint covariance of pz x yq is x cov y xy j j z it was proven in foster et al that under certain assumptions the leading k canonical variables uk are sufficient dimension reduction for the linear prediction of z that is the best linear predictor of z based on is the same as the best linear predictor based on uk similarly the best linear predictor of z based on is the same as the best linear predictor based on vk notice that the best linear predictor is actually determined by the set of all linear combinations of uk referred to as the model space in the literature of linear regression for prediction which we denote as uk q inspired by foster et al we propose to quantify the pi uk by the discrepancy between the corresponding subspaces discrepancy between tui uki and tu i u pk q and uk q and similarly measure the difference between tvi uk and spanpu i tvpi uki by the distance between vpk q and vk q hilbert spaces and principal angles xpu kq spanpu u pk q and mpu kq in this section we define the discrepancy between m uk q by introducing a hilbert space noting that for any given sample tpxi yi quni xpu kq and mpu kq are composed by linear combinations of xp denote the set of all both m possible linear combinations as h q moreover for any p h we define a bilinear function y q q it is easy to show that is an inner product and ph is a hilbert space which is isomorphic to xpu kq and mpu kq are with the natural inner product we know both m subspaces of h so it is natural to define their discrepancy based on their principal angles in the literature of statistics and linear algebra two loss functions are usually used u pk q uk qq q lmax pspanpu and q qq k in spite of a somewhat abstract definition we have the following clean formula for these two losses u pk q uk qq lave pspanpu theorem suppose for any k matrix a pa represents the orthogonal projector onto the column span of a assume the observed sample is fixed then u pk q uk qq p lave pspanpu p k k f p x x k k k f p j q min e uj u k p k q lave k and u pk q uk qq p lmax pspanpu p k k p x x k k pj q g max min e uj u gprk p k lmax k here k s is a k matrix consisting of the leading k population canonical p k is its estimate based on a given sample moreover uj uk q and loadings for x and u pk pu uniform upper bounds and minimax rates the most important contribution of this paper is to establish sharp upper bounds for the p k q and of cca based on the proposed subspace losses lmax k p lave k k q it is noteworthy that both upper bounds hold uniformly for all invertible provided n q for some numerical constant furthermore in order to justify the sharpness of these bounds we also establish minimax lower bounds under a family of stringent and localized parameter spaces these results will be detailed in section notations and the organization throughout the paper we use and letters to represent fixed and random variables respectively we also use and bold letters to represent vectors which could be either deterministic or random and matrices respectively for any matrix u p and vector u p rp u u f denotes operator spectral norm and frobenius norm respectively u denotes the vector norm k denotes the submatrix consisting of the first k columns of u and pu stands for the projection matrix 
onto the column space of u moreover we use pu q and pu q to represent the largest and smallest singular value of u respectively and q pu q pu q to denote the condition number of the matrix we use ip for the identity matrix of dimension p and ip k for the submatrix composed of the first k columns of ip further opm nq and simply opnq when m n stands for the set of m n matrices with orthonormal columns and sp denotes the set of p p strictly positive definite matrices for a random vector x p rp spanpxj q txj w w p rp u denotes the subspace of all the linear combinations of x other notations will be specified within the corresponding context in the following we will introduce our main upper and lower bound results in section to highlight our contributions in the new loss functions and theoretical results we will compare our results to existing work in the literature in section all proofs are deferred to section theory in this section we introduce our main results on upper and lower bounds for estimating cca under the proposed loss functions it is worth recalling that are singular values of it is natural to estimate population cca by its sample counterparts similar to equation the sample canonical loadings are defined recursively by pi pi q arg max p xy p x j p y subject to p x j p y j i p x p y p xy are the sample covariance matrices the sample canonical variables are where defined as the following linear combinations by the sample canonical loadings pj x pi u i pj y vpi i i we prove the following upper bound for the estimate based on sample cca x theorem upper bound suppose n where is defined as in assume y and are invertible moreover assume for some predetermined then there exist universal positive constants c such that if n q the sample canonical p k satisfies coefficients matrix q pp p q p k q q e lmax k n n q k q p e lave k k q e n n p k can be obtained by switching and the upper bounds for since we pursue a nonasymptotic theoretical framework for cca estimates and the loss functions we propose are nonstandard in the literature the standard minimax lower bound results in parametric maximum likelihood estimates do not apply straightforwardly instead we turn to the nonparametric minimax lower bound frameworks particularly those in pca and cca see vu et al cai et al gao et al compared to these existing works the technical novelties of our results and proofs are summarized in sections and we define the parameter space k q as the collection of joint covariance matrices satisfying q and q we deliberately set q q to demonstrate that the lower bound is independent of the condition number for the rest of the paper we will use the shorthand f to represent this parameter space for simplicity theorem lower bound there exists a universal constant c independent of n and such that q k p k p k q c inf sup e lmax k n k p k q k p k p k q c inf sup e lave k n k p k p k can be obtained by replacing with the lower bounds for corollary when cplog nq and q q for some universal positive constant c the minimax rates can be characterized by q p k k p k q inf sup e lmax k q n p k k k q p k k p k q inf sup e lave k q n p k related work and our contributions recently the rate of convergence of cca has been studied by gao et al under a sparse setup and by cai and zhang under the usual setup cai and zhang appeared on arxiv almost at the same time as the first version of our paper was posted in this section we state our contributions by detailed comparison with these works novel loss funcitons we proposed new loss 
functions based on the principal angles between the subspace spanned by the population canonical variates and the subspace spanned by the estimated canonical variates in contrast gao et al proposed and studied the loss lave cai and zhang proposed lmax and studied both lave and lmax where p k q min e xj k xj p k q p k lave k qpopk kq j j p k q g p k q max p k x k x lmax k min e gprk qpopk kq lave and lmax resemble our loss functions lave and lmax respectively by theorem we also have p k q min e xj k xj p k q p k lave k p j jp p x k x k q g lmax k k q max min e k gprk by these two expressions we can easily obtain p k q p k q k lave k p k q lmax k p k q lmax k p k q and lave k p k q are not equivalent up to a constant neither are however lave k p p lmax k k q and lmax k k q in fact we can prove that as long as n q if then p k q lmax k p k q lave k p k q and lmax k p k q while almost surely lave k to illustrate this comparison very simple simulation suppose we consider the following n and and and in this setup we know the population canonical are and and the leading correlation coefficients in our simulation we generated the following data and canonical loadings are matrices x and y and p furthermore we can obtain the sample canonical correlations as well as q the leading sample canonical loadings and then lave q while lave q lmax q lmax this numerical example clearly shows that the sample cca can exactly identify that among all linear combinations of and and all linear combinations of and and are mostly correlated our loss functions lave and lmax do characterize this exact identification whereas lave and lmax do not moreover the following joint loss was studied in gao et al j j p k p k p k p k ljoint k k q e k k f p k p k almost surely under the special case similarly ljoint k k q sharper upper bounds regardless of loss functions we explain in the following why theorem implies sharper upper bounds than the existing rates in gao et al gao et al and cai and zhang under the nonsparse case our discussion is focused on lave in the following discussion while the discussion for lmax is similar notice that if we only apply wedin s law replacing the fine bound lemma with the rough bound lemma also see gao et al for similar ideas we can obtain the following rough bound p k q e lave k p k from both gao et al and in order to decouple the estimation error bound of cai and zhang assume the residual canonical correlations are zero this assumption is essential for proofs in both gao et al and cai and zhang under certain sample size conditions we got rid of this assumption by developing new proof techniques and these techniques actually work for lave lmax as well a detailed comparison between our result and that in cai and zhang is summarized in table the results of gao et al in the regime can be implied by cai and zhang under milder sample size conditions loss function sample size cai and zhang l lave q upper bound rates n q k our work lave yes no q n n q perhaps the most striking contribution of our upper bound is that we first derive the factors q and q in the literature of nonasymptotic cca estimate we now explain why these factors are essential when leading canonical correlation coefficients are close to example and consider the example that k p log n and then our bound rates q n q actually imply that q c p elave while the rates in gao et al and cai and zhang imply that q q c p elave n q our result could this shows that even under the condition under our loss lave imply sharper convergence rates than 
that in gao et al and cai and zhang if q through notice that as aforementioned when we can actually prove elave a separate argument how to improve theorem to imply this result is an open problem for future research example both and are close to consider the example that k p log n our bound rates q n b p n and q actually imply that b p then q c p elave n while the rough rates by wedin s law implies c p p elave q c n this shows that our upper bound rates could be much sharper than the rough rates when both and are close to new proof techniques and connection to asymptotic theory to the best of our knowledge none of the analysis in gao et al gao et al cai and zhang can be used to obtain the multiplicative factor q in the first order term of the upper bound even under the strong condition that following a different path we do careful perturbation analysis of the estimating equations of cca to avoid the loss of precision caused by applying matrix inequalities in the early stage of the proof the main challenge is to analyze the properties of matrix hardmard products especially to derive tight operator norm bounds for certain hardmard products we are particularly luckily to find a approach and in the proof of lemma to decompose the target matrices into matrices where we can apply the tools developed in lemma pi pi has been studied by the asymptotic distribution of the canonical loadings i anderson under the assumption that all the canonical correlations are distinct and since we focus on subspaces we only require for the given both anderson and our work are based on analyzing the estimating equations of cca our analysis is more involved because completely novel techniques are required to obtain the factor q in the nonasymptotic framework sharper lower bounds under parameter spaces with fixed and the minimax lower bounds for the estimation rates of cca were first established by gao et al under the losses ljoint and lave however the parameter space discussed in gao et al requires moreover the parameter space in gao et al is parameterized by satisfying but is not specified in fact they also constructed the hypothesis class with and the resulting minimax lower bound is proportional to however this minimax lower bound b is not sharp when and are close suppose p k and np our minimax lower bound in theorem leads to p k q inf sup e lave k p k in contrast to capture the fundamental limit of cca estimates in this scenario under the framework of gao et al one needs to choose to capture both and and hence then the resulting minimax lower bound rate will be op np q which is much looser than technically speaking we follow the analytical framework of gao et al and gao et al but the hypothesis classes construction requires any given instead of and this brings in new technical challenges more detailed technical discussions are deferred to section proof of theorem suppose the observed sample of px yq is fixed and consider the correlation between u pk q let the two subspaces of h defined in uk q and spanpu x x x q q pwk wk q be the first second and kth pair of canonical u pk variates between uk and u then wk q uk q x x p p xj y xw xi w xj y for any i j and wk q uk q and xwi wj y xwi w xi q for i varpwi q varpw xi q is actually the ith principal angle by the definition of principal angles we know w p p xi q this implies that between uk q and uk q w p k q lave k k i sin k i x wi wi u pk are linear combinations of xp we can denote since uk u p w xk q xj b w j wk q xj b and j pw p rp where b bk s b p bk s p by the 
definition of w we have ik covpwq b j b b j b p j p then b b p are p k basis matrices moreover we have bjp and similarly ik b i bj xj y for all i j moreover we have xwi w p b j p q qq covpw b j b notice that uk q wk q uk q xj k and wk q xj b then k bc x k bc for some nonsingular k k matrix this implies that b and k have the same column space since b p is a basis matrix we have bb j x similarly we have k pb p j p b p x k straightforward calculation gives pb pj b pb p j bb j b pb p jb pb pj pb p j trace bb j bb j bb j b j b f pb p j bq j b q qqq p k q q qq k and pb p jb pb p j trace ip bb j b pb p j ip bb j bb j b f pb p j bq k tracepb j b p k q klave k the above equalities yield the first two equalities in notice that both uk and wk are both orthonormal bases of uk u pk and w w xk are both orthonormal bases of spanpu u pk qq then we similarly u j j have u w r where r is a k k orthogonal matrix then min e uj q min e uj j q min e w j r j q j min e w e min i k qi prk i k i i qrj min e w j j q pwi j qi min qi prk i k k k j epwi j qi min epwi j qi qi prk notice that minqi prk epwi j qi is obtained by the best linear predictor so min epwi j qi varpwi q wi qj wi q qi prk therefore min e uj q k i p k q klave k which implies the third equality in similarly max min e gprk g j u q g max min e max min e gprk g gprk g max gprk g max gprk g max gprk g sin min e qi j w r j q rj g j w j q g min prk k j u j q g e i k k i pwi j qi i finally we prove by wedin we have pb p j ip bb j b pb p j ip bb j b j b p j ip bb j j ip bb j b p b ik q qq p k q q q lmax k which implies the the equalities in proof of upper bound throughout this proof we denote linear invariance without loss of generality we assume by the definition of canonical variables we know that up and vp are only determined by q and q in other words for any invertible p and p the canonical pairs of and are still q q therefore we can consider the following orthonormal bases p q and p q here q is an orthonormal extension of therefore we know that q q are also the the canonical pairs between and similarly for a fixed sample of the variables of x and y the sample canonical pairs p pp vpp q are also sample canonical pairs of the corresponding sample of q pu and this can be easily seen from the concept of sample and are respectively the linear combinations of canonical variables for example u and such that their corresponding sample variance are both and sample correlation is maximized if we replace q and q with and respectively and seek for the first sample canonical pair the constraints linear combinations of the two sets of variables and unit sample variances and the objective q is still the answer similarly sample correlation is maximized are the same as before so pu p p p p q q are the sample canonical pairs of and in particular they are the sample canonical pairs of and the above argument gives the following convenient fact in order to bound u pk q uk qq lave max pspanpu we can replace with in other words we can assume x and y satisfy the standard form r q s where q p moreover which implies that k ik k ik upper bound under the standard form under the standard form by and we have and u pk q uk qq q p p lave pspanpu k k f k p p lmax uk q uk qq k q p k p k denote pu p u and p l are the upper k k and lower kq k sub p k where k k k p k respectively then matrices of j p p j p p trace pi p q p q pi p q k q p p k k p k k k k k f p j p k pj p k q k q p k k q k k q k since p j p k pj p k q k k q k k q j p p l qj p k q k k k q p pl k p k q p k q k we have pi p q p 
k p k trace f and k q p k p l k f pj l k p p p k q k q k p l k j p k pl p k q p k q k k p l as well as a lower bound of p l and therefore it suffices to give upper bounds of k k f p k basic bounds recall that then and r q s r x cov r j y px p xy x y p cov py p yx y p as the left upper q q principal submatrix of p we can moreover we can define similarly define lemma there exist universal constants c and such that when n then with probability at least the following inequalities hold c p p p c n proof it is obvious that by lemma there exist constants and such that when n with probability at least there holds c p n b p x moreover as submatrices we have n p x pip p qpip p q pip p q ip p ip p x x x x x b p which implies x n lemma there exist universal constants c c and such that when n q then with probability at least q the following inequalities hold c p p p p c n c p p p p c n p k p k q p k p k q k c c p p pl p l k k n where is the the proof is deferred to section p l estimating equations and upper bound of k p l notice that we have already in this section we aim to give a sharp upper bound for k established an upper bound in lemma where wedin s sin law plays the essential role however this bound is actually too loose for our purpose therefore we need to develop new techniques to sharpen the results p p p p consist of the sample canonical coefficients by definition recall that p p and the sample canonical coefficients satisfy the following two estimating equations because py p are left and right singular vectors of px p xy py respectively if we define define p xy p p p p yx p p p p pr p p p p are k k diagonal matrices while p are kq kq diagonal matrices where then imply p xy p k p p k p yx p k p p k p divide the matrices into blocks p p p p p p p p xy xy yx yx y y x x py p xy p px p p p p p p p p x y y xy yx x p l p in p p p pu p where x are k k matrices finally we define k p r k pu p l with these blocks can be rewritten as the same way as k k p p u p p p l p p p l p pu xy k k k k p pu p p l p p u p p p l p yx k k k k p pu p pl p p pl p pu xy k p pu yx k define the of xy k p pl yx k x k p u p p k x k p l p p k r p xy the above equations imply the following lemma lemma the equality gives the following result pu pl p l k b k r k r r p pu r p p p p u yx y xy x k k where p r p p r p b xy yx x y r p r p r q x y r p p r r x xy and p p l p q p ik pu p p p l p p pu p u x k k k k k p pl pu pu p pl p p p q p ik p u k k y k yx y k l l l u r p p p q p p p p p p x x k k q k k p r j p l p p l p pl p pu p y k q k k q k yx k the proof is deferred to section by lemma one can easily obtain that c c n recall that p p u p q p pl p pl p r p l x k x k k q k by lemma we have and p p u p q c p r p l c x xy k k n p pl p pl p pl p pl p x k k k k q p ip pl p pl p x k k q c similarly c therefore we get c combined with lemma we have and r p r p q c r x y r p p c r r xy x the proof of the following lemma is deferred to section lemma if n q then with probability q d q p pp p q k k pl c k upper bounds of risks notice that the inequality yields p l k k q p k p k q by lemma and lemma we know on an event g with probability at least q k q p k c n moreover since k q p k by we have q p k q e q p p c elmax k k k n since k q p k is of at most we have pi p q p k q k p k p k k f then by and the previous inequality we have p elave k k q e k q p k e k q p k k f e k q p k q n in fact the factor in the main term can be reduced to k by similar arguments as done for the operator norm the frobenius norm version of lemma is actually much 
simpler we omit the proof to avoid unnecessary redundancy and repetition supporting lemmas in linear algebra and probability definition hadamard operator norm for a p define the hadamard operator norm as sup a b b b p let and be arbitrary positive numbers lower bounded by a positive constant n lemma let um i and ui be two sequences of positive numbers for any x p r there hold a i j x x and q x x q x x proof the proof of can be found in norm bounds for hadamard products and an mean inequality for unitarily invariant norms by horn denote q q the proof of relies on the following two results lemma theorem of hom and johnson if a b p and a is positive semidefinite then a b max aii b where is the operator norm lemma theorem of mathias the symmetric matrix minpa a q i j ai aj is positive semidefinite if ai i define i n and n i m define m p rpm nq by mij u by lemma m is also positive semidefinite again apply lemma and notice that is the lower left of m it is easy to obtain finally since b b b for any b we have b b b which implies lemma covariance matrix estimation remark of vershynin assume a p has independent random rows with second moment matrix then there exists universal constant c such that for every t the following inequality holds with probability at least c j t p a a u c n n n lemma bernstein inequality proposition of vershynin let xn be independent centered random variables and k maxi xi then for every a an q p rn and every t we have n t ai xi t min p k a k a i lemma inequality theorem of rudelson and vershynin let x xp q be a random vector with independent components xi which satisfy exi and xi k let a p then there exists universal constant c such that for every t t p ax exj t min k a k a lemma covering number of the sphere lemma of vershynin the unit euclidean sphere equipped with the euclidean metric satisfies for every that qn where n is the of with minimal cardinality the following variant of wedin s sin law wedin is proved in proposition of cai et al p a e define the singular value decompositions of a lemma for a e p and a p and a as p u pd p vp j a u dv j a then the following perturbation bound holds k q pup k k pup k e paq paq where paq paq are the kth and pk singular values of a proofs of key lemmas proof of lemma the proof of c p p p p c n is exactly the same as that of lemma observe that p p xy p pip p p p xy p p p p xy p pip p q p xy q x x y then p p xy p and y p p xy p ip p p x ip p p xy x y x y p and are singular values of p p xy p and respectively hence by the notice that famous weyl s inequality for singular values p p p xy p x y p p x ip p p xy y c c c n n n p p p p xy p we have p p pjp p since x are left singular vectors of x p p ip p j p x ip p then we have and p p ip p j p x ip p p p p p x ip p p p x x x x p p x ip p x x as a submatrix pj p p p x ip p p p x ip x x k k ik p p x ip p x ip p as long as n q for sufficiently large in this case by the same argument recall that p k q p k p k q p k k ik k ik p p the last inequality in the lemma relies on the fact that x k and k are leading k singular p p p vectors of and respectively by a variant of wedin s sin law as stated in lemma c p p xy p k p p x k n on the other hand p p p j p k k k q k p p x k p p j q pi p q k k x p p q k x p p here the second equality is due to the fact that x k has orthonormal columns moreover px p k ql denotes the lower kq k of p p x k again by triangle inequality p l p p l p p k k q k p p p q i q x k k x c c c c p p n n n the last inequality is due to let c q the proof is done proof of lemma the 
equality implies pu p u p u p q p ik pu p p p l p k k k x k k p p l p pu xy xy p q p pu p p p l p p u p u pu y ik k k k k k p p l p pu k k similarly implies yx yx k k the equality is equivalent to p p u p p p u p r p l r pl p pu xy k k k k k q p pl p pl p l x q k k k which can be written as p p u p pl r pl p p u p pu xy k k k k k q p pl p l q p r p l p pu rj pl p p u pl p p u p yx k k k k k q p r j p l p pl p pl y q x k k xy k apply the same argument to we obtain k k k r then consider q that is r p p u r p p u p p u p p u pl p l k k k k k k r q p l pl p p u r p p u k k k k r r p p u p p u y x k k combined with and p l pl p p u r p p u p p u p k k k k k r p r pu p r y y q k p p p u r p p pu xy x yx y k k p r p q this finishes the proof of plug into we get p l r pl p p pu r p p pu r k k k q k r r p p p u pr xy k x this finishes the proof of proof of lemma first we discuss two quite different cases case and let q define the kq k matrices a by b b i aij i k j k i by in lemma there holds where and by lemma we have p u q a q p l a b k k diag b b diag b b p u q p l b k k u p b k r p u p k recall that k b and it is obvious that b moreover in the q previous section we also have shown that r it suffices to bound b and to this end we apply the standard covering argument step reduction denote by psd q the unit ball surface for and any pair of vectors u p v p rk we can choose p q p q such that u v then j j j uj bv uj bv uj bv bv j u bv uj b v b uj b max uj maximize over u and v we obtain b b max uj therefore b max uj let then it suffices to give an upper bound max uj with high probability step concentration i k and j k bsi j b i y xl for all n and l then for let l l n k i j k i j i k i j i k i j q n n i k i j k i i k i j j q n i k i i k i j i j j k i b n i k i j n i b b b i k i j i k i j b i i k i j b in this way k i k i i nu are mutually independent standard gaussian random variables for any given pair of vectors u p v p rk uj bv n k ui vj b i k i j n i j k k i b b b i k i j i k i j b i i k i j n j w n where j rxj s s and p q is symmetric and determined by the corresponding quadratic form this yields k i i q i j i i q i i q k i i i j i k i i q max i i j max q max q i q i max i q k q q k where the second last inequality is due to the facts that and i q k q moreover k now define w j wnj s and a then we have a max k a and an n nk j w aw where w p n n n therefore by the classic inequality lemma there holds t t j p t exp min nk k uj bv for some numerical constant without loss of generality we can also assume let t by n straightforward calculation gives j p k step union bound by lemma we choose such that d q j p max n q in other words with probability at least we have j b max n d q in summary we have as long as n q with probability q d q q pl c k d q q here the last inequality is due to here c are absolute constants case by we have p l f p l k k where and p q p p p u g x xy x k p p p u p q f q s yx y k y p and p are submatrices of p by lemma we have notice that xy x moreover by c b n p p c xy x c n c and lemma there holds g c n p by a similar p are submatrices of p and rip q similarly q x yx argument f c n then pl k i i i i i here i k and j by lemma there holds for any x i q x x x i i and finally for any x where i q k i x x i x i x a q i b a b i i diag b b and since b in summary we have since diag b b by lemma i x x pl c k there holds d pl c k q q lower bound proof of theorem to establish the minimax lower bounds of cca estimates for our proposed losses we follow the analytical frameworks in the literature 
of pca and cca vu et al cai et al gao et al where the calculation is focused on the construction of the hypothesis class to which the packing lemma and fano s inequality are applied however since we fix both and in the localized parameter spaces new technical challenges arise and consequently we construct hypothesis classes based on the equality in this section we also denote on divergence the following lemma can be viewed as an extension of lemma in gao et al from to arbitrary the proof of the lemma can be found in section lemma for i and k let upiq wpiq p q vpiq zpiq p q where upiq p vpiq p for let and define j w z j upiq vpiq y piq piq i j z w j vpiq upiq x piq piq let ppiq denote the distribution of a random sample of size n from n q if we further assume s ru w s j j then one can show that q q j u v j f q remark the conditon in is crucial for obtaining the factor in the lower bound and is the key insight behind the construction of the hypothesis class in the proof gao et al has a similar lemma but only deals with the case that the residual canonical correlations are zero to the best of our knowledge the proof techniques in gao et al can not be directly used to obtain our results packing number and fano s lemma the following result on the packing number is based on the metric entropy of the grassmannian manifold gpk rq due to szarek we use the version adapted from lemma of cai et al which is also used in gao et al for any fixed p opp kq and tu p opp kq u u j f u with p pp kqs q define the on by q f then there exists universal constant c such that for any p the packing number q satisfies q the following corollary is used to prove the lower bound corollary if we change the set in lemma to tu p opp kq u f u then we still have r q proof apply lemma to there exists un with n such that ui uij f i n ui uij uj ujj f i j r i arg define u min u ptui q qpopkqu u f by lemma r i u f u riu r ij f u u rn p and therefore u which implies riu r ij u rj u r jj f ui uij uj ujj f u q n lemma for any matrices p opp kq inf qpopk kq q f f proof by definition q qq let u dv j be the singular value decomposition then v u j p opk kq and inf qpopk kq q v u j q du j q on the other hand q q q since p opp kq and therefore all the diagonal elements of d is less than which implies that trpdq trpd q and inf qpopk kq q lemma fano s lemma yu let be a semi metric space and p a collection of probability measures for any totally bounded t denote mpt the number of t with respect to the metric the maximal number of points in t whoese pairwise minimum distance in is at least define the diameter of t by dkl pt q sup pt then p sup sup dkl pt q log inf sup log mpt t proof of lower bound for any fixed p q and p q where p p p p define u w v z u w p q with u p v z p q j v with v p u f ru w s ru w s j zj for any fixed p sp p sp with q q consider the parametrization for define u v j w z j v u j zw j ru w s rv zs u w v z p it is straightforward to verify that k q for any p i they yield to the parametrization j j upiq vpiq k wpiq zpiq j j vpiq upiq k zpiq wpiq piq piq where upiq wpiq vpiq zpiq p and the canonical vectors are k upiq k vpiq we define a on as q x x k k f f by lemma q q j u v j f q further by the definition of dkl pt q dkl pt q q j j sup f q to bound the diameter for any p by definition s s j j which implies that they are singular value decompositions of the same matrix therefore there exists q p q such that s sq s sq decompose q into four blocks such that q substitute into then ik q ik q ik the second equality is due to the 
fact that and have orthogonal column space and the third equality is valid because p kq by the same argument we will have ik notice that j j f q q q q then substitute into q q dkl pt q j let tu p kq u f u under the q j we claim that the packing number of h is lower bounded by the packing number of b f to prove this claim it suffices to show that for any u p there exists corresponding w v z such that pu w v zq p first of all by definition u f let w p be the orthogonal complement of u then ru w s p q and therefore there exists q p q such that ru w s sq set rv zs sq p q then ru w s vj zj s j j which implies pu w v zq p let d a q c k kq q a where c p depends on and is chosen small enough such that p kqs q by corollary q q q apply lemma with kq inf sup e sup sup p k x k f x p k kqlog t choose small enough such that kq kqlog then the lower bound is reduced to q k k kq k kq inf sup e p k x k f x q p k q k k k n k by symmetry inf sup e p p k y k y k c f q k n k k the lower bound for operator norm error can be immediately obtained by noticing that p k y has at most rank and y k p x k proof of lemma p x x x k k k f by simple algebra the divergence between two multivariate gaussian distributions satisfies n q tr q log q notice that where j w zj upiq vpiq piq piq j z wj vpiq upiq piq piq then q also notice that n q q pp p q log u upiq j j j j piq u v upiq piq piq vpiq wpiq wpiq j j j j wpiq zpiq wpiq zpiq therefore share the same set of eigenvalues with multiplicity k with multiplicity k with multiplicity k with multiplicity k and with multiplicity q this implies log qq on the other hand by block inversion formula we can compute j w w j j w z j u u u v i p j j j j divide into blocks such that where p p and j j j j j pu u u v v u q q j j j j q j j j j j q q j j j j q we spell out the algebra for q and q can be computed in exactly the same fashion j j j j j j j j j q q j similarly j j q j j w z j u v j w z j we have by the assumption j j j q further j j j j j j j q tr q j j j j tr q j and by the same argument j j j q sum these equations j q j q q repeat the argument for one can show that q q q j q therefore n n q q qq q u v j q q references anderson asymptotic theory for canonical correlation analysis multivariate analysis journal of arora and livescu acoustic features for phonetic recognition across speakers and domains in acoustics speech and signal processing icassp ieee international conference on pp ieee cai ma and wu optimal estimation and rank detection for sparse spiked covariance matrices probability theory and related fields cai ma wu et al sparse pca optimal rates and adaptive estimation the annals of statistics cai and zhang perturbation bounds for singular subspaces with applications to statistics the annals of statistics to appear chaudhuri kakade livescu and sridharan clustering via canonical correlation analysis in proceedings of the annual international conference on machine learning pp acm chen liu and carbonell structured sparse canonical correlation analysis in international conference on artificial intelligence and statistics pp dhillon foster and ungar learning of word embeddings via cca in advances in neural information processing systems nips volume faruqui and dyer improving vector space word representations using multilingual correlation association for computational linguistics foster johnson kakade and zhang dimensionality reduction via canonical correlation analysis technical report friman borga lundberg and knutsson adaptive analysis of fmri data neuroimage fukumizu bach 
and jordan kernel dimension reduction in regression the annals of statistics gao ma ren zhou et al minimax estimation in sparse canonical correlation analysis the annals of statistics gao ma and zhou sparse cca adaptive estimation and computational barriers the annals of statistics to appear gong ke isard and lazebnik a embedding space for modeling internet images tags and their semantics international journal of computer vision hom and johnson topics in matrix analysis cambridge up new york hotelling relations between two sets of variables biometrika kakade and foster regression via canonical correlation analysis in in proc of conference on learning theory kim wong and cipolla tensor canonical correlation analysis for action classification in computer vision and pattern recognition cvpr ieee conference on pp ieee mathias the hadamard operator norm of a circulant and applications siam journal on matrix analysis and applications rasiwasia costa pereira coviello doyle lanckriet levy and vasconcelos a new approach to multimedia retrieval in proceedings of the acm international conference on multimedia pp acm rudelson and vershynin inequality and concentration electron commun probab no sridharan and kakade an information theoretic framework for learning in servedio and zhang eds colt pp omnipress szarek j nets of grassmann manifold and orthogonal group in proceedings of research workshop on banach space theory iowa city iowa volume pp vershynin introduction to the analysis of random matrices arxiv preprint vu lei et al minimax sparse principal subspace estimation in high dimensions the annals of statistics wang arora livescu and bilmes on deep representation learning in proceedings of the international conference on machine learning pp wedin perturbation bounds in connection with singular value decomposition bit numerical mathematics wedin on angles between subspaces of a finite dimensional inner product space in matrix pencils pp springer witten tibshirani and hastie a penalized matrix decomposition with applications to sparse principal components and canonical correlation analysis biostatistics yu b assouad fano and le cam in festschrift for lucien le cam pp springer
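The average and maximum losses discussed in this text are defined through the principal angles between the subspace spanned by the population canonical variates and the subspace spanned by the estimated canonical variates. The following Python sketch shows one way those quantities could be computed numerically, together with a plain sample-CCA routine based on the SVD of the whitened cross-covariance (one standard reading of the estimating equations discussed above). The function names and the choice of square roots are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def _inv_sqrt(S):
    # Symmetric inverse square root via an eigendecomposition (S assumed positive definite).
    w, V = np.linalg.eigh(S)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def sample_cca(X, Y, k):
    """Leading-k sample canonical loadings and correlations from the SVD of the
    whitened cross-covariance Sx^{-1/2} Sxy Sy^{-1/2}."""
    n = X.shape[0]
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sx, Sy, Sxy = Xc.T @ Xc / n, Yc.T @ Yc / n, Xc.T @ Yc / n
    Sxh, Syh = _inv_sqrt(Sx), _inv_sqrt(Sy)
    U, s, Vt = np.linalg.svd(Sxh @ Sxy @ Syh)
    return Sxh @ U[:, :k], Syh @ Vt[:k].T, s[:k]

def subspace_losses(Sigma_x, U, U_hat):
    """Average and maximum squared sine of the principal angles between the
    subspaces of canonical variates spanned by the loadings U and U_hat."""
    S = np.linalg.cholesky(Sigma_x).T          # S^T S = Sigma_x, an isometry from loadings to variates
    Q1, _ = np.linalg.qr(S @ U)
    Q2, _ = np.linalg.qr(S @ U_hat)
    cosines = np.clip(np.linalg.svd(Q1.T @ Q2, compute_uv=False), 0.0, 1.0)
    sin2 = 1.0 - cosines ** 2
    return sin2.mean(), sin2.max()             # (average loss, maximum loss)
```

By construction these losses vanish exactly when the estimated loadings span the same subspace of variates as the population loadings, which is the identification property emphasized in the comparison with the projection-based losses of Gao et al. above.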
10
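The minimax lower bound in the text above is obtained by applying Fano's lemma to a packing of zero-mean Gaussian hypotheses, which requires controlling the pairwise Kullback–Leibler divergences over the hypothesis class. The small sketch below computes that divergence for a single pair of zero-mean Gaussians; it is the generic formula only, not the closed-form expression derived for the specific covariance construction in the proof.

```python
import numpy as np

def kl_zero_mean_gaussians(Sigma1, Sigma2):
    """KL( N(0, Sigma1) || N(0, Sigma2) )
       = 0.5 * ( tr(Sigma2^{-1} Sigma1) - d + log det Sigma2 - log det Sigma1 )."""
    d = Sigma1.shape[0]
    M = np.linalg.solve(Sigma2, Sigma1)        # Sigma2^{-1} Sigma1 without forming the inverse
    _, logdet1 = np.linalg.slogdet(Sigma1)
    _, logdet2 = np.linalg.slogdet(Sigma2)
    return 0.5 * (np.trace(M) - d + logdet2 - logdet1)
```

For a random sample of size n the divergence between the corresponding product measures is n times this quantity, which is how the sample size enters the diameter bound used with Fano's lemma.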
vivekkan abstract in the realm of multimodal communication sign language is and continues to be one of the most understudied areas in line with recent advances in the field of deep learning there are far reaching implications and applications that neural networks can have for sign language interpretation in this paper we present a method for using deep convolutional networks to classify images of both the the letters and interpretation asl places convolutional neural networks have been extremely successful in image recognition and classification problems and have been successfully implemented for human gesture recognition in recent years in particular there has been work done in the realm of sign language recognition using deep cnns with that is sensitive to more than just pixels of the images with the use of cameras that sense depth and contour the process is made much easier via developing characteristic depth and the use of technology is quickly growing in popularity and other tools have been incorporated into the process that have proven successful developments such as color gloves have been used to facilitate the recognition process and make the feature extraction step more efficient by making certain until recently however methods of automatic sign language recognition weren t able to make use of the technology that is as widely available today previous works made use of very basic camera technology to generate datasets of simply images with no depth or contour information available just the pixels present attempts at using cnns to handle the task of classifying images of asl letter gestures have had some success most implementations surrounding this task have attempted it via transfer learning but our network was trained from scratch our general architecture was a fairly common cnn architecture consisting of multiple convolutional and dense layers the architecture included groups of convolutional layers followed by a layer and a dropout layer and two groups of fully connected layer followed by a dropout layer and one final we initially trained and tested on a dataset of images we took ourselves this dataset was a collection of images from people for each alphabet and the digits since our dataset was not constructed in a controlled setting it was especially prone to differences in light skin color and other differences in the environment that the images were captured in so we also used a premade dataset to compare our dataset s performance with additionally a pipeline was developed that can be used so people are able to generate and continue adding images to for generating our own dataset we captured the images for each sign then removed the backgrounds from each of the images using techniques when we initially split the dataset into two for training and validation the validation accuracy showed to be high however when we used datasets from two different sources training on ours and testing on the premade and vice versa the validation accuracy drastically decreased since training on one dataset and validating on another was not yielding as accurate of results we used the premade dataset for the different gestures to train the we saw the performances improve differently in our two datasets via data augmentation by transforming our images just a few pixels rotating by degrees translating by on both axes there was an increased accuracy of approximately we also flipped the images horizontally as we can sign using both hands while it wasn t extremely effective we saw that with better and 
more representative initial training data augmenting improved the performance more drastically this was observed after augmentation of the premade dataset which we observed accuracy on the alphabet gestures and validation set accuracy on digits when using the nz asl dataset on our dataset we observed much lower accuracy measures as was expected since our data was less uniform than that which was collected under studio settings with better equipment we saw accuracy on letters of the alphabet and accuracy on the digits in terms of time complexity gestures of the letters converged in approximately minutes and the we trained with a categorical cross entropy loss function for both our datasets it is a fairly common loss function initially we observed low accuracy measures when testing on the validation set of the data which we accounted largely to the lighting and skin tone variations in the images the higher accuracy measure for the digits was expected since the gestures for the digits are much more distinguishable and easier to classify compared to previous methods working on this same task our network performed quite well considering were using both a color glove and kinect camera the cause of higher accuracy than stanford s method was likely due to their lack of for the images since they used a large dataset from as part of a competition method accuracy didn in this paper we described a deep learning approach for a classification algorithm of american sign language our results and process were severely affected and hindered by skin color and lighting variations in our data which led us to resort to a professionally constructed dataset with a camera like microsoft s kinect that has a depth sensor this problem is easy to solve however such cameras and technology are not widely accessible and can be costly our method shows to have potential in solving this problem using a simple camera if enough substantial training data is provided which can be continuously done and added via the aforementioned processing pipeline since more people have access to simple camera technologies this could contribute to a scalable solution in recognizing that classification is a limited goal we plan on incorporating structured pgms in future implementations of this classification schema that would describe the probability distributions of the different letters occurrences based on their sequential contexts we think that by accounting for how the individual letters interact with each other directly the likelihood for the vowel o to proceed the letter j the accuracy of the classification would increase this hmm approach with sequential pattern boosting has been done with the actual gesture units that occur in certain gestures contexts capturing the movements that precede a certain letter to incorporate that probability weight into the next unit s class and processing sequential phonological information in tandem with gesture recognition but not for tagging with an we also recognize that the representation itself makes a huge difference in the performance of algorithms like ours so we hope to find the best representation of our data and building off our results from this research incorporate it into a learning process we see learning as having the potential to facilitate the translation process from american sign language into english implementing learning for translating the alphabet and numbers from american sign language to written english and comparing it with a pure deep learning heuristic could be successful and 
have the potential to benefit from error correction via language models recent implementations of adaptation have also had success in solving real world computer vision tasks and effectively trained deep convolutional neural networks using very little data even as limited as datasets we ultimately aim to create a holistic and comprehensive representation learning system for which we have designed a set of features that can be recognized from simple gesture nips barczak reyes abastillas piccio susnjak a new static hand gesture colour image dataset for asl gestures letters in kim taehwan livescu k shakhnarovich greg american sign language fingerspelling recognition with phonological tandem models in slt agarwal anant thakur manish sign language recognition using microsoft kinect in international cooper ong pugeault bowden sign language recognition using journal of garcia brandon and viesca sigberto american sign language recognition with convolutional neural networks in neural networks for cao dong ming leu and zhaozheng yin american sign language alphabet recognition using microsoft kinect in international conference on computer
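A minimal Keras sketch of the kind of network and augmentation described above is given below: stacked convolution/pooling/dropout groups, two dropout-regularized dense layers, a final softmax classifier trained with categorical cross-entropy, and augmentation by small rotations, small shifts on both axes, and horizontal flips. All layer sizes, image sizes, and augmentation magnitudes are illustrative guesses, since the exact hyperparameters are not stated in the text; the number of output classes is likewise an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 36  # illustrative: letters plus digits; the paper trains letters and digits

def build_asl_cnn(input_shape=(64, 64, 3), num_classes=NUM_CLASSES):
    # Groups of convolutional layers, each followed by pooling and dropout,
    # then two fully connected layers with dropout and a softmax output.
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",  # loss function named in the text
                  metrics=["accuracy"])
    return model

# Augmentation in the spirit described: a few degrees of rotation, small shifts
# on both axes, and horizontal flips (signs can be made with either hand).
augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=10,
    width_shift_range=0.05,
    height_shift_range=0.05,
    horizontal_flip=True,
    rescale=1.0 / 255,
)

# Example usage (paths and epoch count are placeholders):
# model = build_asl_cnn()
# model.fit(augmenter.flow_from_directory("data/train", target_size=(64, 64)), epochs=20)
```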
1
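The dataset-generation pipeline described in the preceding text removes the backgrounds from the captured gesture photos before training. A rough OpenCV sketch of one way such a step could look is shown below; the actual segmentation technique used in that pipeline is not specified, so the Otsu-threshold-plus-largest-contour approach here is only an assumed stand-in and the function name is hypothetical.

```python
import cv2
import numpy as np

def remove_background(img_bgr):
    """Rough background removal for a hand-gesture photo: blur, Otsu-threshold
    the grayscale image, keep the largest contour, and zero everything else.
    Requires OpenCV >= 4 for the two-value findContours return."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        clean = np.zeros_like(mask)
        cv2.drawContours(clean, [largest], -1, 255, thickness=cv2.FILLED)
        mask = clean
    return cv2.bitwise_and(img_bgr, img_bgr, mask=mask)
```

In practice a simple global threshold is sensitive to the lighting and skin-tone variation mentioned above, which is consistent with the accuracy drop the authors report for uncontrolled capture conditions.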
may rate n systematic mds convolutional codes over gf barbero universidad de valladolid valladolid spain email angbar ytrehus simula uib and university of bergen bergen norway email oyvindy abstract a systematic convolutional encoder of rate n and maximum degree d generates a code of free distance at most d d and at best a column distance profile cdp of d a code is maximum distance separable mds if it possesses this cdp applied on a communication channel over which packets are transmitted sequentially and which loses erases packets randomly such a code allows the recovery from any pattern of j erasures in the first j blocks for j d with a delay of at most j blocks counting from the first erasure this paper addresses the problem of finding the largest d for which a systematic rate n code over gf exists for given n and in particular constructions for rates and are presented which provide optimum values of d equal to and respectively a search algorithm is also developed which produces new codes for d for field sizes using a complete search version of the algorithm the maximum value of d and codes that achieve it are determined for all code rates and every field size gf for m and for some rates for m i ntroduction in many practical communication applications such as multimedia transmission over packet erasure channels delivery is an important criterion traditional arq systems for example the one used by tcp for transport layer unicast service suffer from long delays due to erasures when the time is large this has led to an increased interest in the design and analysis of systems based on error correcting codes such coded schemes are also known to be beneficial in other transport layer models for example in the case two main approaches to this coding problem have been discussed in the literature the deterministic approach is to send packets using a fixed convolutional code with a good column distance profile this approach is discussed in subsection random coding was proposed as a solution in in these schemes the sender transmits k uncoded information packets followed by n k parity check packets formed by random linear combinations of all information packets that have not been acknowledged by the receiver so far subsection describes this approach and also discusses a hybrid approach that combines deterministic and random coding a contributions we present new codes in section iii in section we present two new general and optimum constructions of mds convolutional codes in the literature there exist only a few general constructions of convolutional codes as far as we know only the code and their binary generalizations and thms and in we present a simple but as far as we can see not previously described in the literature construction this code has the same rate and viterbi complexity as the binary code but has a better column distance profile we also present a much more interesting algebraic construction in proposition in section we describe a search algorithm and in section we present the codes found by the algorithm for most parameters these codes are better in a sense which will be made more precise than previously known codes further we present simple upper bounds in section iv by convention we will call a convolutional code systematic if is it has a systematic encoder one that preserves all information symbols and obtains redundancy by extra parity symbols systematic rate n convolutional encoders are useful in order to obtain fast recovery of packet erasures in the common case of channels with 
moderate erasure rates and we will focus only on this class of codes this work is supported by ministerio de industria y competitividad gobierno de through project the estonian research council through project the norwegian research council through the sards project ii background a notation for a thorough introduction to convolutional codes please see in the following we will describe the concept of a mds convolutional code in a way which is convenient for our purposes in this paper let m n k n be integers f gf and define the matrices and vectors k fk where fk is the space of row vectors over f fn where denotes the space of matrices with r rows and c columns over for i define ri ri k fk hi f ri and for an integer l let h l hl f where is matrix with r rows and c columns then h l is the parity check matrix for the lth truncated block code c l of a systematic convolutional code c thus any vector of length l n for l l l l v l vn vn vn f n is a codeword in c l if and only if the syndrome h l v l f a systematic encoder for the code c l is represented g l by gl k f where ik gi for i i and ik and are the k identity and zero matrices respectively it is straightforward to verify that g l l example let f gf with primitive element defined by h and then the parity and generator matrices g define a truncated code c which is rate block code over note that the matrices are completely determined by the parity check coefficients ri j i l j in the conventional polynomial notation of convolutional codes the parity check matrix can be described as d d h x xi ri k xi f x in example h x x similarly the corresponding polynomial generator matrix is x g x mds convolutional codes constructed from superregular matrices in the deterministic approach the goal is to design codes with an optimum column distance profile which we will define below the column distance dl dl c of a convolutional code c is the minimum hamming weight of any truncated codeword c l with the first block cn nonzero and the column distance profile cdp is the sequence dd d d d where d is the free distance of the code and d is the index for which the cdp reaches the cdp was originally studied for its significance on the performance of sequential decoding please see ch of recently the cdp has received renewed attention in the context of codes due to its importance for fast recovery from losses of symbols in an erasure channel recall that we consider only convolutional codes of rate n that have a systematic encoder in this case by the singleton bound for truncated block codes and by similar linear algebra arguments dl for l moreover dl for l so the best column distance profile one can hope to find in a code with a systematic encoder is d j j dd d by an mds convolutional code in this paper we will mean a code with a cdp as in remark the concept of codes was introduced in this concept takes into account that for some codes that do not possess a systematic encoder the free distance may grow beyond where is the memory of a minimal encoder in order not to complicate the notation and since viterbi complexity is not an issue in this paper we omit the details definition consider a lower triangular matrix sr rl where each element ri consider a square submatrix p of size p of sr formed by the entries of sr in the rows with indices i p l and columns of indices j p l p and its corresponding minor are proper if jl il for all l p sr is superregular if all its proper p p minors are non singular for any p l when matrix sr is upper triangular the definition of proper 
submatrices is analogous a superregular matrix can be used to construct a rate code in two ways a systematic mds convolutional code with cdp as in with d d when a code in general nonsystematic with a parity check matrix of max degree and the same cdp as for the systematic codes in case while superregular matrices are known to exist for all dimensions if the field is large enough general efficient constructions are not known and for the minimum field size for which a superregular matrix exists is not known another problem with the deterministic approach is that the existing design methods do not allow a simple construction of codes of high rate high degree codes of higher rates which are desirable in many practical cases can also be constructed from these superregular matrices but this involves deleting columns so that the conditions on a superregular matrix are too strict this means that in practice only simple codes can be constructed in this way since superregular matrices are so hard to construct the reduction to the superregular matrix problem blocks the code construction therefore we generalize definition as follows definition consider an triangular matrix where s is a positive integer s r r r s s r r r r r s s s ssr s s s rl s s s s consider a square submatrix p of size p of ssr formed by the entries of ssr in the rows with indices i p l and columns of indices j p s l p and its corresponding minor are proper if jl s il for all l p the matrix ssr is called iff all of its proper p p minors for any p l are nonsingular rate field size d table i description superregular superregular ad hoc superregular superregular s ome rate n mds codes not necessarily systematic described in the literature the following lemma is a restatement of theorem in using the terminology of this section lemma let h d be the parity check matrix of the d th truncation of a systematic convolutional code given by d h k k k k rd k k k k k k k k k and let h d be the matrix obtained from h d by removing the columns in positions k k k d k that is k k k k k k d h k k k rd k k k k then the cdp of the convolutional code given by h d is d if and only if h d is a matrix theorem in is stated without proof for reference we include a formal proof in appendix a definition let n be the largest free distance d such that there exists a rate n systematic mds convolutional code over gf with column distance profile as in the main problem that we address in this paper is to determine exact values or constructive lower bounds for n please note that there is no restriction of the degree d in definition there are few known code constructions in the literature beyond those based on superregular matrices table i contains the current world records with respect to rate n mds codes to the best of our knowledge we will describe new codes in section iii although this paper focuses on rate n mds codes we observe that the following lemma that follows directly from theorem in implies that our results will also provide rate mds codes lemma if a systematic rate n mds code of memory d and free distance d exists then its dual code is equivalent to a systematic rate mds code of memory d and free distance n d random convolutional codes in the terminology of this paper the random approach consists of selecting the coefficients of ri j independently at random the advantage of this is that one can pick codes with large degrees and that over large fields the expected performance is reasonably good although the exact loss compared to optimum average 
performance or optimum guaranteed worst case performance remains to be determined coefficients need to be transmitted in the headers of the data packets but this represents only a small rate loss when large packets are transmitted proposition consider a hybrid scheme where the first blocks of coefficients ri j until time i d are selected fixed and subsequent random coefficients ri j for i d are selected at random thus the parity check equation will be on the form h x hcdp x hrandom x where d d hcdp x xi ri k xi and hrandom x xi ri k xi where all ri j are nonzero randomly selected coefficients and where the degree of the random polynomials does not need to be fixed except by the application protocol then the initial cdp until time d is not affected by the random part of the code construction proof obvious only the first component hcdp x of the parity check matrix determines the initial part of the cdp our suggestion is to use such hybrid codes codes where the terms of degree d of the parity check polynomials are preselected constants yielding an optimum initial column distance profile while subsequent random parity checks are added as needed this guarantees optimum recovery for the simplest and most likely erasure patterns and hence better performance than random codes for light to moderate erasure patterns while still allowing the degree to grow if required by the application iii n ew codes et al use superregular matrices to design codes however the authors also give examples of codes that are better than the ones constructed from superregular matrices and note that the abundance of small examples suggests that such a construction might be possible and might lead to smaller alphabets for given parameters than the construction we will leave this as an open question for future so here comes the future research in this section we present constructions and a new search algorithm that in combination improve our knowledge of n for almost all sets of parameters with respect to what we find in the literature a codes with free distance d we present two optimum constructions for d for d the construction is simple but we have not seen it presented in prior literature we have tacitly assumed the following fact for the constant terms here comes the justification lemma we can assume n proof if there is a j equal to zero then we don t want that then assume some nonzero j if we multiply the corresponding column of g d by j we obtain a new code with the same cdp and weight structure proposition qm qm for q prime and m proof select i and i i qm as the qm distinct nonzero elements of gf qm without loss of generality the parity check matrix of takes the form h qm h qm is qm because it is obvious that all the proper minors of sizes and are nonsingular clearly and remark it is instructive to compare the construction of proposition with the binary codes codes were considered for digital media transmission already in the code of length has the binary polynomial parity check matrix hwa x x it is easy to see that the cdp of the code is this is not an mds code the construction of proposition can be considered as a qm generalization of the code of memory but this code is an mds code with cdp for d we present an optimum construction in proposition complete computer searches for m indicate that the construction is unique and in a sense much better than what can be achieved through other choices of the set of first degree coefficients i lemma for a code with a cdp of its parity check matrix h must satisfy i ri s for i s 
k ii ri s ri t for i s t k iii t s s for s t k iv s s t t for s t k v s t u s t for s t k u k vi s s u t t u u t u t for s t u proof from lemma we need h to be that all proper minors of h are non singular is equivalent to condition i proper minors of size are of the following types ri s ri s t s r s s ri t r s s t t t the first type are trivially non zero the second type are non zero when condition i is satisfied the third type being nonsingular is equivalent to condition ii the fourth type is guaranteed to be non zero if and only if condition iii is satisfied and the fifth type being nonsingular is equivalent to condition iv finally proper minors can be of four different types s s t s s t t s s t t u s s t t u u those of the first type are trivially nonsingular condition ii takes care of those in the second type to be nonsingular those in the third type are nonsingular if and only if condition v is satisfied the fifth type are non singular if and only if condition vi is satisfied example consider the code in example by checking conditions i vi in lemma we observe that the code has cdp equal to proposition proof let f gf the following construction gives a code that meets the requirements the trace function is defined by trm f x gf trm x x consider the set x x when f is regarded as an vector space over gf the set is a hyperplane an m linear subspace of let k select as an arbitrary nonzero field element select c as an arbitrary constant in f then select ak k as all distinct nonzero elements in and set bs s as as c s s c for s we need to verify that this construction satisfies the conditions in lemma i this holds because bs as as c is a product of two nonzeros ii all as s are distinct assume that bs bt s then as as c at at c as at c as at c as at as at c as at the first factor is nonzero since as at the second factor is also nonzero since as at because is closed under addition while c a contradiction iii assume that as at bs then as at as as c at as c a contradiction since at and as c iv assume that bs bt s then as c at c as at a contradiction v bs bt au as at as as c at at c au as at as at c au as at as at c au as at as at c au which again is a product of nonzero factors because c and as at au and hence nonzero vi bs as bt bu au bt at bu at au as at at c au au c au at at c at au au c at au as at au as c at au at au at au as as c at au as as c as at au as c at au as as c as at as au at au as at as au this follows from theorem in section iv remark by theorem later the construction in proposition is optimum not only in the sense that it offers the maximum distance for the given field size and code rate but also it offers the minimum field size for a code of the given rate and distance and the maximum code rate given the field size and distance moreover complete computer search for field sizes show that the construction is unique for these parameters b computer search algorithm the goal of the search algorithm is to select the coefficients ri j successively ordered first on i and then reversely on j in such a way that the conditions on the minors are met some useful facts first as for the constructions in section we use lemma in order to set k in order to simplify the search we apply the following results lemma we can assume i i k for any choice of ordering lemma consider an mds convolutional code c with polynomial parity check matrix d d h x xi ri k xi f x then the code cc with parity check matrix d d hc x ci xi ci ri k xi f x is also mds for any c f d i i proof let v x x vn x i x vn i x then v 
x h x iff vc x hc x for d d vc x i xi vn i xi corollary if a systematic mds convolutional code exists we can assume that it has a parity check matrix with k proof assume that a systematic mds convolutional code exists with a parity check matrix with k a then apply lemma with c lemma let m be a matrix over gf qm with q a prime raising each element of m to power q yields another matrix proof given any square matrix n an n with ai j gf qm by definition det a s an n where sn is the group of permutations of n elements and s is the sign of each permutation also we have xn q c qn xnqn qn q qi q and c qn qn when q is prime q is a divisor of coefficient c qn except in the cases qi q q j for j i and this for each i n therefore in characteristic q we have xn q xnq back to the definition of determinant in characteristic q we have q det a q s an n s q aqn n finally s is either or so in case q then s for any sn and s s and in case q is odd we have s s q this gives det a q s q aqn n s aqn n det where denotes the hadamard or schur power of a that is the matrix whose entries are the the entries of a raised to q now it is clear that given any proper minor p of size p of matrix m in gf qm is the corresponding proper minor in m and p is nonsingular if and only if is nonsingular so m is if and only if m is corollary in particular let m be a matrix over gf squaring each element of m yields another matrix proof this is just the particular case for q of the lemma above hence we also have corollary assume that the values for i i k and for k are all fixed to as allowed by lemma and corollary then for it suffices to consider one representative of each cyclotomic coset proof consider any minor of squaring all coefficients in m will not change the values of i i k or k thus if there is a matrix with v then there is also a matrix with the search can be simplified by constant factors o n n by use of lemma corollary and lemma respectively corollary reduces complexity by an extra factor of approximately m but this reduction is not entirely independent of the other reductions in summary the search algorithm is highly exponential in complexity but the tricks allow a deeper search than would otherwise be possible the search algorithm is sketched in algorithm the trickier steps are explained in some detail in remark remark here we explain the steps of algorithm i in essence the algorithm runs through a search tree at each depth of the tree points to one of the variables ri j in abusing notation we will also say that points to the current depth throughout the course of the algorithm goes back and forth along k k along the values of the last row of in reverse order starting at since k k can be assumed to be all equal to by lemma and corollary we will in this context use the ordering to refer to the reverse order of the last row of and addition and subtraction on moves left and right respectively on this row ii line let refer to one element ri j in the last row of then is the set of formal proper submatrices of which have in its left lower corner if all matrices in m for some d are nonsingular where k d d is the target maximum degree then the submatrix of with in the lower left corner is the set can be found by recursion the number of proper submatrices m is related to the catalan numbers we omit the details iii line at depth values have already been assigned for each depth hence by keeping track of subdeterminants already computed for each determinant corresponding to a proper submatrix in the value in gf that would algorithm a 
computer search algorithm result finds good mds codes of rate n input field size target distance d code length n data points to current position initialization value i i k value k srd precompute the set of proper submatrices m precompute the set of legal values l while rd and more coefficient values to check for do if more coefficient values to check for then assign next value to coefficient at update determinants needed for and l if deepest level so far then record selected values of coefficients end else end end make the determinant zero can be obtained in constant time in other words going once through we can identify the set l of all illegal values for coefficient iv line a a complete search version of the algorithm will successively try all values in l for a faster but incomplete search the algorithm may be set to skip an arbitrary subset of values in l at each depth b the target distance d is an input parameter for the algorithm in order to determine that a code has a maximum distance d it is necessary to verify that a complete search version of the algorithm will pass depth but not depth v line using the set of values currently assigned to coefficients at all depths compute subdeterminants that will be useful for computing determinants in and initialize the set of legal values l for the next depth vi complexity the assumptions enabled by the lemmas of this section together with the efficient computation of the determinants allow a deeper search than would be possible with a search however the depth of the search tree that finds a code of degree d is n d and for many of the early depths a complete search needs to go through almost values the size of the set of proper submatrices also grows exponentially with n and so the overall complexity is at least o d wd where is the number of proper submatrices codes found by computer search with d here we present codes found from computer search for field sizes of characteristic ranging from to and free distances d exact values of qm qm and qm are provided by propositions and in tables each row summarizes what we have discovered about rate n codes the column lists the maximum value of d for which we have found a code with cdp d the absence of a sign in this column indicates that we have established through an exhaustive search that this value of d is indeed maximum for this rate and field size the coefficients column presents one encoder that possesses this cdp in terms of of the coefficients k k k where is the primitive element of the field note that the degree zero terms k are suppressed since they are assumed to be identically the r column contains the rareness of the code which will be explained in section in the reference column we include references in the few cases where similar codes codes over the same field which have the same cdp but that do not necessarily possess a systematic encoder have previously been described in the literature we do not list encoders found by the search if we also found codes with the same set of parameters rate cdp over a smaller field also due to lemma we do not list codes of rate n n if there exist codes of rate n with the same cdp lemma if a systematic mds code with free distance d and rate k exists for k then there is also a systematic mds code with free distance d and rate k proof shorten the matrix h d by selecting any k and removing columns k in h d coefficients r remark table ii table of bounds on n for the field defined by n coefficients r remark table iii table of bounds on n for the field defined 
by p lease also see e xample n example according to table iii for the finite field gf defined by there exists a systematic code of rate and with an example of such a code is represented by and implicitly k thus the code has a polynomial parity check matrix h x x and matrix g x x obviously g x h x the absence of a symbol in the column in table iii indicates that a complete search of all systematic mds codes of rate reveals that d is maximum the r column as will be explained later indicates that one in seventy random assignments of nonzero values for will give a code with the same cdp codes with these parameters are not very rare a nonsystematic code over gf with degree and was presented in iv u pper bounds and code assessment it would be useful to determine upper bounds on qm n in order to assess how good the codes from random search are with respect to optimum the heller bound relates convolutional codes with a given free distance d with its truncated block codes and uses known bounds on block codes to determine convolutional code parameters that can not be achieved unfortunately the heller bound is of limited use in our case since the truncated code will actually have a much lower minimum distance than d when viewed as a block code and also since exact bounds on block codes in the range of parameters that we are interested in here are not well known moreover the approach of sphere packing for binary codes can not be easily adapted to the current case since the structure of optimum nonbinary codes turns out to be quite different from that of optimum binary a simple bound is described in the next subsection in subsection we present an alternative way of describing how great our codes are through the concept of rareness coefficients r table iv table of bounds on n for the field defined by n coefficients r table v table of bounds on n for the field defined by n optimum binary convolutional codes tend to require parity check matrices with many r j whereas we have seen that in the nonbinary case all degree one coefficients j are nonzero these differences impose different combinatorial constraints in the binary and the nonbinary case coefficients r table vi table of bounds on n for the field defined by n n coefficients r table vii table of bounds on n for the field defined by coefficients r table viii table of bounds on n for the field defined by n n n n coefficients r table ix table of bounds on n for the field defined by coefficients table x table of bounds on n for the field defined by r coefficients r table xi table of bounds on n for the field defined by coefficients r table xii table of bounds on n for the field defined by n n coefficients r table xiii table of bounds on n for the field defined by a a simple bound the following simple bound is tight for d theorem for rate n codes over gf qm with cdp d n qm d proof for d the result follows from proposition assume that d recall that all coefficients are nonzero consider the minors of type s r s t s t s r t s t t s and s s t s s t t from the conditions on the proper minors since all those minors have to be nonzero it follows that in order to have d the values in the sets k and k k must be distinct values in gf qm now consider a code with d then the minors s s t r s t t s s t s r s t s and s t s t s t t s t again they all have to be nonzero and this implies that the set k k is a new set of k different values and they are all different from the values in the sets k and k k so in order to have d we need to have at least different non zero 
elements in the field generalizing the argument it follows that all ri t t for i d t k are distinct nonzero values rareness in this section we address the probability that a randomly generated convolutional code over gf of rate n will be an mds code with cdp of d by randomly generated code we will mean one generated by a random systematic encoder where each coding coefficient ri j is selected independently and uniformly in gf we define this probability as the rareness of the parameter pair n d for small values of n and d the exact value of the rareness can be determined as a of a complete code search since for large parameters it quickly becomes intractable to determine the best codes it also quickly turns difficult to compute exact results for rareness however it is possible to obtain estimates of rareness as described below first assume that a complete search is applied this will determine the set g n m of distinct sequences k over gf for which all proper submatrices in are nonsingular thus the probability that a given randomly selected sequence corresponds to a path in the search tree that satisfies the conditions at depth is pr n m n m for define pr n pr n m avg m pr n m where avg is the average computed over the complete search pr n is the average conditional probability that a random generator which satisfies depth in the search tree also satisfies depth for large parameters we are not able to carry out a complete search however we can perform deep but incomplete searches which also provide estimates of the conditional probabilities pr n in as these estimates will be quite accurate especially for the first depths and hence they can be changed together to obtain an estimate for pr n m as long as there is a substantial number of different search tree paths leading to depth the estimate pr n should be reasonably good hence we can also estimate pr n as n avg where avg is the weighted average computed over the incomplete search and we can then estimate pr n m as n m n where for k n in tables we include the exact rareness in cases where we can perform a complete search and otherwise we include the estimate we concede that this approach is not foolproof for example the construction in proposition is unique at least for field sizes up to for other choices for the first layer of coefficients than indicated in the proof of proposition it appears that the search tree ends up being considerably shallower the rareness of the construction in proposition the m probability that a random sequence will match that construction exactly is already for m the rareness is about for m less than hence if for an arbitrary set of search parameters there exists a very rare construction that is not caught by the incomplete search the estimates for the deepest values of may be unprecise however we do believe that our estimates of pr n m provide some intuition about the difficulty of reaching a certain depth in the search tree with a random path and in the cases where we are able to carry out a complete search we also note that the estimates as described here are pretty accurate with a modest search effort figure contains exact values for n and estimates for n of pr n please see the figure caption for explanations we have also include rareness estimates in tables c onclusion and open problems motivated by the practical problem of fast recovery of a coded channel we have studied systematic mds convolutional codes over gf we have characterized them in terms of of a certain matrix we have presented new optimum 
constructions for free distances d tables of new codes found by computer search and a combinatorial upper bound which is tight in the case of small free distances in order to assess how good a code is we have also introduced the concept of rareness it would be interesting to establish upper bounds that are tight also for larger free distances another issue would be to study whether there exist general algebraic constructions similar to the one in proposition for systematic mds codes of free distance d it would also be of some theoretical interest to optimize the cdp of codes over gf under an additional constraint on the degree of their minimal encoders we have not considered this problem since the complexity of viterbi decoding of such codes is prohibitive for all but small values of the product m and since it seems difficult r eferences gabidulin convolutional codes over large alphabets in proc int workshop on algebraic combinatorial and coding theory varna bulgaria pp heide joachim rosenthal and roxana smarandache convolutional codes ieee transactions on information theory vol no february paulo almeida diego napp and raquel pinto a new class of superregular matrices and mdp convolutional codes random code is mds rate rate rate rate number of coefficients starting with figure rareness pr n of codes for gf for n exact rareness pr n for estimates n for n in the figure the search depth is measured in terms of number of coefficients in order to construct a rate encoder of distance d it is necessary to find a sequence of coefficients to get an encoder with distance d it suffices with coefficients similar for the other cases pierre ugo tournoux emmanuel lochin lacan amine bouabdallah and vincent roca erasure coding for video applications ieee transactions on multimedia vol no pp kim j cloud parandeh gheibi urbina fouli leith and network coded tcp ctcp http wyner and ash analysis of recurrent codes ieee transactions on information theory vol issue jul pp ytrehus ascetic convolutional codes proc allerton conference on communications control and computing october robert mceliece the algebraic theory of convolutional codes in handbook of coding theory eds pless and huffman pp lin and costello error control coding stott oliphant osborne digital video error correcting codes and a practical study of a error corrector techn report british broadcasting corporation december justesen and hughes on convolutional codes corresp in ieee transactions on information theory vol no pp mar macwilliams and j sloane the theory of codes elsevier j heller sequential decoding short constraint length convolutional codes space programs summary jpl pasadena ca eirik rosnes and ytrehus bounds for convolutional codes ieee transactions on information theory vol issue pp a ppendix a p roof of l emma proof before starting we will set some notations in h d for each s d let cs be the set of column indices cs s k s k s k taking into account the way h d is constructed it is clear that for any s d the submatrix of h d formed by the first s rows and the columns in is h s the matrix of the truncation also the submatrix formed by the last s rows and the columns in is h s for each set of column indices cs the last index is s k and the corresponding column in h d is the column of the identity in an analogous way we will call the set of column indices s k s k sk in the matrix h d in what follows we will use the same name for a square submatrix and for the corresponding minor since it will create no confusion now we start the proof assume 
that the cdp is dd d in particular implies that all the entries j for j are non zero let m be a proper minor of h d of size p p formed by the entries of h d in rows with indices s and columns with indices k d since m is proper we have for l let f be the set of row indices in m from m we construct a d d minor m in h d by doing the following the row indices are d for each s d we define the column index js as follows if s f then there exists a unique l s p such that s s note that l s s considering the corresponding column index in m we have jl s ql s k rl s with ql s d and rl s k unique we note that l is an increasing function of s and also that jl s s ks which implies ql s s then define js ql s k s clearly jl s cql s and jl s cql s and actually the corresponding columns are identical if s f then js s k so the corresponding column is the last in the block with column indices cs let us note that d k but those column indices are not guaranteed to be ordered in increasing order as the were the added columns will form a submatrix which is in the rows that were not in f and we have therefore the value of minor m is the same as the value of m and in order to see that h d is we just need to check that m we will proceed in a recursive way using that each truncation will provide minimum distance ds s for each s m has at least one column index in proof if there are no columns in it means that f otherwise k which is in would be in m f implies hence l and k that is k and k k this means which would contradict the assumption if m has exactly column in then the other d columns have indices in and all have in the first position so we have where is the submatrix of m formed by the last d rows and the last d columns since we have m if and only if and we can proceed working with in h in the same way if m has at least two columns in suppose that s is the first index for which we have that at least two columns of m are in at least are in at least s columns are in but there are no s columns in this clearly implies that there are no column of m in now let us consider each t s if t f there exists l t t s such that t q jl t l t l t kil t kt therefore ql t t and from here jt ql t k rl t t k rl t so jt but it can not be in then jt if t f note that in this case t s since s f implies column s k is in m and in contradicting that there were no columns in then jt t k we have proven that even though indices are not ordered in increasing order we have that they are all in on the other hand index hence m can be decomposed as where is the part of m corresponding to the first s rows and columns and we have proven it is contained in the submatrix of h d formed by the first s rows and the first s k columns which actually is h s and it is guaranteed to be non zero because ds s and the minor satisfies the condition that it has at least columns among the first k columns of h s at least three among the first k and at least s among the first s k minor is formed by the last d s rows and columns of m and it is contained in the submatrix of h d formed by the last d s rows and the last d s k columns which is h and the same argument used so far can be used to prove that is is non zero by decomposing it further into blocks each of them nonzero finally we can note that m will have at most one column index in having at least two would imply that m has also at least two columns in and this would contradict the condition of m being proper since d implies j kd kd suppose now that h d is consider a minor m of size d in h d formed by the columns in 
positions d k and assume that k k d k we construct a minor m by removing from m any column which is in position s k and the corresponding row as before it is clear that where d p is the number of removed columns and p is the size of the remaining minor m with a careful analysis similar to the one done in the reciprocal part of the proof one can prove that m is a proper minor in h d and hence non zero for this we will continue using the same notations as in the demonstration of the reverse consider that the rows remaining in m are d we call this set of indices f as before the other rows correspond to the identity columns that have been suppressed the corresponding column indices in m are and each of those columns is a copy of a column using the same notations as in the j f t in m and it is clear that j f t cb t for some b t d implies cb t other part of the proof if j f t q f t k r f t with q f t d and r f t k then jt q f t k r f t so they will be in the same block of column indices b t q f t note that r t k since column indices that are multiples of k will be removed and will never turn into columns in m first we observe that i p d because there were no columns of m in the block with indices in hence the last column of can not be removed it was never there and row d remains in f the corresponding column will be a copy of some column j f p so hence dk d k so the proper condition is satisfied for the last index in general when we consider row index in position p s we have d s r where r is the number of identity columns after the d s that have been removed column is copy of column j f and f s columns after it have been already considered and r have been removed from here we have j f and this implies d s r k a final observation is that is always in block because block contained at least two columns of m so even is one is removed there will always be at least one column remaining in that first block on the other hand and we have k the first and last observations are not necessary but they help to understand the general case we have proven that minor m is proper and therefore can not be singular
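To make the nonsingular-minor condition at the heart of the search algorithm and of the appendix proof above concrete, here is a minimal sketch, not the paper's implementation: it works over a prime field GF(p) rather than the extension fields GF(q^m) used in the tables, and it checks superregularity of a lower-triangular Toeplitz matrix as a simplified stand-in for the proper-minor condition on the truncated parity-check matrix H(d). The function names, the Toeplitz shape, and the brute-force enumeration are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (illustrative, not the paper's search): check that every
# minor of a lower-triangular Toeplitz matrix over GF(p) that is not
# trivially zero (a "proper" minor, here: column indices <= row indices
# entrywise) is nonsingular, and brute-force small coefficient sequences.
from itertools import combinations

def det_mod_p(rows, p):
    """Determinant of a square integer matrix modulo a prime p."""
    m = [list(r) for r in rows]
    n, det = len(m), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] % p), None)
        if piv is None:
            return 0
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            det = -det
        det = det * m[c][c] % p
        inv = pow(m[c][c], p - 2, p)        # inverse exists since p is prime
        for r in range(c + 1, n):
            f = m[r][c] * inv % p
            for k in range(c, n):
                m[r][k] = (m[r][k] - f * m[c][k]) % p
    return det % p

def is_superregular(a, p):
    """a = [a0, ..., ad] defines T[i][j] = a[i-j] for i >= j and 0 otherwise."""
    d = len(a) - 1
    T = [[a[i - j] if i >= j else 0 for j in range(d + 1)] for i in range(d + 1)]
    for k in range(1, d + 2):
        for rows in combinations(range(d + 1), k):
            for cols in combinations(range(d + 1), k):
                if all(c <= r for c, r in zip(cols, rows)):   # proper minor
                    sub = [[T[r][c] for c in cols] for r in rows]
                    if det_mod_p(sub, p) == 0:
                        return False
    return True

# Toy usage over GF(7): enumerate sequences (1, a1, a2, a3) with nonzero entries.
p, d = 7, 3
good = [a for a in ((1, x, y, z) for x in range(1, p)
                    for y in range(1, p) for z in range(1, p))
        if is_superregular(list(a), p)]
print(len(good), good[:3])
```

A search in the spirit of the algorithm described earlier would instead extend the coefficient sequence depth-first, maintaining at each depth the set L of legal values and reusing previously computed subdeterminants, rather than enumerating and re-checking complete candidates as this toy version does.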
7
fusion systems with some sporadic jun justin lynd and julianne rainbolt abstract aschbacher s program for the classification of simple fusion systems of odd type at the prime has two main stages the classification of systems of subintrinsic component type and the classification of systems of type we make a contribution to the latter stage by classifying systems with a isomorphic to the systems of several sporadic groups under the assumption that the centralizer of this component is cyclic introduction the dichotomy theorem for saturated fusion systems ii partitions the class of saturated systems into the fusion systems of characteristic and the fusion systems of component type this is a much cleaner statement than the corresponding statement for finite simple groups and it has a much shorter proof in the last few years aschbacher has begun work on a program to give a classification of a large subclass of the systems of component type a memoir setting down the outline and first steps of such a program is forthcoming but see for a survey of some of its contents the immediate goal is to give a simpler proof of roughly half of the classification of the finite simple groups by carrying out most of the work in the category of saturated systems let f be a saturated fusion system over a finite s of which the standard example is the fusion system fs g where g is a finite group and s is a sylow of a component is a subnormal quasisimple subsystem the system is said to be of component type if some involution centralizer in f has a component the systems of odd type consist of those of subintrinsic component type and those of type this is a proper subclass of the systems of component type in focusing attention on this restricted class one is expected to avoid several difficulties in the treatment of standard form problems like the ones considered in this paper by carrying out the work in fusion systems it is expected that certain difficulties within the classification of simple groups of component type can be avoided including the necessity of proving thompson s we refer to for the definition of a fusion system of subinstrinsic component type as it is not needed in this paper the fusion system f is said to be of type if it is not of subintrinsic component type and there is a fully centralized involution x s such that the of cs x is equal to the of s and cf x has a component we shall call such a component in an involution centralizer a date march key words and phrases fusion systems sporadic groups involution centralizer components the research of the first author was partially supported by nsa young investigator grant and was supported by an grant which allowed for travel related to this work in this paper we classify saturated systems having a isomorphic to the system of mcl or ly under the assumption that the centralizer of the component is a cyclic a similar problem for the fusion system of q q mod was treated in under stronger hypotheses theorem let f be a saturated fusion system over the finite suppose that x s is a fully centralized involution such that f cf x q k where k is the system of mcl or ly and where q is a cyclic assume further that m cs x m s then k is a component of f here f cf x is the generalized fitting subsystem of the centralizer cf x and m s s is the of s that is the largest rank of an elementary abelian of we mention that any fusion system having an involution centralizer with a component isomorphic to mcl or ly is necessarily of subintrinsic component type by this means that 
when restricted to those components theorem gives a result weaker than is needed to fit into the subintrinsic type portion of aschbacher s program however we have included mcl and ly here because our arguments apply equally well in each of the four cases there is no almost simple group with an involution centralizer having any of these simple groups as a component but the wreath product g hxi with always has cg x hxi k with k a component that is diagonally embedded in the strategy for the proof of theorem is to locate a suitable elementary abelian subgroup f in the sylow of k and then to show that the normalizer in s of e hxif has at least twice the rank as that of f thus the aim is to force a resemblance with the wreath product in which ng hxif modulo core is an extension of with fi the projection of f onto the ith factor by hxi autk f lemma is important for getting control of the extension of e determined by nf e in order to carry out this argument acknowledgements we would like to thank the department of mathematics and statistics at saint louis university and the departments of mathematics at rutgers university and ohio state university for their hospitality and support during mutual visits of the authors we would also like to thank solomon and lyons for helpful discussions and an anonymous referee for their comments and suggestions background on fusion systems we assume some familiarity with notions regarding saturated fusion systems as can be found in or although some items are recalled below most of our notation is standard whenever g is a group we write g for the set of nonidentity elements of if we wish to indicate that g is a split extension of a group a p g by a group b then we will write g a b for g g denote by cg the conjugation homomorphism cg x xg and its restrictions morphisms in fusion systems are written on the right and in the exponent that is we write or p for the image of an element x or subgroup p of s under a morphism in a fusion system by analogy with the more standard exponential notation for conjugation in a group terminology and basic properties throughout this section fix a saturated fusion system f over the we will sometimes refer to s as the sylow subgroup of f for a subgroup p s we write autf p for homf p p and outf p for autf p inn p whenever two subgroups or elements of s are isomorphic in f we say that they are f conjugate write p f for the set of f of p if e is a subsystem of f on the subgroup t s and t s is a morphism in f the conjugate of e by is the subsystem e over t with morphisms for a morphism in we first recall some of the terminology for subgroups and common subsystems in a fusion system definition fix a saturated fusion system over the s and let p then p is fully f if p q for all q p f p is fully f if p q for all q p f p is f if cs q q for all q p f p is f if op outf p p is weakly f if p f p the centralizer of p in f is the fusion system cf p over cs p with morphisms those homf q r such that there is an extension homf p q p r that restricts to the identity on p the normalizer of p in f is the fusion system nf p over ns p with morphisms those homf q r such that there is an extension p q p r in f such that p p we write f f and f c for the collections of fully f and f respectively and we write f f c for the intersection of these two collections sometimes we refer to an element x of s as being fully f when we actually mean that the group hxi is fully f especially when x is an involution for example this was done in the the statement of the theorem 
in the introduction whenever p s we write a p for the set of homf ns p s such that p is fully f lemma for each p s a p is not empty moreover for each q p f f f there is a p with p q proof this is b applied with k aut p by a result of puig the centralizer cf p is saturated if p is fully f and the normalizer nf p is saturated if p is fully f we write op f for the unique largest subgroup p of s satisfying nf p f and z f for the unique largest subgroup p of s satisfying cf p f we note that if f fs g for some finite group g with sylow s then op g s is normal in f so that op g op f but the converse does not hold in general the model theorem a subgroup p s is f if and only if cs p p and p is fully f if p is f and fully f then the normalizer fusion system m nf p is constrained that is op m is by the model theorem proposition there is then a unique finite group m up to isomorphism having sylow ns p and such that m op m op m and fns p m then m is said to be a model for m in this case tame fusion systems the main hypothesis of theorem is that the generalized fitting subsystem of the involution centralizer c is the fusion system of a finite group where q is a cyclic and k is simple in this situation cf x is itself the fusion system of a finite group c with f c q k where k mcl or ly since each of these simple groups tamely realizes its system roughly a finite group tamely realizes its fusion system if every automorphism of its fusion system is induced by an automorphism of the group moreover a fusion system is said to be tame if there is some finite group that tamely realizes it we refer to for more details the importance of tameness in the context of standard form problems was pointed out in the discussion there is centered around the notion of strong tameness which was needed for proofs of the results of but the contents of imply that a fusion system is tame if and only if it is strongly tame recently oliver has established the following useful corollary of the results in which we state for our setup here theorem corollary let c be a saturated fusion system over a assume that f c c k where k is simple and tamely realized by a finite simple group then c is tamely realized by a finite group c such that f c c note that upon application of theorem to the involution centralizer c cf x in theorem we have c q c indeed c q since c is normal in c and one sees that q cs k by combining lemma c of with lemma c below however c is normal and in cc k by properties of the generalized fitting subgroup so that cc k c is a group of outer automorphisms of the cyclic c and so is itself a it follows that cc k cs k is a normal of c since k e c and hence q cs k c thus the effect of theorem for our purposes is that we may work in the group c where q is a normal subgroup in particular in the setup of the theorem the quotient is isomorphic to a subgroup of aut k containing inn k where k is one of the simple groups appearing in theorem structure of the components in this section we recall some properties of the simple systems appearing in theorem that are required for the remainder lemma let g be or and v a faithful g of dimension then a g acts transitively on the nonzero vectors of v b cgl v g g c h g v and d if g acts on a homocyclic y with y v then y v proof in each case v is irreducible there is a unique such module for namely the natural g considered as a module over and thus a holds in this case the module for is unique up to taking duals clearly points a and b are independent of the choice between these two modules and c 
is independent of such a choice by note that acts transitively on v which can be seen by noting that a sylow acts with exactly one fixed point on v and a sylow acts with no fixed points point b holds for g by absolute irreducibility similarly for g one has that cgl v g g v and so b follows in this case as z g c point c for g gl holds because by coprime action z g and so g has a fixed point on any module containing v as a submodule see point c for g holds for example by applying with l where cg cg and the li indeed satisfy the hypotheses of that theorem h li v because each sylow of g has no nontrivial fixed point on v and h li v using a similar argument via coprime action as above we now turn to d which follows from a special case of a result of higman theorem this says that if acts faithfully on a homocyclic y in which an element of order acts without fixed points on y then y is elementary abelian in case g v is the natural module for g and certainly g has with respect to an appropriate basis a diagonal element of order acting without fixed points in the case g we have g and the action of g on v is the restriction of the natural or dual action of restriction of either one to shows that is embedded in g as an moving points in the natural permutation action so is contained in g up to conjugacy and as before it has an element of order acting without fixed points on v hence d holds in this case as well by higman s theorem for a vector space e over the field with two elements the next lemma examines under rather strong hypotheses the structure of extensions of e by certain subgroups of the stabilizer in gl e of a hyperplane lemma let e p na v and n a aut e v e with v u p let l be a complement to u in p acting decomposably on e x e v the fixed point for the action of l and g let h be an extension of e by ug with the given action and let x be the preimage in h of u under the quotient map h ug assume that a g acts transitively on v b cgl v g g and c h g v then there is a subgroup y of x that is elementary abelian or homocyclic of order and a complement to hxi in x proof let since the commutator map x determines a linear isomorphism v g is transitive on the nonzero vectors of by a and z hence if is not elementary abelian then it is extraspecial with center and g preserves the squaring map z this is not the case because g is transitive on the nonzero vectors of and n therefore is elementary abelian assumption c now yields that there is a complement to let y be the preimage of in x we claim that y is abelian assume on the contrary then y y and y are contained in v since is elementary abelian and by assumption neither of these are trivial similarly v is contained in z y which is not y therefore v y y y y z y by a and by the squaring map v is linear isomorphism let be its inverse then the map v v given by v x v is a linear isomorphism commuting with the action of g and so g by b where g gl v is the structure map let g g map to under then y x y g is the squaring map this means that for each y y we have y gx y y g hence for each pair w y y w y w g y g w gx y gx wy gx wy g y g which gives w y wy w y y w thus y is abelian after all it follows that y v or y by a and this completes the proof of the lemma we now examine the structure of the simple systems occupying the role of k in theorem let be a isomorphic to a sylow of this is generated by involutions and such that z i and with additional defining relations a sylow subgroup of or mcl is isomorphic to a sylow of an extension of by a field automorphism this 
is a semidirect product hf i with f f f f f f a sylow subgroup of is isomorphic to a sylow of extended by a unitary automorphism this is a semidirect product hui with u u u u u a sylow subgroup of ly is isomorphic to a sylow of aut this is a semidirect product hf ui with f u and the relations above denote by a isomorphic to one of hf i hui or hf ui recall that the thompson subgroup j p of a finite p is the subgroup generated by the elementary abelian subgroups of p of largest order lemma let k be mcl or ly with sylow as above then a z i is of order b a where i and i so that j also after suitable choice of notation one of the following holds i k mcl or ly and autk or ii k and autk c there is f a such that the pair autk f f satisfies assumptions a c of lemma in the role of g v d all involutions in i are autk j proof point a holds by inspection of the relations above now and are the elementary abelian subgroups of of maximal rank and so to prove b it suffices to show that each elementary abelian subgroup of maximal rank in is contained in set l and identify inn l with write inndiag l l for the group of automorphisms of then inndiag l contains l with index corresponding to the size of the center of the universal version of l theorem c also aut l is a split extension of of inndiag l by where is generated by a c is generated by a graph automorphism field automorphism of l and theorem by theorems each involution of aut l is aut l conjugate to or and the centralizers in l of these automorphisms are isomorphic to and respectively again by those theorems these centralizers have and respectively since has this shows that j from the relations used in defining each involution in is contained in one of or so we conclude that a a the description of the automizers in b follows from table for and mcl lemma for and for ly now point c follows from b and lemma and point d follows from c and burnside s fusion theorem the statement that the automizer of a weakly subgroup of which is j in this case controls the in its center lemma let k be one of the sporadic groups mcl or ly and let be a sylow of then a out k if k otherwise and or ly and out k b for each involution aut k inn k i ck mcl if k ii ck l if k and c each automorphism of k centralizing a member of a is inner proof points a and b follow by inspection of table of by lemma b the of k is while each of the centralizers in b is of so c holds preliminary lemmas we now begin in this section the proof of theorem and so we fix the notation and hypotheses that will hold throughout the remainder of the paper let f be a saturated fusion system over the s and let x s be an involution assume that hxi is fully f that m cs x m s and that f cf x where q is cyclic set c cf x and t cs x so that c is a saturated fusion system over t by the remark just after lemma let be the sylow subgroup of k and set r q assume that k is the fusion system of one of the sporadic groups k mcl or ly since k tamely realizes k in each case the quotient t induces a of outer automorphisms of k by theorem arguing by contradiction we assume k is not a component of we fix the presentation in section for in whichever case is applicable and we note that q hxi by assumption on lemma notation may be chosen so that t and j t are fully f proof we repeatedly use lemma let a t then as x we have that is still fully f thus we may assume that t is fully f normalized after replacing x t and j t by their conjugates under now let a j t then as ns t nns j t t it follows that t j t t j t t j t t t t and so 
equality holds because t is fully f hence t is still fully f as before is still fully f lemma hxi is not weakly f in t proof assume on the contrary in which case t s otherwise ns t contains t properly and moves x it follows that x z s which is contained in the center of every f subgroup hence x z f by alperin s fusion theorem theorem and we conclude that k is a component of c f contrary to hypothesis lemma the following hold a j t j r hxi j and b ct i proof suppose a does not hold choose a a t with a hxi j then a r by the structure of r and a acts nontrivially on k by in particular out k and m cr a by lemma hence m a while m r this contradicts the choice of a and establishes a by lemma a cr i also ct cr by part a so b is also established lemma if t s then z s hx i if t s then z s i proof note first that z s t by lemma c and z s r and so z s z r hx i by lemma a thus the lemma holds in case t in case t s let a ns t with t note that j t j t j j i by lemma a so a normalizes z t hx i and z t j t j t hx i i i but a does not centralize x thus z s i as claimed lemma the following hold a x is not f to and b x is conjugate to if and only if t s and in this case x is ns t to proof for part a let f with z t assume first that t by the extension axiom extends to autf t which restricts to an automorphism of j t lemma a shows that x j t j t while j t j t from lemma d hence this shows x is not f to in this case now assume that t then x z s whereas z s i by lemma since hxi is fully f by assumption we conclude that x is not f to in this case either this completes the proof of a if t s then x is ns t to by a while if t s then a and burnside s fusion theorem imply that hxi is weakly f in z s hx i thus b holds the case in this section it is shown that t s that is x is not we continue the notation set at the beginning of section lemma if t s then hxi is weakly f in proof assume t then z s hx i by lemma using burnside s fusion theorem and assumption we see from lemma that hxi is weakly f in hx i by inspection of table k has one class of involutions thus there are exactly three of involutions namely x c and the lemma therefore holds by lemma if t r then t in particular t s in case k is the fusion system of or ly proof assume t r and also to the contrary that t by lemma hxi z s is weakly f and so is fixed by each automorphism of each f subgroup therefore hxi z f and so k is a component of c f contrary to assumption the last statement follows then follows from lemma a lemma assume k is the fusion system of mcl or then t proof assume t then r t by lemma fix f xf t r the extension khf i hf i of k is defined by and khf is the system of aut mcl or aut by theorem thus by lemma b conjugating in khf i if necessary we may assume hf i is fully khf by lemma b all involutions of hf are khf and ckhf i f hf or hf i in particular f is semidihedral or dihedral respectively of order and with center i fix a four subgroup v f then f is conjugate to f for example by an element in the normalizer of hf i f in hf i and hence is f to each element of f v by the structures of and fix homf hf i hxi by the extension axiom extends to a morphism which we also call defined on hf iv ct f therefore x is f to each element in xv now the intersection xv r v r is nontrivial because so as x is not itself in v we see that x has a distinct conjugate in this contradicts lemma and completes the proof proof of theorem continue the notation and hypotheses set at the beginning of section in addition we fix f a satisfying assumptions a c of lemma as guaranteed 
by lemma c and set e hxif then e a t by lemma a and so m t in this section we finish the proof of theorem by showing that the hypotheses of lemma hold for a model of the normalizer in f of an appropriate f of via lemma d this forces the of s to be at least contrary to the hypothesis that m s m t by lemmas and t lemma autf t autc t proof represent autf t on z t hx i and apply lemma lemma the following hold a autf j t autc j t and b xautf e xf and so autf e autc e proof represent autf j t on z j t hx i now autc j t cautf j t x and the former is transitive on zj by lemma d also since x is ns t conjugate to we conclude from lemma that xautf j t xzj is of size thus a holds similarly to a we have autc e cautf e x and the former is transitive on f by choice of f from lemma a and lemma a t by part a and lemma j t t representing ns j t on a t we see that the kernel has index at most so there is an element of ns j t that normalizes in particular nt e ns e and so x is autf e to a member of xf now by choice of f another appeal to lemma yields that xautf e f x has size which establishes b lemma the following hold a q hxi b e is f proof suppose on the contrary that q hxi and choose w q with w x fix a ns t t such that t then xa and also w a further hw a i is normal in t since hwi is thus hw a i hw a i whereas ct i by lemma b it follows that i hw a i i hxi a contradiction that establishes a let be one of the two elementary abelian subgroups of rank in t and set j then contains x and so cs ct by lemma c ct cr hence cs by part a we can now prove b fix a e since hxi is fully centralized the restriction of to i has an extension cs cs x t which is defined on cs e thus setting cs e t we see from the previous paragraph that e e so that cs e e as e is fully f and contains its centralizer in s this means that e is f as claimed since we will be working in nf e for the remainder we may assume after replacing e by an f if necessary that e is fully f hence e f f c by lemma b fix a model h for nf e cf lemma h satisfies the hypotheses of lemma proof set autc e and observe that autf e by lemma b thus contains g or with index or as g acts transitively on f and centralizes x it follows from lemma b that xf and f are the orbits of autf e on e hence autf e naut e f a nontrivial split extension of an elementary abelian u of order by gl f with the standard action we claim that u autf e suppose that this is not the case now g acts transitively on f and the commutator map x defines an isomorphism of from u to f so g acts transitively on u since u is normalized by autf e we see that autf e u in particular autf e embeds into gl f now g or in the cases under consideration and by lemma b autf e is therefore a subgroup of gl f containing g with index or however has index in and is contained with index in a unique maximal subgroup of a contradiction therefore u autf e as claimed it has thus been shown that autf e contains a subgroup with index or that is a split extension of u naut e f by thus h has a subgroup of index or that is an extension of e by ug assumptions a c of lemma hold via lemma c by the choice of f proof of theorem keep the notation of the proof of lemma by that lemma and lemma there is a y to hxi in h that is homocyclic of order with y f or elementary abelian of order now g is isomorphic to or with faithful action on f so the former case is impossible by lemma hence t s contrary to hypothesis references alperin and daniel gorenstein a vanishing theorem for cohomology proc amer math soc mr michael aschbacher radha 
kessar and bob oliver fusion systems in algebra and topology london mathematical society lecture note series vol cambridge university press cambridge mr kasper andersen bob oliver and joana ventura reduced tame and exotic fusion systems proc lond math soc no mr michael aschbacher finite group theory second cambridge studies in advanced mathematics vol cambridge university press cambridge mr the generalized fitting subsystem of a fusion system mem amer math soc no mr classifying finite simple groups and systems iccm not no mr on fusion systems of component type preprint carles broto ran levi and bob oliver the homotopy theory of fusion systems amer math soc no electronic david craven the theory of fusion systems cambridge studies in advanced mathematics vol cambridge university press cambridge an algebraic approach mr larry finkelstein finite groups with a standard component isomorphic to algebra no mr finite groups with a standard component isomorphic to hj or hjm algebra no mr george glauberman and justin lynd control of fixed points and existence and uniqueness of centric linking systems invent math no daniel gorenstein richard lyons and ronald solomon the classification of the finite simple groups number part i chapter a mathematical surveys and monographs vol american mathematical society providence ri almost simple mr graham higman odd characterizations of finite simple groups lecture notes of university of michigan ann arbor justin lynd a characterization of the system of q algebra mr bob oliver existence and uniqueness of linking systems chermak s proof via obstruction theory acta math no mr reductions to simple fusion systems bulletin of the london mathematical society no tameness of fusion systems of sporadic simple groups preprint robert wilson the subgroup structure of the lyons group math proc cambridge philos soc no mr institute of mathematics university of aberdeen fraser noble building aberdeen address department of mathematics and statistics saint louis university north grand saint louis mo address rainbolt
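For orientation, the wreath-product example mentioned in the introduction can be written out explicitly; this is a standard computation included only as an illustration, not part of the paper's argument:

$$
G \;=\; (K \times K) \rtimes \langle x \rangle, \qquad (k_1, k_2)^{x} = (k_2, k_1), \qquad
C_G(x) \;=\; \{(k,k) : k \in K\} \times \langle x \rangle \;\cong\; \langle x \rangle \times K,
$$

so K occurs as a component of C_G(x), diagonally embedded in K x K. This is the configuration that the proof strategy sketched in the introduction is designed to force, and it is ruled out by the theorem's hypothesis that the 2-rank of C_S(x) equals that of S.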
4
prediction with a short memory nov sham kakade university of washington sham percy liang stanford university pliang vatsal sharan stanford university vsharan gregory valiant stanford university valiant abstract we consider the problem of predicting the next observation given a sequence of past observations and consider the extent to which accurate prediction requires complex algorithms that explicitly leverage dependencies perhaps surprisingly our positive results show that for a broad class of sequences there is an algorithm that predicts well on average and bases its predictions only on the most recent few observation together with a set of simple summary statistics of the past observations specifically we show that for any distribution over observations if the mutual information between past observations and future observations is upper bounded by i then a simple markov model over the most recent observations obtains expected kl error hence error respect to the optimal predictor that has access to the entire past and knows the data generating distribution for a hidden markov model with n hidden states i is bounded by log n a quantity that does not depend on the mixing time and we show that the trivial prediction algorithm based on the empirical frequencies of length o log windows of observations achieves this error provided the length of the sequence is log where d is the size of the observation alphabet we also establish that this result can not be improved upon even for the class of hmms in the following two senses first for hmms with n hidden states a window length of log is necessary to achieve expected kl error or error second the log samples required to accurately estimate the markov model when observations are drawn from an alphabet of size d is necessary for any computationally tractable algorithm assuming the hardness of strongly refuting a certain class of csps memory modeling and prediction we consider the problem of predicting the next observation xt given a sequence of past observations which could have complex and dependencies this sequential prediction problem is one of the most basic learning tasks and is encountered throughout natural language modeling speech synthesis financial forecasting and a number of other domains that have a sequential or chronological element the abstract problem has received much attention over the last half century from multiple communities including tcs machine learning and coding theory the fundamental question is how do we consolidate and reference memories about the past in order to effectively predict the future given the immense practical importance of this prediction problem there has been an enormous effort to explore different algorithms for storing and referencing information about the sequence these efforts have led to recurrent neural networks encode the past as a real vector of fixed length that is updated after every specific classes of such networks such as long memory lstm networks other recently popular models that have explicit notions of memory include neural turing machines memory networks differentiable neural computer etc these models have been quite successful see nevertheless they seem largely unable to consistently learn dependencies which are crucial in many settings including language in parallel to these efforts to design systems that explicitly use memory there has been much effort from the neuroscience community to understand how humans and animals are able to make accurate predictions about their environment many 
of these efforts also attempt to understand the computational mechanisms behind the formation of memories memory consolidation and retrieval despite the long history of studying sequential prediction many fundamental questions remain how much memory is necessary to accurately predict future observations and what properties of the underlying sequence determine this requirement must one remember significant information about the distant past or is a memory sufficient what is the computational complexity of accurate prediction how do answers to the above questions depend on the metric that is used to evaluate prediction accuracy aside from the intrinsic theoretical value of these questions their answers could serve to guide the construction of effective practical prediction systems as well as informing the discussion of the computational machinery of cognition and in nature in this work we provide insights into the first three questions we begin by establishing the following proposition which addresses the first two questions with respect to the pervasively used metric of average prediction error proposition let m be any distribution over sequences with mutual information i m between the past observations and future observations xt the best order markov model which makes predictions based only on the most recent observations predicts the p distribution of the next observation with average kl error i m or average error i m with respect to the actual conditional distribution of xt given all past observations the intuition behind the statement and proof of this general proposition is the following at time t we either predict accurately and are unsurprised when xt is revealed to us or if we predict poorly and are surprised by the value of xt then xt must contain a significant amount of information about the history of the sequence which can then be leveraged in our subsequent predictions of etc in this sense every timestep in which our prediction is bad we learn some information about the past because the mutual information between the history of the sequence and the future is bounded by i m if we were to make i m consecutive bad predictions we have captured nearly this amount of information about the history and hence going forward as long as the window we are using spans these observations we should expect to predict well this general proposition framed in terms of the mutual information of the past and future has immediate implications for a number of models of sequential data such as hidden markov models hmms for a hmm with n hidden states the mutual information of the generated sequence is trivially bounded by log n which yields the following corollary to the above proposition we state this proposition now as it provides a helpful reference point in our discussion of the more general proposition corollary suppose observations are generated by a hidden markov model with at most n hidden states the best n order markov model which makes predictions based only on the most recent log n predicts the distribution of the next observation with average kl error or observations error with respect to the optimal predictor that knows the underlying hmm and has access to all past observations in the setting where the observations are generated according to an hmm with at most n hidden states this best th order markov model is easy to learn given sufficient data and corresponds to the naive empirical model based on the previous observations specifically this is the model that given outputs the observed 
empirical distribution of the observation that has followed this length sequence to predict what comes next in the phrase defer the details to the we look at the previous occurrences of this subsequence and predict according to the empirical frequency of the subsequent word the following theorem makes this claim precise theorem suppose observations are generated by a hidden markov model with at most n hidden states and output alphabet of size for there exists a window length o n and absolute constant c such that for any t dc if t t is chosen uniformly at random then the expected distance between the true distribution of xt given the entire history and knowledge of the hmm and the distribution predicted by the naive empirical order markov model based on xt is bounded by the above theorem states that the window length necessary to predict well is independent of the mixing time of the hmm in question and holds even if the model does not mix while the amount of data required to make accurate predictions using length windows scales exponentially in to the condition in the above theorem that t is chosen uniformly between and t do lower bounds discussed in section argue that this exponential dependency is unavoidable interpretation of mutual information of past and future while the mutual information between the past observations and the future observations is an intuitive parameterization of the complexity of a distribution over sequences the fact that it is the right quantity is a bit subtle it is tempting to hope that this mutual information is a bound on the amount of memory that would be required to store all the information about past observations that is relevant to the distribution of future observations consider the following setting given a joint distribution over random variables p ast and f u t suppose we wish to define a function f that maps p ast to a binary advice string f p ast possibly of variable length such that f u t is independent of p ast given f p ast as is shown in harsha et al there are joint distributions over p ast f u t such that even on average the minimum length of the string necessary for the above task is exponential in the mutual information i p ast f u t this setting can also be interpreted as a communication game where one player generates p ast and the other generates f u t given limited communication the ability to communicate f p ast given the fact that this mutual information is not even an upper bound on the amount of memory that an optimal algorithm computationally unbounded and with complete knowledge of the distribution would require proposition might be surprising implications of proposition and corollary these results show that a markov model that can not capture dependencies or structure of the predict accurately on any distribution provided the order of the markov model scales with the complexity of the distribution as parameterized by the mutual information between the past and future strikingly this parameterization is indifferent to whether the dependencies in the sequence are relatively as in an hmm that mixes quickly or very as in an hmm that mixes slowly or does not mix at all independent of the nature of these dependencies provided the mutual information is small accurate prediction is possible based only on the most recent few observation see figure for a concrete illustration of this result in the setting of an hmm that does not mix and has dependencies figure a depiction of a hmm on n states that repeats a given length n binary sequence of 
outputs and hence does not mix corollary and theorem imply that accurate prediction is possible based only on short sequences of o log n observations at a time where increasingly complex models such as recurrent neural networks and neural turing machines are in vogue these results serve as a baseline theoretical result they also help explain the practical success of simple markov models such as smoothing which are crucial components in machine translation and speech recognition systems although recent recurrent neural networks have yielded empirical gains see current models still seem largely incapable of successfully capturing in it is worth noting that if the string s is sampled first and then p ast and f u t are defined to be random functions of s then the length of s can be related to i p ast f u t see this latter setting where s is generated first corresponds to allowing shared randomness in the communication game however this is not relevant to the sequential prediction problem one amusing example is the recent short film sunspring whose script was automatically generated by an lstm locally each sentence of the dialogue mostly makes sense though there is no cohesion over longer time frames and no overarching plot trajectory despite the brilliant acting some settings such as natural language capturing such dependencies seems crucial for achieving results indeed the main message of a narrative is not conveyed in any single short segment more generally intelligence seems to be about the ability to judiciously decide what aspects of the observation sequence are worth remembering and updating a model of the world based on these aspects thus for such settings proposition can be interpreted as a negative average error is not a good metric for training and evaluating models it is important to note that average prediction error is the metric that ubiquitously used in practice both in the natural language processing domain and elsewhere our results suggest that a different metric might be essential to driving progress towards systems that attempt to capture dependencies and leverage memory in meaningful ways we discuss this possibility of alternate prediction metrics more in section for many other settings such as financial prediction and lower level language prediction tasks such as those used in ocr or speech recognition average prediction error is the most meaningful metric for these settings the result of proposition is extremely positive no matter the nature of the dependencies in the financial markets it is sufficient to learn a markov model as one obtains more and more data one can learn a higher and higher order markov model and average prediction accuracy should continue to improve for these applications the question now becomes a computational question the naive approach to learning an markov model in a domain with an alphabet of size d might require d space to store and data to learn from a computational standpoint is there a better algorithm what properties of the underlying sequence imply that such models can be learned or approximated more efficiently or with less data our computational lower bounds described below provide some perspective on these computational considerations lower bounds our positive results show that accurate prediction is possible via an algorithmically simple a markov model that only depends on the most recent can be learned in an algorithmically straightforward fashion by simply using the empirical statistics of short sequences of examples compiled over a 
sufficient amount of data nevertheless the markov model has d parameters and hence requires an amount of data that scales as d to learn where d is a bound on the size of the observation alphabet this prompts the question of whether it is possible to learn a successful predictor based on significantly less data we show that even for the special case where the data sequence is generated from an hmm over n hidden states this is not possible in general assuming a natural assumption a hmms with n hidden states and an output alphabet of size d is defined via only o nd parameters and nd samples are sufficient from an information theoretic standpoint to learn a model that will predict accurately while learning an hmm is computationally hard see this begs the question of whether accurate average prediction can be achieved via a computationally efficient algorithm and and an amount of data significantly less than the log n that the naive markov model would require our main lower bound shows that there exists a family of hmms such that the dlog sample complexity requirement is necessary for any computationally efficient algorithm that predicts accurately on average assuming a natural assumption specifically we show that this hardness holds provided that the problem of strongly refuting a certain class of csps is hard which was conjectured in and studied in related works and see section for a description of this class and discussion of the conjectured hardness theorem assuming the hardness of strongly refuting a certain class of csps for all sufficiently large n and any for some fixed constant c there exists a family of hmms with n hidden states and an output alphabet of size d such that any polynomial time algorithm that achieves average error with respect to the optimal predictor for a random hmm in the family must observe log observations from the hmm as the mutual information of the generated sequence of an hmm with n hidden states is bounded by log n theorem directly implies that there are families of distributions m with mutual information i m and observations drawn from an alphabet of size d such that any computationally efficient algorithm requires i m samples from m to achieve average error the above bound holds when d is large compared to log n or i m but a different but equally relevant regime is where the alphabet size d is small compared to the scale of dependencies in the sequence for example when predicting characters we show lower bounds in this regime of the same flavor as those of theorem except based on the problem of learning a noisy parity function the very slightly subexponential algorithm of blum et al for this task means that we lose at least a superconstant factor in the exponent in comparison to the positive results of proposition proposition let f k denote a lower bound on the amount of time and samples required to learn parity with noise on uniformly random inputs for all sufficiently large n and for some fixed constant c there exists a family of hmms with n hidden states such that any algorithm that achieves average prediction error with respect to the optimal predictor for a random hmm in the family requires at least f log time or samples finally we also establish the information theoretic optimality of the results of proposition in the sense that among even computationally unbounded prediction algorithms that predict based only on the most recent observations an average kl prediction error of i m and error i m with respect to the optimal predictor is necessary 
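Before the formal statement of the information-theoretic lower bound, the following is a minimal sketch of the naive empirical l-th order Markov predictor that the upper bounds above refer to: predict the next observation by the empirical distribution of the symbol that followed the window of the most recent l observations in the data. The class name, the add-one smoothing for unseen windows, and the toy periodic sequence (echoing the non-mixing HMM example above) are illustrative choices, not taken from the paper.

```python
# Minimal sketch of the naive empirical l-th order Markov predictor discussed
# above: predict the next symbol by the empirical distribution of what
# followed the most recent length-l window in the training sequence.
# Add-one smoothing for unseen windows is an implementation choice here,
# not something prescribed by the text.
from collections import Counter, defaultdict

class EmpiricalMarkov:
    def __init__(self, order, alphabet):
        self.order = order
        self.alphabet = sorted(alphabet)
        self.counts = defaultdict(Counter)      # window -> Counter of next symbols

    def fit(self, seq):
        l = self.order
        for t in range(l, len(seq)):
            self.counts[tuple(seq[t - l:t])][seq[t]] += 1
        return self

    def predict(self, context):
        """Distribution over the next symbol given (at least) the last l observations."""
        c = self.counts[tuple(context[-self.order:])]
        total = sum(c.values()) + len(self.alphabet)     # add-one smoothing
        return {a: (c[a] + 1) / total for a in self.alphabet}

# Toy usage: a periodic binary source (cf. the non-mixing HMM example above).
pattern = [0, 1, 1, 0, 1, 0, 0, 1]
seq = pattern * 200
model = EmpiricalMarkov(order=4, alphabet={0, 1}).fit(seq)
print(model.predict(seq[:100]))     # nearly all mass on the true next symbol, 1
```

On this periodic example every length-4 window determines the next bit uniquely, so the predictor concentrates almost all of its mass on the correct symbol, which is consistent with the guarantees for non-mixing sources discussed above.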
Proposition. There is an absolute constant c such that for all sufficiently small ε > 0 and all sufficiently large n, there exists an HMM with n hidden states such that it is not possible to obtain average KL prediction error less than ε with respect to the optimal predictor while using only the most recent c·log(n)/ε observations to make each prediction (an analogous statement holds for ℓ1 error).
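The construction that realizes this kind of lower bound, described in the appendix, is an HMM whose transition matrix is a permutation (a long cycle) over n hidden states, each carrying a uniformly random binary label that biases its output. The sketch below builds such an HMM; the helper name build_cycle_hmm, the bias value 0.75, and the toy size n = 8 are illustrative assumptions rather than the exact parameters of the formal construction.

```python
import numpy as np

def build_cycle_hmm(n, bias, rng):
    """Permutation-style construction: a deterministic cycle over n hidden
    states, each carrying a uniformly random binary label; a state emits its
    label with probability `bias` and the other symbol otherwise."""
    labels = rng.integers(0, 2, size=n)
    T = np.roll(np.eye(n), 1, axis=1)       # state i moves deterministically to state (i + 1) mod n
    O = np.zeros((n, 2))
    O[np.arange(n), labels] = bias
    O[np.arange(n), 1 - labels] = 1.0 - bias
    return T, O, labels

rng = np.random.default_rng(3)
T, O, labels = build_cycle_hmm(n=8, bias=0.75, rng=rng)
print(labels)
print(O[:3])   # each row puts mass `bias` on that state's label
```

A predictor that sees only a short window of outputs cannot tell which segment of the cycle it is in whenever another segment carries a nearby label pattern, which is the intuition the appendix makes precise.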
Future directions.
As mentioned above, for the settings in which capturing long-range dependencies seems essential, it is worth revisiting the choice of average prediction error as the metric used to train and evaluate models. One possibility, which has a more worst-case flavor, is to evaluate the algorithm only at a chosen set of time steps instead of at all time steps; the naive Markov model can then no longer do well simply by predicting well on the time steps when prediction is easy. In the context of natural language processing, learning with respect to such a metric intuitively corresponds to training a model to do well on a question-answering task instead of on a language-modeling task. A fertile middle ground between average error (which gives too much reward for correctly guessing common words like "a" and "the") and worst-case error might be a prediction error that provides more reward for correctly guessing less common observations. It seems possible, however, that the techniques used to prove the proposition can be extended to yield analogous statements for such error metrics.

Given the many settings for which average error is the most natural metric of prediction accuracy, and given the upper bounds of the proposition, it is natural to consider what additional structure might be present that avoids the conditional computational lower bounds of the theorem. One possibility is a robustness property: for example, the property that a Markov model would continue to predict well even if each observation were obscured or corrupted with some small probability. The lower bound instances in the theorem and proposition above rely on parity-based constructions and hence are very sensitive to noise and corruptions. For learning over product distributions, there are well-known connections between noise stability and approximation by low-degree polynomials; additionally, low-degree polynomials can be learned agnostically over arbitrary distributions via polynomial regression. It is tempting to hope that this thread could be made rigorous by establishing a connection between natural notions of noise stability over arbitrary distributions and accurate low-degree polynomial approximations. Such a connection could lead to significantly better sample complexity requirements for prediction on such "robust" distributions of sequences, perhaps requiring an amount of data that is only polynomial in d and I(M)/ε. Additionally, such approaches to learning succinct representations of large Markov models may inform the many practical prediction systems that currently rely on Markov models.

Related work.
Parameter estimation. It is interesting to compare using a Markov model for prediction with methods that attempt to properly learn an underlying model. For example, method-of-moments algorithms allow one to estimate a certain class of hidden Markov models with polynomial sample and computational complexity. These ideas have been extended to learning neural networks and recurrent neural networks using different methods, and Arora et al. showed how to learn certain random deep neural networks. Learning the model directly can result in better sample efficiency and can also provide insights into the structure of the data. The major drawback of these approaches is that they usually require the true distribution to be in, or extremely close to, the model family that we are learning. This is a very strong assumption that often does not hold in practice.

Universal prediction and coding theory. On the other end of the spectrum is the class of online (no-regret) learning methods, which allow the data-generating distribution to be adversarial. However, the nature of these results is fundamentally different from ours: whereas we compare to the perfect model that can look at the infinite past, online learning methods typically compare to a fixed set of experts, which is much weaker. There is also much work on sequential prediction from the information theory and statistics communities. The philosophy of these approaches is often more adversarial, with perspectives ranging from minimum description length to individual-sequence settings, where no model of the data-generating process is assumed. With regards to worst-case guarantees (where there is no data-generation process and regret is the notion of optimality), there is a line of work on both minimax rates and the performance of Bayesian algorithms, the latter of which have favorable guarantees in a sequential setting. With regards to minimax rates, there is an exact characterization of the minimax strategy, though the applicability of this approach is often limited to settings where the set of strategies available to the learner is relatively small (the relevant normalizing constant must exist). More generally, there has been considerable work on regret in both worst-case and statistical settings. More broadly, there is considerable work on information consistency (convergence in distribution) and minimax rates for statistical estimation in parametric and nonparametric families; in some of these settings (minimax risk in parametric settings), there are characterizations in terms of mutual information. There is also work on universal lossless data compression, such as the celebrated Lempel-Ziv algorithm; here the setting is rather different, as it is one of coding an entire sequence in a block setting rather than sequential prediction.

Sequential prediction in practice. Our work was initiated by the desire to understand the role of memory in sequential prediction, and by the belief that modeling long-range dependencies is important for complex tasks such as understanding natural language. There have been many proposed models with explicit notions of memory, including recurrent neural networks, long short-term memory (LSTM) networks, neural Turing machines, memory networks, differentiable neural computers, and other memory-augmented models. While some of these models have been quite successful in practice, they still largely fail to capture many long-range dependencies; in the case of LSTMs, for example, it is not difficult to show that they forget the past exponentially quickly if they are stable. To gain more insight into this problem, we began by analyzing the simplest Markov predictor and found, to our surprise, that it performed nearly as well as one could hope.

Proof sketch of the theorem for hidden Markov models.
We provide a sketch of the proof of the theorem, which is stronger than the general proposition but applies specifically to sequences generated by a hidden Markov model. The core of the proof is the following lemma, which guarantees that the Markov model that knows the true marginal probabilities of all short sequences will end up predicting well. Additionally, this good expected prediction holds with respect to only the randomness of the HMM during the short window, as opposed to over the randomness of when the window begins, as in our more general results. For settings such as financial forecasting, this additional guarantee is particularly pertinent: you do not need to worry
about the possibility of choosing an unlucky time to begin your trading regime as long as you plan to trade for a duration that spans an entire short window beyond the extra strength of this result for hmms the proof approach is intuitive and pleasing in comparison to the more direct proof of proposition we first state the lemma and sketch its proof and then conclude the section by describing how this yields theorem lemma consider an hmm with n hidden states let the hidden state at time s be chosen according to an arbitrary distribution and denote the observation at time s by xs let op ts denote the conditional distribution of xs given observations and knowledge of the hidden state at time s let ms denote the conditional distribution of xs given only which corresponds to the naive sth order markov model that knows only the joint probabilities of sequences of the first s observations then with probability at least over the choice of initial state for c log hx i e kop ts ms where the expectation is with respect to the randomness in the outputs x the proof of the this lemma will hinge on establishing a connection between op ts bayes optimal model that knows the hmm and the initial hidden state and at time s predicts the true distribution of xs given the naive order s markov model ms that knows the joint probabilities of sequences of s observations given that the initial state is drawn according to and predicts accordingly this latter model is precisely the same as the model that knows the hmm and distribution but not and outputs the conditional distribution of xs given the observations to relate these two models we proceed via a martingale argument that leverages the intuition that at each time step either op ts ms or if they differ significantly we expect the sth observation xs to contain a significant amount of information about the hidden state at time zero which will then improve our submartingale will precisely capture the sense that for any s where there is a significant deviation between op ts and ms we expect the probability of the initial state being conditioned on xs to be significantly more than the probability of conditioned on more formally let denote the distribution of the hidden state at time conditioned on xs and let denote the true hidden state at time we show that the following expression is a submartingale s p r log kop ti mi p r the fact that this is a submartingale is not difficult define rs as the conditional distribution of xs given observations and initial state drawn according to but not being at hidden state at time note that ms is a convex combination of op ts and rs hence kop ts ms kop ts rs to verify the submartingale property note that by bayes rule the change in the lhs at any time step s is the log of the ratio of probability of observing the output xs according to the distribution op ts and the probability of xs according to the distribution rs the expectation of this is the between op ts and rs which can be related to the error using pinsker s inequality at a high level the proof will then proceed via concentration bounds azuma s inequality to show that with high probability if the error from the first c log timesteps is large then p r is also likely to be large in which case the posterior distribution of the hidden r state will be sharply peaked at the true hidden state unless had negligible mass less than in distribution log there are several slight complications to this approach including the fact that the submartingale we construct does not necessarily 
have nicely concentrated or bounded differences as the first term in the submartingale could change arbitrarily we address this by noting that the first term should not decrease too much except with tiny probability as this corresponds to the posterior probability of the true hidden state sharply dropping for the other direction we can simply clip the deviations to prevent them from exceeding log n in any timestep and then show that the submartingale property continues to hold despite this clipping by proving the following modified version of pinsker s inequality lemma modified pinsker s inequality for any two distributions h x nand x on x x x define the kl divergence as k log min x c for some fixed c such that log c then k given lemma the proof of theorem follows relatively easily recall that theorem concerns the expected prediction error at a timestep t dc based on the model memp corresponding to the empirical distribution of length windows that have occurred in xt the connection between the lemma and theorem is established by showing that with high probability memp is close to where denotes the empirical distribution of unobserved hidden states ht and is the distribution corresponding to drawing the hidden state and then generating x we provide the full proof in appendix a definitions and notation before proving our general proposition we first introduce the necessary notation for any random variable x we denote its distribution as p r x the mutual information between two random variables x and y is defined as i x y h y h y where h y is the entropy of y and h y is the conditional entropy of y given x the conditional mutual information i x y is defined as i x y h h z ex y z log p r z ey z dkl p r z k p r p r p p x where dkl p k q x p x log q x is the kl divergence between the distributions p and q note that we are slightly abusing notation here as dkl p r z k p r should technically be dkl p r y z z k p r z but we will ignore the assignment in the conditioning when it is clear from the context mutual information obeys the following chain rule i y i y i y given a distribution over infinite sequences xt generated by some model m where xt is random variable denoting the output at time t we will use the shorthand xji to denote the collection of random variables for the subsequence of outputs xi xj the distribution of xt is stationary if the joint distribution of any subset of the sequence of random variables xt is invariant with respect to shifts in the time index hence p r xin p r xin for any l if the process is stationary we are interested in studying how well the output xt can be predicted by an algorithm which only looks at the past outputs the predictor a maps a sequence of observations to a predicted distribution of the next observation we denote the predictive distribution of a at time t as qa xt we refer to the bayes optimal predictor using only windows of length as p hence the prediction of p at time t is p r xt p is just the naive order markov predictor provided with the true distribution of the data let the bayes optimal predictor looking at the entire history of the model be the prediction of at time t is p r xt we will evaluate the predictions of a and p with respect to over a long time window t the crucial property of the distribution that is relevant to our results is the mutual information between past and future observations for a stochastic process xt generated by some model m we define the mutual information i m of the model m as the mutual information between the past and 
future averaged over the window t t x i m i xt t if the process xt is stationary then i xt is the same for all time steps hence i m i we compare the prediction of the predictor p and a with respect to let f p q be some measure of distance between two predictive distributions in this work we consider the distance and the relative loss between the two distributions the kldivergence and distance between two distributions are defined in the standard way we define the relative loss as the difference between the loss of the optimal predictor and the algorithm a we define the expected loss of any predictor a with respect to the optimal predictor and a loss function f as follows t a i h qa xt f p r xt t x t a a t t a we also define and a for the algorithm a in the same fashion as the error in estimating p xt the true conditional distribution of the model i h t a f p r xt qa xt a t x t a t predicting well with short windows to establish our general proposition which applies beyond the hmm setting we provide an elementary and purely information theoretic proof proposition for any distribution m with mutual information i m between past and future observations the best order markov model p obtains average p i m with respect to the optimal predictor with access to the infinite history also any predictor a with a average in estimating the joint probabilities over windows of length gets average error a i m a proof we bound the expected error by splitting the time interval to t into blocks of length consider any block starting at time we find the average error of the predictor from time to and then average across all blocks to begin note that we can decompose the error as the sum of the error due to not knowing the past history beyond the most recent observations and the error in estimating the true joint t distribution of the data over a length block consider any time recall the definition of a i h t a dkl p r xt k q x t a h i h i dkl p r xt k p xt dkl p r xt k qa xt t p t a t therefore a p a it s easy to verify that p i xt this relation expresses the intuition that the current output xt has a lot of extra information about the past if we can not predict it as well using the most recent observations as can be done by using the entire past we will now upper bound the total error for the window we expand i using the chain rule x x i i xt i xt t note that i xt i xt p as and i x y z i x the proposition now follows from averaging the error across the time steps and using eq to average over all blocks of length in the window t x t i m p i p kl proposition also directly gives guarantees for the scenario where the task is to predict the distribution of the next block of outputs instead of just the next immediate output because kldivergence obeys the chain rule the following easy corollary relating kl error to error yields the following statement which also trivially applies to loss with respect to that of the optimal predictor as the expected relative loss at any time step is at most the loss at that time step corollary for any distribution m with mutual information i m between past and p future observations the best order markov model p obtains average p i m with respect to the optimal predictor with access to the infinite history also any predictor a p with a average in estimating the joint probabilities gets average error a i m a proof we again decompose the error as the sum of the error in estimating and the error due to not knowing the past history using the triangle inequality h i t a kp r xt qa xt i i h h 
kp r xt p r xt kp r xt qa xt t p t a t therefore a p a by pinsker s inequality and jensen s inequality a t a using proposition t x t i m a a t therefore using jensen s inequality again a p i m lower bound for large alphabets our lower bounds for the sample complexity in the large alphabet case leverage a class of constraint satisfaction problems csps with high complexity a class of boolean is defined via a function p k an instance of such a on n variables xn is a collection of sets clauses of size k whose k elements consist of k variables or their negations such an instance is satisfiable if there exists an assignment to the variables xn such that the predicate p evaluates to for every clause more generally the value of an instance is the maximum over all assignments of the ratio of number of satisfied clauses to the total number of clauses our lower bounds are based on the presumed hardness of distinguishing random instances of or a certain class of csp versus instances of the csp with high value there has been much work attempting to characterize the difficulty of notion which we will leverage is the complexity of a class of csps first defined in and studied in definition the complexity of a class of defined by predicate p k is the largest r such that there exists a distribution supported on the support of p that is r independent uniform and no such independent distribution exists example both and are classes of corresponding respectively to the predicates pxor that is the xor of the k boolean inputs and psat that is the or of the inputs these predicates both support k uniform distributions but not uniform distributions hence their complexity is in the case of the uniform distribution over k restricted to the support of pxor is k uniform the same distribution is also supported by a random instance of a csp with predicate p is an instance such that all the clauses are chosen uniformly at random by selecting the k variables uniformly and independently negating each variable with probability a random instance will have value close to e p where e p is the expectation of p under the uniform distribution in contrast a planted instance is generated by first fixing a satisfying assignment and then sampling clauses that are satisfied by uniformly choosing k variables and picking their negations according to a independent distribution associated to the predicate hence a planted instance always has value a noisy planted instance with planted assignment and noise level is generated by sampling consistent clauses as above with probability and random clauses with probability hence with high probability it has value p our hardness results are based on distinguishing whether a csp instance is random or has a high value as one would expect the difficulty of distinguishing random instances from noisy planted instances decreases as the number of sampled clauses grows the following conjecture of feldman et al asserts a sharp boundary on the number of clauses below which this problem becomes computationally intractable while remaining information theoretically easy the notation is made more explicit in appendix conjectured csp hardness conjecture let q be any distribution over and n variables of complexity r and any randomized algorithm that given access to a distribution d that equals either the uniform distribution over uk or a noisy planted distribution for some n and planted distribution decides correctly whether d or d uk with probability at least needs clauses feldman et al proved the conjecture for 
the class of statistical algorithms recently kothari et al showed that the polynomial time sos algorithm requires clauses to refute random instances of a csp with complexity r hence proving conjecture for any semidefinite programming relaxation for refutation note that is tight as allen et al give a sos algorithm for refuting random csps beyond this regime other recent papers such as daniely and and daniely have also used presumed hardness of strongly refuting random and random instances with a small number of clauses to derive conditional hardness of learning results a first attempt to encode a as a sequential model is to construct a model which outputs k randomly chosen literals for the first k time steps to k and then their noisy predicate value for the final time step clauses from the csp correspond to samples from the model and the algorithm would need to solve the csp to predict the final time step however as all the outputs up to the final time step are random the trivial prediction algorithm that guesses randomly and does not try to predict the output at time k would be near optimal to get strong lower bounds statistical algorithms are an extension of the statistical query model these are algorithms that do not access samples from the distribution but instead have access to estimates of the expectation of any bounded function of a sample through an oracle feldman et al point out that almost all algorithms that work on random data also work with this limited access to samples refer to feldman et al for more details and examples we will output m functions of the k literals after k time steps while still ensuring that all the functions remain collectively hard to invert without a large number of samples we use elementary results from the theory of error correcting codes to achieve this and prove hardness due to a reduction from a specific family of csps to which conjecture applies by choosing k and m carefully we obtain the dependence on the mutual information and error the upper bounds implied by proposition we provide a short outline of the argument followed by the detailed proof in the appendix sketch of construction and proof we construct a sequential model m such that making good predictions on the model requires distinguishing random instances of a c on n variables from instances of c with a high value the output alphabet of m is ai of size we choose a mapping from the characters ai to the n variables xi and their n negations for any clause c and planted assignment to the csp c let c be the string of values assigned by to literals in the model m randomly uniformly outputs k characters from time to k which correspond to literals in the csp c hence the k outputs correspond to a clause c of the csp for some m to be specified later we will construct a binary matrix a which will correspond to a good code for the time steps k to k m with probability the model outputs y m where y av mod and v c with c being the clause associated with the outputs of the first k time steps with the remaining probability the model outputs m uniformly random bits note that the mutual information i m is at most m as only the outputs from time k to k m can be predicted we claim that m can be simulated by a hmm with m m hidden states this can be done as follows for every time step i from to k we maintain hidden states corresponding to vi and hidden states corresponding to vi each of these states stores the current value of the m bits of y this takes a total of hidden states we use hidden states for each time 
step k through k m for the k output bits finally we need an additional m hidden states to output m uniform random bits from time k to k m with probability this accounts for a total of m hidden states note that the larger m is with respect to k the higher the cost in terms of average prediction error of failing to correctly predict the outputs from time k to k m tuning k and m allows us to control the number of hidden states or the mutual information and average error incurred by a computationally constrained predictor we define the csp c in terms of a collection of predicates p y for each y m while conjecture does not directly apply to c as it is defined by a collection of predicates instead of a single one we will later show a reduction from a related csp defined by a single predicate for which conjecture holds for each y the predicate p y of c is the set of v k which satisfy y av mod hence each clause has an additional label y which determines the satisfying assignments this label is just the output of our sequential model m from time k to k m hence for any planted assignment the set of satisfying clauses c of the csp c are all clauses such that av y mod where y is the label of the clause and v c we define a noisy planted distribution over clauses by first uniformly randomly sampling a label y and then sampling a consistent clause with probability otherwise with probability we sample a uniformly random clause let uk be the uniform distribution over all with uniformly chosen labels y we will show that conjecture implies that distinguishing between the distributions and uk is hard without sufficiently many clauses this gives us the hardness results we desire for our sequential model m if an algorithm obtains low prediction error on the outputs from time k through k m then it can be used to distinguish between instances of the csp c with a high value and random instances as no algorithm obtains low prediction error on random instances hence hardness of strongly refuting the csp c implies hardness of making good predictions on we now sketch the argument for why conjecture implies the hardness of strongly refuting the csp we define another csp which we show reduces to the predicate p of the csp is the set of all v k such that av mod hence for any planted assignment the set of satisfying clauses of the csp are all clauses such that v c is in the nullspace of a as before the planted distribution over clauses is uniform on all satisfying clauses with probability with probability we add a uniformly random for some if we can construct a such that the set of satisfying assignments v which are the vectors in the nullspace of a supports a uniform distribution then by conjecture any polynomial time algorithm can not distinguish between the planted distribution and uniformly randomly chosen clauses with less than clauses we show that choosing a matrix a whose null space is uniform corresponds to finding a binary linear code with rate at least and relative distance the existence of which is guaranteed by the bound we next sketch the reduction from to the key idea is that the csps and c are defined by linear equations if a clause c xk in is satisfied with some assignment t k to the variables in the clause then at mod therefore for some w k such that aw y mod t w mod satisfies a t w y mod a clause c with assignment t w mod to the variables can be obtained from the clause c by switching the literal if wi and retaining xi if wi hence for any label y we can efficiently convert a clause c in to a clause c in 
c which has the desired label y and is only satisfied with a particular assignment to the variables if c in is satisfied with the same assignment to the variables it is also not hard to ensure that we uniformly sample the consistent clause c in c if the original clause c was a uniformly sampled consistent clause in we provide a small example to illustrate the sequential model constructed above let k m and n let a the output alphabet of the model m is ai i the letter maps to the variable maps to similarly let be some planted assignment to which defines a particular model if the output of the model m is for the first three time steps then this corresponds to the clause with literals for the final time step with probability the model outputs y av mod with v c for the clause c and planted assignment and with probability it outputs a uniform random bit for an algorithm to make a good prediction at the final time step it needs to be able to distinguish if the output at the final time step is always a random bit or if it is dependent on the clause hence it needs to distinguish random instances of the csp from planted instances we theorem below deferring its proof to appendix theorem assuming conjecture for all sufficiently large t and c for some fixed constant c there exists a family of hmms with t hidden states and an output alphabet of size n such that any polynomial time prediction algorithm that achieves average error or relative error less than with probability greater than for a randomly chosen hmm in the family needs requires log t samples from the hmm over any window length which the algorithm uses for prediction lower bound for small alphabets our lower bounds for the sample complexity in the binary alphabet case are based on the average case hardness of the decision version of the parity with noise problem and the reduction is straightforward in the parity with noise problem on n bit inputs we are given examples v n drawn uniformly from n along with their noisy labels hs mod where s n is the unknown support of the parity function and is the classification noise such that p r where is the noise level let be the distribution over examples of the parity with noise instance with s as the support of the parity function and as the noise level let un be the distribution over examples and labels where each label is chosen uniformly from independent of the example the strength of of our lower bounds depends on the level of hardness of parity with noise currently the fastest algorithm for the problem due to blum et al runs in time and samples log n we define the function f n as definition define f n to be the function such that for a uniformly random support s n with probability at least over the choice of s any randomized algorithm that can distinguish between and un with success probability greater than over the randomness of the examples and the algorithm requires f n time or samples our model will be the natural sequential version of the parity with noise problem where each example is coupled with several parity bits we denote the model as m for some a m from time through n the outputs of the model are and uniform on let v n be the vector of outputs from time to n the outputs for the next m time steps are given by y av mod where m is the random noise and each entry of is an random variable such that p r where is the noise level note that if a is full and v is chosen uniformly at random from n the distribution of y is uniform on m also i m a m as at most the binary bits from time n to n m can 
be predicted using the past inputs as for the higher alphabet case m can be simulated by an hmm with m m hidden states see section we define a set of a matrices which specifies a family of sequential models let s be the set of all m n matrices a such that the of a corresponding to all rows but only the first columns is full row rank we need this restriction to lower bound i m a as otherwise there could be small or no dependence of the parity bits on the inputs from time to we denote r as the family of models m a for a lemma shows that with high probability over the choice of a distinguishing outputs from the model m a from random examples un requires f n time or examples lemma let a be chosen uniformly at random from the set then with probability at least over the choice a s any randomized algorithm that can distinguish the outputs from the model m a from the distribution over random examples un with success probability greater than over the randomness of the examples and the algorithm needs f n time or examples the proof of proposition follows from lemma and is similar to the proof of theorem proposition with f t as defined in definition for all sufficiently large t and c for some fixed constant c there exists a family of hmms with t hidden states such that any algorithm that achieves average relative loss average loss or average kl loss less than with probability greater than for a randomly chosen hmm in the family needs requires f log t time or samples samples from the hmm over any window length which the algorithm uses for prediction information theoretic lower bounds we show that information theoretically windows of length ci m are necessary to get expected relative loss less than as the expected relative loss is at most the loss which can be bounded by the square of the this automatically implies that our window length requirement is also tight for loss and kl loss in fact it s very easy to show the tightness for the kl loss choose the simple model which emits uniform random bits from time to n and repeats the bits from time to m for time n through n m one can then choose n m to get the desired error and mutual information i m to get a lower bound for the loss we use the probabilistic method to argue that there exists an hmm such that long windows are required to perform optimally with respect to the loss for that hmm we state the lower bound and a rough proof idea deferring the details to appendix proposition there is an absolute constant c such that for all and sufficiently large n there exits an hmm with n states such that it is not information theoretically possible to get average relative loss or loss less than using windows of length smaller than c log and kl loss less than using windows of length smaller than c log we illustrate the construction in fig and provide the proof idea with respect to fig below figure lower bound construction n we want show that any predictor p using windows of length can not make a good prediction the transition matrix of the hmm is a permutation and the output alphabet is binary each state is assigned a label which determines its output distribution the states labeled emit with probability and the states labeled emit with probability we will randomly and uniformly choose the labels for the hidden states over the randomness in choosing the labels for the permutation we will show that the expected error of the predictor p is large which means that there must exist some permutation such that the predictor p incurs a high error the rough proof idea is 
as follows say the markov model is at hidden state at time this is unknown to the predictor the outputs for the first three time steps are the predictor p only looks at the outputs from time to for making the prediction for time we show that with high probability over the choice of labels to the hidden states and the outputs the output from the hidden states is close in hamming distance to the label of some other segment of hidden states say hence any predictor using only the past outputs can not distinguish whether the string was emitted by or and hence can not make a good prediction for time we actually need to show that there are many segments like whose label is close to the proof proceeds via simple concentration bounds a proof of theorem theorem suppose observations are generated by a hidden markov model with at most n hidden states and output alphabet of size for there exists a window length o n and absolute constant c such that for any t dc if t t is chosen uniformly at random then the expected distance between the true distribution of xt given the entire history and knowledge of the hmm and the distribution predicted by the naive empirical order markov model based on xt is bounded by proof let be a distribution over hidden states such that the probability of the ith hidden state under is the empirical frequency of the ith hidden state from time to t normalized by t for s consider the predictor pt which makes a prediction for the distribution of observation given observations xt based on the true distribution of xt under the hmm conditioned on the observations xt and the distribution of the hidden state at time t being we will show that in expectation over t pt gets small error averaged across the time steps s with respect to the optimal prediction of the distribution of which knows the hidden state ht at time in order to show this we need to first establish that the true hidden state ht at time t does not have very small probability under with high probability over the choice of lemma with probability over the choice of t t the hidden state ht at time t has probability at least under proof consider the ordered set si of time indices t where the hidden state ht i for the sets corresponding to hidden states j which have probability less than under the cardinality t the sum of the cardinality of all such small sets is at most t and hence the probability that a uniformly random t t lies in one of these sets is at most now consider the set of time indices si corresponding to some hidden state i which has probability at least under for all t which are not among the first t time indices in this set the hidden state i has probability at least under as the fraction of the bad time steps t corresponding to any hidden state which has probability at least under is at most the total fraction of these bad time steps t is at most therefore using a union bound with failure probability the hidden state ht at time t has probability at least under consider any time index t for simplicity assume t and let op ts denote the conditional distribution of xs given observations and knowledge of the hidden state at time s let ms denote the conditional distribution of xs given only given that the hidden state at time has the distribution lemma for if the true hidden state at time has probability at least under then for c log x i e kop ts ms where the expectation is with respect to the randomness in the outputs from time to by lemma for a randomly chosen t t the probability that the hidden state i at time has 
probability less than in the prior distribution is at most hence using lemma the expected average error of the predictor pt across all t is at most for log now consider the predictor which for s predicts given xt according to the empirical distribution of given xt based on the observations up to time we will now argue that the predictions of are close in expectation to the predictions of pt recall that prediction of pt at time t s is the true distribution of xt under the hmm conditioned on the observations xt and the distribution of the hidden state at time t being drawn from for any s let refer to the prediction of at time t s and refer to the prediction of pt at time t we will show that is small in expectation we do this using a martingale concentration argument consider any string r of length let r be the empirical probability of the string r up to time t and r be the true probability of the string r given that the hidden state at time t is distributed as our aim is to show that r r is small define the random variable p r i r p where i denotes the indicator function and is defined to be we claim that yi is a martingale with respect to the filtration xt to verify note that e p r e i r p r e i r therefore e and hence is a martingale also note that as p r and i r hence using azuma s inequality lemma p r k note that t s r r by azuma s inequality and doing a union bound over all ds d strings r of length s for c and t t dc dc with failure probability at most similarly for all strings of length s the estimated probability of the string has error at most with failure probability as the conditional distribution of given observations xt is the ratio of the joint distributions of xt and xt therefore as long as the empirical distributions of the length s and length s strings are estimated with error at most and the string xt has probability at least the conditional distributions and satisfy by a union bound over all ds d strings and for c the total probability mass on strings which occur with probability less than is at most for c therefore with overall failure probability hence the expected distance between and is at most by using the triangle inequality and the fact that the expected average error of pt is at most for log it follows that the expected average error of is at most note that the expected average error of is the average of the expected errors of the empirical markov models for s hence for log there must exist at least some s such that the markov model gets expected error at most proof of lemma let the prior for the distribution of the hidden states at time be let the true hidden state at time be without loss of generality we refer to the output at time t by xs let i p r be the posterior probability of the ith hidden state at time after seeing the observations up to time t and having the prior on the distribution of the hidden states at time for convenience denote us and vs us define pis j p r xs i as the distribution of the output at time t conditioned on the hidden state at time being i and s observations note that op ts as before define rs as the conditional distribution of xs given observations xp and initial distribution but not being at hidden state at time rs i pis note that ms is a convex combination of op ts and rs ms us op ts vs rs hence kop ts ms kop ts rs define kop ts ms our proof relies on a martingale concentration argument and in order to ensure that our martingale has bounded differences we will ignore outputs which cause a significant drop in the posterior of the true 
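Before the proof of the lemma, here is a small numerical sketch of the two predictors it compares: OPT_s, which runs the HMM forward recursion starting from a point mass on the true initial hidden state, and M_s, which runs the same recursion starting from the prior π over the initial state; the lemma controls the expected gap between these two predictions over a short window. The function name forward_predict, the randomly drawn transition and emission matrices, and the toy window are illustrative assumptions, not part of the formal argument.

```python
import numpy as np

def forward_predict(T, O, belief0, obs):
    """HMM forward recursion: starting from an initial belief over hidden
    states, condition on a sequence of observations and return the
    predictive distribution of the next observation.

    T[i, j] = P(h_{s+1} = j | h_s = i) and O[i, x] = P(x_s = x | h_s = i).
    """
    belief = belief0.astype(float).copy()
    for x in obs:
        belief = belief * O[:, x]     # condition on the observed symbol
        belief = belief / belief.sum()
        belief = belief @ T           # advance the hidden state one step
    return belief @ O                 # predictive distribution of the next symbol

# OPT_s knows the HMM and the true initial hidden state (a point mass),
# while the order-s Markov model M_s knows only the prior pi over the
# initial state; both condition on the same short window of observations.
rng = np.random.default_rng(0)
n, d = 4, 3
T = rng.dirichlet(np.ones(n), size=n)   # rows are transition distributions
O = rng.dirichlet(np.ones(d), size=n)   # rows are emission distributions
pi = np.ones(n) / n
window = [0, 2, 1]
opt = forward_predict(T, O, np.eye(n)[1], window)   # knows h_0 = 1
markov = forward_predict(T, O, pi, window)          # knows only the prior
print(np.abs(opt - markov).sum())   # the l1 gap that the lemma controls
```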
hidden state at time let b be thep set of all outputs j at some time t p rs j ts j such that op clog op ts j clog n n hence by a union rs j clog n note that ts j bound with failure probability at most any output j such that op rs j clog n is not emitted in a window of length clog hence we will only concern ourselves with sequences of outputs such ts j that the output j emitted at each step satisfies op rs j clog n let the set of all such outputs be note that p r let x be the expectation of any random variable x conditioned on the output sequence being in the set consider the sequence of random variables xs log us log vs for t defining log log let xs be the change in xs on seeing the output at time s let the output at time s be j we will first find an expression for the posterior probabilities after seeing the s th output get updated according to bayes rule p r p r x s p r x s us op j p r x s p r x s j let p r x s dj note that i i j if the output at time s is j we can write n i n x i n i j vs j therefore we can write and its expectation e as log e x op j j op j log j op j d op k j as min log log n to keep martingale differences bounded e we define then equals a truncated version of the which we define as follows definition hfor two n distributions x and x define the truncated as k e log min x x c for some fixed we are now ready to define our martingale consider the sequence of random variables pn for t with define note that s xs is with respect lemma where the expectation to the output at time ps hence the sequence of random variables is a submartingale with respect to the outputs s and e s op ts k rs c log by taking an proof by definition expectation with respect to only sequences instead of all possible sequences we are removing s hence events which have a negative contribution to e s e s op ts k rs we can now apply lemma lemma modified pinsker s inequality for any two distributions h x nand x on x x x define the kl divergence as k log min x c for some fixed c such that log c then k s kop ts rs hence es hence s we now claim that our submartingale has bounded differences lemma log log n can be at most z z proof note that s by definition log log n s log clog as we restrict ourselves to sequences in the set hence also log clog log clog we now apply lemma let zi be a submartingale with then p r zs exp applying lemma we can show p r n exp n clog we now bound the average error in the window to with failure probability at most over the randomness in the outputs n by eq let be the set of all sequences in which satisfy note that log consider the last point after which vs decreases below and remains below that for every subsequent step in the window let this point be if there is no such point define to be the total contribution of the error at every step after the th step to the average error is a term as the error after this step is note that log log as xs hence for all sequences in log log log x log log log n p c log by eq as log c log n by jensen s inequality p c log p as the total probability of sequences outside is at most e whenever the hidden state i at time has probability at least in the prior distribution proof of modified pinsker s inequality lemma lemma modified pinsker s inequality for any two distributions h x nand x on x x x define the kl divergence as k log min x c for some fixed c such that log c then k proof we rely on the following lemma which bounds the for binary distributionslemma for every q p we have p log pq p log p q p log p q proof for the second result first observe that 
log q and p q p as q both the results then follow from standard calculus let a x x x x and b x x x x let a p b a q and b note that a a by the k x x log x x x x x x log x log x x x log c p log case p p log p log c p log p q k case p p p log p log p p log c p log p p log q p p log c p log p log q k log c p log a log pq p p log q p log p q k p log b log pq p p k log c log p log p log q q p q b proof of lower bound for large alphabets csp formulation we first go over some notation that we ll use for csp problems we follow the same notation and setup as in feldman et al consider the following model for generating a random csp instance on n variables with a satisfying assignment the is defined by the predicate p k we represent a by an ordered of literals from xn with no repetition of variables and let xk be the set of all such for a c lk let c k be the string of values assigned by to literals in c that is lk where li is the value of the literal li in assignment in the planted model we draw clauses with probabilities that depend on the value of c let q k p k q t be some distribution over satisfying assignments to p the distribution is then defined as followsq c c p c q c recall that for any distribution q over satisfying assignments we define its complexity r as the largest r such that the distribution q is r uniform also referred to as r independent in the literature but not uniform consider the csp c defined by a collection of predicates p y for each y m for some m let a be a matrix with full row rank over the binary field we will later choose a to ensure the csp has high complexity for each y the predicate p y is the set of solutions to the system y av mod where v c for all y we define qy to be the uniform distribution over all consistent assignments all v k satisfying y av mod the planted distribution y is defined based on qy according to eq each clause in c is chosen by first picking a y uniformly at random and then a clause from the distribution y for any planted we define to be the distribution over all consistent clauses along with their labels y let uk be the uniform distribution over with each clause assigned a uniformly chosen label y define for some fixed noise level we consider to be a small constant less than this corresponds to adding noise to the problem by mixing the planted and the uniform clauses the problem gets harder as becomes larger for it can be efficiently solved using gaussian elimination we will define another csp which we show reduces to c and for which we can obtain hardness using conjecture the label y is fixed to be the all zero vector in hence the distribution over satisfying assignments for is the uniform distribution over all vectors in the null space of a over the binary field we refer to the planted distribution in this case as let be the uniform distribution over with each clause now having the label for any planted assignment we denote the distribution of consistent clauses of by as before define for the same let l be the problem of distinguishing between uk and for some randomly and uniformly chosen n with success probability at least similarly let be the problem of distinguishing between and for some randomly and uniformly chosen n with success probability at least l and can be thought of as the problem of distinguishing random instances of the csps from instances with a high value note that l and are at least as hard as the problem of refuting the random csp instances uk and as this corresponds to the case where we claim that an algorithm for l implies an 
algorithm for lemma if l can be solved in time t n with s n clauses then can be solved in time o t n s n and s n clauses let the complexity of be with we demonstrate how to achieve this next by conjecture distinguishing between and requires at least clauses we now discuss how a can be chosen to ensure that the complexity of is ensuring high complexity of the csp let n be the null space of a note that the rank of n is k m for any subspace d let w d wk be a randomly chosen vector from to ensure that has complexity it suffices to show that the random variables w n wk are uniform we use the theory of error correcting codes to find such a matrix a a binary linear code b of length k and rank m is a linear subspace of our notation is different from the standard notation in the coding theory literature to suit our setting the rate of the code is defined to be the generator matrix of the code is the matrix g such that b gv v m the parity check matrix of the code is the matrix h such that b c k hc the distance d of a code is the weight of the minimum weight codeword and the relative distance is defined to be for any codeword b we define its dual codeword b t as the codeword with generator matrix ht and parity check matrix gt note that the rank of the dual codeword of a code with rank m is k m we use the following standard result about linear fact if b t has distance l then w b is l uniform hence our job of finding a reduces to finding a dual code with distance and rank m where and m we use the bound to argue for the existence of such a code let h p be the binary entropy of lemma bound for every and h there exists a code with rank m and relative distance if h taking h hence there exists a code b whenever which is the setting we re interested in we choose a gt where g is the generator matrix of b hence the null space of a is uniform hence the complexity of is with hence for all k and m we can find a a to ensure that the complexity of is sequential model of csp and sample complexity lower bound we now construct a sequential model which derives hardness from the hardness of here we slightly differ from the outline presented in the beginning of section as we can not base our sequential model directly on l as generating random without repetition increases the mutual information so we formulate a slight variation of l which we show is at least as hard as we did not define our csp instance allowing repetition as that is different from the setting examined in feldman et al and hardness of the setting with repetition does not follow from hardness of the setting allowing repetition though the converse is true constructing sequential model consider the following family of sequential models r n where a is chosen as defined previously the output alphabet of all models in the family is x ai i of size with even we choose a subset s of x of size n each choice of s corresponds to a model m in the family each letter in the output alphabet is encoded as a or which represents whether or not the letter is included in the set s let u be the vector which stores this encoding so ui whenever the letter ai is in let n determine the subset s such that entry is and is when i is and is and is when i is for all i we choose uniformly at random from n and each choice of represents some subset s and hence some model we partition the output alphabet x into k subsets of size each so the first letters go to the first subset the next go to the next subset and so on let the ith subset be xi let si be the set of elements in xi which 
belong to the set at time m chooses v k uniformly at random from k at time i i if vi then the model chooses a letter uniformly at random from the set si otherwise if vi it chooses a letter uniformly at random from xi si with probability the outputs for the next m time steps from k to k m are y av mod with probability they are m uniform random bits the model resets at time k m and repeats the process recall that i m is at most m and m can be simulated by an hmm with m m hidden states see section reducing sequential model to csp instance we reveal the matrix a to the algorithm this corresponds to revealing the transition matrix of the underlying hmm but the encoding is kept secret the task of finding the encoding given samples from m can be naturally seen as a csp each sample is a clause with the literal corresponding to the output letter ai being x whenever i is odd and when i is even we refer the reader to the outline at the beginning of the section for an example we denote c as the csp c with the modification that the ith literal of each clause is the literal corresponding to a letter in xi for all i define as the distribution of consistent clauses for the csp c define as the uniform distribution over with the additional constraint that the ith literal of each clause is the literal corresponding to a letter in xi for all i define note that samples from the model m are equivalent to clauses from we show that hardness of follows from hardness of lemma if can be solved in time t n with s n clauses then l can be solved in time t n with o s n clauses hence if conjecture is true then can not be solved in polynomial time with less than clauses we can now prove the theorem using lemma theorem assuming conjecture for all sufficiently large t and c for some fixed constant c there exists a family of hmms with t hidden states and an output alphabet of size n such that any polynomial time prediction algorithm that achieves average error or relative error less than with probability greater than for a randomly chosen hmm in the family needs requires log t samples from the hmm over any window length which the algorithm uses for prediction proof we describe how to choose the family of sequential models r n for each value of and t recall that the hmm has t m m hidden states let t k m note that t t let t log t we choose m t log log and k to be the solution of m t m log k m hence k m note that for k let we claim to verify note that k m therefore t log log k m for sufficiently large t and for a fixed constant hence proving hardness for obtaining error implies hardness for obtaining error we choose the matrix as outlined earlier for each vector n we define the family of sequential models r n a as earlier let m be a randomly chosen model in the family we first show the result for the relative loss the idea is that any algorithm which does a good job of predicting the outputs from time k through k m can be used to distinguish between instances of the csp with a high value and uniformly random clauses this is because it is not possible to make good predictions on uniformly random clauses we relate the error from time k through k m with the relative error from time k through k m and the average error for all time steps to get the required lower bounds let a be the average loss of some polynomial time algorithm a for the output a be the average relative loss of a for the outtime steps k through k m and put time steps k through k m with respect to the optimal predictions for the distribution it is not possible to 
get a as the clauses and the label y are independent and y is chosen uniformly at random from m for it is information theoretically possible to get a hence any algorithm which gets error a can be used to distinguish tween and therefore by lemma any polynomial time algorithm which gets a with probability greater than over the choice of m needs at least samples note that a a as the optimal predictor p gets therefore a a note that a a m this is because a is the average error for all k time steps and the contribution to the error from time steps to k m a a is also therefore a hence any polynomial time algorithm which gets average relative loss less than with probability greater than needs at least samples the result for loss follows directly from the result for relative loss we next consider the kl loss a be the average kl error of the algorithm a from time steps k through k m let a a by application of jensen s inequality and pinsker s inequality a needs samtherefore by our previous argument any algorithm which gets a hence any polynomial time algorithm which ples but as before a succeeds with probability greater than and gets average kl loss less than needs at least samples we lower bound k by a linear function of log t to express the result directly in terms of log t we claim that log t is at most this follows log t k m hence any polynomial time algorithm needs log t samples to get average relative loss loss or kl loss less than on proof of lemma lemma if l can be solved in time t n with s n clauses then can be solved in time o t n s n and s n clauses proof we show that a random instance of can be transformed to a random instance of c in time s n o k by independently transforming every clause c in to a clause c in c such that c is satisfied in the original csp with some assignment t to x if and only if the corresponding clause c in c is satisfied with the same assignment t to x for every y m we and store a random solution of the system y av mod let the solution be v y given any clause c xk in choose y m uniformly at random we generate a clause c in c from the clause c in by choosing the literal if vi y and xi if vi y by the linearity of the system the clause c is a consistent clause of c with some assignment x t if and only if the clause c was a consistent clause of with the same assignment x we next claim that c is a randomly generated clause from the distribution uk if c was drawn from and is a randomly generated clause from the distribution if c was drawn from by our construction the label of the clause y is chosen uniformly at random note that choosing a clause uniformly at random from is equivalent to first uniformly choosing a of unnegated literals and then choosing a negation pattern for the literals uniformly at random it is clear that a clause is still uniformly random after adding another negation pattern if it was uniformly random before hence if the original clause c was drawn to the uniform distribution then c is distributed according to uk similarly choosing a clause uniformly at random from y for some y is equivalent to first uniformly choosing a of unnegated literals and then choosing a negation pattern uniformly at random which makes the clause consistent as the original negation pattern corresponds to a v randomly chosen from the null space of a the final negation pattern on adding v y corresponds to the negation pattern for a uniformly random chosen solution of y av mod for the chosen y therefore the clause c is a uniformly random chosen clause from y if c is a uniformly 
random chosen clause from hence if it is possible to distinguish uk and for some randomly chosen n with success probability at least in time t n with s n clauses then it is possible to distinguish between and for some randomly chosen n with success probability at least in time t n s n o k with s n clauses proof of lemma lemma if can be solved in time t n with s n clauses then l can be solved in time t n with o s n clauses hence if conjecture is true then can not be solved in polynomial time with less than clauses proof define e to be the event that a clause generated from the distribution of the csp c has the property that for all i the ith literal belongs to the set xi we also refer to this property of the clause as e for notational ease it s easy to verify that the probability of the event e is k we claim that conditioned on the event e the csp c and c are equivalent this is verified as follows note that for all y y and y are uniform on all consistent clauses let u be the set of all clauses with probability under y and u be the set of all clauses with probability under y furthermore for any v which satisfies the constraint that y av mod let u v be the set of clauses c u such that c similarly let u v be the set of clauses c u such that c note that the subset of clauses in u v which satisfy e is the same as the set u v as this holds for every consistent v and the distributions y and y are uniform on all consistent clauses the distribution of clauses from is identical to the distribution of clauses conditioned on the event the equivalence of uk and conditioned on e also follows from the same argument note that as the in c are chosen uniformly at random from satisfying with high probability there are s n tuples having property e if there are o k k s n clauses in as the problems l and are equivalent conditioned on event e if can be solved in time t n with s n clauses then l can be solved in time t n with o k k s n clauses from lemma and conjecture l can not be solved in polynomial time with less than clauses hence can not be solved in polynomial time with less than k clauses as k is a constant with respect to n can not be solved in polynomial time with less than clauses c proof of lower bound for small alphabets proof of lemma lemma let a be chosen uniformly at random from the set then with probability at least over the choice a s any randomized algorithm that can distinguish the outputs from the model m a from the distribution over random examples un with success probability greater than over the randomness of the examples and the algorithm needs f n time or examples proof suppose a is chosen at random with each entry being with its distribution uniform on let be the of a corresponding to the first columns and all the m rows recall that s is the set of all m n matrices a such that the is full we claim that p a s to verify consider the addition of each row one by one to the probability of the ith row being linearly dependent on the previous i rows is hence by a union bound is full with failure probability at most from definition and a union bound over all the m parities any algorithm that can distinguish the outputs from the model m a for uniformly chosen a from the distribution over random examples un with probability at least over the choice of a needs f n time or examples as p a s for a uniformly randomly chosen a with probability at least over the choice a s any algorithm that can distinguish the outputs from the model m a from the distribution over random examples un with success 
probability greater than over the randomness of the examples and the algorithm needs f n time or examples proof of proposition proposition with f t as defined in definition for all sufficiently large t and c for some fixed constant c there exists a family of hmms with t hidden states such that any algorithm that achieves average relative loss average loss or average kl loss less than with probability greater than for a randomly chosen hmm in the family needs requires f log t time or samples samples from the hmm over any window length which the algorithm uses for prediction proof we describe how to choose the family of sequential models for each value of and t recall that the hmm has t m m hidden states let t n m note that t t let t log t we choose m t log log and n to be the solution of m t m log n m hence n m note that for n let we claim to verify note that n m therefore t log log n m for sufficiently large t and for a fixed constant hence proving hardness for obtaining error implies hardness for obtaining error we choose the matrix as outlined earlier the family is defined by the model m defined previously with the matrix chosen uniformly at random from the set let a be the average loss of some algorithm a for the output time steps n through a be the average relative loss of a for the output time steps n through and n m with respect to the optimal predictions for the distribution un it is not possible to get a as the clauses and the label y are independent and y is chosen uniformly at random from m for it is information theoretically possible to get a hence any algorithm which gets error a can be used to distinguish between un and therefore by lemma any algorithm which gets a with probability greater than over the choice a a as the optimal of m a needs at least f n time or samples note that a a note that predictor gets therefore a m this is because a is the average error for all n m time steps and the a m contribution to the error from time steps to n is also therefore a a a hence any algorithm which gets average relative loss less than with probability greater than over the choice of m a needs f n time or samples the result for loss follows directly from the result for relative loss we next consider the kl loss a be the average kl error of the algorithm a from time steps n through n m let a a by application of jensen s inequality and pinsker s inequality a needs f n samples therefore by our previous argument any algorithm which gets a hence any algorithm which gets average kl loss but as before a less than needs f n time or samples we lower bound n by a linear function of log t to express the result directly in terms of log t we claim that log t is at most this follows log t n m hence any algorithm needs f log t samples and time to get average relative loss loss or kl loss less than with probability greater than over the choice of m a d proof of information theoretic lower bound proposition there is an absolute constant c such that for all and sufficiently large n there exits an hmm with n states such that it is not information theoretically possible to get average relative loss or loss less than using windows of length smaller than c log and kl loss less than using windows of length smaller than c log proof consider a hidden markov model with the markov chain being a permutation on n states the output alphabet of each hidden state is binary each state i is marked with a label li which is or let g i be mapping from hidden state hi to its label li all the states labeled emit with 
probability and with probability similarly all the states labeled emit with probability and with probability fig illustrates the construction and provides the proof idea figure lower bound construction n a note on notation used in the rest of the proof with respect to this example r corresponds to the label of and and is in this case similarly r in this case the segments between the shaded nodes comprise the set and are the possible sequences of states from which the last outputs could have come the shaded nodes correspond to the states in and are the possible predictions for the next time step in this example and assume n is a multiple of where c log for a constant c we will regard as a constant with respect to let we refer to the hidden states by hi i n hji refers to the sequence of hidden states i through j we will show that a model looking at only the past outputs can not get average loss less than o as the optimal prediction looking at all past outputs gets average loss o as the hidden state at each time step can be determined to an arbitrarily high probability if we are allowed to look at an arbitrarily long past this proves that windows of length do not suffice to get average error less than o with respect to the optimal predictions note that the bayes optimal prediction at time to minimize the expected loss given outputs from s where s is the sequence of time to is to predict the mode of the distribution p r x outputs from time to also note that p r x s i p r hi s p r x where hi is the hidden state at time hence the predictor is a weighted average of the prediction of each hidden state with the weight being the probability of being at that hidden state we index each state hi of the permutation by a tuple f i g i j k where j i mod i and k b c hence j k t and i k j we help the predictor to make the prediction at time by providing it with the index f i i mod of the true hidden state hi at time hence this narrows down the set of possible hidden states at time in fig the set of possible states given this side information are all the hidden states before the shaded states the bayes optimal prediction at time given outputs s from time to and index f hi j is to predict the mode of p r x s f hi j note that by the definition of bayes optimality the average loss of the prediction using p r x s f hi j can not be worse than the average loss of the prediction using p r x s hence we only need to show that the predictor with access to this side information is poor we refer to this predictor using p r x s f hi j as we will now show that there exists some permutation for which the average loss of the predictor p is o we argue this using the probabilistic method we choose a permutation uniformly at random from the set of all permutations we show that the expected average loss of the predictor p over the randomness in choosing the permutation is o this means that there must exist some permutation such that the average loss of the predictor p on that permutation is to find the expected average loss of the predictor p over the randomness in choosing the permutation we will find the expected average loss of the predictor p given that we are in some state hi at time without loss of generality let f i and g i hence we were at the th hidden state at time fix any sequence of labels for the hidden states h emitted by the hidden states h from time to let e for any string be the expected average error p over the randomness in the rest of the p of the predictor permutation also let e h s e p r s be the expected 
error averaged across all outputs we will argue that e h o the set of hidden states hi with g i k k defines a segment of the permutation let r k be the label g h of the segment k excluding its last bit which corresponds to the predictions let r k k be the set of all the labels excluding the first label r and g hk k be the set of all the predicted bits refer to fig for an example consider any assignment of r to begin we show that with of high probability over the output s the hamming distance d r of the output from r is at least the set of hidden states h this follows directly from hoeffding s inequality as all the outputs are independent conditioned on the hidden p r d s r e we now show that for any k with decent probability the label r k of the segment k is closer than r then we argue that with high probability in hamming distance to the output s in hamming distance than r hence there are many such segments which are closer to s these other segments are assigned as much weight in predicting the next output as r which means that the output can not be predicted with a high accuracy as the output bits corresponding to different segments are independent we first find the probabilityp that the segment corresponding to some k with label r k has a hamming distance less than log from any fixed binary string x of length let f l m p be the probability of getting at least l heads in m trails with each trial having probability p of giving a head f l m p can be bounded below by the following standard l f l m p exp mdkl p m h where dkl q k p q log pq q log we can use this to lower bound p r d r k x i p log h i p p p r d r k x log f log for n independent random variables xi lying in the interval with e in our case t and n n p i xi p r x e t exp dkl r log t note that dkl v k by using the inequality log v we can simplify the using this and h i p p r d r k x log t p let d be the set of all k such that d r k x log for some fixed x we argue that with high probability over the randomness of the permutation is large this follows from eq and the chernoff bound as the labels for all segments r k are chosen h i p p r q p note that n therefore for any fixed x with probability t q t segments in a randomly chosen permutation which have hamming disthere are n p p tance less than log from x note that by our construction log because log log hence the segments in d are closer in hamming distance to the output if d s s r therefore if d s r then with high probability over randomly choosing the segments there is a subset d of segments in with such that all of the such that segments in d have hamming distance less than d s r from pick any d r consider any set of segments which has such a subset d with respect to the string s for all such permutations the predictor p places at least as much weight on the hidden states hi with g i k with k such that r k d as the true hidden state h the prediction for any hidden state hi is the corresponding bit in notice that the bits in are independent and uniform as we ve not used them in any argument so far the average correlation of an equally weighted average of m independent and uniform random bits with any one of the random bits is at most hence over the randomness of the expected loss of the predictor is at least hence we can writep e s p r n by using equation for any assignment r to h h i h i e h p r d s r e d r o as this is true for all assignments r to h and for all choices of hidden states at time using linearity of expectations and averaging over all hidden states the expected 
average loss of for independent random variables xi lying in the interval with x p p r x exp in our case and p i xi and e x the predictor p over the randomness in choosing the permutation is o this means that there must exist some permutation such that the average loss of the predictor p on that permutation is o hence there exists an hmm on n states such that is not information theoretically possible to get average error with respect to the optimal predictions less than o using windows of length smaller than c log for a fixed constant therefore for all and sufficiently large n there exits an hmm with n states such that it is not information theoretically possible to get average relative loss less than o using windows of length smaller than log the result for relative loss follows on replacing by and setting the result follows immediately from this as the expected relative loss is less than the expected loss for we use pinsker s inequality and jensen s inequality references yoshua bengio patrice simard and paolo frasconi learning dependencies with gradient descent is difficult ieee transactions on neural networks hochreiter and schmidhuber long memory neural computation felix a gers schmidhuber and fred cummins learning to forget continual prediction with lstm neural computation alex graves greg wayne and ivo danihelka neural turing machines arxiv preprint weston chopra and bordes memory networks in international conference on learning representations iclr alex graves greg wayne malcolm reynolds tim harley ivo danihelka agnieszka sergio colmenarejo edward grefenstette tiago ramalho john agapiou et al hybrid computing using a neural network with dynamic external memory nature luong pham and manning effective approaches to neural machine translation in empirical methods in natural language processing emnlp pages wu schuster chen le norouzi macherey krikun cao gao macherey et al google s neural machine translation system bridging the gap between human and machine translation arxiv preprint zhe chen and matthew a wilson deciphering neural codes of memory during sleep trends in neurosciences zhe chen andres d grosmark hector penagos and matthew a wilson uncovering representations of hippocampal ensemble spike activity scientific reports matthew a wilson bruce l mcnaughton et al reactivation of hippocampal ensemble memories during sleep science prahladh harsha rahul jain david mcallester and jaikumar radhakrishnan the communication complexity of correlation in annual ieee conference on computational complexity ccc pages ieee kneser and ney improved for language modeling in international conference on acoustics speech and signal processing icassp volume pages chen and goodman an empirical study of smoothing techniques for language modeling in association for computational linguistics acl mossel and roch learning nonsingular phylogenies and hidden markov models in theory of computing pages vitaly feldman will perkins and santosh vempala on the complexity of random satisfiability problems with planted solutions in proceedings of the annual acm on symposium on theory of computing pages acm sarah r allen ryan o donnell and david witmer how to refute a random csp in foundations of computer science focs ieee annual symposium on pages ieee pravesh k kothari ryuhei mori ryan o donnell and david witmer sum of squares lower bounds for refuting any csp arxiv preprint kim jernite sontag and rush neural language models arxiv preprint avrim blum adam kalai and hal wasserman learning the parity problem and the 
statistical query model journal of the acm jacm ryan o donnell analysis of boolean functions cambridge university press eric blais ryan odonnell and karl wimmer polynomial regression under arbitrary product distributions machine learning adam tauman kalai adam r klivans yishay mansour and rocco a servedio agnostically learning halfspaces siam journal on computing hsu kakade and zhang a spectral algorithm for learning hidden markov models in conference on learning theory colt anandkumar hsu and kakade a method of moments for mixture models and hidden markov models in conference on learning theory colt sedghi and anandkumar training recurrent neural networks through spectral methods arxiv preprint janzamin sedghi and anandkumar beating the perils of guaranteed training of neural networks using tensor methods arxiv preprint arora bhaskara ge and ma provable bounds for learning some deep representations in international conference on machine learning icml pages and lugosi prediction learning and games cambridge university press barron rissanen and yu the minimum description length principle in coding and modeling ieee trans information theory grunwald a tutorial introduction to the minimum description length principle advances in mdl theory and applications dawid statistical theory the prequential approach royal statistical society shtarkov universal sequential coding of single messages problems of information transmission azoury and warmuth relative loss bounds for density estimation with the exponential family of distributions machine learning foster prediction in the worst case annals of statistics opper and haussler worst case prediction over sequences under log loss the mathematics of information coding extraction and distribution nicolo and gabor lugosi bounds for the logarithmic loss of predictors machine learning vovk competitive statistics international statistical review kakade and ng online bounds for bayesian algorithms proceedings of neural information processing systems seeger kakade and foster bounds for some bayesian methods clarke and barron asymptotics of bayes methods ieee transactions on information theory david haussler and manfred opper mutual information metric entropy and cumulative relative entropy risk annals of statistics barron characterization of bayes performance and the choice of priors in parametric and nonparametric problems in bernardo berger dawid and smith editors bayesian statistics pages barron schervish and wasserman the consistency of posterior distributions in nonparametric problems annals of statistics diaconis and freedman on the consistency of bayes estimates annals of statistics zhang learning bounds for a generalized family of bayesian posterior distributions proceedings of neural information processing systems ziv and lempel compression of individual sequences via coding ieee transactions on information theory rumelhart hinton and williams learning representations by errors nature bahdanau cho and bengio neural machine translation by jointly learning to align and translate arxiv preprint vitaly feldman elena grigorescu lev reyzin santosh vempala and ying xiao statistical algorithms and a lower bound for detecting planted cliques in proceedings of the annual acm symposium on theory of computing pages acm amit daniely and shai complexity theoretic limitations on learning dnf s in annual conference on learning theory pages amit daniely complexity theoretic limitations on learning halfspaces in proceedings of the annual acm sigact symposium on theory 
of computing pages acm
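The reduction described earlier in this section builds samples from a sequential model in which a hidden vector v in {0,1}^k is drawn uniformly, the first k outputs encode v through the choice of letter sets, and the next m outputs are the parities Av mod 2, replaced by uniform bits with some probability. The Python sketch below generates one k+m block of such samples; the letter sets, the noise probability, and the matrix A are placeholders, since the exact constants and set names are stripped from the extracted text, so this is only a minimal illustration of the sampling pattern, not the paper's exact construction.

import random

def sample_block(A, letter_sets, noise_prob):
    """One k+m output block from the sequential model sketched above.

    A           : m x k list of lists over {0,1}
    letter_sets : for each i, a pair (S_i, X_i minus S_i) of letter lists
    noise_prob  : placeholder for the (stripped) probability with which the
                  m parity outputs are replaced by uniform random bits
    """
    m, k = len(A), len(A[0])
    v = [random.randint(0, 1) for _ in range(k)]               # hidden assignment
    # first k outputs: a letter from S_i if v_i = 1, else from its complement
    letters = [random.choice(letter_sets[i][0] if v[i] == 1 else letter_sets[i][1])
               for i in range(k)]
    if random.random() < noise_prob:
        tail = [random.randint(0, 1) for _ in range(m)]        # uniform random bits
    else:
        tail = [sum(A[j][i] * v[i] for i in range(k)) % 2      # y = Av mod 2
                for j in range(m)]
    return letters + tail

For instance, with a binary alphabet per position one could take letter_sets[i] = (['a'], ['b']), so the first k letters reveal v only through which side of the split each letter falls on, which is what the clause/literal encoding in the reduction exploits.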
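The step used twice above to pass from KL loss to relative (ell_1) loss combines Pinsker's inequality with Jensen's inequality (convexity of the square). A reconstruction of that step, with P_t the true conditional output distribution at step t and \hat P_t the algorithm's prediction; the paper's own normalization constants are stripped from the extracted text, so the constants here are the textbook ones:

\[
\delta_{KL} \;=\; \frac{1}{m}\sum_{t=1}^{m} \mathrm{KL}\!\left(P_t \,\middle\|\, \hat P_t\right)
\;\stackrel{\text{Pinsker}}{\ge}\; \frac{1}{2m}\sum_{t=1}^{m} \bigl\| P_t - \hat P_t \bigr\|_1^{2}
\;\stackrel{\text{Jensen}}{\ge}\; \frac{1}{2}\left(\frac{1}{m}\sum_{t=1}^{m} \bigl\| P_t - \hat P_t \bigr\|_1\right)^{\!2}
\;=\; \frac{1}{2}\,\delta_{\ell_1}^{2}.
\]

Hence any lower bound of the form delta_{ell_1} >= c immediately yields delta_KL >= c^2 / 2, which is how the KL-loss lower bounds above follow from the relative-loss lower bounds.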
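The full-rank claim in the small-alphabet lower bound (that a uniformly random binary matrix has a full-row-rank submatrix with good probability) is a row-by-row union bound over GF(2); a reconstruction, with m' rows and n' columns standing in for the dimensions that were stripped from the extracted text:

\[
\Pr\bigl[\text{row } i \in \operatorname{span}\{\text{rows } 1,\dots,i-1\}\bigr] \;\le\; \frac{2^{\,i-1}}{2^{\,n'}},
\qquad
\Pr\bigl[\operatorname{rank} < m'\bigr] \;\le\; \sum_{i=1}^{m'} 2^{\,i-1-n'} \;<\; 2^{\,m'-n'} ,
\]

since the span of i-1 vectors over GF(2) contains at most 2^{i-1} elements, and the i-th row is uniform over all 2^{n'} possibilities.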
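The information-theoretic lower bound above is built from an HMM whose transition matrix is a permutation of the n hidden states, each state carrying a binary label that it emits with probability 1 - delta and flips with probability delta. A minimal sketch of that construction follows; a single cycle is used for concreteness (the argument works for a uniformly random permutation), and delta is a placeholder for the emission-noise constant stripped from the text.

import random

def permutation_hmm(n, delta, seed=None):
    # Hidden chain: a deterministic cycle over n states (a permutation).
    # Each state i carries a random binary label g(i); it emits g(i) with
    # probability 1 - delta and the flipped bit with probability delta.
    rng = random.Random(seed)
    labels = [rng.randint(0, 1) for _ in range(n)]
    def emit(steps, start=0):
        out, state = [], start
        for _ in range(steps):
            bit = labels[state]
            if rng.random() < delta:
                bit ^= 1                    # noisy emission
            out.append(bit)
            state = (state + 1) % n         # advance along the permutation
        return out
    return labels, emit

A predictor that only sees a short window of past outputs cannot reliably tell which segment of the cycle it is on, since many segments have labels close to the observed window in Hamming distance; that is the effect the counting argument above quantifies.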
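The two tail estimates invoked in that argument appear garbled in the extracted text; up to the polynomial prefactor, the standard forms are the KL-based lower bound on the binomial upper tail and Hoeffding's inequality for bounded independent variables:

\[
F(\ell; m, p) \;=\; \Pr[\mathrm{Bin}(m,p) \ge \ell] \;\ge\; \frac{1}{m+1}\,\exp\!\bigl(-m\, d_{KL}(\ell/m \,\|\, p)\bigr),
\qquad
d_{KL}(q \,\|\, p) = q\log\frac{q}{p} + (1-q)\log\frac{1-q}{1-p},
\]

\[
\Pr\!\left[\frac{1}{m}\sum_{i=1}^{m} X_i - \mathbb{E}\Bigl[\frac{1}{m}\sum_{i=1}^{m} X_i\Bigr] \ge t\right] \;\le\; \exp\!\bigl(-2 m t^{2}\bigr)
\quad\text{for independent } X_i \in [0,1].
\]

The first bound is what shows that a constant fraction of the randomly labeled segments land within the stated Hamming distance of the observed output string, and the second is the concentration step applied to the output bits conditioned on the hidden states.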
2
an optimal algorithm for range search on multidimensional points department of computer science engineering anna university chennai india jul abstract this paper proposes an efficient and novel method to address range search on multidimensional points in t time where t is the number of points reported in k space this is accomplished by introducing a new data structure called bits k this structure also supports fast updation that takes time for insertion and o log n time for deletion the earlier best known algorithm for this problem is o logk n t time in the pointer machine model keywords bits k threaded trie range search introduction k introduced by are multidimensional binary search trees commonly used for storing k dimensional points they are also used to perform search operations such as exact match partial match and range queries range queries are mostly used in gis applications to locate cities within a certain region in a map similarly in the geometrical view of a database one can use orthogonal range search to perform a query generally k with n nodes have a height n and hence the complexity for insertion and search are high although many search structures are found in the literature they differ from the standard k mainly in the methods used recall that a tree stores point data of the form x y a tree splits primarily on the x coordinate of a point at even level and then on the corresponding y coordinate at the odd level and so on hence the trees are unbalanced and are not efficient search operations also the worst case time complexity for range search on a tree is o n t where t is the number of points reported and for k dimensions it is o t in general most of the k variants get unbalanced when the data is clustered thereby affecting query operations p r k tree bucket p r k tree p m r k trees and path level compressed p r k trees are some of the kd trees used to store point data however these trees are not always balanced especially when the data is clustered one of the dynamic versions of k tree is the divided k trees for which the range query time is o n t the best known dynamically balanced tree uses bitwise interlaced data over k mapping k dimensions to one dimension although their search time is o k log n t for reporting t points bitwise interlacing leads to discarded areas during range search in the case of squarish k an x y discriminant is based on the longest side of rectangle enclosing the problem space instead of alternating the keys recently hybrid versions of squarish k relaxed k and median k have overcome the problem of height balancing an amortized worst case efficiency of range search for the email hema easwara corresponding hybrid squarish k relaxed and median trees for k partial match queries are n n and n respectively their experimental results match the aforementioned theoretical results where they show that the hybrid median trees outperform the other variants however as far as query handling is concerned these structures perform only partial match queries for two dimensions efficiently the most recent work in the pointer machine model is an orthogonal range reporting data structure with o n log log logn d space that address range queries in o log n log log n time where d range trees of bentley and maurer are yet another class of balanced binary search trees used for rectangular range search which showed improvement in the query time of o logk n t over o t of where k is the dimension for a set of n points and k is the number of reported points this was later 
improved to o n t using fractional cascading in layered range trees but the space requirements are relatively high of o n n a k performs range search in o logk time was proposed in recently chan have proposed two data structures for orthogonal range search in the word ram model the first structure takes o n lg lg n space and o lg lg n query time they show improved performance over previous results of which o n lg n space and o lg lg n query time or with o n lg lg n space and o lg lg n query time the second data strucure is based on o n space and answers queries in o lg n time that outperforms previous o n space data structure answers queries in o lg lg n time furthermore they also propose an efficient data structure for orthogonal range reporting with o n lg n space and o lg lg n k query time for points in rank space where this improves their previous results with o n lg n space and o lg lg query time or with o n lg n space and o lg lg n k query time where k points are reported finally they have extended range search to higher dimensions also since such range queries are common among queries in database applications we have mainly considered an orthogonal range search on points our contributions in this work we make use of the bit a segment tree variant that performs stabbing and range queries on segments efficiently in logarithmic time most importantly the distribution of the data points uniform or skewed does not affect the height of the bit and in turn facilitates faster search time here we actually use the bit structure to store points related to each dimension and thereby form a tree called bit s k in addition certain nodes of the bit associate to a variant of the trie data structure called threaded trie to facilitate fetching a required node in constant time unlike k trees it does not associate axis level wise for comparison to locate or insert a point instead the tree at the first level has nodes with a key on only distinct values of first of the points therefore this tree corresponds to the one dimensional data this tree is then augmented with another tree at second level and there in key values of the nodes associated with distinct first two of the points in general ith tree corresponds to the distinct first i of the set of points given moreover in each tree the inorder sequence provides the sorted sequence that is bit s k trees is a tree and its construction is illustrated in the subsequent sections bit originally the bit balanced inorder threaded segment tree is a dynamic structure that stores segments and also answers both stabbing and range queries efficiently unlike segment trees it also permits insertion of segment with any interval range figure a set of segments b bit for the given segments definition a bit is a height balanced binary tree t that satisfies the following properties each node v of t is represented as v a b l where a b is the range associated with the node v and l is the list of segments containing the range a b if c d l then a b c d given then if if otherwise ranges can either overlap only at end points or do not overlap at all suppose appears before in the inorder sequence then it has a special node called dummy node denoted by d with range and list as empty suppose and vn an bn ln are the first and last nodes of the inorder sequence respectively then inp red insucc vn d and the range say a b of any node contained in bn a b bn here the functions inpred and insucc respectively returns inorder predecessor and successor a sample bit is shown in figure note that 
the dangling threads actually point to a dummy node which is not shown in the figure the bit is originally developed for storing segments but we use this for a different purpose of storing points thus we modify this structure to suit our requirement as described below each node v a b l is replaced by v p t where p is a point in k k and is a pointer to the list of collinear points in dimension k having p for the first k however this list is maintained in the tree at the next level which is described in section now t is either null or a pointer to a threaded trie which is elaborated in the next section for any two points and stored in a tree suppose appears before in the inorder sequence then as per the following definition definition let and be two points in a kdimensional space then implies for each or implies head j head j and or for some j in the subsequent sections for better clarity we use hyphen for a certain parameter of a node to denote that the particular parameter is irrelevant with respect to the context for instance p t denotes that the list contents are irrelevant for that point p at this time threaded trie figure a sample threaded trie threaded tries are variants of tries that consists of two types of nodes viz trie node and data node for instance in figure a b and c are trie nodes and the rest are data nodes unlike in tries the trie node here does not have a field for blank however each of these trie nodes contain two segments one is the index pointer and the other is a tag value which is either or where denotes the corresponding index point in a thread and otherwise it will be here all null pointers are replaced by threaded pointers which point to the next valid node if one exists for instance the thread pointers of of node a points to the node c as this is the next valid node similarly thread pointers of and in c points to the data node note here that ordering on the nodes provides the sorted sequence also data nodes appear at the same level this is accomplished by having uniform width for all data for instance the data is treated as in figure construction of bit bit are constructed using a collection of bit one at each level and interlinking the trees of two consecutive levels in a specified manner which are due to the following definitions these bit are termed here as bit d trees definition given a point p xk and an integer l k the head of p and tail of p are defined respectively as head p l xl and tail p l xk also having head p l ym xl ym leads to head p l tail p k l definition given s as the set of points in k and the set is defined as sn x p s and head p sn x that is is the set of distinct x values of the points in in general sj xj p s and head p j xj where j definition for a point p xj in sj the term xj is said to be the dimensional value of p as the set of points in sj is used to construct j th level bit figure a bit s a spatial representation of points b bit tree for points shown in a definition a bit s is a tree which is constructed as follows create separate bit s trees tj for each sj j let xj xj now for each node say vj xj l of tj j k the list l points to the node xj of where min head p j xj and head p j we term these links as cross links and the node as a cross link node in in there is only one cross link node which is the first node in the in order sequence and the tree pointer always points to this node for each node v t in tj j k t is pointer to the threaded tree if v is a cross link node and otherwise t is set to be null for every cross link node vj t in tj 
the data node of t for a key say k points to the node k in tj that is t provides links to the nodes in xj xj sj and these links are termed as trie links a bit s for the sample points in figure a is shown in figure b since bit s k are trees with binary inorder threaded search trees at each level the height of the trees at each level is o log n also each node in has a cross link to a node in ti which has the least value for the ith with respect to the head value of the node in note here that at least one such point exists this link is useful to locate a list of collinear points in the ith dimension associated with a point in also the trie links are useful to locate a point in a given range window in constant time the cross link and trie links also make the structure much suitable to address range queries efficiently normally k perform insertion by a simple comparison between the respective coordinates at each level however deletion is tedious due to candidate replacement this is because candidate for replacement can be anywhere in the subtree also it requires a little more work when the right subtree is empty now to find a candidate for replacement it is required to find the smallest element from the left subtree to avoid violation of the basic rules of k and then it is required to perform a swap of left and right subtrees as many possible candidate keys exist in the left subtree to handle such a situation we make use of a collection of bit one for each dimension here deleting a point may or may not require a replacement but if so it is only the inorder successor and that can be located in time as inorder links exist for each node also the cross links that exist between two consecutive levels practically provide a faster search on next level trees another advantage of this structure is that when a node is pruned out at a particular level it need not be considered in the subsequent levels that is nodes that have head values as these will be ignored in the subsequent levels to the best of our knowledge there is no such structure using of balanced binary search trees with threads introduced in this work for storing point data and to perform range search efficiently search for window query given a rectangular range in the form of a window a range query finds all points lying within this window let be a given query range first we use a trie stored in the first node which is the only cross link node in the tree at level to find the smallest point larger than or equal to in the of such a point is determined from the trie itself on the other hand once such a point p is located subsequent points that fall within can be determined using the inorder threads as the inorder sequence is in sorted order let us say that the reported set of points as s however if the dimensional value of the point p is greater than it implies absence of required candidates now using cross links of the node in that corresponds to each point in s further search is performed at in a similar fashion note that each cross link node in has a trie structure that supports quick access to a node in where the dimensional value is in in case the dimensional value of the cross link node is within the respective trie structure need not be looked into instead the inorder threads are used to find the remaining candidates example for instance let us consider figure with search range first we use the only cross link node present in as its dimension value ie lies within the range we do not use the respective trie instead we use the inorder 
threads to identify the candidate points which are and now for each of these candidates further search is continued respectively from and in as these are the corresponding cross link nodes now by looking at the tries of these cross link nodes we find a point whose dimensional value is the smallest one is thus tries of yields yields and yields nothing further by performing inorder traversal from and the final reported points for are e and c also for no points will be reported notice that one can stop the search at without traversing if there is no candidate node in within the given range this is also applicable in trees because if there is no candidate node in the higher tree the lower level trees need not be searched thus this structure prunes the search in some cases and thereby practically reduces the time for reporting a query k range search a range search on points can be performed by extending the search on tk similar to that of as in the case of range search however in and we need to perform the search as described for range search that is when we take the query range as xk the search is performed to find candidates within the range of in in in and so on finally the points reported from tk will be in q it is important to note that the search requires comparison of keys within the given range of the particular dimension in each of tk this simplifies subsequent searches at the next level implementation details two dimensions given a set of two dimensional points in a tree bit is constructed in o n time as a point may require at most two insertions one at and the other at but the position at which insertion is to be made in and could be determined in constant time as described in the proof of lemma thus to insert n nodes requires o n time also it may be required to create a cross link for each node of in the case of bit s since can not have more than n points the number of cross links created can not exceed also the number of trie links created can not exceed the number of nodes in and which is o n moreover construction of a trie requires only constant time as the height of the trie is constant due to fixed size of the key thus all these factors lie within o log n for each insertion regarding space requirements in a bit s it is o n as the second tree is the one that contains all the n points and fewer or equal number of points in the first tree also the number of trie nodes is o n as the height of a trie is constant which is due to the size of number of digits of the key thus we obtain the following lemma lemma construction of bit s for n points requires o n time and o n space now searching a candidate node in is done through the trie in and that requires only constant time as the height of the trie is fixed once such a point is identified subsequent points are identified through inorder threads thus for identifying candidate points it takes only time if there are candidate points in now using cross links of each of these nodes we can locate the required tries in constant time and further search is to be done in a similar fashion as described earlier thus it leads to the following lemma lemma range search for window query using bit s can be addressed in t time where t stands for the number of points reported higher dimensions a straight forward extension of bit s to k dimensions is made easy by connecting cross links to the corresponding nodes in the tree at next level unlike range trees which build another range tree at a given node from the main tree we maintain the trees tk such that 
the inorder traversal provides an ordered sequence of points stored in the tree this definitely reduces the overall time taken for range search across k dimensions as described in the previous section the time required to find a candidate point in any ti i k is only a constant thus it leads to the following lemma lemma let s be a set of points in space k a range search on bit s reports all points that lie within the rectangular query range in t time where t is the number of points reported lemma given a set of n points a bit s can be constructed in o n time and o n space proof since we construct tk such that tk at level k has at most n nodes it follows that n ti n i k and n tk where n ti is the number of nodes in ti note that levels correspond to dimensions and hence may be used interchangeably also the number of trie nodes is o n as its height is constant therefore for k levels a bit s k uses o n storage in the worst case as k is a constant now construction of bit s k is considered as a sequence of insertions each insertion may or may not alter ti i k a bit s tree of a particular level however if a bit tj is altered due to insertion all trees tk will be altered let j be the least index such that table theoretical comparison of k divided k trees range trees k range dsltrees layered range trees and the proposed bit s k description k divided k trees range trees k layered range trees bits k storage o n o n o n n o n n o n n o n construction o n log n o n log n o n n o n logk n o n n o n update o logk n o n o logk n o n n o logk n ins del o log n range search o t o n t o logk n t o logk n t o n t t of points of points reported the tree tj is altered thus for with trie links and cross links one can determine that the required values are already stored in those trees within constant time now from a particular cross link in followed by a trie link in tj one can find a position for the new value in tj this requires only constant time then while inserting the value if the tree is unbalanced atmost one rotation is required to balance the tree so for tj too it requires constant time let nj be the new node inserted in tj now by taking cross link of inorder successor of nj one can determine the position of the new node in and that as inorder predecessor of cross link node of inorder successor of nj this new node in need to have a trie which again be created in constant time then the process is to be continued for tk here updation in each ti i k takes only constant time and hence each insertion takes time so construction of bit s k for n points requires o n time lemma insertion and deletion of a point in a bit s can be respectively done in and o log n time proof as per the description given in the proof of lemma insertion of a point in bits k takes only time but for deletion finding a node to be removed from a bit stree requires only constant time however if that node is not a leaf node a cascading replacement with inorder successor is required until reaching a leaf node to be removed physically certainly the number of such replacements to be done can not exceed o log n after that it may require a sequence of rotations on the path from the physically removed leaf to the root and that too in at most o log n rotations so deletion of a point in bit s k requires o log n time performance table summarizes the performance of k divided k trees range trees k and the bit sk tree proposed in this work furthermore our theoretical comparison of the bit s k is made with k adapted for internal memory pointer machine 
model and not with any of the other bulk loading k ram model the results give an t query time using the bit s k that shows a reduction in time as compared to the existing bounds since we try to capitalize on the efficiency of balanced search trees at all the levels by using cross links and trie links we ensure that the number of nodes visited during a range query is considerably reduced in bit s k observe that the storage is increased from o n in k to o n n in range trees while bit s k still maintains an o n notice that the update time for bitsk has been reduced considerably to summarize although the storage requirements of bit s k dtree are comparable to k trees divided k trees the construction and update time are improved considerably moreover the overall query time is improved to t time where t is the number of points reported as it prunes points falling outside the query region for each dimension conclusion a bit s k for storing points having update and query operations efficiently than is proposed the main advantage of this tree is that it effectively handles the collinear points as a result number of nodes visited during search is much less compared to other k variants that are either not height balanced or update operation is complex in the case of height balanced k having better search efficiency insertion is tedious a k range dsl tree gives a logarithmic amortized worst case search time with efficient updates mainly for partial match queries and not for window queries in bit s k overall insertion time is moreover points can be dynamically updated at each level since dimensions at each level are distributed and using threaded tries we quickly find points falling within the query range also points falling above and below the search range are pruned efficiently using cross links to the next level and inorder threads similar to the bit in addition threaded tries introduced in this work link the node having cross link by means of trie links to find the points within the given range in constant time therefore range search for points in a rectangular region using bit s k tree takes t time where t is the number of points reported and therefore the logarithmic factor in earlier worst case bounds is reduced hence it is definitely a remarkable improvement over o t of k and o logk n t time of k range dsl trees references afshani arge and larsen orthogonal range reporting and rectangle stabbing in the pointer machine model in proceedings of the twentyeighth annual symposium on computational geometry pages acm agarwal range searching in goodman and orourke editors crc handbook of discrete and computational geometry crc press inc alstrup brodal and rauhe new data structures for orthogonal range searching in foundations of computer science proceedings annual symposium on pages ieee bentley multidimensional binary search tress used for associative searching communications of acm bentley decomposable search problems information processing letters june bentley multidimensional binary search trees in database applications ieee transactions on software engineering bentley multidimensional divide and conquer communications of the acm april berg cheong kreveld and overmars computational geometry algorithms and applications new york usa third edition chan persistent predecessor search and orthogonal point location on the word ram in proceedings of the annual symposium on discrete algorithms soda pages siam chan larsen and orthogonal range searching on the ram revisited in proceedings of the annual 
symposium on computational geometry socg pages new york ny usa acm crespo design analysis and implementation of new variants of master thesis universitat politecnica de catalunya departament de llenguatges i sistemes informatics devroye jabbour and squarish siam journal of computing new data structures for orthogonal queries harvard university easwarakumar and hema efficient data structure for segment storage and query processing international journal of computers and technology december lamoureux and nicolson determinisitic skip lists for range search technical report pages novemeber lee and wong analysis for region and partial region searches in multidimensional binary search trees and balanced quad trees acta informatica pages nekrich orthogonal range searching in linear and space computational geometry nilsson and an experimental study of compression methods for dynamic tries algorithmica orienstein multidimensional tries used for associative searching information processing letters june preparata and shamos computational geometry an introduction springerverlag new york and a consistent hierarchical representation for vector data in proceedings of the siggraph conference dallas volume pages august samet fundamentals of and metric data structures academic press new york usa samet the design and analysis of spatial data structures addison wesley tropf and multidimensional range search in dynamically balanced trees applied informatics vieweg verlag germany van kreveld and overmars divided trees algorithmica
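For contrast with the structure proposed above, here is a minimal two-dimensional k-d tree with the classical alternating split (x-coordinate at even depth, y-coordinate at odd depth) and the standard pruning range search; this is the baseline whose worst-case O(sqrt(n) + t) query time the paper discusses. Median splitting is used so the sketch stays balanced; the unbalancedness the paper points out arises under arbitrary insertion order, which this offline build sidesteps.

def build_kdtree(points, depth=0):
    """Classic 2-d tree: split on x at even depth, on y at odd depth."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def range_search(node, lo, hi, out):
    """Append to out every stored point p with lo[d] <= p[d] <= hi[d], d = 0, 1."""
    if node is None:
        return
    p, axis = node["point"], node["axis"]
    if all(lo[d] <= p[d] <= hi[d] for d in range(2)):
        out.append(p)
    if lo[axis] <= p[axis]:                 # left subtree may still intersect the window
        range_search(node["left"], lo, hi, out)
    if p[axis] <= hi[axis]:                 # right subtree may still intersect the window
        range_search(node["right"], lo, hi, out)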
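The levelled organisation described for the BITS structure keeps, at level j, one node per distinct j-coordinate prefix (the set S_j), and cross-links each level-j node to its extension at level j+1 that has the smallest (j+1)-th coordinate. The sketch below computes these prefix sets and cross-link targets; plain sorted lists and dictionaries stand in for the balanced inorder-threaded trees and threaded tries, so only the bookkeeping, not the claimed asymptotics, is illustrated.

def head(p, l):
    # head(p, l): the first l coordinates of point p
    return tuple(p[:l])

def level_sets(points, k):
    # S_j = distinct j-prefixes of the input points, kept in sorted (inorder) order
    return [sorted({head(p, j) for p in points}) for j in range(1, k + 1)]

def cross_links(points, j):
    # For each prefix h in S_j (assumes 1 <= j < k), the level-(j+1) node it
    # cross-links to: h extended by the minimum (j+1)-th coordinate among the
    # points whose head is h.
    smallest = {}
    for p in points:
        h, nxt = head(p, j), p[j]
        if h not in smallest or nxt < smallest[h]:
            smallest[h] = nxt
    return {h: h + (v,) for h, v in smallest.items()}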
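The two-dimensional window query described above (locate the first in-range key at the first level via the trie, walk the inorder threads while the key stays in range, then follow the cross link into the second level and repeat) can be mimicked with sorted lists and binary search; again the real data structures are replaced by simpler stand-ins, so this shows only the traversal pattern.

import bisect
from collections import defaultdict

def window_query_2d(points, x_lo, x_hi, y_lo, y_hi):
    # Level 1: the distinct x values; each "cross-links" to the sorted list of
    # y values of the points sharing that x (the collinear points handled at
    # level 2 in the structure above).
    by_x = defaultdict(list)
    for x, y in points:
        by_x[x].append(y)
    xs = sorted(by_x)
    for ys in by_x.values():
        ys.sort()
    out = []
    i = bisect.bisect_left(xs, x_lo)          # stand-in for the trie lookup
    while i < len(xs) and xs[i] <= x_hi:      # inorder walk at level 1
        ys = by_x[xs[i]]
        j = bisect.bisect_left(ys, y_lo)      # cross link + trie at level 2
        while j < len(ys) and ys[j] <= y_hi:  # inorder walk at level 2
            out.append((xs[i], ys[j]))
            j += 1
        i += 1
    return out

As in the worked example above, once the level-1 key exceeds the upper end of the x-range the walk stops, and a level-2 walk is started only for keys that survived level 1, which is the pruning effect the structure relies on.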
8
slow links fast links and the cost of gossip dec suman sourav national university of singapore sourav peter robinson royal holloway university of london seth gilbert national university of singapore abstract consider the classical problem of information dissemination one or more nodes in a network have some information that they want to distribute to the remainder of the network in this paper we study the cost of information dissemination in networks where edges have latencies sending a message from one node to another takes some amount of time we first generalize the idea of conductance to weighted graphs by defining to be the critical conductance and to be the critical latency one goal of this paper is to argue that characterizes the connectivity of a weighted graph with latencies in much the same way that conductance characterizes the connectivity of unweighted graphs we give near tight lower and upper bounds on the problem of information dissemination up to polylogarithmic factors specifically we show that in a graph with weighted diameter d with latencies as weights and maximum degree any information dissemination algorithm requires at least min time in the worst case we show several variants of the lower bound for graphs with small diameter graphs with small etc by reduction to a simple combinatorial game we then give nearly matching algorithms showing that information dissemination can be solved in o min d n log n time this is achieved by combining two cases we show that the classical algorithm is near optimal when the diameter or the maximum degree is large for the case where the diameter and the maximum degree are small we give an alternative strategy in which we first discover the latencies and then use an algorithm for known latencies based on a weighted spanner construction our algorithms are within polylogarithmic factors of being tight both for known and unknown latencies while it is easiest to express our bounds in terms of and in some cases they do not provide the most convenient definition of conductance in weighted graphs therefore we give a second nearly equivalent characterization namely the average conductance introduction consider the problem of disseminating information in a distributed system nodes in the network have information that they want to with others real world network communication often has a time delay which we model here as edges with latencies the latency of an edge captures how long communication takes how many rounds it takes for two neighbors to exchange information low latency on links imply faster message transmission whereas higher latency implies longer delays in the case of unweighted graphs all edges are considered the same and are said to have unit latencies however this is not true in real life and link latencies can vary greatly in fact even if nodes are connected directly it might not be the fastest route for communication due to large latency of the link which might arise due to poor connection quality hardware or software restrictions etc often choosing a lower latency path leads to faster distribution of information for unweighted graphs without latencies there exists a significant amount of literature characterizing the connectivity of a graph referred as the conductance of a graph which exactly indicates how efficient information dissemination will be we would like to do the same for graphs with latencies however due to the presence of latencies not all edges can be regarded as the same and therefore connectivity alone is no longer 
enough thus we introduce a new notion of the critical conductance that generalizes the notion of classical conductance using we give nearly tight lower and upper bounds for information dissemination for some cases might not be the most convenient definition of conductance in weighted graphs alternatively we give a nearly equivalent characterization namely the average conductance model we model the network as a connected undirected graph g v e with n nodes each node knows the identities of its neighbors and a polynomial upper bound on the size of the network nodes communicate bidirectionally over the graph edges and communication proceeds in synchronous rounds an edge is said to be activated whenever a node sends any message over the edge latencies occur in the communication channel and not on the nodes for simplicity we assume that each edge latency is an integer if not latencies can be scaled and rounded to the nearest integer also the edge latencies here are symmetric problems for arbitrarily large latencies are at least as hard as directed unweighted networks for which many tasks are impossible to achieve efficiently let d be the weighted diameter of the graph with latencies as weights and let max be the maximum edge latency we consider both cases where nodes know the latencies of adjacent edges section and cases where nodes do not know the latencies of adjacent edges the rest of the paper nodes do not know d or max in each round each node can choose one neighbor to exchange information with it sends a message to that neighbor and automatically receives a if the edge has latency then this exchange takes time this model is within constant factors equivalent to a more standard model in which a involves first sending a message with latency receiving it at the other end and then sending a response at a cost of latency notice that each node can initiate a new exchange in every round even if previous messages have not yet been delivered communication is information dissemination we focus in this paper on information dissemination a designated source node begins with a message the rumor and when the protocol completes every node should have received the message classic examples include distributed database replication sensor network data aggregation and systems this fundamental problem has been widely studied under various names information dissemination rumor spreading global broadcast in real world settings nodes are often aware of their neighbors however due to fluctuations in network quality and hence latency a node can not necessarily predict the latency of a connection notice that this model of communication is essentially equivalent to the traditional where each node can either push data to a neighbor or pull data from a neighbor here we assume a node always does both simultaneously without the ability to pull data it is easy to see that information exchange takes nd time in a star simple flooding matches this lower bound multicast and information spreading as a building block we look at local broadcast the problem of every node distributing a message to all of its neighbors conductance in weighted graphs our goal in this paper is to determine how long it takes to disseminate information in a graph with latencies clearly the running time will depend on the weighted diameter d of the graph typically such algorithms also depend on how well connected the graph is and this is normally captured by the conductance unfortunately conductance is no longer a good indicator of connectivity in a 
graph with latencies as slow edges with large weights are much worse than fast we begin by generalizing the idea of conductance to weighted graphs we give two nearly equivalent definitions of conductance in weighted graphs which we refer to as the critical conductance definition and the average conductance definition while they give approximately the same value for every graph there are times when one definition is more convenient than the other in fact we show that the values of and are closely related as in dlog max e theorem we compare these definitions further in section we use in determining the lower and upper bounds for information dissemination as it makes our analysis simpler and then use the above relation to determine the bounds for a core goal of this paper is to argue that the notion of and defined herein well captures the connectivity of weighted graphs and may be useful for understanding the performance of other algorithms lower bounds these constitute some of the key technical contributions of this paper for a graph g with diameter d maximum degree critical conductance and critical latency we show that any information dissemination algorithm requires min d rounds that is in the worst case it may take time d to distribute information however if the graph is well connected then we may do better and the time is characterized by the critical conductance we show that this lower bound holds even in various special cases for graphs with small diameter or with small etc by the relation provided in theorem we determine the lower bound in terms of average conductance as min d the main technique we use for showing our lower bounds is a reduction to a simpler combinatorial guessing game see for a demonstration of how other variants of guessing games can be used to prove lower bounds for radio networks we first show that the guessing game itself takes a large number of rounds thereafter we reduce the problem of solving the game to that of solving information dissemination via a simulation upper bounds we then show nearly matching upper bounds algorithms for solving information dissemination in this regard we differentiate our model into two cases for the case where nodes are not aware of the adjacent edge latencies we show that the classical random phone call algorithm in which each node initiates a connection with a randomly chosen neighbor in each round completes in o log n rounds by using the relationship between and we give a o log max log n upper bound in terms of for the case where nodes do know the latencies of the incident edges we obtain nearly tight bounds that are independent of and we give a o d n algorithm which is within polylogarithmic factors of the trivial d lower bound the key idea of the algorithm is to build a weighted spanner based on that in this spanner is then used to distribute information this algorithm however requires knowledge of a polynomial upper bound on n hence for completeness we also provide an alternate algorithm in appendix that does not require the knowledge of n but takes an additional log d factor instead of log n making it unsuitable for graphs with large diameters finally we observe that we can always discover the latencies of the important adjacent edges in notice you might model an edge with weight w as a path of w edges with weight if you calculate the conductance of the resulting graph you do not get a good characterization of the connectivity of the original graph for a few different reasons for consider the ability of the imaginary nodes 
on the edge to pull data from the endpoints d after which we can use the algorithm that works when latencies are known hence even if latencies are unknown combining the various algorithms we can always solve the information dissemination in o min d n log n time or o min d n log max log n time matching the lower bounds up to polylogarithmic factors with respect to the critical conductance summary of our contributions to the best of our knowledge this work provides a first ever characterization of conductance in graphs with latencies in this regard we provide two different parameters namely and note that we provide the summary here only in terms of however for each case there exists an alternate version in terms of for lower bounds we show that there exists graphs with a o log n diameter with maximum degree where local broadcast requires rounds b o diameter with critical conductance where local broadcast requires rounds c diameter where information dissemination requires min d rounds showing the among the various parameters affecting information dissemination for upper bounds on information dissemination we show that d the algorithm takes o log n rounds e a algorithm takes o d n rounds we view our results as a step towards a more accurate characterization of connectivity in networks with delays and we believe that the metrics and can prove useful in solving other graph problems prior work there is a long history studying the time and message complexity of disseminating information when all the links have the same latency it is interesting to contrast what can be achieved in the weighted case with what can be achieved in the unweighted case the classic model for studying information dissemination is the random phone call model introduced by in each round each node communicates with a single randomly selected neighbor if it knows the rumor then it pushes the information to its neighbor if it does not know the rumor then it pulls it from its neighbor see an important special case is when the graph is a clique any pair of nodes can communicate directly in a seminal paper karp et al show that a rumor can be disseminated in a complete graph in o log n rounds with o n log log n message complexity fraigniaud and giakkoupis show how to simultaneously achieve optimal communication complexity except for extremely small rumor sizes when the graph is not a clique the performance of the classical protocol wherein a node exchanges information with a random neighbor in each round typically depends on the topology of the graph specifically how well connected the graph is an exciting sequence of papers see and references therein eventually showed that rumor spreading in this manner takes time o n where is the conductance of the graph the question that remained open was whether a more careful choice of neighbors lead to faster information dissemination in a breakthrough result et al gave a randomized algorithm for solving information dissemination in any unweighted graph in time o d polylogn where d here is the nonweighted diameter of the graph of note the protocol has no dependence on the conductance of the graph but only on the diameter which is unavoidable there were two key ingredients to their solution first they gave a local broadcast protocol where each node exchanges information with all its neighbors in o n time second as a of this protocol they obtain a spanner which they use in conjunction with a simulator defined therein to achieve information dissemination in o d polylogn time haeupler then showed 
how local broadcast could be achieved in o n time using a simple deterministic algorithm the conclusion then is that in an unweighted graph with unit latency edges information dissemination can be achieved in time o d polylogn or in time o log n the notation hides polylogarithmic factors which arise due to d and being unknown other related works the problem has been well researched in several other settings as well for graphs modeling social networks doerr et al show a log n time bound for solving broadcast for the case of direct addressing haeupler and malkhi show that broadcast can be performed optimally in o log log n rounds information dissemination in random geometric graphs has been studied by et al in wireless sensor networks and adhoc networks by boyd et al sarwate and dimakis and gandhi et al giakkoupis et al study the problem in dynamic graphs conductance in weighted graphs in this section we provide two different approaches to characterize conductance in weighted graphs namely the critical conductance and the average conductance and show the relationship between them in the sections that follow where we determine the bounds on information dissemination we use the critical conductance as it makes our analysis simpler corresponding bounds for average conductance are obtained by the application of the given relationship in theorem critical conductance we now define the critical conductance of a graph generalizing the classical notion of conductance for a given graph g v e and for a set of edges s e we define e s to be the subset of edges of s that have latency for a set of nodes u v and cut c u v u we define p e c to be the subset of edges across the cut c with latency and we define the volume vol u degv where degv refers to the degree of node we first define the critical conductance of a cut for a given latency and then define the conductance as the minimum critical conductance across all cuts definition conductance consider a graph g v e for any cut c in the set of all possible cuts of the graph g and an integer we define c c min vol u vol v u the conductance is given by g min c c definition critical conductance we define the critical conductance g as g is maximum for any max g g we call the critical latency for g if and g g we simply write or instead of g or g when graph g is clear from the context if all edges have latency then is exactly equal to the classical graph conductance average conductance for a given graph g v e we first define dlog max e different latency classes where the first class contains all the edges of latency and the subsequent ith latency class consists of all the edges in the latency range of for a set of nodes u v and the cut c u v u we define ki c to be the subset of edges across the cut c belonging to latency class i all cut edges of latency and for a cut c we first define the average cut conductance as c and then define the average conductance as the minimum average cut conductance across all cuts definition average cut conductance consider a graph g v e a set of nodes u v and the cut c u v u let s be the min vol u vol v u s c dlog max e x c definition average conductance let be the set of all possible cuts of the graph we define the average conductance as g min c c we simply write instead of g when graph g is clear from the context if all edges have latency then is exactly equal to the classical graph conductance comparing critical and average conductances conductance in general is a characterization of the bottleneck in communication of a graph for 
unweighted graphs the only bottleneck in communication can be the connectivity of the graph however for weighted graphs the bottleneck can arise either due to the graph connectivity or due to the edge latency even if the nodes are directly connected by a slow edge there might exist a different faster path our aim is to capture both aspects of this bottleneck in communication having good connectivity facilitates faster communication whereas large latencies result in ideally we would want the best connectivity along with the least slowdown for faster communication we obtain the definition of by directly optimizing these orthogonal parameters the connectivity that maximizes this ratio is defined as the critical conductance and the corresponding latency is defined as the critical latency in other words captures the bottleneck due to connectivity whereas captures the bottleneck due to latency the definition of the average conductance is inspired by the classical notion of conductance each cut edge s contribution towards the overall connectivity is normalized by dividing it with its latency rounded to the upper bound of its latency class so as to account for the surprisingly we see that and are closely related and to show the relationship between them we first define l as the number of latency classes in the given graph latency class i is said to be if there is at least one edge in the graph g that has a latency and the maximum value that l can take is dlog max e which is the total number of possible latency classes theorem proof consider any weighted graph g that has critical conductance and critical latency as we first show the upper bound let c be the cut from which was obtained and let s be the minimum volume among either side of the cut by the definition of conductance pi c c s and from the definition of we know that c is dlog max e pi c s c s for any which implies i c c i s note that in the definition of the terms corresponding to the empty latency classes becomes zero we replace each remaining term in the definition of c by and using the above inequality we get c combining with the fact that is the minimum average cut conductance we obtain c l next we show the lower bound for this we consider the cut c that determines and let s be the minimum volume among either side of the cut on this cut c consider the latency class of the critical latency say lies in the latency class x which implies that from the definition of conductance we get c c c c s rewriting as from definition max e c c c s max e s and comparing the first x terms of to that of c we observe that each term in the expression of is at least as large as the corresponding term in the above upper bound on c also there are some additional positive terms in combining this with the fact that c as by definition is chosen as the minimum value among all possible cuts we obtain c this proves the lower bound and completes the proof lower bounds we proceed to lower bound the time for completing information dissemination the main goal of this section as found in theorems and is to show that every gossip algorithm requires min d on graphs with diameter d critical conductance and critical latency throughout this section we assume that nodes do not know the latencies of their adjacent links when nodes do know the latencies the trivial lower bound of d is sufficient we begin by defining a combinatorial guessing game a similar approach as in and show a lower bound for we then construct several different graphs and reduce the guessing game to solving 
information dissemination on these graphs, thereby showing our lower bounds.

The guessing game. We define a guessing game played by Alice against an oracle. Conceptually, the game is played on a bipartite graph of 2m nodes. The oracle selects a subset of the edges as the target. In each round, Alice guesses a set of edges of at most some fixed size (a per-round budget), and the oracle reveals any target edges that have been hit. At the same time, if any edge (u, v) in the target set is guessed by Alice, then all adjacent edges (x, v) in the target set are removed from the target set.

Fix an integer m. Let A and B be two disjoint sets of m integers each, the left and right group of nodes in the bipartite graph. The winning condition of the game depends on a predicate P, which returns a subset of edges from A × B. For example, P = random(p) returns a subset T that contains elements of A × B, where each element is chosen with probability p or discarded with probability 1 − p. The results of prior work on guessing games do not apply directly to our setting, as there the proposal set of the player must intersect the target set in exactly one element; by contrast, the guessing game here requires us to discover sufficiently many target elements such that every element in the target set is covered at least once.

We now define the game guessing(P), which begins when Alice receives two disjoint sets A and B. The oracle chooses a target set T ⊆ A × B returned by the predicate. Throughout, we assume that Alice has access to a source of unbiased random bits. Alice's goal is to eliminate all the elements in the target set. In each round r, Alice submits a set X_r ⊆ A × B of size at most the budget as her round-r guesses to the oracle. The oracle replies by revealing the items she guessed correctly, X_r ∩ T_r. The oracle then computes the next round's target set by removing the items that Alice hit, namely all the items in T_r whose B-endpoint coincides with the B-endpoint of some item in X_r ∩ T_r. This concludes round r, and the next round begins. The game is solved in the first round where Alice's guesses result in an empty target set; at this point the oracle answers halt. In other words, the game ends once, for every b appearing in the target set, there was some a such that (a, b) ∈ X_r ∩ T_r in some round r. Alice's aim is to minimize the number of rounds until the target set becomes empty. We say that a protocol solves guessing(P) with a given probability in R rounds if it always terminates within R rounds and succeeds with that probability for any target set T; in this case we call it a successful R-round protocol.

Guessing by gossiping. Our lower-bound results use variants of a distributed network that has a guessing-game gadget of 2m nodes embedded as a subgraph. In our gadget construction, we use the predicate P to specify a set of hidden low-latency edges, which we call fast edges. We show that the execution of a gossip algorithm on such an n-node network can be simulated by Alice when playing the guessing game guessing(P), for a suitable m depending on n. We use the notation id(v) to denote the ID of a vertex v, which by construction is unique. For a given instance of the guessing game, Alice creates a set of nodes L = {v_1, …, v_m}, where id(v_i) = a_i ∈ A for 1 ≤ i ≤ m, and similarly maps the integers in B to the IDs of the vertex set R = {u_1, …, u_m} in a one-to-one fashion. Next, Alice creates a complete bipartite graph on the sets L and R by adding cross edges, and adds a clique on the vertices in L, where all clique edges are considered to have latency 1. For given integer parameters lo and hi, we construct the network in such a way that only some cross edges, namely those in the target set, are useful to the algorithm, by giving them a low latency lo, whereas all other cross edges are assigned a large latency value hi. Formally, the latency of a cross edge e = (v_i, u_j) is lo iff the pair (id(v_i), id(u_j)) is in the target set returned by P; otherwise e has latency hi.
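To make the dynamics of the game concrete, the following is a minimal simulation sketch of guessing(random(p)) under the oblivious random-guessing strategy analyzed later, in which each side guesses a uniformly random partner on the other side. It is an illustration only: the function names, the parameters m and p, and the treatment of the per-round budget are our own assumptions, not part of the construction.

    import random

    def random_predicate(A, B, p, rng):
        # Predicate random(p): each pair in A x B joins the target set with probability p.
        return {(a, b) for a in A for b in B if rng.random() < p}

    def play_guessing_game(m=32, p=0.1, seed=0):
        # Simulate guessing(random(p)) with the oblivious random-guessing strategy:
        # each round, every a in A guesses a random partner in B and every b in B
        # guesses a random partner in A (at most 2m guesses, standing in for the
        # per-round budget). Returns the number of rounds until the target is empty.
        rng = random.Random(seed)
        A = list(range(m))
        B = list(range(m, 2 * m))
        target = random_predicate(A, B, p, rng)
        rounds = 0
        while target:
            rounds += 1
            guesses = {(a, rng.choice(B)) for a in A} | {(rng.choice(A), b) for b in B}
            hits = guesses & target
            hit_b = {b for (_, b) in hits}
            # Oracle update: every target edge sharing a B-endpoint with a hit is removed.
            target = {(a, b) for (a, b) in target if b not in hit_b}
        return rounds

    print(play_guessing_game())

The gap between adaptive and oblivious guessing strategies that is quantified for random target sets later in this section can be observed empirically by varying m and p in this sketch.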
we denote this constructed gadget as g lo hi p where the parameters refer to the size of the gadget the low latency value lo the high latency value hi and the predicate p respectively we also consider a symmetric variant called gsym lo hi p where alice creates a clique on r in addition to the one on see figure since alice does not know the target set t in advance she also does not know when a cross edge should have latency lo or latency hi nevertheless implicitly these latency assignments are fixed a priori by the target set unknown to alice which in turn depends on the predicate p whenever a cross edge e is activated in our simulation alice submits the id pair of the vertices of e as a guess to the oracle whose answer reveals the target set membership and hence also the latency of lemma gossip protocol simulation suppose that there is a algorithm a that solves local broadcast on a given network h that contains g h p or gsym h p such that the cross edges of the gadget form a cut of h for h t n and a predicate p then there is an protocol for guessing p that terminates in t rounds proof we argue that alice can simulate the execution of a on network h and in particular on the subgraph g h p until the gossip algorithm a terminates or the oracle answers halt it is straightforward to vm l r vm um a l r um b figure guessing game gadgets red edges correspond to fast links whereas the blue edges are slow links with high latency extend the argument to a subgraph gsym h p at the same time alice can use the behavior of a on the subgraph g h p to derive a protocol for guessing p for a given instance of the guessing game alice creates the network h by first assigning all edges in the subgraph h g h p a latency of moreover she creates the edges of the subgraph g h p as described in section we will see below that the latency of a cross edges is only set when it is first activated if an edge vi vj a clique edge on l or an edge in e h g h p is activated by the algorithm alice locally simulates the bidirectional message exchange by updating the state of nodes vi and vj accordingly in each round r of the gossip algorithm a set of at most cross edges is activated by the vertices simulated by alice s for each activated cross edge vi uj alice uses id vi id uj as one of her round r guesses consider some round r and suppose the oracle returns the empty set for each one of alice s submitted round r guess ai bj that was not contained in the oracle s answer alice sets the latency of ai bj to h by updating the local state of ai here ai id vi and bj id uj that are chosen in round r for some vi l and uj it follows by a simple inductive argument that the state of every vertex in the simulation is equivalent to executing the algorithm on the network we now argue that the above simulation of a gossip algorithm for local broadcast solves the game guessing p in at most t rounds with probability for any predicate p recall that the guessing game ends if t becomes empty which happens when alice s correct guesses have included every b t b at least once by the premise of the lemma the cross edges of g h p form a cut of h which tells us that a can not solve local broadcast without using the cross edges between l since every such b r is a neighbor of a node in l the only way it can receive a local broadcast message is via a fast in t hence if the local broadcast algorithm terminates we know that b was hit by one of alice s guesses guessing game lower bounds the following lemma is instrumental for showing the lower bound of theorem 
which holds when there are no other assumptions on the critical conductance of the graph lemma let guessing be the guessing game where the target set is a single pair chosen uniformly at random from a b if protocol is an protocol for guessing where then the number of rounds until terminates is at least m proof for the sake of a contradiction suppose that solves guessing in t m rounds we define time to be the random variable of the number of rounds until termination in a given execution of consider a round r of the protocol and suppose that the game has not yet ended alice has not yet guessed all of t correctly and has made at most r incorrect guesses in the previous rounds let xr denote the at most pairs from a b chosen by alice in round since from alice s point of view the adversary has chosen the single element of t uniformly at random from the elements in a b the probability that alice guesses the element of t in round r is at most let correct denote the event that protocol correctly solves the game it follows that pr time r correct pr tr xr correct in the remainder of the proof we will lower bound the probability of event time t observe that pr time t pr time t correct pr correct if time t then none of the rounds t guesses of alice were successful q pr time t correct pr time i correct applying to each round i t we get pr time t qt m since the running time of never exceeds t rounds pr time t and we get a contradiction to t m the next lemma bounds the number of guesses required when the target set is less restricted and its edges form a random subset of the cross edges between a b this allows us to derive a lower bound on the local broadcast time complexity in terms of the critical conductance in theorem lemma for the guessing game input sets a and b let randomp be the predicate that defines the target set t by adding each element of a b to t with probability p for some p m then any protocol that solves guessing randomp requires rounds in expectation on the other hand if alice uses the protocol where she submits her guesses in each round by choosing for each a b uniformly at random and for each b b an a uniformly at random then logpm rounds are required in expectation proof recall that the game ends when the guesses of alice have hit each element in t b b at least once whereas t b is itself a random variable let y be the maximum number of guesses required by alice protocol for the sake of our analysis we will consider alice s guesses as occurring sequentially and hence we can assume that elements of t b are discovered one by one for each j we define zj to denote the number of guesses required to guess the element of t b after having already guessed j elements we will first consider general protocols considering that each edge is in the target set with probability p we can assume that the target membership of an edge e is determined only at the point when alice submits e as a guess recalling that alice has full knowledge of the remaining elements in t b that she still needs to guess cf we can assume that her guess is successful with probability p as she will only guess edges that potentially discover a new element in t b for this guessing strategy this remains true independently of the current target set and the set of previously discovered elements which we denote by dj formally pr zj dj t pr zj and hence e zj dj t e zj note that any b b will be part of some target edge in t b t b with probability p m since p and therefore e m it follows that hp i hp i m e y e e y dj t e e z d t e e z i 
j i p considering that alice can guess up to elements per round it follows that the time is which completes the proof for general algorithms now consider the case where alice uses the protocol where she submits her guesses in each round by choosing for each a a an element b uniformly at random and for each b b an a uniformly at random note that this process of selecting her guesses is done obliviously of her correct and incorrect guesses so far observe that zj depends on a random variable fj which is the size of t after the j successful guess since zj is the number of times that the protocol needs to guess until a new element in t b is discovered the distribution of zj corresponds to a geometric distribution according to alice s protocol the f b probability of guessing a new element is given by and hence e zj fj m fj let u u is the number of all elements in b that are part of an edge in t initially we have m e y m e e y fi u hp i u m e z f u i i x e e zi fi u m x e fi u m where the last inequality follows from e x for any positive random variable x due to jensen s inequality since alice has already correctly guessed i elements from t b we discard all elements that intersect with successful guesses when updating the target set at the end of each round according to it can happen that the protocol discovers multiple elements of t b using the round r guesses which we have assumed to happen sequentially in this analysis in that case the target set is not updated guesses however it is easy to see that this does not increase the probability of guessing a new element of t b we get e fi u m m i mp and thus e y m m x p this sum is the harmonic number which is log m for sufficiently large m and hence e y m m log m p by the law of total expectation it follows that e y e y u m pr u m finally a standard probability calculation shows that u m happens with large probability assuming that c p m for a sufficiently large constant c the time bound follows since alice can submit guesses per round lower bounds for information dissemination in this section we show three different lower bounds together these show what properties cause poor performance in information dissemination protocols in some graphs high degree is the cause of poor performance theorem in other graphs poor connectivity is the cause of poor performance theorem and finally we give a family of graphs where we can see the between d and theorem we begin with a result showing that is a lower bound theorem for any there is an network that has a weighted diameter of o log n and a maximum node degree where any algorithm requires rounds to solve local broadcast with constant probability proof consider the network h of n nodes that consists of the guessing game gadget gsym p where predicate p returns an arbitrary singleton target set combined with a constant degree regular expander of n vertices if any of which any one node is connected to all the vertices on the left side of the gadget all the edges of and connected to the expander have latency and the latencies of the edges in the gadget are assigned as in lemma clearly the weighted diameter of h is o log n diameter of the expander by we know that any guessing game protocol on guessing requires rounds for the predicate that returns exactly pair as the target set lemma tells us that any gossip algorithm that solves local broadcast in h must require rounds we next show that every local broadcast algorithm requires time at least note that we get this lower bound just for local broadcast and not 
information dissemination which is in contrast to the results in the unweighted case the following result is given in terms of the conductance for any and thus also holds for and in the proof we construct a network that corresponds to the bipartite guessing game graph with a target set where each edge is fast with probability that way we obtain a network with critical conductance hop diameter o and a weighted diameter of o the guessing game lower bound of lemma tells us that the cost of information dissemination still depends on theorem for any n and where log n there is a network of nodes that has a weighted diameter o and critical conductance such that any gossip algorithm requires rounds for solving local broadcast in expectation also solving local broadcast using requires log rounds in expectation proof our goal is to reduce the game guessing to local broadcast hence we consider the graph g random as our guessing game gadget defined in section since we want to show the time bound of t log n rounds for for the high latency edges we can use n the value log as log n and n we assign each cross edge latency independently with probability and latency with probability the fast cross edges have the same distribution as the target set implied by the predicate which we have used to show a lower bound of for general protocols on guessing in n lemma and also a stronger lower bound of log for random guessing protocols which choose a random edge for each vertex as their guesses it is straightforward to see that gossip corresponds exactly to this random guessing game strategy applying lemma this means that local broadcast requires n in expectation time for general algorithms and log time for the additional term of in the theorem statement is required to actually send the broadcast over the latency edge once it is discovered since each edge of l r is assigned latency with probability log n it follows that each u r is connected by a latency edge to some node in l with high probability hence the weighted diameter of g is o with high probability in the remainder of the proof we show that g has a conductance of with high probability we point out that several previous works prove bounds on the network expansion and however as these results were shown for random graphs we can not employ these results directly and thus need to adapt these proof techniques to show a conductance of for our guessing game gadget we assume that there is an function f f n such that nf noting that this assumption does not change the asymptotic behavior of our bounds for readability we only consider and note that the extension to the general case is straightforward by construction g randomf consists of edges with latencies or and we have n where the last inequality follows from the assumption logn n thus we know that and hence we need to prove f consider a set s l r of at most n vertices and let l and r we first assume that l r since the number of latency cross edges is symmetric for vertices in l and r subsequently we will remove this assumption by a union bound argument for vertex sets a and b let a b be the set of the randomly sampled latency edges in the cut a b and define a b a b given the set s our goal is to show that many latency edges originating in s l have their other endpoint in r s assuming that there are sufficiently many latency cross edges to begin with in other words we need to bound from above the probability of the event s l s r f l conditioned that there are sufficiently many latency cross edges claim 
sufficiently many latency cross edges there exist constants c such that events lr n s l r cf l s r l cf r lr n s l r c f l s r l c f r occur with high probability proof according to the construction of g randomf the latency cross edges are chosen independently each with probability f note that each cross edges is assigned latency independently with probability f logn n thus for each node v the expected number of cross edges is f log n and by a standard chernoff bound we know that the number of latency cross edges to v is in f f with high probability for suitable constants after taking a union bound over all nodes in v g we can conclude that the claim holds for any set s v g conditioning on lr is equivalent with choosing a subset of at least cf l edges among all possible edges in the cut s l r uniformly at random and assigning them latency consider an edge v u s l r it follows that u s r and hence v u s l s r with probability r n and we need to exclude the event bad s s l s r cf l cf l subsets of latency edges incident to vertices in s in addition we need to bound the probability that bad s happens for s chosen in any of the nl ways of choosing s that satisfy for all cf l claim pr bad s lr proof of claim combining the above observations we get n cf l r cf l pr bad s lr n l cf l first we assume that r and l are both large l r n for a sufficiently small positive constant k m then we can apply stirling s approximation of the form m where x k x x log x is the binary entropy function thus for sufficiently large n we get r cf l nl pr bad s lr n cf l where to derive the second inequality we have used the facts that nl and nr since r l n and r by the premise of the theorem f log n which implies c f l n log n together with the fact that this means that the term c f l dominates in the exponent of and hence pr bad s lr n log n em k next we consider the case where r l applying the upper bound of the form m to k k tells us that en r cf l pr bad s lr l en e cf l l since r we get pr bad s lr exp l log n log l cf l log c e exp l log n cf l log e by assumption and hence the term c f l log e in the exponent is negative moreover recall that f log n and thus we can assume that c f l log e log n for a sufficiently large constant this term dominates the other terms in the exponent thereby completing the proof of the claim cf considering that l the above bound implies that at least b c latency edges incident to s are connected to nodes outside in s with probability at least taking a union bound over all possible choices for the values of l and r adhering to r l and r l n g shows that h i pr n s l r s lr observe that the latency cross edges are constructed symmetrically for the left and right side of the bipartite graph g and thus we can apply the above argument in a similar manner for a set s where r l conditioned on s r l cf thus we can conclude that h i pr n s v g lr we can remove the conditioning in by virtue of claim since h i h pr n s v g pr n s v g cf i lr pr lr to upper bound vol s for any set s we take into account the n cross edges of each node in also if v l then we need to account for the n incident clique edges of v yielding vol s considering the upper bound on the number of latency cross edges given by we have min s min s s s v g s cf min nf s vol s where the inequality is true with high probability to see that this bound observe that l by and we know that l r f n and hence l nf with high probability as required this completes the proof of theorem finally we give a family of graphs that illustrate 
the among the parameters intuitively when the edge latencies are larger it makes sense to search for the best possible path and the lower bound is d when the edge latencies are smaller then we can simply rely on connectivity and the lower bound is note that we can individually obtain a lower bound of log n using the technique in where we show that there exists a graph with diameter log unlike here that lower bound is simply theorem for a given o and any integer o there is a class of networks of nodes critical conductance maximum degree and weighted diameter d such that any gossip algorithm that solves broadcast with at least constant probability requires min d rounds proof we create a network g consisting of a series of k node layers vk that are wired together q ring using the guessing game gadgets introduced above we define k where c this implies that c as o each layer consists of s nodes as it does not change our asymptotic bounds we simplify the notation by assuming that and are integers v v v vk figure guessing game gadgets wired together as a ring for each pair vi and v mod k i k we construct the symmetric guessing game gadget gsym p in section for simulating a gossip algorithm to solve the game guessing that is we create a complete bipartite graph on vi and v mod k and form cliques on vi and v mod k see figure we assign latency to every cross edge between vi and v mod k except for a uniformly at random chosen edge that forms the singleton target set which we assign latency observe that the conductance can not be maximal for any j other than or observation let s graph g is proof for a layer vi we call v mod k the predecessor layer and v mod k the successor layer the size of a layer is s each node has edges to its neighbors in the predecessor resp successor layer and s edges to nodes in its own layer this means that g is a graph we define a cut c that divides the ring into two equal halves such that none of the internal clique edges are cut edges by a slight abuse of notation we also use c to denote the set of vertices present in the smaller side of the partition created by the cut c ties broken arbitrarily lemma c proof since c partitions g into two sets of identical size the volume can be determined by considering either partition of size n thus we focus on the node set also by observation we know that g is the volume of c can be calculated to be n the number of cut edges of latency is by the construction of c according to definition the conductance is given by c n by plugging in the value of c we can verify that c is exactly equal to using the conductance bound of lemma for cut c we know that in the proof of the next lemma we show that lemma the conductance of the constructed ring network is proof by lemma we know that as the actual graph conductance is always to any cut conductance we will now show as well by observation we know that g is and therefore for a set of nodes u the volume vol u is exactly equal to this clearly implies that for any two sets u and v vol u vol v if and only if now consider an arbitrary cut u v g u of g and suppose that u contains at most half of the nodes of g n since g has nodes if there are at least cut edges then using the fact that n we get u and we are done in the remainder of the proof we will show that there are cut edges we distinguish two cases we classify each node in u either as good if it has at least adjacent edges across the cut u v u and as bad otherwise thus our goal is to identify s good nodes which in turn implies cut edges let s be an 
arbitrary subset of nodes in u if all nodes in s are good we are done otherwise let x s be a bad node it is important to note that the following properties are true for every bad node a node x is in a layer in g which contains at least nodes inside u b the successor layer from x has at least nodes inside u to see why a holds assume that it was not true then x would have at least neighbors in its own layer across the cut contradicting the assumption that x is bad similarly if b was false x would be connected to at least nodes in the successor layer outside u this is true of the predecessor layer too let a be the successor layer to the layer containing x we now run the following procedure invariant a contains at least nodes in u if at least half of the nodes in a are good we are done terminate and claim cut edges otherwise let y be a bad node in a let be the successor layer of the layer a then start again at step with a and y x from the assertion b contains at least nodes in if this procedure ever terminates in step we are done otherwise it continues around until every layer has been explored in that case the invariant implies that every layer contains at least nodes in u this implies that of the nodes of g are in u which contradicts the choice of u thus the procedure does terminate which means there must be at least cut edges implying let m be the number of nodes in u since g is the volume of u is m each node in u now contains at least neighbors outside of u since it has s neighbors and there are only other nodes in u so the cut size is at least thus the conductance of this graph m since and it is clearly the case that which is what we wanted to prove combining lemmas and and again using cut c we argue that the critical latency is lemma for any o proof to prove that is in fact which by lemma is we need to show that to this end let us consider the cut c defined above we will show that c and since conductance cf definition can not be maximal for any j other than or we get there are two latency cross edges in the cut c and the volume of c can be calculated as in the proof of lemma to be n thus we need to show that n as c is a constant the above inequality is true as long as o which is ensured by the premise of the theorem the weighted diameter of the network d since each pair of adjacent node layers is connected by a latency edge and internally each layer forms a latency clique using the fact that c it can be shown that d implying that d by lemma now consider a source node in layer that initiates the broadcast of a rumor each node can either spend time in finding the required fast edge which we assume can be done in parallel or instead it can instantly use an edge of latency to forward the rumor lemma tells us that finding the single latency cross edge with constant probability for the guessing game gadget corresponding to any pair of node layers requires rounds and then forwarding the rumor takes d additional rounds alternatively the algorithm can forward the rumor along latency edges across node layers and spread the rumor using the latency edges within each clique it follows that the required time for broadcast is min d we obtain the following corollary that gives a lower bound on information dissemination in terms of either by a similar analysis as above or by the application of theorem corollary for a given o and any integer o there is a class of networks of nodes average conductance maximum degree and weighted diameter d such that any gossip algorithm that solves broadcast with at least 
constant probability requires min d rounds proof observe that in the given graph there exists edges with latency either or and as such the number of latency classes here is now theorem reduces to this implies that for this case alternatively as in this case replacing this value of in theorem gives us the above required corollary algorithms for unknown latencies we divide the upper bounds on information dissemination into two and later combine them n to obtain a unified result first we analyze classical showing that it completes in time o which is optimal when d is large alternatively for graphs where d is small we give an algorithm wherein each node first spends d time discovering the neighboring latencies after which nodes use the local information to build a spanner across which data can be distributed in d time to show the time required for information dissemination in a weighted graph g using we define e as the set of all edges of latency eu as the set of incident edges of vertex u and eu e eu n rounds in a network g where theorem the protocol achieves broadcast in o is the critical conductance of g and is the corresponding critical latency proof we construct a strongly graph g which is a generalization of the strongly vertex induced subgraph defined in and which has the same vertex set as the edges of g have a defined by the edge multiplicity function given by if u v e u v if u v otherwise it is easy to see that the unweighted conductance g corresponds to g as a at node u is counted as u u edges when computing the volume we also define another unweighted graph that is derived from g by dropping all edge latencies now we consider the markov chain process describing the informed node set the vertex set that is in possession of some message m originating from a vertex v when running formally the state space of the markov chain consists of all possible informed node sets only paths that correspond to monotonically growing informed node sets have nonzero probability we argue that this process on resp g dominates the respective process in the graph g we observe that each node v selects an incident edge in e from g in the protocol with the same probability as in the probability of choosing an edge eu e p a self loop in case of g is u u u v in both graphs clearly choosing a self loop of a node u can not help in the propagation of the message in g but choosing the corresponding edge in might it follows that the markov process of reaching any informed node set s in dominates over the one in g the probability of reaching any informed node set s by using the markov chain in is at least as large as the probability of reaching the same set s by using the markov chain for g to translate this result back to our actual network g with weighted edges we charge each round of in g to rounds in with similar arguments it follows that the markov process of the informed node set given by considering consecutive rounds of in g at a time dominates the one in g the multiplicity of an edge is called edge weight in we use a different terminology here to avoid confusion with the latencies of edges and consider edge weight as a synonym to edge latency instead from and it is known that o log n g rounds suffice to solve broadcast in g hence achieving broadcast in g requires o log n g rounds since the above analysis applies for any and in particular for the critical latency the theorem follows we combine theorem with theorem to obtain the following corollary that gives the upper bound on information dissemination using 
in terms of n corollary the protocol achieves broadcast in o rounds in a network g where avg is the average conductance of g and l is the number of latency classes in an d algorithm in section we provide an algorithm that solves information dissemination when each node knows the latencies of all its adjacent edges the same algorithm can be naturally extended for the case where nodes do not know the adjacent latencies by first discovering the edge latencies and then running the algorithm as such when both d and are known for rounds each node broadcasts a request to each neighbor sequentially and then waits up to d rounds for a response to determine the adjacent edge s latency if both or either values are unknown the guess and double strategy described in section can be used as we can efficiently detect when information dissemination has completed correctly by similar arguments as in section we obtain an algorithm that solves information dissemination in o d n time algorithms for known latencies in this section we discuss the case where each node knows the latencies of the adjacent edges we focus on the problem of information dissemination instead of information dissemination as it will simplify certain issues to solve the seemingly harder problem of course information dissemination also solves information dissemination and most information dissemination algorithms can be used to solve information dissemination by using them to collect and disseminate data in section we use the fact that nodes know a polynomial upper bound on the network size and this is the only place where we rely on that assumption when edge latencies are known the spanner algorithm described below solves information dissemination in o d n which differs from the trivial lower bound of d by only polylog factors spanner algorithm preliminaries we initially assume that the weighted diameter d is known to all nodes later in section we do away with the assumption via a technique it is assumed that every edge has latency d clearly we do not want to use any edges with latency local broadcast an important building block of our algorithms is local broadcast for unweighted graphs the randomized superstep algorithm by et al and the deterministic tree gossip dtg algorithm by haeupler solve this problem we make use of the dtg algorithm which runs in o n rounds on unweighted graphs see and appendix for details observe that for the unweighted case if any algorithm solves local broadcast in o t rounds it obtains a as a direct consequence which thereafter can be used for propagating information however for graphs with latencies just solving local broadcast might take o d time resulting in a o d and leading to an o solution for information dissemination recall that a subgraph s v e of a graph g v e is called an if any two nodes u v with distance in g have distance at most in for weighted graphs we are mainly interested in the broadcast problem in which each node disseminates some information to all its neighbors that are connected to it by edges of latency while dtg assumes edges to be unweighted uniform weight we can execute the same protocol in a graph with latencies simply by ignoring all edges with a latency larger than and simulating round of the dtg protocol as rounds in our network we refer to this protocol as the protocol it follows immediately that within o n time the protocol ensures that each node has disseminated the information to all its neighbors connected to it with edges of latency note that we can trivially solve the 
information dissemination problem in o n time using protocol if d were known by simply repeating it d times with the challenge now given the restriction that finding neighbors by a direct edge might be costly is to somehow find sufficiently short paths to all of them we show here that with sufficient exploration of the local neighborhood up to o log n steps and using only favorable weights we are able to obtain a global spanner an intermediate goal of our algorithm is to construct an o log n and to obtain an orientation of the edges such that each node has a small o log n once we have such a structure we achieve information dissemination by using a flooding algorithm that repeatedly activates the in order spanner construction and broadcast in a seminal work baswana and sen provide a spanner construction algorithm for weighted graphs where weights did not correspond to latency in the local model of communication as our goal here is to find a low stretch low spanner we modify the algorithm of by carefully associating a direction with every edge that is added to a spanner such that each node has o log n to deal with latencies we choose to locally simulate the algorithm on individual nodes after obtaining the log neighborhood information by using the protocol we show that this log neighborhood information is sufficient for obtaining the required spanner the algorithm in also assumes distinct edge weights we can ensure this by using the unique node ids to break ties we first show that the size of the obtained spanner does not increase significantly when running the algorithm of with an estimate of n namely spanner construction algorithm each node v executes a set of rules for adding edges explained below and each time one of these rules is triggered v adds some of its incident edges to the spanner while assigning them as outgoing direction this way we obtain a low stretch spanner undirected stretch where nodes also have a low which we leverage in the subsequent phases of our algorithm for a given parameter k the algorithm computes a by performing k iterations at the beginning of the iteration for i k every node that was a cluster center in the previous iteration chooses to become an active cluster with probability for some n poly n note that for i every node counts as a previously active center then every active center c broadcasts this information to all cluster members as a cluster grows by at most hop in each round this message needs to be disseminated throughout the of then every cluster member broadcasts its membership information to all its neighbors to ensure that every node is aware of its adjacent active clusters for adding edges to the spanner nodes also remember its set of incident clusters that were active in iteration i with this information in hand every node u adds some of its incident edges to its set of spanner edges hu and also permanently discards some edges as follows it is clearly impossible to guarantee small degree in an undirected sense for example if the original graph is a star by slight abuse of notation we use c to denote cluster centres and the cluster itself when the distinction is clear from the context rule if none of u s adjacent clusters in were sampled in iteration i then u adds its least weight edge to cluster c as an outgoing edge to hu and discards all other edges to nodes in c for every c rule if u has active adjacent clusters then u will add the edge ev to some cluster c with the minimum weight among all these clusters and for each adjacent cluster that 
has a weight less than ev node u also adds one outgoing edge to the respective node in all other edges from v to nodes in clusters c and are discarded in the iteration every vertex v adds the least weight edge to each adjacent cluster in to hv lemma consider a synchronous network of n nodes where nodes know only where n nc for some constant c for any k c there s a distributed algorithm based on that computes a spanner and terminates in o k rounds in the local model and each node s is o log n proof note that the running time of the algorithm is o k rounds if used with a restricted message size of o log n inspecting the algorithm reveals that the computation at each node only depends on its neighborhood in the graph also because the decision to remove an edge u v can be taken by either node u or v each node needs to simulate the running of the algorithm at all its neighbors to know when to remove the edge u v from consideration and hence we can simulate the execution of the algorithm locally by first collecting this information regarding k neighborhood in k rounds in the local model we now analyse the difference when running the algorithm with instead of first we observe that sampling clusters with probability does not affect the stretch guarantee for the sake of our analysis we assume that the spanner is directed we count every incident edge of v that it adds to its set of spanner edges hv as an outgoing edge of the degree bound will follow by showing an upper bound on the number of outgoing edges of each node consider any iteration i in phase of the algorithm i we call a cluster sampled in iteration i if it is among the sampled clusters in all iterations i every cluster that was sampled in the previous iteration is sampled again with probability in the very first iteration every node counts as a previously sampled cluster to bound the number of edges that contribute to the of a node v we consider the clusters adjacent to v that were sampled in iteration i and order them as cq in increasing order of the weight of their least weight edge incident to let ai be the event that v adds at least l edges to its outdegree in iteration i note that ai occurs if and only if none of the clusters cl is sampled in iteration i and there are at least l active clusters in iteration i by the description of phase first k iterations of the algorithm we only add an edge from v to a node in cluster cj in iteration i if ai does not happen we have pr ai l and taking a union bound over the first k iterations and over all n nodes it follows that the probability of any node adding more than l edges to the spanner in any of the first k iterations is at most exp l log k log n by choosing l log n log k this probability is as required in phase final iteration every vertex u adds a least weight outgoing edge to every cluster that was sampled in iteration k let xv be the indicator random variable that vertex v is the center of a cluster sampled in iteration k that is incident to u we have pr xv setting x p v u v xv c k c k it follows that c c e x k n k since c since each cluster is sampled independently all xv are independent we can apply a standard chernoff bound to show that for some sufficiently large constant depending on c it holds that h i c pr x n k log n n log n by taking a union bound over all vertices we can see the number of edges that each vertex adds to the spanner c in phase is at most o n k log n with high probability combining this with the bound that we have derived for phase completes the proof theorem 
there is an o d n time algorithm a in the gossip model that yields an o log n that has o n log n edges moreover a also computes an orientation of the edges that guarantees that each node has an of o log n proof to convert the classic synchronous algorithm for the local model assumed in lemma to an algorithm that works in the gossip model with latencies we use the protocol and simulate each of the k log n iterations of the spanner algorithm by first discovering the log neighborhood the neighborhood discovery takes o d n rounds in our model and then all computations are done locally to broadcast on this directed spanner we use the rr broadcast algorithm which is a deterministic exchange of information among nodes each node sends all the rumors known to it to all its neighbors one by one in a round robin fashion the algorithm with a parameter k is run on the directed spanner of the graph gk g without edges of latency k rr broadcast k for each vertex v in parallel do for iteration i equals to k do propagate rumor set rv along the of length k in a round robin fashion add all received rumors to rv algorithm rr broadcast u edges edges edges kh v edges figure an example of message propagation from node u to node lemma after the execution of rr broadcast algorithm with a parameter k on the directed spanner of graph gk any two nodes u and v at a distance k in g have exchanged rumors with one another in o k rounds where is the maximum of any node in gk proof consider a path from a node u to another node v at a distance k or less from it clearly all edges in this path would have a weight of therefore we can work on gk g without edges of latency k as well without affecting the correctness of the algorithm also let us assume that the number of hops between u and v to be h which again would be k since there are no fractional weights let the latency between each hop be denoted by ki as shown in figure messages reach the next node when either of the nodes initiate a bidirectional exchange for example u s rumor could reach node either by a request initiated by node u or by depending upon the direction of the edge in the worst case nodes have to try all other links before initiating a connection along the required edge where is the maximum of any node after a connection is initialized it takes time to exchange rumors by generalization we observe that in the model the delay that can be incurred before rumor exchange among any two adjacent nodes ui and can be ki in the worst case in this way u s rumor proceeds towards v in individual steps each step incurring a maximum cost of ki a node might receive multiple rumors to propagate in the next round which its adds to its rumor set and forwards to its neighbors in a round robin fashion as such the total worst case delay in rumor exchange among node u and v would be represented by h x ki h x ki ph but we know that both h and ki can have a maximum value equal to k therefore we conclude that for any two nodes v and u in gk v s rumor would have reached u and u s rumor would have reached v if all nodes forward rumors in a round robin fashion for k rounds here on the created spanner with stretch of o log n the maximum distance between any two nodes can be o d log n since the maximum is o log n we get the following corollary corollary the rr broadcast algorithm on the constructed spanner takes o d n time and solves information dissemination we combine all the previously defined techniques to a single algorithm called efficient information dissemination or eid eid d for 
each vertex v in parallel do for iteration i to o log n do perform call spanner construction algorithm call algorithm rr broadcast o d log n to gain neighborhood information executed locally algorithm efficient information dissemination lemma for a graph g with diameter d efficient information dissemination eid algorithm takes o d n time for solving information dissemination when d is known to all the nodes unknown diameter for unknown diameter we apply the standard strategy begin with an initial guess of for try the algorithm and see if it succeeds if so we terminate otherwise double the estimate and repeat the challenge here is to correctly determine the termination condition how does a particular node determine whether information dissemination has been achieved for all other nodes early termination might lead to partial dissemination whereas late termination might cause the time complexity to increase the critical observation is as follows if two nodes u and v can not communicate in one execution of information dissemination protocol rr broadcast for a given estimate of the diameter then there must be some edge w z on the path from u to v where in one execution u is able to communicate with w but not with z there are two cases if w is not able to communicate with z then it is aware that it has an unreachable neighbor and can flag the issue the next time that u and w communicate node u learns of the problem otherwise if w can communicate with z then the next time that u and w communicate node u learns that there was a node it did not hear from previously in either case u knows that the estimate of d was not correct and should continue each node also checks whether it has heard from all of its neighbors and raises an error flag if not we then repeat broadcast so that nodes can check if everyone has the same rumor set and that no one has raised an error flag in total checking termination has asymptotic complexity of o d n the algorithm checks for every node that v contacts or is contacted by either directly or indirectly whether that node has i exactly the same rumor set as v and ii the value as its flag bit the flag bit of a node is set to if a neighbor of that node is not present in its rumor set or if the node has not yet exchanged all the rumors known to it presently with all of its neighbors in g that are at a distance to the current estimate of d say k this condition is easily checked by either doing an additional which does not affect the complexity or can be checked in parallel with the execution of rr broadcast if both of the above conditions are not met then node v sets its status to failed and v uses a broadcast algorithm for propagating the failed message any broadcast algorithm that given a parameter k is able to broadcast and collect back information from all nodes at a distance k from v can be used it is easily seen that rr broadcast satisfies this criteria and can be used in this case note that broadcast is achieved here for algorithm by execution of rr broadcast however when algorithm described later invokes broadcast is achieved by execution of the sequence t k also described later here the rumor set known to a particular vertex v is denoted by rv v represents all its neighbors in g whereas refers to only those nodes that are connected with v with an edge of latency k or less also initially of all nodes is set to default k if node w v and w rv or node v has not exchanged rumors with all then set flag bit vf lag else set flag bit vf lag broadcast and gather all responses 
from any node u in v s neighborhood if any u such that rv ru or uf lag then set failed broadcast failed message to the neighborhood if received message failed then set failed algorithm we prove the following regarding the termination detection lemma no node terminates until it has exchanged rumors with all other nodes moreover all nodes terminate in the exact same round proof suppose that a node v terminates without having exchanged rumors with some other node considering any path from node v to node w let u be the farthest node in hop distance with which v has exchanged rumors with and let x be the next node in the path case u has exchanged rumors with x it implies that v has also exchanged rumors with x from the condition that all nodes that exchange rumors with one another have the same rumor set thus contradicting the fact that u is the farthest node on the path that v has exchanged rumors with case u has not exchanged rumors with x if u had not exchanged rumors with x then u would have set its flag bit as which would have been detected by v during the broadcast and it would not have terminated this also gives us a contradiction thus no such node w exists and v terminates only after it has exchanged rumors with all the other nodes for the second part of the proof let consider u and v to be nodes such that v is set for termination and has not set its status to failed in the algorithm whereas in the same iteration node u has set its status to failed and hence is set to continue we show that there can not be two such nodes in the same round the node v did not set its status to failed implying all the nodes that it exchanged rumors with had exactly the same set of rumors none of the nodes had set its flag bit as and in addition it did not receive a failed message from any other node from the first part we know that the set of nodes that v exchanged rumors with is the entire vertex set of the graph that implies v has also exchanged rumors with u node u also has the exact set of rumors which essentially is all the rumors from all the nodes and does not have a set flag bit so in the current iteration if any other node broadcasted a failed message both v and u would have received it resulting in both nodes to set their status as failed again since the rumor sets of both nodes are identical both nodes would observe the same flag bits of all the nodes then node u will also not satisfy the termination condition and will not set its status as failed this gives us a contradiction that completes the proof k repeat call algorithm eid k call algorithm k if failed then k set to default else terminate algorithm code for vertex combining the dissemination protocol with the termination detection we get the following theorem there exists a randomized gossip algorithm that solves the information dissemination problem and terminates in o d n rounds an alternative information dissemination algorithm we propose an alternate algorithm to solve information dissemination without any global knowledge polynomial upper bound of n need not be known that takes o d n log d time this algorithm works even when nodes can not initiate a new exchange in every round and wait till the acknowledgement of the previous message communication is blocking the algorithm involves repeatedly invoking the algorithm with different parameters determined by a particular pattern the intuition behind the choice of the pattern is to make minimal use of the heavier latency edges by collecting as much information as possible near the heavier 
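The doubling strategy for an unknown diameter then reduces to a small wrapper loop run at every node; eid and check_termination stand in for the dissemination protocol and the check sketched above, and the starting guess k = 1 is an assumption where the extracted text omits the value.

def disseminate_unknown_diameter(v, eid, check_termination):
    # Guess a small diameter, run one dissemination plus the termination check,
    # and double the guess on failure; by the lemma all nodes stop in the same round.
    k = 1
    while True:
        eid(v, k)
        if check_termination(v, k):
            return v.rumor_set
        v.reset_status_to_default()
        k *= 2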
latencies before making use of that edge the pattern for k is derived according to a sequence t k that is recursively defined as follows t t t t t t t t k t t we show that when the above sequence is run for the particular pattern for length k it guarantees that any node u and v in the graph g at a distance of k have exchanged their rumors with one another overall the pattern of values of the parameter is k and for each value we perform the protocol that is t k is a sequence of calls to with varying parameters according to a known pattern lemma after the execution of t k any node in the weighted graph g v e has exchanged rumors with all other nodes that are at distance k or less from it proof we proceed by induction over the path length for the base case recall from that after running t on the subgraph of g induced by edges with latency any node v has exchanged rumors with all its distance neighbors for the inductive step suppose that the claim is true for t k after running the sequence any node v has exchanged rumors with all other nodes at a weighted distance to prove the claim for t t k t k we consider the various possibilities of forming a path of length case the path consists only of edges with latencies here we distinguish two case there exists a node m which is equidistant from both end points u and v see figure by the induction hypothesis both nodes u and v would have exchanged rumors with node m in the initial t k in the next t k node m propagates all rumors that it received from u to v and path of length k path of length k u m v figure case case no such node middle exists as depicted in figure then after the initial t k node u must have exchanged rumors with and node v with due to the induction hypothesis in the invocation of the node propagates all rumors gained from u to and also propagates all rumors gained from v to this information then travels from to u and from to v in the final t k path of length k or less u edge of length or less path of length k or less v figure case case there exists at most one edge e with latency value in between k this situation can yield one of the following two case edge e is located at one end of the path see figure by the induction hypothesis node v would have exchanged rumors with m in the initial t k in the u gets to know this and other rumors from m and m also gets to know u s rumors in the next t k node m propagates all rumors gained from u to path of length or less edge of length k u m figure case v case the edge is located between two inner nodes on the path see figure in this case by the induction hypothesis node u has exchanged rumors with whereas node v has exchanged rumors with node in the initial t k in the node propagates all rumors gained from u to moreover propagates all rumors gained from v to these rumors then propagate from to u and from to v in the final t k path of length or less u path of length or less v figure case lemma for known diameter solving information dissemination by executing the sequence t d takes o d n log d time proof from the way the sequence is constructed we observe the recurrence relation t k k using standard methods to solve the recurrence completes the proof when the graph diameter is known to all nodes nodes can just invoke t d to solve information dissemination for completeness we also present an algorithm called that uses the sequence of invocations of to solve information dissemination when the graph diameter is unknown this algorithm is similar in flavour to that of the algorithm described in section 
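The call pattern t(k) and its length can be generated directly; the sketch below uses the reading t(1) = [1] and t(2k) = t(k), 2k, t(k), which is an assumption inferred from the doubling structure of the induction proof (the recursion in the extracted text is garbled).

def t(k):
    # t(1) = [1]; t(2k) = t(k) + [2k] + t(k): heavier latency classes are used only
    # after as much information as possible has been gathered over lighter edges.
    if k <= 1:
        return [1]
    half = t(k // 2)
    return half + [k] + half

def run_sequence(node, k, phi_gossip):
    # phi_gossip(node, j) is a placeholder for the gossip primitive restricted to
    # edges of latency at most j.
    for j in t(k):
        phi_gossip(node, j)

# The length satisfies |t(2k)| = 2|t(k)| + 1, i.e. |t(k)| = 2k - 1 invocations,
# and doubling k over 1, 2, 4, ..., D is what contributes the log D factor.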
and also makes use of the algorithm albeit with a different broadcasting technique calling t k rather than rr broadcast k repeat execute sequence t k call algorithm k if failed then k set to default else terminate algorithm code for vertex lemma algorithm takes o d n log d time to solve information dissemination applying techniques similar to section the complexity can be easily shown for the case with unknown diameter as well unified upper bounds combining the results we can run both and the spanner algorithm in parallel to obtain unified upper bounds for both the known and the unknown latencies cases however we point out that for single source broadcast works with small message sizes whereas the spanner algorithm does not because of its reliance on dtg also exchanging messages with the help of the spanner does not have good robustness properties whereas is inherently quite robust theorem there exists randomized gossip algorithms that solves the information dissemination problem in o min n log n time when latencies are unknown and in o min d n log n time when latencies are known corollary there exists randomized gossip algorithms that solves the information dissemination problem in o min d n log n time when latencies are unknown and in o min d n log n time when latencies are known conclusion we have presented two different new concepts namely the critical conductance and the average conductance that characterize the bottlenecks in communication for weighted graphs we believe that these parameters will be useful for a variety of applications that depend on connectivity a question that remains is whether the running time of o d n for information dissemination can be improved using better spanner constructions or more efficient local broadcast to save the polylogarithmic factors recall that in the unweighted case there are information dissemination protocols that run in o d polylogn time another interesting direction would be the development of reliable robust algorithms in this regard another issue is whether we can reduce the number of incoming messages in a round recently daum et al have considered such a more restricted model yielding interesting results it would also be interesting to look at the bounds where each node is only allowed o connections per round whether initiated by the node itself or by its neighbor acknowledgment we thank george giakkoupis for the helpful conversations and useful ideas a appendix the dtg local broadcast protocol in this section we describe in more detail the dtg protocol that was originally developed in as well as the algorithm it is clear that the algorithm solves local broadcast because it keeps on contacting new neighbors until it has exchanged rumors with all of its neighbors the author makes use of binomial trees to derive the time complexity and better explain the working of the algorithm the key idea used for deriving the time complexity is to show that when information is propagated in a pipelined manner along the binomial trees created then for any node that is still active in the ith iteration it has a binomial tree of order of depth i see figure rooted at it furthermore it is shown that for any two different nodes that are still active in iteration i their are vertex disjoint since an is formed by joining two i the growth rate of an is exponential which limits the number of iterations to o log n also each node on an average needs to contact o log n nodes o i nodes in the ith round thus the overall complexity of the algorithm becomes o n in our 
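For completeness, the unknown-diameter wrapper that invokes the sequence t(k) instead of rr-broadcast differs from the earlier loop only in its dissemination step; a minimal sketch with the same placeholder names:

def disseminate_with_sequence(v, run_sequence, check_termination):
    k = 1
    while True:
        run_sequence(v, k)             # execute the pattern t(k) defined above
        if check_termination(v, k):    # same check, with broadcast realized via t(k)
            return v.rumor_set
        v.reset_status_to_default()
        k *= 2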
case for the additional waiting time of increases the time complexity to o n figure for i the can be seen as witness structures that provides an explanation as to why a node was active in that particular iteration the rooted at a particular node is built recursively as the rounds progress and essentially store the information about which other nodes communicated with one another in which particular round as viewed from the root node for example in figure the labels on the edges denote the time in which the node of the higher level contacted the lower level node as observed by the root node the root contacts the nodes in first level in rounds according to their label the nodes on the first level similarly contact the nodes in the second level in rounds according to their label and so on this observation also helps in the realization of the key idea of a node being active in the ith round having an rooted at it the nodes in the first level did not contact the root previously as they were busy contacting the nodes of the second level the nodes of the second level did not contact nodes on the first level as they were busy contacting the nodes in the third level and so on figure with edge labels as shown in the pseudo code in the initial push sequence the message is propagated in a decreasing order of connection round number as observed by the root node given by the labels on the edges of figure helping in the roots message to all other nodes of the similarly during the initial pull sequence the message from the nodes is pipelined up to the root the subsequent sequence helps in maintaining the symmetry of the algorithm such that if node u learns about node v then node v also learns about node u finally the collection of rumors r is updated to the union of rumors collected in the aforementioned sequences for being an integer we run the modified dtg algorithm on a of g g rather than on g where g contains only the edges of length up to lets denote this algorithm as the algorithm is presented below and each node v belonging to g runs it in parallel v can be considered as the neighborhood of v comprising of set of nodes that are node v s neighbors r v for i until v do link to any new neighbor ui v v push for j i downto do send rumors in to uj wait for time to receive uj s rumors add all received rumors to pull for j to i do send rumors in to uj wait for time to receive uj s rumors add all received rumors to v perform pull push with r algorithm references john augustine gopal pandurangan peter robinson scott roche and eli upfal enabling robust and efficient distributed computation in dynamic networks in ieee annual symposium on foundations of computer science focs berkeley usa pages surender baswana and sandeep a simple and linear time randomized algorithm for computing sparse spanners in weighted graphs random structures and algorithms stephen boyd arpita ghosh balaji prabhakar and devavrat shah randomized gossip algorithms trans si june milan robert tobias friedrich thomas sauerwald and alexandre stauffer efficient broadcast on random geometric graphs in proceedings of the annual symposium on discrete algorithms soda pages philadelphia pa usa keren bernhard haeupler jonathan kelner and petar maymounkov global computation in a poorly connected world fast rumor spreading with no dependence on conductance in proceedings of the annual acm symposium on theory of computing stoc pages new york ny usa acm keren and hadas shachnai fast information spreading in graphs with large weak conductance in 
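The modified dtg loop run by each node on the subgraph of edges with latency at most phi can be sketched as follows; the exchange primitive, the waiting model and the neighbor bookkeeping are placeholders for the paper's push/pull rounds, so this is a structural sketch rather than the exact protocol.

def modified_dtg(v, phi, exchange):
    # exchange(v, u, rumors, wait) models sending `rumors` to u and receiving u's
    # rumors after waiting at most `wait` time units (placeholder interface).
    R = {v.own_rumor}
    contacted = []                         # u_1, u_2, ... in order of first contact
    unseen = set(v.neighbors_within(phi))  # neighbors reachable with latency <= phi
    while unseen:
        u = unseen.pop()                   # link to any new neighbor
        contacted.append(u)
        buf = set(R)
        # push: pipeline rumors along the binomial tree, newest contact first
        for u_j in reversed(contacted):
            buf |= exchange(v, u_j, buf, wait=phi)
        # pull: pipeline rumors back, oldest contact first
        for u_j in contacted:
            buf |= exchange(v, u_j, buf, wait=phi)
        R |= buf
    return R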
proceedings of the annual symposium on discrete algorithms soda pages siam flavio chierichetti silvio lattanzi and alessandro panconesi almost tight bounds for rumour spreading with conductance in proceedings of the acm symposium on theory of computing stoc pages ny usa acm flavio chierichetti silvio lattanzi and alessandro panconesi rumour spreading and graph conductance in proceedings of the annual symposium on discrete algorithms soda pages pa usa siam sebastian daum fabian kuhn and yannic maus rumor spreading with bounded pages springer international alan demers dan greene carl hauser wes irish john larson scott shenker howard sturgis dan swinehart and doug terry epidemic algorithms for replicated database maintenance in proceedings of the annual acm symposium on principles of distributed computing podc pages new york ny usa acm benjamin doerr mahmoud fouz and tobias friedrich social networks spread rumors in sublogarithmic time in proceedings of the annual acm symposium on theory of computing stoc pages new york ny usa acm benjamin doerr mahmoud fouz and tobias friedrich why rumors spread so quickly in social networks commun acm june uriel feige david peleg prabhakar raghavan and eli upfal randomized broadcast in networks in algorithms volume of lecture notes in computer science pages springer berlin heidelberg pierre fraigniaud and george giakkoupis on the bit communication complexity of randomized rumor spreading in proceedings of the annual acm symposium on parallelism in algorithms and architectures spaa pages ny usa acm gandhi mishra and parthasarathy minimizing broadcast latency and redundancy in ad hoc networks networking transactions on aug george giakkoupis tight bounds for rumor spreading in graphs of a given conductance in proceedings of the international symposium on theoretical aspects of computer science stacs pages march george giakkoupis thomas sauerwald and alexandre stauffer randomized rumor spreading in dynamic graphs pages springer berlin heidelberg berlin heidelberg bernhard haeupler simple fast and deterministic gossip and rumor spreading in proceedings of the annual symposium on discrete algorithms soda pages siam bernhard haeupler and dahlia malkhi optimal gossip with direct addressing in proceedings of the acm symposium on principles of distributed computing podc pages new york ny usa acm shlomo hoory nathan linial and avi wigderson expander graphs and their applications bull amer math mark jerrum and alistair sinclair conductance and the rapid mixing property for markov chains the approximation of permanent resolved in proceedings of the annual acm symposium on theory of computing stoc pages new york ny usa acm karp schindelhauer shenker and vocking randomized rumor spreading in foundations of computer science proceedings annual symposium on pages david kempe jon kleinberg and alan demers spatial gossip and resource location protocols in proceedings of the annual acm symposium on theory of computing stoc pages new york ny usa acm damon and devavrat shah computing separable functions via gossip in proceedings of the annual acm symposium on principles of distributed computing podc pages new york ny usa acm calvin newport radio network lower bounds made easy in distributed computing international symposium disc austin tx usa october proceedings pages sarwate and dimakis the impact of mobility on gossip algorithms in infocom ieee pages april salil p vadhan pseudorandomness foundations and trends in theoretical computer science vol no pp
8
cooperative control of systems to locate source of an odor nov abhinav sinha rishemjit kaur ritesh kumar and amol bhondekar work targets the problem of odor source localization by systems a hierarchical cooperative control has been put forward to solve the problem of locating source of an odor by driving the agents in consensus when at least one agent obtains information about location of the source synthesis of the proposed controller has been carried out in a hierarchical manner of group decision making path planning and control decision making utilizes information of the agents using conventional particle swarm algorithm and information of the movement of filaments to predict the location of the odor source the predicted source location in the decision level is then utilized to map a trajectory and pass that information to the control level the distributed control layer uses sliding mode controllers known for their inherent robustness and the ability to reject matched disturbances completely two cases of movement of agents towards the source under consensus and formation have been discussed herein finally numerical simulations demonstrate the efficacy of the proposed hierarchical distributed control index source localization systems mas sliding mode control smc homogeneous agents cooperative control i ntroduction overview inspiration of odor source localization problem stems from behavior of biological entities such as mate seeking by moths foraging by lobsters prey tracking by mosquitoes and blue crabs and is aimed at locating the source of a volatile chemical these behaviors have long been mimicked by autonomous robot s chemical source tracking has attracted researchers around the globe due to its applications in both civilian and military domains a plethora of applications are possible some of which include detection of forest fire oil spills release of toxic gases in tunnels and mines gas leaks in industrial setup search and rescue of victims and clearing leftover mine after an armed conflict a plume containing filaments or odor molecules is generally referred to the downwind trail formed as a consequence of mixing of contaminant molecules in any kind of movement of air the dynamical optimization problem of odor source localization can be effectively solved using multiple robots working in cooperation the obvious advantages of leveraging multiagent systems mas are increased probability of success sinha is with school of mechatronics robotics indian institute of engineering science and technology and central scientific instruments organization csio india email kaur kumar bhondekar are with csio emails riteshkr amolbhondekar redundancy and improved overall operational efficiency and spatial diversity in having distributed sensing and actuation b motivation odor source localization is a three stage sensing maneuvering and control some of reported literature on odor source localization date back to when larcombe et al discussed such applications in nuclear industry by considering a chemical gradient based approach other works in relied heavily on sensing part using techniques such as chemotaxis infotaxis anemotaxis and fluxotaxis the efficiency of such algorithms was limited by the quality of sensors and the manner in which they were used these techniques also failed to consider turbulence dominated flow and resulted in poor tracking performance algorithms have been reported to maneuver the agents some of which include braitenberg style coli algorithm zigzag dung beetle approach 
silkworm moth style and their variants a tremendous growth of research attention towards cooperative control has been witnessed in the past decade but very few have addressed the problem of locating source of an odor hayes et al proposed a distributed cooperative algorithm based on swarm intelligence for odor source localization and experimental results proved multiple robots perform more efficiently than a single autonomous robot a particle swarm optimization pso algorithm was proposed by marques et al to tackle odor source localization problems to avoid trapping into local maximum concentrations jatmiko et al proposed modified pso algorithms based on electrical charge theory where neutral and charged robots has been used lu et al proposed a distributed coordination control protocol based on pso to address the problem it should be noted that simplified pso controllers are a type of controller and the operating region gets limited between global and local best this needs complicated obstacle avoidance algorithms and results in high energy expenditure lu et al also proposed a cooperative control scheme to coordinate multiple robots to locate odor source in which a particle filter has been used to estimate the location of odor source based on wind information a movement trajectory has been planned and finally a cooperative control scheme has been proposed to coordinate movement of robots towards the source motivated by these studies we have implemented a robust and powerful hierarchical cooperative control strategy to tackle the problem first layer is the group level in which the information about the source via instantaneous sensing and swarm intelligence is obtained second layer is designed to maneuver the agents via a simplified silkworm moth algorithm third layer is based on cooperative sliding mode control and the information obtained in the first layer is passed to the third layer as a reference to the tracking controller contributions major contributions of this paper are summarized below as opposed to existing works on cooperative control to locate source of odor we have considered a more general formulation by taking nonlinear dynamics of mas into account when the uncertain function is zero the problem reduces to stabilizing integrator dynamics the control layer is designed on the paradigms of sliding mode a robust and powerful control with inherent robustness and disturbance rejection capabilities the reaching law as well as the sliding manifold in this study are nonlinear and novel resulting in smoother control and faster reachability to the manifold use of sliding mode controller also helps in achieving a finite time convergence as opposed to asymptotic convergence to the equilibrium point the proposed control provides stability and ensures robustness even in the presence of bounded disturbances and matched uncertainties odor propagation is odor arrives in packets leading to wide fluctuations in measured concentrations plumes are also dynamic and turbulent as odor tends to travel downwind direction of the wind provides an effective information on relative position of the source hence we have used wind information based on a measurement model describing movement of filaments and concentration information from swarm intelligence to locate the source of odor formation keeping of agents to locate source of odor has also been demonstrated in this work paper organization after introduction to the study in section i remainder of this work in organized as follows section ii provides 
insights into preliminaries of spectral graph theory and sliding mode control section iii presents dynamics of mas and mathematical problem formulation followed by hierarchical distributed cooperative control scheme in section iv results and discussions have been carried out in section v followed by concluding remarks in section vi ii p reliminaries spectral graph theory for systems a directed graph also known as digraph is represented throughout in this paper by g v e a v is the nonempty set in which finite number of vertices or nodes are contained such that v n e denotes directed edge and is represented as e i j i j v i j a is the weighted adjacency matrix such that a a i j the possibility of existence of an edge i j occurs iff the vertex i receives the information supplied by the vertex j i j hence i and j are termed neighbours the set ni contains labels of vertices that are neighbours of the vertex i for the adjacency matrix a a i j if i j e a i j if i j e or i j a i j the laplacian matrix l is central to the consensus problem and is given by l d a where degree matrix d is a diagonal matrix pn d diag dn whose entries are di a i j a directed path from vertex j to vertex i defines a sequence comprising of edges i il j with distinct vertices ik v k incidence matrix b is also a diagonal matrix with entries or the entry is if there exists an edge between leader agent and any other agent otherwise it is furthermore it can be inferred that the path between two distinct vertices is not uniquely determined however if a distinct node in v contains directed path to every other distinct node in v then the directed graph g is said to have a spanning tree consequently the matrix l b has full rank physically each agent has been modelled by a vertex or node and the line of communication between any two agents has been modelled as a directed edge b sliding mode control sliding mode control smc is known for its inherent robustness the switching nature of the control is used to nullify bounded disturbances and matched uncertainties switching happens about a hypergeometric manifold in state space known as sliding manifold surface or hyperplane the control drives the system monotonically towards the sliding surface trajectories emanate and move towards the hyperplane reaching phase system trajectories after reaching the hyperplane get constrained there for all future time sliding phase thereby ensuring the system dynamics remains independent of bounded disturbances and matched uncertainties in order to push state trajectories onto the surface s x a proper discontinuous control effort usm t x needs to be synthesized satisfying the following inequality st x x x k with being positive and is referred as the reachability constant f t x usm x st x f t x usm x the motion of state trajectories confined on the manifold is known as sliding sliding mode exists if the state velocity vectors are directed towards the manifold in its neighbourhood under such consideration the manifold is called attractive trajectories starting on it remain there for all future time and trajectories starting outside it tend to it in an asymptotic manner hence in sliding motion f t x usm usm ueq is a solution generally referred as equivalent control is not the actual control applied to the system but can be thought of as a control that must be applied on an average to maintain sliding motion and is mainly used for analysis of sliding motion x iii dynamics of m ulti s ystems p roblem f ormulation consider first order homogeneous mas 
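As a concrete illustration of the graph quantities used below, the following sketch builds the adjacency, degree, Laplacian and leader-incidence matrices for a small directed topology and checks that L + B has full rank; the four-agent cycle is an arbitrary example, not the topology used in the paper's simulations.

import numpy as np

def graph_matrices(A, leader_edges):
    # A[i, j] = 1 if agent i receives information from agent j (directed edge (i, j)).
    D = np.diag(A.sum(axis=1))        # degree matrix, d_i = sum_j a_ij
    L = D - A                         # graph Laplacian
    B = np.diag(leader_edges)         # 1 where an agent hears the (virtual) leader
    return L, B

# Example: four followers on a directed cycle, agent 0 also hears the leader.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
L, B = graph_matrices(A, leader_edges=[1, 0, 0, 0])
# With a spanning tree rooted at the leader, L + B is nonsingular:
print(np.linalg.matrix_rank(L + B))   # expect full rank (4) for this example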
interacting among themselves and their environment in a directed topology under such interconnection information about the predicted location of source of the odor through instantaneous plume sensing is not available globally however local information is obtained by communication among agents whenever at least one agent attains some information of interest the governing dynamics of first order homogeneous mas consisting of n agents is described by nonlinear differential equations as t f xi t usmi t i n where f x rm is assumed to be locally lipschitz over some fairly large domain dl with lipschitz constant and denotes the uncertain nonlinear dynamics of each agent also x rm is a domain in which origin is contained xi and usmi are the state of ith agent and the associated control respectively represents bounded exogenous disturbances that enter the system from input channel k the problem of odor source localization can be viewed as a cooperative control problem in which control laws usmi need to be designed such that the conditions kxi xj k and kxi k are satisfied here xs represents the probable location of odor source is an accuracy parameter iv h ierarchical d istributed c ooperative c ontrol s cheme in order to drive the agents towards consensus to locate the source of odor we propose the following hierarchy a group decision making this layer utilizes both concentration and wind information to predict the location of odor source then the final probable position of the source can be described as tk pi tk qi tk with pi tk as the oscillation centre according to a simple particle swarm optimization pso algorithm and qi tk captures the information of the wind denotes additional weighting coefficient remark the arguments in represent data captured at t tk instants k as the sensors equipped with the agents can only receive data at discrete instants it should be noted that is the tracking reference that is fed to the controller now we present detailed description of obtaining pi tk and qi tk simple pso algorithm that is commonly used in practice has the following form vi tk upso tk xi xi tk vi here is the inertia factor vi tk and xi tk represent the respective velocity and position of ith agent this commonly used form of pso can also be used as a type controller however for the disadvantages mentioned earlier we do not use pso as our final controller pso control law upso can be described as upso xl tk xi tk xg tk xi tk in xl tk denotes the previous best position and xg tk denotes the global best position of neighbours of ith agent at time t tk and are acceleration coefficients since every agent in mas can get some information about the magnitude of concentration via local communication position of the agent with a global best can be easily known by the idea of pso we can compute the oscillation centre pi tk as pi tk xl tk xg tk where xl tk arg xg tk arg max g xl g xi tk t max g xg max aij g xj tk t thus from upso tk pi tk xi tk which is clearly a controller with proportional gain as highlighted earlier in order to compute qi tk movement process of a single filament that consists several order molecules has been modelled if xf t denotes position of the filament at time t t represent mean airflow velocity and n t be some random process then the model can be described as t t n t without loss of generality we shall regard the start time of our experiment as t from we have z t z t xf t n xs xs denotes the real position of the odor source at t assumption we assume the presence of a single stationary 
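A minimal sketch of the group decision layer: the PSO-style oscillation centre from concentration information is combined with a wind-based back-projection of a sensed filament. The midpoint form of p_i, the omission of the noise term in q_i and the way the weighting coefficient lambda enters are assumptions where the extracted formulas are incomplete.

import numpy as np

def oscillation_centre(x_local_best, x_global_best):
    # p_i(t_k): midpoint of the agent's own best position and the best position
    # among its neighbours, both ranked by measured concentration.
    return 0.5 * (x_local_best + x_global_best)

def wind_estimate(x_filament, mean_wind, elapsed_time):
    # q_i(t_k): back-project a sensed filament against the mean airflow,
    # x_s ~ x_f(t_k) - v_bar * (t_k - t_0)  (accumulated noise term omitted).
    return x_filament - mean_wind * elapsed_time

def predicted_source(x_local_best, x_global_best, x_filament, mean_wind,
                     elapsed_time, lam=0.5):
    p = oscillation_centre(x_local_best, x_global_best)
    q = wind_estimate(x_filament, mean_wind, elapsed_time)
    # x_hat_s = p_i + lambda * q_i as in the decision layer; the exact
    # normalization of lambda was lost in extraction.
    return p + lam * q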
odor source thus xs t xs implications from remark require to be implemented at t tk instants hence xf tk t x t x n xs tk xf tk xs tk tk w tk in w tk pt tk and pt n remark in the accumulated average of tk and w tk can also be considered possible filament releasing time from xf tk tk xs tk w tk distributed control in the control layer we design a robust and powerful controller on the paradigms of sliding mode it is worthy to mention that based on instantaneous sensing and swarm information at different times each agent can take up the role of a virtual leader whose opinion needs to be kept by other agents from has been provided to the controller as the reference to be tracked the tracking error is formulated as ei t xi t tk t tk in terms of graph theory we can reformulate the error variable as t l b ei t l b xi t tk the above relationship can be viewed as the information about xs tk with some noise w tk hence from this point onward we shall denote l b as next we formulate the sliding manifold qi tk xs tk w tk si t tanh t therefore in can now be constructed from b path planning since detection of information of interest is tied to the threshold value defined for the sensors the next state is updated taking this threshold value into account thus the blueprints of path planning can be described in terms of three types of behavior surging if the ith agent receives data well above threshold we say that some clues about the location of the source has been detected if the predicted position of the source at t tk as seen by ith agent be given as xsi tk then the next state of the agent is given mathematically as xi xsi tk casting if the ith agent fails to detect information at any particular instant then the next state is obtained using the following relation xi kxi tk xsi tk k xsi tk which is a nonlinear sliding manifold offering faster reachability to the surface represents the speed of convergence to the surface and denotes the slope of the nonlinear sliding manifold these are coefficient weighting parameters that affect the system performance the forcing function has been taken as t m t sign si t in m is a small offset such that the argument of function remains non zero and w is the gain of the controller the parameter facilitates additional gain tuning in general m this novel reaching law contains a nonlinear gain and provides faster convergence towards the manifold moreover this reaching law is smooth and chattering free which is highly desirable in mechatronic systems to ensure safe operation theorem given the dynamics of mas connected in a directed topology error candidates and the sliding manifold the stabilizing control law that ensures accurate reference tracking under consensus can be described as usmi t m t sign si t search and exploration if all the agents fail to detect odor clues for a time segment tk for some l n and being the time interval for which no clues are detected or some constraint on wait time placed at the start of the experiment then the next state is updated as xi xsi tk in is some random parameter with as its standard deviation and as its mean f xi t tk where t w k sup remark as mentioned earlier this ensures and hence its non singularity the argument of tanh is always finite and satisfies t for z thus is also invertible moreover the non singularity of h can be established directly if the digraph contains a spanning tree with leader agent as a root proof from and we can write t t t t t tanh t t tanh t t tk with as defined in theorem from can be further simplified as t f 
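A discretized sketch of the distributed control layer is given below: per-agent errors are weighted through L + B, passed through a tanh-type nonlinear manifold, and driven by a smooth reaching law. The functional forms and every gain value are illustrative reconstructions of the structure described above, not the paper's exact controller.

import numpy as np

def smc_step(x, x_hat_s, f, L, B, beta=1.0, gamma=1.0, m=0.05, W=2.0, rho=0.5, dt=0.01):
    # One discrete control update for all agents under nominal (disturbance-free) dynamics.
    H = L + B
    e = x - x_hat_s                          # per-agent tracking errors
    eps = H @ e                              # graph-weighted errors, epsilon = (L + B) e
    s = eps + beta * np.tanh(gamma * eps)    # nonlinear sliding manifold
    # smooth reaching law: s_dot = -W * (m + |s|)**rho * sign(s)
    reach = -W * (m + np.abs(s)) ** rho * np.sign(s)
    # invert ds/deps = I + beta*gamma*diag(sech^2) and deps/dx = H to get the control
    Dm = np.eye(len(x)) + beta * gamma * np.diag(1.0 - np.tanh(gamma * eps) ** 2)
    eps_dot = np.linalg.solve(Dm, reach)
    u = np.linalg.solve(H, eps_dot) - f(x)   # cancel the drift and enforce reaching
    x_next = x + dt * (f(x) + u)
    return x_next, s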
xi t usmi t tk using the control that brings the state trajectories on to the sliding manifold can now be written as usmi t m t sign si t f xi t tk thus the derivative of lyapunov function candidate is negative definite confirming stability in the sense of lyapunov since ksi k and due to the nature of its arguments therefore and together provide implications that si and the surface is globally attractive this ends the proof r esults and discussions interaction topology of the agents represented as a digraph has been shown here in figure the associated graph matrices have been described below the computer simulation has been performed assuming that agent appears as virtual leader to all other agents making the topology fixed and directed for this study it should be noted that the theory developed so far can be extended to the case of switching topologies and shall be dealt in future this concludes the proof remark the control can be practically implemented as it does not contain the uncertainty term it is crucial to analyze the necessary and sufficient conditions for the existence of sliding mode when control protocol is used we regard the system to be in sliding mode if for any time system trajectories are brought upon the manifold si t and are constrained there for all time thereafter for t sliding motion occurs theorem consider the system described by error candidates sliding manifold and the control protocol sliding mode is said to exist in vicinity of sliding manifold if the manifold is attractive trajectories emanating outside it continuously decrease towards it stating alternatively reachability to the surface is ensured for some reachability constant moreover stability can be guaranteed in the sense of lyapunov if gain is designed as sup proof let us take into account a lyapunov function candidate vi taking derivative of along system trajectories yield si si f xi t usmi t tk fig topology in which agents are connected l b d l b agents have the following dynamics sin cos t sin cos t sin cos t sin cos t where m is called reachability constant for sup we have m ksi k k m ksi k k sin cos t substituting the control protocol in we have si m sign si in this study advection model given in has been used to simulate the plume with both additive and multiplicative disturbances the initial conditions for simulation are taken to be large values far away from the equilibrium point time varying disturbance has been taken as sin accuracy parameter and maximum mean airflow velocity other key design parameters are mentioned in table agents progressing towards the source source info position of agents m direction of movement of agents towards the source direction of movement of filaments released from the source true odor source time progression for agents sec fig agents in consensus to locate source of odor agents progressing towards the source in parallel formation position of agents m formation gap true odor source source info movement of away from source odor source location agent initial point agent initial point agent initial point agent initial point agent initial point agent terminal point agent terminal point agent terminal point agent terminal point agent terminal point time progression for agents sec fig agents in formation to locate source of odor tracking errors control signals u i t norm of error variables e i t time sec fig norm of tracking errors time sec fig control signals during consensus position of the source sliding manifolds surface variables s i t time sec fig sliding manifolds 
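Putting the pieces together, a closed-loop consensus run can be sketched as below, reusing smc_step from the previous sketch; the five-agent topology, the sin/cos drift, the disturbance amplitude and the initial conditions are stand-ins for the paper's values, which were lost in extraction.

import numpy as np

def drift(x, t):
    return np.sin(x) + np.cos(t)             # assumed form of the uncertain dynamics

A = np.array([[0, 0, 0, 0, 0],               # agent 0 hears only the virtual leader
              [1, 0, 0, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 1, 0, 0, 0],
              [0, 0, 1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
B = np.diag([1, 0, 0, 0, 0])

x = np.array([8.0, -6.0, 5.0, -7.0, 9.0])    # large initial conditions (illustrative)
x_hat_s = np.full(5, 2.0)                    # source estimate from the decision layer
dt, T = 0.01, 10.0
for k in range(int(T / dt)):
    t = k * dt
    d = 0.1 * np.sin(t)                      # bounded matched disturbance (assumed size)
    x, s = smc_step(x, x_hat_s, lambda z: drift(z, t), L, B, dt=dt)
    x = x + dt * d                           # disturbance enters via the input channel
print(np.abs(x - x_hat_s).max())             # errors should shrink to a small neighbourhood of zero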
during consensus table i values of the design parameters used in simulation m w figure shows agents coming to consensus in finite time to locate the source of odor and figure shows agents moving in parallel formation to locate the odor source norm of the tracking errors has been depicted in figure it is evident that the magnitude of error is very small plot of control signals during consensus has been shown in figure and the plot of sliding manifolds has been shown in figure vi c oncluding remarks the problem of odor source localization by mas has been dealt with in a hierarchical manner in this work the problem translates into a cooperative control problem wherein agents are driven towards consensus to locate the true odor source in finite time through computer simulations it has been confirmed that the proposed strategy is faster and provides accurate tracking even in the presence of time varying disturbances r eferences larcombe robotics in nuclear engineering computer assisted teleoperation in hazardous environments with particular reference to radiation fields united states graham and trotman inc rozas morales and vega artificial smell detection for robotic navigation in advanced robotics robots in unstructured environments fifth international conference on june pp genovese dario magni and odetti self organizing behavior and swarm intelligence in a pack of mobile miniature robots in search of pollutants in proceedings of the international conference on intelligent robots and systems vol jul pp buscemi prati and sandini cellular robotics behaviour in polluted environments in proceedings of the international symposium on distributed autonomous robotic systems russell laying and sensing odor markings as a strategy for assisting mobile robot navigation tasks ieee robotics automation magazine vol no pp sep russell andrew odour detection by mobile robots river edge nj usa world scientific publishing russell shepherd and wallace a comparison of reactive robot chemotaxis algorithms robotics and autonomous systems vol no pp online available http vergassola villermaux and shraiman infotaxis as a strategy for searching without gradients nature vol no pp online available https j farrell pang and li plume mapping via hidden markov methods ieee transactions on systems man and cybernetics part b cybernetics vol no pp dec pang and j farrell chemical plume source localization ieee transactions on systems man and cybernetics part b cybernetics vol no pp oct zarzhitsky approach to chemical source localization using mobile robotic swarms dissertation braitenberg vehicles experiments in synthetic psychology boston ma usa mit press lytridis virk rebour and kadar odorbased navigational strategies for mobile agents adapt vol no pp apr online available http ishida suetsugu nakamoto and moriizumi study of autonomous mobile sensing system for localization of odor source using gas sensors and anemometric sensors sensors and actuators a physical vol no pp online available http russell chemical source location and the robomole project australian robotics and automation association pp marques and almeida electronic odour source localization in international workshop on advanced motion control proceedings cat april pp marques nunes and de almeida mobile robot navigation thin solid films vol no pp proceedings from the international school on gas sensors in conjunction with the european school of the nose network online available http ren and beard consensus seeking in multiagent systems under dynamically changing 
interaction topologies ieee transactions on automatic control vol no pp may yu chen ren kurths and zheng distributed higher order consensus protocols in multiagent dynamical systems ieee transactions on circuits and systems i regular papers vol no pp aug hayes martinoli and goodman swarm robotic odor localization optimization and validation with real robots robotica vol no pp online available http kennedy and eberhart particle swarm optimization in neural networks ieee international conference on vol nov pp marques nunes and de almeida particle swarmbased olfactory guided search autonomous robots vol no pp jun online available https jatmiko sekiyama and fukuda a mobile robot for odor source localization in dynamic with obstacles environment theory simulation and measurement ieee computational intelligence magazine vol no pp may lu liu and qiu a distributed architecture with two layers for odor source localization in systems in ieee congress on evolutionary computation july pp lu and han in a system for odor source localization in iecon annual conference of the ieee industrial electronics society nov pp fan chung spectral graph theory ser cbms regional conference series in mathematics ams and cbms vol online available http david young vadim utkin and umit ozguner a control engineer s guide to sliding mode control ieee transactions on control systems technology vol no pp may cao meng wu zeng and li consensus based distributed summation algorithm for gasleakage source localization using a wireless sensor network in proceedings of the chinese control conference july pp
3
coresets for dependency networks alejandro molina first last tu dortmund de cs department tu dortmund germany alexander munteanu first last tu dortmund de oct cs department tu dortmund germany kristian kersting last cs tu darmstadt de cs dept and centre for cogsci tu darmstadt germany abstract many applications infer the structure of a probabilistic graphical model from data to elucidate the relationships between variables but how can we train graphical models on a massive data set in this paper we show how to construct data sets which can be used as proxy for the original data and have provably bounded worst case gaussian dependency networks dns cyclic directed graphical models over gaussians where the parents of each variable are its markov blanket specifically we prove that gaussian dns admit coresets of size independent of the size of the data set unfortunately this does not extend to dns over members of the exponential family in general as we will prove poisson dns do not admit small coresets despite this result we will provide an argument why our coreset construction for dns can still work well in practice on count data to corroborate our theoretical results we empirically evaluated the resulting core dns on real data sets the results demonstrate significant gains over no or naive even in the case of count data introduction artificial intelligence and machine learning have achieved considerable successes in recent years and an number of disciplines rely on them data is now ubiquitous and there is great value from understanding the data building probabilistic graphical models to elucidate the relationships between variables in the big data era however scalability has become crucial for any useful machine learning approach in this paper we consider the problem of training graphical models in particular dependency networks heckerman et al on massive data sets they are cyclic directed graphical models where the parents of each variable are its markov blanket and have been proven successful in various tasks such as collaborative filtering heckerman et al phylogenetic analysis carlson et al genetic analysis dobra phatak et al network inference from sequencing data allen and liu and traffic as well as topic modeling hadiji et al specifically we show that dependency networks over one of the most prominent type of distribution in statistical machine coresets of size independent of the size of the data set coresets are weighted subsets of the data which guarantee that models fitting them will also provide a good fit for the original data set and have been studied before for clustering badoiu et al feldman et al lucic et al classification et al reddi et al regression drineas et al dasgupta et al geppert et al and the smallest enclosing ball problem badoiu and clarkson feldman et al agarwal and sharathkumar we refer to phillips for a recent extensive literature overview our contribution continues this line of research and generalizes the use of coresets to probabilistic graphical modeling unfortunately this coreset result does not extend to dependency networks over members of the exponential family in general we prove that dependency networks over poisson random variables allen and liu hadiji et al do not admit sublinear size coresets every single input point is important for the model and needs to appear in the coreset this is an important negative result since count primary target of poisson at the center of many scientific endeavors from citation counts to web page hit counts from counts of 
procedures in medicine to the count of births and deaths in census from counts of words in a document to the count of gamma rays in physics here modeling one event such as the number of times a certain lab test yields a particular result can provide an idea of the number of potentially invasive procedures that need to be performed on a patient thus elucidating the relationships between variables can yield great insights into massive count data therefore despite our result we will provide an argument why our coreset construction for dependency networks can still work well in practice on count data to corroborate our theoretical results we empirically evaluated the resulting core dependency networks cdns on several real data sets the results demonstrate significant gains over no or naive even for count data we proceed as follows we review dependency networks dns prove that gaussian dns admit sublinear size coresets and discuss the possibility to generalize this result to count data before concluding we illustrate our theoretical results empirically dependency networks most of the existing ai and machine learning literature on graphical models is dedicated to binary multinominal or certain classes of continuous gaussian random variables undirected models aka markov random fields mrfs such as ising binary random variables and potts multinomial random variables models have found a lot of applications in various fields such as robotics computer vision and statistical physics among others whereas mrfs allow for cycles in the structures directed models aka bayesian networks bns required acyclic directed relationships among the random variables dependency networks dns focus of the present concepts from directed and undirected worlds and are due to heckerman et al specifically like bns dns have directed arcs but they allow for networks with cycles and arcs akin to mrfs this makes dns quite appealing for many applications because we can build multivariate models from univariate distributions allen and liu yang et al hadiji et al while still permitting efficient structure learning using local estimtatiors or gradient tree boosting generally if the data are fully observed learning is done locally on the level of the conditional probability distributions for each variable mixing directed and indirected as needed based on these local distributions samples from the joint distribution are obtained via gibbs sampling indeed the gibbs sampling neglects the question of a consistent joint probability distribution and instead makes only use of local distributions the generated samples however are often sufficient to answer many probability queries formally let x x x d denote a random vector and x its instantiation a dependency network dn on x is a pair g where g v e is a directed possibly cyclic graph where each node in v d d corresponds to the random variable x i in the set of directed edges e v v i i i d each edge models a dependency between variables if there is no edge between i and j then the variables x i and x j are conditionally independent given the other variables x j indexed by d i j in the network we refer to the nodes that have an edge pointing to x i as its parents denoted by pai x j j i e pi i d is a set of conditional probability distributions associated with each variable x i pi where pi p x i pai p x i as example of such a local model consider poisson conditional probability distributions as illustrated in fig left i x p x i pai e x i here highlights the fact that the mean can have a 
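A small sketch of how a Poisson dependency network is used in practice: each local conditional is a Poisson with a log-linear mean in the remaining variables, the pseudo-log-likelihood is the sum of local log-likelihoods, and samples are drawn with an unordered Gibbs sweep. The parameter layout and names are illustrative.

import numpy as np
from scipy.special import gammaln

def local_mean(x, theta_i, i):
    # log-linear conditional mean: lambda_i = exp(theta_i0 + sum_{j != i} theta_ij x_j)
    mask = np.ones(len(x), dtype=bool); mask[i] = False
    return np.exp(theta_i[0] + theta_i[1:] @ x[mask])

def pseudo_log_likelihood(X, Theta):
    # sum over data points and variables of log Poisson(x_i | lambda_i)
    ll = 0.0
    for x in X:
        for i, theta_i in enumerate(Theta):
            lam = local_mean(x, theta_i, i)
            ll += x[i] * np.log(lam) - lam - gammaln(x[i] + 1)
    return ll

def gibbs_sample(Theta, n_sweeps=100, rng=np.random.default_rng(0)):
    d = len(Theta)
    x = rng.poisson(1.0, size=d).astype(float)
    for _ in range(n_sweeps):
        for i in rng.permutation(d):          # unordered Gibbs sweep over the variables
            x[i] = rng.poisson(local_mean(x, Theta[i], i))
    return x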
functional form that is dependent on x i s parents often we will refer to it simply as the construction of the local conditional probability distribution is similar to the multinomial bayesian network case however in the case of dns the graph is not necessarily acyclic and p x i typically has an infinite range and hence can not be represented using a finite table of probability values finally the full joint distribution is simply defined as the product of local distributions y p x p x i d also called pseudo likelihood for the poisson case this reads i p x y d e x i note however that doing so does not guarantee the existence of a consistent joint distribution a joint distribution of which they are the conditionals bengio et al however have recently proven the existence of a consistent distribution per given evidence which does not have to be known in closed form as long as an unordered gibbs sampler converges core dependency networks as argued learning dependency networks dns amounts to determining the conditional probability distributions from a given set of n training instances xi rd representing the rows of the data matrix x over d variables assuming that p x i pai is parametrized as a generalized linear model glm mccullagh and nelder this amounts to estimating the parameters i of the glm associated with each variable x i since this completely determines the local distributions but p x i pai will possibly depend on all other variables in the network and these dependencies define the structure of the network this view of training dns as fitting d glms to the data allows us to develop core dependency networks cdns sample a coreset and train a dn over certain members of the glm family on the sampled corest relative frequency fit data data fit x x x number of goals figure illustration of dependency networks dns using poissons left the number of goals scored in soccer games follows a poisson distribution the plot shows the distribution of home goals in the season of the german bundesliga by the home team the home team scored on average goals per game right example structure of a poisson dn the conditional distribution of each count variable given its neighbors is a poisson distribution similar to a bayesian network a poisson dn is directed however it also contains cycles best viewed in color a coreset is a possibly weighted and usually considerably smaller subset of the input data that approximates a given objective function for all candidate solutions definition let x be a set of points from a universe u and let be a set of candidate solutions let f u be a measurable function then a set c x is an of x for f if x f c f x we now introduce the formal framework that we need towards the design of coresets for learning dependency networks a very useful structural property for based objective or loss functions is the concept of an embedding definition embedding an embedding for the columnspace of x is a matrix s such that rd we can construct a sampling matrix s which forms an embedding with constant probabilty in the following way let u be any orthonormal basis for the columnspace of x this basis can be obtained from the singular value decomposition svd x u t of the data matrix now let rank u rank x and define the leverage scores li for i n now we fix a sampling size parameter k o log sample the input points with probability qi min k li and reweight their contribution to the loss function by wi note that for the sum of squares loss this corresponds to defining a diagonal sampling matrix s by sii qi 
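The coreset construction just described takes only a few lines of numpy; the normalization q_i = min(1, k l_i / rank(X)) is an assumption consistent with the stated expected sample size, and rows are rescaled by 1 / sqrt(q_i) exactly as in the diagonal sampling matrix S.

import numpy as np

def leverage_score_coreset(X, k, rng=np.random.default_rng(0)):
    # Leverage scores l_i = ||U_i||^2 from a thin SVD of the data matrix X.
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    lev = np.sum(U ** 2, axis=1)
    r = np.linalg.matrix_rank(X)
    q = np.minimum(1.0, k * lev / r)            # inclusion probabilities (expected size about k)
    keep = rng.random(X.shape[0]) < q
    # Rescaling kept rows by 1/sqrt(q_i) makes squared-norm quantities unbiased.
    C = X[keep] / np.sqrt(q[keep])[:, None]
    return C, np.where(keep)[0], 1.0 / q[keep]  # coreset, indices, loss weights w_i = 1/q_i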
with probability qi and sii otherwise also note that the expected number of samples is k o log which also holds with constant probability by markov s inequality moreover to give an intuition why this works note that for any fixed rd we have x x xi e qi xi qi the significantly stronger property of forming an embedding according to definition follows from a matrix approximation bound given in rudelson and vershynin drineas et al lemma let x be an input matrix with rank x let s be a sampling matrix constructed as stated above with sampling size parameter k o log then s forms an embedding for the columnspace of x with constant probability proof let x u t be the svd of x by theorem in drineas et al there exists an absolute constant c such that r t t log k e ku s su u t u k c ku kf ku k k r log k c k where we used the fact that ku kf and ku k by orthonormality of u the last inequality holds by choice of k log for a large enough absolute constant d such that d since d log k log log log log k log log log d log log d log d c by an application of markov s inequality and rescaling we can assume with constant probability ku t s t su u t u k we show that this implies the embedding property to this end fix rd t x t s t t x t t v t s t su t t v t u t t v u t s t su u t u t ku t s t su u t u k t ku t s t su u t u k the first inequality follows by submultiplicativity and the second from rotational invariance of the spectral norm finally we conclude the proof by inequality the question arises whether we can do better than o log one can show by reduction from the coupon collectors theorem that there is a lower bound of log matching the upper bound up to its dependency on the hard instance is a dm m n orthonormal matrix in which the scaled canonical basis id is stacked times the leverage scores are all equal to implying a uniform sampling distribution with probability for each basis vector any rank d preserving sample must comprise at least one of them this is exactly the coupon collectors theorem with d coupons which has a lower bound of d log d motwani and raghavan the fact that the sampling is without replacement does not change this since the reduction holds for arbitrary large m creating sufficient multiple copies of each element to simulate the sampling with replacement tropp now we know that with constant probability over the randomness of the construction algorithm s satisfies the embedding property for a given input matrix x this is the structural key property to show that actually sx is a coreset for gaussian linear regression models and dependency networks consider g a gaussian dependency network gdn a collection of gaussian linear regression models pi x i i n x i i d on an arbitrary digraph structure g heckerman et al the logarithm of the likelihood besag of the above model is given by y x ln l ln pi ln pi a maximum likelihood estimate can be obtained by maximizing this function with respect to d which is equivalent to minimizing the gdn loss function x fg x kx i x i theorem given s an embedding for the columnspace of x as constructed above sx is an of x for the gdn loss function proof fix an arbitrary d rd consider the affine map d rd defined by i id i ei clearly extends its argument from d to d dimensions by inserting a entry at position i and leaving the other entries in their original order let i i rd note that for each i d we have i i x i x i and each i is a vector in rd thus the triangle inequality and the universal quantifier in definition guarantee that x x i i x i i x i i x x i i the 
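Training a Gaussian dependency network on the (coreset) data then amounts to d least-squares problems, one per variable, with the remaining columns as regressors; the sketch below uses a plain lstsq solver as a stand-in for whatever estimator one prefers, and the commented usage relies on the sampling sketch above.

import numpy as np

def fit_gdn(C):
    # Rows of C are assumed already rescaled by 1/sqrt(q_i) if C is a coreset.
    # For each variable i, regress column i on all other columns; the coefficient
    # vectors define the local Gaussian conditionals (nonzeros give the Markov blanket).
    n, d = C.shape
    Theta = np.zeros((d, d))
    for i in range(d):
        others = np.delete(np.arange(d), i)
        beta, *_ = np.linalg.lstsq(C[:, others], C[:, i], rcond=None)
        Theta[i, others] = beta
    return Theta

def gdn_loss(X, Theta):
    # f_G = sum_i || X_{:,i} - X theta_i ||^2 with theta_ii = 0
    return sum(np.linalg.norm(X[:, i] - X @ Theta[i]) ** 2 for i in range(X.shape[1]))

# C, idx, w = leverage_score_coreset(X, k)   # from the sketch above
# Theta_core = fit_gdn(C)                    # by the corollary, near-optimal for the full X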
claim follows by substituting identity it is noteworthy that computing one single coreset for the columnspace of x is sufficient rather than computing d coresets for the d different subspaces spanned by x from theorem it is straightforward to show that the minimizer found for the coreset is a good approximation of the minimizer for the original data corollary given an c of x for the gdn loss function let fg c then it holds that fg x min fg x proof let fg x then fg x fg c fg c fg x fg x the first and third inequalities are direct applications of the coreset property the second holds by optimality of for the coreset and the last follows from moreover the coreset does not affect inference within gdns recently it was shown for bayesian gaussian linear regression models that the entire multivariate normal distribution over the parameter space is approximately preserved by embeddings geppert et al which generalizes the above this implies that the coreset yields a useful pointwise approximation in markov chain monte carlo inference via random walks like the sampler in heckerman et al negative result on coresets for poisson dns naturally the following question arises do sublinear size coresets exist for dependency networks over the exponential family in general unfortunately the answer is no indeed there is no sublinear size coreset for the simpler problem of poisson regression which implies the result for poisson dns we show this formally by reduction from the communication complexity problem known as indexing to this end recall that the negative for poisson regression is mccullagh and nelder winkelmann x y exp xi yi xi ln yi theorem let be a data structure for d x y that approximates likelihood queries for poisson regression such that rd if exp n then requires n bits of storage proof we reduce from the indexing problem which is known to have n randomized communication complexity jayram et al alice is given a vector b n she produces for every i with bi the points xi i where i i denote the nth unit roots in the plane the vertices of a regular of radius r cos n n in canonical order the corresponding counts are set to yi she builds and sends of size s n to bob whose task is to guess the bit bj he chooses to query j r cos n r note j that this affine hyperplane separates r from the other scaled unit roots since it passes exactly through r mod n and r mod n also all points are within distance from each other by construction and consequently from the hyperplane thus xi for all i j if bj then xj does not exist and the cost is at most x exp xi yi xi ln yi x if bj then xj is in the expensive halfspace and at distance exactly j t j xj r cos n r cos n n so the cost is bounded below by exp n n exp exp n given bob can distinguish these two cases based on the data structure only by deciding whether is strictly smaller or larger than exp consequently s n n since this solves the indexing problem note that the bound is given in bit complexity but restricting the data structure to a sampling based coreset and assuming every data point can be expressed in o d log n bits this means we still have a lower bound of k logn n samples corollary every sampling based coreset for poisson regression with approximation factor exp n as in theorem requires at least k logn n samples at this point it seems very likely that a similar argument can be used to rule out any o n constant approximation algorithm this remains an open problem for now why core dns for count data can still work so far we have a quite pessimistic view on 
extending cdns beyond gaussians in the gaussian setting where the loss is measured in squared euclidean distance the number of important points having significantly large leverage scores is bounded essentially by o d this is implicit in the original early works drineas et al and has been explicitly formalized later langberg and schulman clarkson and woodruff it is crucial to understand that this is an inherent property of the norm function and thus holds for arbitrary data for the poisson glm in contrast we have shown that its loss function does not come with such properties from scratch we constructed a worst case scenario where basically every single input point is important for the model and needs to appear in the coreset usually this is not the case with statistical models where the data is assumed to be generated from some generating distribution that fits the model assumptions consider for instance a data reduction for gaussian linear regression via leverage score sampling uniform sampling it was shown that given the data follows the model assumptions of a gaussian distribution the two approaches behave very similarly or to put it another way the leverage scores are quite uniform in the presence of more and more outliers generated by the heavier tails of tdistributions the leverage scores increasingly outperform uniform sampling ma et al the poisson model yi poi exp xi though being the standard model for count data suffers from its inherent limitation on equidispersed data since e yi v yi exp xi count data however is often overdispersed especially for large counts this is due to unobserved variables or problem specific heterogeneity and the poisson model is known to be inferior for data which specifically follows the poisson model but turns out to be more powerful in modeling the effects that can not be captured by the simple poisson model it has wide applications for instance in econometric elasticity problems we review the poisson model for count data winkelmann yi poi exp xi ui exp xi vi vi ln ui n a natural choice for the parameters of the distribution is in which case we have e yi exp xi exp xi v yi e yi exp e yi it follows that v yi exp xi exp xi exp xi where a constant that is independent of xi controls the amount of overdispersion taking the limit for we arrive at the simple model since the distribution of vi ln ui tends to the deterministic dirac delta distribution which puts all mass on the inference might aim for the poisson model directly as in zhou et al or it can be performed by maximum likelihood estimation of the simple poisson model the latter provides a consistent estimator as long as the mean function is correctly specified even if higher moments do not possess the limitations inherent in the simple poisson model winkelmann summing up our review on the count modeling perspective we learn that preserving the loglinear mean function in a poisson model is crucial towards consistency of the estimator moreover modeling counts in a model gives us intuition why leverage score sampling can capture the underlying linear model accurately in the poisson model u follows a distribution it thus holds for ln ln u v that v n in cdn uniform full training data sample size in percentage cdn uniform full training data sample size in percentage negative log pseudo likelihood cdn uniform full cdn uniform full training data sample size in percentage cdn uniform full log time in hours log time in minutes log rmse root mean square error log rmse root mean square error negative poisson 
pseudo negative gaussian pseudo training data sample size in percentage cdn uniform full training data sample size in percentage rmse training data sample size in percentage training time figure performance the lower the better of gaussian cdns on mnist upper row and poisson cnds on the traffic dataset lower row shown are the negative log pseudo likelihood left the squared error loss middle in as well as the training time right in on the for different proportions of the data sampled x axis please note the jump in the after as one can see cdns blue quickly approach the predictive performance of the full dataset full black uniform sampling uniform red does not perform as well as cdns moreover cdns can be orders of magnitude faster than dns on the full dataset and scale similar to uniform sampling this is also supported by the vertical lines they denote the mean performances the more to the left the better on the top axes best viewed in color by independence of the observations which implies ln n in omitting the bias in each intercept term which can be cast into x we notice that this yields again an ordinary least squares problem ln defined in the columspace of x there is still a missing piece in our argumentation in the previous section we have used that the coreset construction is an embedding for the columnspace of the whole data set including the dependent variable for x ln we face two problems first is only implicitly given in the data but is not explicitly available second is a vector derived from x in our setting and might be different for any of the d instances fortunately it was shown via more complicated arguments drineas et al that it is sufficient for a good approximation if the sampling is done obliviously to the dependent variable the intuition comes from the fact that the loss of any point in the subspace can be expressed via the projection of ln onto the subspace spanned by x and the residual of its projection a good approximation of the subspace implicitly approximates the projection of any fixed vector which is then applied to the residual vector of the orthogonal projection this solves the first problem since it is only necessary to have a subspace embedding for x the second issue can be addressed by increasing the sample size by a factor of o log d for boosting the error probability to o and taking a union bound sample portion mnist gcdn gudn traffic pcdn pudn table comparison of the empirical relative error the lower the better best results per dataset are bold both gaussian gcdns and poisson pcdns cdns recover the model well with a fraction of the training data uniformly sampled dns udns lag behind as the sample size drops empirical illustration our intention here is to corroborate our theoretical results by investigating empirically the following questions how does the performance of cdns compare to dns with access to the full training data set and to a uniform sample from the training data set and how does the empirical error behave according to the sample sizes do coresets affect the structure recovered by the dn to this aim we implemented c dns in python calling all experiments ran on a linux machine cores gpus and ram benchmarks on mnist and traffic data we considered two datasets in a first experiment we used the data set of handwritten labeled digits we employed the training set consisting of images each with pixels for a total of measurements and trained gaussian dns on it the second data set we considered contains traffic count measurements on selected roads 
around the city of cologne in germany ide et al it consists of timestamped measurements taken by sensors for a total of measurements on this dataset we trained poisson dns for each dataset we performed fold for training a full dn full using all the data leverage score sampling coresets cdns and uniform samples uniform for different sample sizes we then compared the predictions made by all the dns and the time taken to train them for the predictions on the mnist dataset we clipped the predictions to the range for all the dns for the traffic dataset we computed the predictions bxc of every measurement x rounded to the largest integer less than or equal to x fig summarizes the results as one can see cdns outperform dns trained on full data and are orders of magnitude faster compared to uniform sampling coresets are competitive actually as seen on the traffic dataset cdns can have more predictive power than the optimal model using the full data this is in line with mahoney who observed that coresets implicitly introduce regularization and lead to more robust output table summarizes the empirical relative errors x f x x between dns and dns trained on all the data cdns clearly recover the original model at a fraction of training data overall this answers affirmatively relationship elucidation we investigated the performance of cdns when recovering the graph structure of word interactions from a text corpus for this purpose we used the http https skill item loss component rotation direction tree cell digit map skill set document item disparity object pca oscillator neuron distance pyramid tangent loss component estimator dialogue routing saliency road iiii policy context fuzzy option wavelet speaker user control call star letter delay building attractor controller subscriber block eeg neural evidence potential classifier spike channel instruction rule stress rules chip net lesion circuit light rotation direction tree cell digit map skill set document item disparity object pca oscillator neuron distance pyramid tangent estimator dialogue routing saliency road iiii policy context fuzzy option wavelet speaker user control call star letter delay building attractor controller subscriber block eeg neural evidence potential classifier spike channel instruction rule stress rules chip net lesion circuit light analog set document loss component disparity object estimator pca oscillator neuron dialogue distance routing pyramid saliency tangent road iiii policy context fuzzy option wavelet speaker user control call star letter delay building attractor controller subscriber block eeg neural evidence potential classifier spike channel instruction rule stress rules chip net lesion circuit light rotation direction tree cell digit map analog analog gaussian cdn poisson cdn skill item loss component rotation direction tree cell road digit map routing policy object pca neuron pyramid skill set document item disparity estimator oscillator distance dialogue saliency tangent context user iiii fuzzy option wavelet speaker control call star letter delay building attractor controller subscriber block eeg neural evidence potential classifier spike channel instruction rule stress rules chip net lesion circuit light analog loss component rotation direction tree cell road digit map routing policy object pca neuron pyramid skill set document item disparity estimator oscillator distance dialogue saliency tangent context user iiii fuzzy option wavelet speaker control call star letter delay building attractor controller 
subscriber block eeg neural evidence potential classifier spike channel instruction rule stress rules chip net lesion circuit light analog loss component rotation direction tree cell road digit map routing policy set document object pca neuron pyramid disparity estimator oscillator distance dialogue saliency tangent context user iiii fuzzy option wavelet speaker control call star letter delay building attractor controller subscriber block eeg neural evidence potential classifier spike channel instruction rule stress rules chip net lesion circuit light analog figure elucidating the relationships between random variables shown are the positive dependency structures of gaussian top and poisson bottom cdns on nips and different learning sampling sizes using left middle and right the edges show the top thresholded positive coefficients of the glms the colors of the edges represent modularity as one can see cdns elucidate relationships among the words that make semantically sense and approach the structure learned using the full dataset for a quantitative assessment see tab best viewed in color dataset it contains documents with a vocabulary above words we considered the most frequent words fig illustrates the results qualitatively it shows three cdns of sampling sizes and for gaussians top after a log transformation and for poissons bottom cdns capture well the gist of the nips corpus table confirms this quantitatively it shows the frobenius norms between the dns cdns capture the gist better than naive uniform sampling this answers affirmatively to summarize our empirical results the answers to questions and show the benefits of cdns conclusions inspired by the question of how we can train graphical models on a massive dataset we have studied coresets for estimating dependency networks dns we established the first rigorous guarantees for obtaining compressed of gaussian dns for large data sets we proved worstcase impossibility results on coresets for poisson dns a review of poisson modeling of counts provided deep insights into why our coreset construction still performs well for count data in practice sample portion udn gaussian cdn poisson gaussian poisson table frobenius norm of the difference of the adjacency matrices the lower the better recovered by dns trained on the full data and trained on a uniform subsample udn resp coresets cdns of the training data the best results per statiscal type are bold cdns recover the structure better than udns our experimental results demonstrate the resulting core dependency networks cdns can achieve significant gains over no or naive even in the case of count data making it possible to learn models on much larger datasets using the same hardware cdns provide several interesting avenues for future work the conditional independence assumption opens the door to explore hybrid multivariate models where each variable can potentially come from a different glm family or link function on massive data sets this can further be used to hint at independencies among variables in the multivariate setting making them useful in many other large data applications generally our results may pave the way to establish coresets for deep models using the close connection between dependency networks and deep generative stochastic networks bengio et al networks poon and domingos molina et al as well as other statistical models that build multivariate distributions from univariate ones yang et al acknowledgements this work has been supported by deutsche forschungsgemeinschaft dfg 
within the collaborative research center sfb providing information by analysis projects and references pankaj agarwal and sharathkumar streaming algorithms for extent problems in high dimensions algorithmica doi url https genevera allen and zhandong liu a local poisson graphical model for inferring networks from sequencing data ieee transactions on nanobioscience issn mihai badoiu and kenneth clarkson smaller for balls in proc of soda pages mihai badoiu and kenneth clarkson optimal for balls computational geometry doi url https mihai badoiu sariel and piotr indyk approximate clustering via in proceedings of stoc pages bengio laufer alain and yosinski deep generative stochastic networks trainable by backprop in proc of icml pages julian besag statistical analysis of data journal of the royal statistical society series d jonathan carlson zabrina brumme christine rousseau chanson brumme philippa matthews carl myers kadie james mullins bruce walker richard harrigan philip goulder and david heckerman phylogenetic dependency networks inferring patterns of ctl escape and codon covariation in gag plos computational biology kenneth clarkson and david woodruff low rank approximation and regression in input sparsity time in proc of stoc pages anirban dasgupta petros drineas boulos harb ravi kumar and michael mahoney sampling algorithms and coresets for p regression siam journal on computing doi url https adrian dobra variable selection and dependency networks for genomewide data biostatistics petros drineas michael mahoney and muthukrishnan sampling algorithms for regression and applications in proc of soda pages url http petros drineas michael mahoney and muthukrishnan cur matrix decompositions siam journal on matrix analysis and applications doi url https dan feldman matthew faulkner and andreas krause scalable training of mixture models via coresets in proc of nips dan feldman melanie schmidt and christian sohler turning big data into tiny data coresets for pca and projective clustering in proc of soda pages dan feldman alexander munteanu and christian sohler smallest enclosing ball for probabilistic data in proc of socg pages doi url http leo n geppert katja ickstadt alexander munteanu jens quedenfeld and christian sohler random projections for bayesian regression statistics and computing fabian hadiji alejandro molina sriraam natarajan and kristian kersting poisson dependency networks gradient boosted models for multivariate count data mlj sariel a simple algorithm for maximum margin classification revisited url http arxiv sariel dan roth and dav zimak maximum margin coresets for active and noise tolerant learning in proc of ijcai pages heckerman chickering meek rounthwaite and kadie dependency networks for density estimation collaborative filtering and data visualization journal of machine learning research christoph ide fabian hadiji lars habel alejandro molina thomas zaksek michael schreckenberg kristian kersting and christian wietfeld lte connectivity and vehicular traffic prediction based on machine learning approaches in proc of ieee vtc fall jayram ravi kumar and sivakumar the communication complexity of hamming distance theory of computing doi url https michael langberg and leonard schulman universal for integrals in proc of soda mario lucic olivier bachem and andreas krause strong coresets for hard and soft bregman clustering with applications to exponential family mixtures in proc of aistats pages ping ma michael mahoney and bin yu a statistical perspective on algorithmic leveraging 
jmlr url http michael mahoney randomized algorithms for matrices and data foundations and trends in machine learning doi url https peter mccullagh and john nelder generalized linear models chapman and hall alejandro molina sriraam natarajan and kristian kersting poisson networks a deep architecture for tractable multivariate poisson distributions in proc of aaai rajeev motwani and prabhakar raghavan randomized algorithms cambridge univ press isbn aloke phatak harri kiiveri line harder clemmensen and william wilson netrave constructing dependency networks using sparse linear regression bioinformatics jeff m phillips coresets and sketches in handbook of discrete and computational geometry hoifung poon and pedro domingos networks a new deep architecture proc of uai sashank reddi and alexander smola communication efficient coresets for empirical loss minimization in proc of uai pages mark rudelson and roman vershynin sampling from large matrices an approach through geometric functional analysis journal of the acm doi url http joel tropp improved analysis of the subsampled randomized hadamard transform advances in adaptive data analysis doi url https rainer winkelmann econometric analysis of count data springer edition isbn eunho yang pradeep ravikumar genevera allen and zhandong liu on graphical models via univariate exponential family distributions jmlr mingyuan zhou lingbo li david dunson and lawrence carin lognormal and gamma mixed negative binomial regression in proceedings of icml url http pdf
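To make the core dependency network recipe concrete, the following is a minimal numpy sketch of the two steps described in this paper: leverage-score sampling of the rows of the data matrix X (a subspace embedding of its columnspace), followed by fitting the d per-variable least-squares regressions of a Gaussian dependency network on the sampled, reweighted rows. It is an illustration only, not the authors' code: the function names are ours, the normalisation of the sampling probabilities follows the standard leverage-score recipe and may differ from the paper's exact constants, and the per-variable fits are plain ordinary least squares.

```python
import numpy as np

def leverage_score_coreset(X, k, rng=None):
    """Sample a weighted coreset of the rows of X via statistical leverage scores.

    Rows are kept independently with probability q_i = min(1, k * l_i / sum(l))
    and rescaled by 1/sqrt(q_i), so that ||S X b||^2 approximates ||X b||^2
    for every b (a subspace embedding of the columnspace of X)."""
    rng = np.random.default_rng(rng)
    U, _, _ = np.linalg.svd(X, full_matrices=False)  # orthonormal columnspace basis
    lev = np.sum(U ** 2, axis=1)                     # leverage scores, sum to rank(X)
    q = np.minimum(1.0, k * lev / lev.sum())         # inclusion probabilities
    keep = rng.random(X.shape[0]) < q                # independent Bernoulli sampling
    weights = 1.0 / np.sqrt(q[keep])                 # reweight the kept rows
    return X[keep] * weights[:, None]

def fit_gaussian_dn(Xc):
    """Fit a Gaussian dependency network by regressing each column of Xc
    on all remaining columns with ordinary least squares."""
    n, d = Xc.shape
    coefs = np.zeros((d, d))
    for i in range(d):
        rest = np.delete(np.arange(d), i)
        beta, *_ = np.linalg.lstsq(Xc[:, rest], Xc[:, i], rcond=None)
        coefs[i, rest] = beta
    return coefs

# Toy usage: one coreset is computed once and reused for all d regressions,
# and its estimate should stay close to the full-data estimate.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 20)) @ rng.standard_normal((20, 20))
Xc = leverage_score_coreset(X, k=400, rng=1)
print(np.linalg.norm(fit_gaussian_dn(Xc) - fit_gaussian_dn(X)))
```

Note that a single coreset is shared across all d conditional regressions, which mirrors the observation above that one subspace embedding of the columnspace of X suffices for the whole dependency network.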
proc of the eur conf on python in science euroscipy catos computer aided system jinook apr f animal behavioral biology there are several cases in which an autonomous system would be useful observation of certain species continuously or for documenting specific events which happen irregularly longterm intensive training of animals in preparation for behavioral experiments and training and testing of animals without human interference to eliminate potential cues and biases induced by humans the primary goal of this study is to build a system named catos computer aided system that could be used in the above situations as a proof of concept the system was built and tested in a pilot experiment in which cats were trained to press three buttons differently in response to three different sounds human speech to receive food rewards the system was built in use for about months successfully training two cats one cat learned to press a particular button out of three buttons to obtain the food reward with over percent correctness index training animal observing automatic device i ntroduction it is often the case in animal behavioral biology that a large amount of human resources time and data storage such as video recordings are required in animal observation and training some representative examples of these cases are observation of certain species continuously or monitoring for specific events which occur irregularly when behavior of certain species during any time period or specific time period such as nocturnal behaviors are investigated certain experiments require a prolonged training period sometimes over a year this type of experiment requires reliable responses which may not correspond to usual behavior patterns from animals in tasks therefore training may require a long period of time until the subject is ready to be tested additionally long periods of human supervised training can introduce unintended cues and biases for animals in the first case an autonomous system for observing animals can save human resources and reduce the amount of data storage the reduced amount of data can also conserve other types of human resources such as investigation and maintenance of data there have been attempts to build autonomous observing or surveillance systems in the fields of biology such as kritzler et al s work and corresponding author cognitive biology university of vienna c jinook oh this is an article discopyright tributed under the terms of the creative commons attribution license which permits unrestricted use distribution and reproduction in any medium provided the original author and source are credited http security systems such as belloto et al vallejo et al for instance there are also commercial products for surveillance systems with various degrees of automation or incorporating artificial intelligence however the intelligence of each system is and it is difficult to apply these specific systems to novel situations without considerable adjustments in the second case an autonomous system for prolonged intensive training can also save human resources and eliminate potential cues and biases caused by humans training with an autonomous system is an extension of traditional operant conditioning chambers and many modern and elaborated versions have been developed and used such as in markham et al takemoto et al kangas et al steurer et al and fagot bonte however many of the previous devices use commercial software also they do not possess the observational features developed in the current project 
it would be useful to have an relatively and modularized system which could be customized for the observation training and the experimentation on animal subjects of various species catos the system built in the present study fulfills these necessities the difference between the previous systems and catos computer aided system in the present work is that the animals do not have to be captured or transported to a separated space at a specific time in order to be trained the disadvantages of separating animals primates are and include stress on animals separated from their group or moved from their usual confines the risky catching procedure for both animal and human cf fagot bonte similar arguments apply to most animal species especially when they are social the automatic learning device for monkeys aldm described in fagot bonte is very similar to the trainer aspect of catos described in the present work but catos is different in following features first of all it aimed to be opensource based and more modular so that it can be more easily adjusted and adopted to different species and experiments another feature is that catos is equipped with various observational features including visual and auditory recording and recognition through video camera and microphone which make the system able to interact with the subjects such as reacting immediately to a subject with a motion detection from a camera or a sound recognition from a microphone catos should offer the following advantages the system should be flexible in terms of its adjustability and the extendibility to various projects and species the proc of the eur conf on python in science euroscipy software should be and both software and hardware components should be modularized as much as possible thus the system reassembly for researchers in animal behavioral biology is practical the system should have various observational features applicable to a broad range of animal species and observational purposes the system should perform continuous monitoring and it should record video sound only when a set of particular conditions is fulfilled this would reduce the amount of data produced during the procedure the system should have actuators to react in certain situations which allows it to act as a the human designs the procedure by adjusting parameters and modules but the actual performance should be done by the system in this way the system could help reducing the amount of time required for training and eliminating which might be induced by the human interferences with this system the animal should not have to be transported to a certain space or separated from its group for training the animals should be able to choose when to start a trial on their own two catos prototypes have been built during this study the first build of catos has pushbuttons as a main input device for cats and the second build has a as a main input device the first build was an initial attempt to build and test such a system the second build is the final product of the study the basic structures of these two builds are more or less the same the differences are that the second version has improved functions and it uses the touchscreen instead of pushbuttons the first build of catos was tested with domestic cats felis catus to train them to press three different buttons differently depending on the auditory stimuli three different human speech sounds the final goal of this training is to investigate human speech perception in cats there is no doubt in that many animal 
species can recognize some words in human speech the examples of speech perception in dogs and chimpanzees can be found in the work of kaminski et al and heimbauer et al respectively in some cases animals can even properly produce words with specific purposes an example of speech perception and production in a parrot can be found in the work of pepperberg despite these findings there is ongoing debate about whether the same perceptual mechanisms are used in speech recognition by humans and animals fitch to investigate this issue animals have to be trained to show different and reliable responses to different human speech sounds then we can test which features of human speech are necessary for different animal species to understand it thus the final aim of the training in this study would be to obtain cats showing different responses to different human speech sounds with statistical significance over percent before reaching this final goal several smaller steps and goals are required fig overall system diagram b rief description of catos c omputer a ided t raining bserving s ystem the overall system is composed of a combination of software and hardware components the software components are mainly composed of the python script named as version and the program for the microcontroller the aa runs all of the necessary processes and communicates with the microcontroller program the microcontroller program operates sensors and actuators as it communicates with the aa program the hardware components are composed of various devices some of which are directly connected to the computer via usb cables some other devices only have gpio general purpose input output pins therefore they are connected to the microcontroller the microcontroller itself is connected to the computer via a usb cable the hardware devices which are directly connected via usb cables can be accessed using various software modules which are imported into the aa program the access to other devices only using gpio pins is performed in the microcontroller and the aa program simply communicates with the microcontroller program via a serial connection for sending commands to actuators and receiving values from sensors the software for this system is called aa agent for animals this software was build with helps of many external libraries such as opencv and once it starts seven processes were launched using multiprocessing package of python and it runs until the user terminates the program the multiprocessing was used because the heavy calculation for image processing from multiple webcams were concerned the number of processes can be changed as some of them can be turned on or off these processes include a process for each camera a process an process an process a schema process and a process figure even though some of these processes have quite simple tasks they were separated in order to prevent them from interfering with each other becoming the bottleneck the system has to process the visual auditory and other sensory and motor information simultaneously to recognize the change of the environment and catos computer aided system fig respond to it properly the output data such as captured video input images recorded wav files csv files for trial results and the log file are temporarily stored in the output folder after the daily session is finished all of these output files go through an archiving process which can include but is not restricted to generating movies generating images with the movement analysis labeling sound files and 
moving different types of files into the categorized subfolders of an archiving folder named with a timestamp besides combining all the above modules and implementing some common functions one more python program was implemented to facilitate the process of analyzing the recorded data the program is called aa dataviewer which is based on wxpython gui toolkit and matplotlib for drawing graphs figure it loads the log file the result csv comma separated values file containing the results of the trial the csv files the movie files and the wav files from one folder containing all data collected for one session day for each video clip there is a jpeg image showing the movements of the blobs the circles in the image represent the positions of the blobs and their color represents the with the black corresponding to the beginning of the movie and the white to the end of the movie a line connecting multiple circles means that those blobs occurred at the same time another feature of this program is its ability to generate a graph with selected sessions in the archive folder there are each of which contains all the data for a session when the select sessions button is clicked a window appears for selecting multiple folders the result data from these selected of archive folder is drawn as a graph using matplotlib by visualizing the data for certain period it helps the trainer or experimenter quickly assess the current status of the training procedure the two feeders used in this study is a device mainly comprising the arduino microcontroller refer http a for the microcontroller a servomotor and a frame encasing the whole feeder both feeder variants work in a similar way by rotating the servomotor by a certain number of degrees although the second feeder shows better performance in terms of consistent amount of fig automatic feeder fig circuit with a microcontroller food released due to the usage of an archimedes screw initially an estimate of the amount of food left in the food container was obtained using an ir distance sensor but this feature was discarded in the second build since the distance information from the ir sensor was not accurate enough for this application the second feeder confirms the emission of a food reward via the piezoelectric sensor which is positioned right below the archimedes screw figure communication between the arduino chip and the main computer was accomplished by using the arduino module of the aa program in the circuit figure the temperature sensor measures the temperature inside of the protective wooden platform the photocell sensor measures the ambient light level the light bulb can be turned on when the photocell sensor indicates the ambient light level is below a threshold two fans are turned on when the temperature sensor indicates the temperature is too high in the platform proc of the eur conf on python in science euroscipy the piezoelectric sensor is read while the servomotor is actuating in order to confirm the occurrence of the food reward this sensor reading is required because occasionally the food dispensing fails due to the combination of the short motor activation time seconds and the shape of the dry food pieces which can fit into other pieces easily and then fail to emerge the servomotor is responsible for the food dispense by turning the archimedes screw back and forth r esults of building catos and its testing on domesticated cats the hardware and software were built and tested the software is available at https with gnu general public license version 
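The paper does not spell out the serial protocol between the AA program and the feeder's microcontroller, so the following pyserial sketch is purely illustrative: the port name, baud rate, and the FEED / FOOD_OK strings are invented placeholders. It only shows the general pattern described above, namely sending a dispense command to the Arduino and waiting for the piezoelectric-sensor confirmation that food was actually released, retrying if the dry food jammed.

```python
import serial  # pyserial

FEED_CMD = b"FEED\n"  # hypothetical command string; the real protocol is not given

def dispense_food(port="/dev/ttyACM0", baud=9600, retries=3):
    """Ask the microcontroller to turn the Archimedes screw and wait for the
    piezo-sensor confirmation that a food reward was released."""
    with serial.Serial(port, baud, timeout=2) as ser:
        for _ in range(retries):
            ser.write(FEED_CMD)
            reply = ser.readline().decode(errors="ignore").strip()
            if reply == "FOOD_OK":       # piezo registered a falling pellet
                return True
            # no confirmation: the motor turned but nothing fell, so try again
        return False

if __name__ == "__main__":
    print("reward delivered:", dispense_food())
```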
both hardware and software are curretnly in its alpha stage although its potential to be used to train and test animal cognition was tested and its usage seemed promising to save human resources in certain situations both hardware and software should be developed further to be practically used for experimenting animal cognition the two observed the experimental area for to hours per day for about months from the middle of october to the middle of march the movement records movie files jpeg image files and wav sound files generated during this period took giga bytes of storage to obtain a rough idea of the degree of reduction in data storage that was achieved using the system the number of recorded frames in the video recording was assessed data for days were taken to calculate it the total observation period was seconds corresponding to hours the number of frames recorded was and the average fps frame per second was therefore approximately the video recordings were stored for seconds hours which is about percent of entire observation period these specific numbers are not very meaningful since they can fluctuate with the increase or decrease of the subject s movements but the point is that the most of the meaningless recordings were successfully filtered out by catos human presence during session is not necessary data transfer from one computer to another maintenance or modification of the system requires human interaction but no time and effort is required concerning the training and testing sessions because no one attends the sessions a periodic analysis of the animal s performance with the system is required a simple assessment of how much food the animals took or more specifically how many correct and incorrect trials occurred can be done quickly since this information is already stored in result csv file displaying the number of correct and incorrect trials generated with timestamps at the end of each session also the utility program displays all the timestamps and its jpeg image which presents a brief report on the movement detected in the recorded thus simply browsing the jpeg images is often enough to assess the session if it is not enough then one can obtain a more detailed assessment by playing the recorded around the trial times fig recent performance of the trained cat on three human speech discrimination task two domesticated cats were trained for testing the system both cats learned that approaching the feeder on a playback sound could lead to a food reward then one cat further learned that pressing one out of three buttons could lead to a food reward the training of the association between three different sound stimuli and three different buttons is an ongoing process the most recent performance data figure shows over percent of overall performance and also the performance on each button is significantly higher than percent of chance level r eferences bellotto sommerlade benfold bibby reid roth fernandez gool and gonzalez a distributed camera system for surveillance proc of the third int conf on distributed smart cameras icdsc bradski the opencv library dobb s journal of software tools nov jones oliphant peterson and others scipy open source scientific tools for python fagot and paleressompoulle automatic testing of cognitive performance in baboons maintained in social groups behavior research methods may fagot and bonte automated testing of cognitive performance in monkeys use of a battery of computerized test systems by a troop of baboons papio papio behavior research 
methods may fitch speech perception a chimpanzee weighs in current biology july heimbauer beran and owren a chimpanzee recognizes synthetic speech with significantly reduced acoustic cues to phonetic content current biology june hunter matplotlib a graphics environment computing in science engineering kaminski j call and fischer word learning in a domestic dog evidence for fast mapping science june kangas and bergman a novel apparatus for behavioral studies in unrestrained squirrel monkeys journal of neuroscience methods august kritzler jabs kegel and krger indoor tracking of laboratory mice via an framework proc of the first acm international workshop on mobile entity localization and tracking in environments markham butt and dougher a computer touchscreen apparatus for training visual discriminations in rats journal of the experimental analysis of behavior catos computer aided system pepperberg evidence for conceptual quantitative abilities in the african grey parrot labeling of cardinal sets ethology steurer aust and huber the vienna comparative cognition technology vcct an innovative operant conditioning system for various species and experimental procedures behavior research methods december takemoto izumi miwa and nakamura development of a compact and experimental apparatus with a screen for use in evaluating cognitive functions in common marmosets journal of neuroscience methods july vallejo albusac jimenez gonzalez and moreno a cognitive surveillance system for detecting incorrect traffic behaviors expert systems with applications september proc of the eur conf on python in science euroscipy
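As an illustration of the observation feature that records video only when something happens, which is what produced the large reduction in stored footage reported in the Results, here is a minimal OpenCV frame-differencing sketch. It is not the AA program's actual detector; the thresholds, output file name, and codec are arbitrary placeholder choices.

```python
import cv2

def monitor(camera_index=0, motion_threshold=25, min_changed_pixels=500):
    """Record frames to disk only while motion is detected between frames."""
    cap = cv2.VideoCapture(camera_index)
    ok, prev = cap.read()
    if not ok:
        raise RuntimeError("camera not available")
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    writer = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)
        mask = cv2.threshold(diff, motion_threshold, 255, cv2.THRESH_BINARY)[1]
        if cv2.countNonZero(mask) > min_changed_pixels:
            if writer is None:            # motion started: open a new clip
                h, w = frame.shape[:2]
                writer = cv2.VideoWriter("motion_clip.avi",
                                         cv2.VideoWriter_fourcc(*"XVID"),
                                         15.0, (w, h))
            writer.write(frame)
        elif writer is not None:          # motion stopped: close the clip
            writer.release()
            writer = None
        prev = gray
    if writer is not None:
        writer.release()
    cap.release()
```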
graphical nonconvex optimization for optimal estimation in gaussian graphical models qiang kean ming han and tong jun abstract we consider the problem of learning gaussian graphical models the graphical lasso is one of the most popular methods for estimating gaussian graphical models however it does not achieve the oracle rate of convergence in this paper we propose the graphical nonconvex optimization for optimal estimation in gaussian graphical models which is then approximated by a sequence of convex programs our proposal is computationally tractable and produces an estimator that achieves the oracle rate of convergence the statistical error introduced by the sequential approximation using the convex programs are clearly demonstrated via a contraction property the rate of convergence can be further improved using the notion of sparsity pattern the proposed methodology is then extended to semiparametric graphical models we show through numerical studies that the proposed estimator outperforms other popular methods for estimating gaussian graphical models keywords adaptivity graphical nonconvex optimization nonconvexity semiparametric sequential convex approximation introduction we consider the problem of learning an undirected graph g v e where v d contains nodes that represent d random variables and the edge set e describes the pairwise conditional dependence relationships among the d random variables gaussian graphical models have been widely used to represent pairwise conditional dependencies among a set of variables let x be a random variables under the gaussian assumption x n the graph g is encoded by the sparse concentration matrix or the sparse inverse correlation matrix here is the correlation matrix such that w and is a diagonal matrix with department of operations research and nj qiangs department of operations research and nj kmtan department of operations research and nj usa hanliu tencent ai lab shen zhen guangdong financial engineering princeton university princeton financial engineering princeton university princeton financial engineering princeton university princeton china tongzhang diagonal elements of in particular it is well known that the jth and kth variables are conditionally independent given all of the other variables if and only if the j k element of or is equal to zero thus inferring the conditional dependency structure of a gaussian graphical model boils down to estimating a sparse inverse covariance or correlation matrix a number of methods have been proposed to estimate the sparse concentration matrix under the gaussian assumption for example meinshausen and proposed a neighborhood selection approach for estimating gaussian graphical models by solving a collection of sparse linear regression problems using the lasso penalty in addition yuan and cai et al proposed the graphical dantzig and clime both of which can be solved efficiently from a perspective yuan and lin and friedman et al proposed the graphical lasso methodology a penalized likelihood based approach to estimate the concentration matrix directly various extensions of the graphical lasso were proposed and the theoretical properties were also studied among others banerjee et rothman et ravikumar et the gaussian graphical models literature is vast and we refer the reader to cai et al and drton and maathuis for recent reviews on this topic despite the large literature on using the graphical lasso to estimate concentration matrices in gaussian graphical models the graphical lasso does not achieve the 
oracle rate of convergence more specifically it is belived that the optimal rate of convergence p in spectral norm for the graphical lasso is at the order of s log rothman et here n is the sample size d is the number of nodes and s is the number of edges in the true graph in fact the graphical lasso and all of the aforementioned methods are based on the lasso penalty and it is well known that convex penalties usually introduce estimation bias for example in the linear regression setting fan and li zhang b fan et al have shown that the nonconvex penalized regression is able to eliminate the estimation bias and attain a more refined statistical rate of convergence based on these insights we consider the following penalized maximum likelihood estimation with nonconvex regularizers x b b argmin log det p d d a a at a where is the symmetric definite cone formed by all b is the sample covariance symmetric positive definite matrices in d d dimensions matrix and p is a nonconvex penalty here ha bi tr at b denotes the trace of at b however from the computational perspective minimizing a folded concave penalized problem is very complicated due to its intrinsic nonconvex structure indeed ge et al have shown that solving with a general concave penalty such as the scad fan and li or the mcp zhang is strongly in other words there does not exist a fully approximation scheme for problem unless more structures are assumed recently loh and wainwright proposed an algorithm to obtain a good local optimum for but an additional convex constraint that depends on the unknown true concentration matrix is imposed moreover they failed to provide a faster rate of convergence statistically due to not taking the signal strength into account in this paper instead of directly solving the nonconvex problem we propose to approximate it by a sequence of adaptive convex programs even though the proposed approach is solving a sequence of convex programs under some regularity conditions we show that the proposed estimator for estimating the sparse concentration matrix achieves p the oracle rate of convergence of treating as if the locations of the nonzeros were known a priori this is achieved by a contraction property roughly speaking each convex program gradually contracts the initial estimator to the region of oracle rate of convergence even when a bad initial estimator is used in the first place r s b b c f f n z z oracle rate contraction where b is the inverse correlation matrix estimator after the convex approxip mation k kf denotes the frobenius norm c is a constant and is referred to as the oracle rate each iteration of the proposed method helps improve the accuracy k dominates the statistical error the error caused by each only when k b f iteration is clearly demonstrated via the proven contraction property by rescaling the inverse correlation matrix using the estimated marginal variances we obtain an estimator of the concentration matrix with spectral norm convergence rate in the order of p p log here a b max a b is used to denote the maximum of a and b by exploiting a novel notion called sparsity pattern we further sharpens the rate of convergence under the spectral norm the rest of this paper proceeds as follows in section we propose the new methodology and its implementation section is devoted to theoretical studies we show that the proposed methodology can be extended to the semiparametric graphical models in section numerical experiments are provided to support the proposed methodology in section we conclude the 
paper in section all the proofs and technical details are collected in the supplementary material notation we summarize the notation that will be used regularly throughout the paper given a vector u ud t rd we define the q of u by kukq p where q for a set a let denote its cardinality for a matrix a ai j we use a to indicate that a is positive definite for q we use kakq maxu kaukq to denote the operator norm of a for index sets i j d we define ai j to be the matrix whose i j entry is equal to ai j if i i and j j and zero otherwise we use a b aij bij to denote the hadamard product of two matrices a and b let diag a denote the diagonal matrix consisting diagonal elements of a we use sign x to denote the sign of x sign x if x and sign x otherwise for two scalars fn and gn we use fn gn to denote the case that fn cgn and fn gn if fn cgn for two positive constants c and we say fn gn if fn gn and fn gn op is used to denote bounded in probability we use c and c to denote constants that may vary from line to line a sequential convex approximation let x xd t be a zero mean gaussian random vector then its density can be parameterized by the concentration matrix or the inverse correlation matrix the family of gaussian distributions respects the edge structure of a graph g v e in the sense that if and only if i j this family is known as the random field with respect to the graph the problem of estimating the edge corresponds to parameter estimation while the problem of identifying the edge set the set e i j v i j corresponds to the problem of model selection given n independent and identically distributed observations x i of a zero mean random vector x rd we are interested in estimating the inverse i i t b n correlation matrix and concentration matrix let x x b c bw c where w c diag b to be the sample covariance matrix and let c estimate b we propose to adaptively solve the following sequence of convex programs o b argmin c log det k for t d p where w b ij is a d d adaptive regularization matrix for a given tuning parameter and a weight function w and t indicates the number of total convex programs needed the weight function w can be taken to be w t t where p t is a folded concave penalty such as the scad or the mcp proposed by fan and li and zhang respectively to obtain an estimate for the concentration matrix estimator we rescale b t e t w c b t w c after the t convex program this rescaling helps back to e t significantly by eliminating the introimprove the rate of convergence for duced through the unpenalized diagonal terms the detailed routine is summarized in algorithm algorithm a sequential convex approximation for the graphical nonconvex optimization b regularization parameter input sample covariance matrix b by c b w c bw c where w c is a step obtain sample correlation matrix c b diagonal matrix with diagonal elements of step solve a sequence of graphical lasso problem adaptively n o b argmin h ci b log det k d and w b ij for e t w c step obtain an estimate of by b t w c the complexity of step in algorithm is o per iteration this is the complexity of the algorithm for solving the graphical lasso problem we will show in the latter section that the number of iteration can be chosen to be t log log d based on our theoretical analysis algorithm can be implemented using existing r packages such as glasso theoretical results in this section we study the theoretical properties of the proposed estimator we start with the assumptions needed for our theoretical analysis assumptions let s i j i j be the 
support set of the elements in thus s is also the support set of the elements in the first assumption we need concerns the structure of the true concentration and covariance matrices assumption structural assumption we assume that s m min max min max here max maxj and min minj where assumption is standard in the existing literature for gaussian graphical models see for instance meinshausen and yuan cai et yuan and lin ravikumar et we need min and max to be bounded from above and below to guarantee reasonable performance of the concentration matrix estimator rothman et throughout this section we treat m as constants to simplify the presentation the second assumption we need in our analysis concerns the weight functions which are used to adaptively update the regularizers in step of algorithm define the following class of weight functions n o w w t w t is nonincreasing w t if t w t if t assumption weight function there exists an such that the weight function w w satisfies w and w u where u c for some constant the above assumption on the weight functions can be easily satisfied for example it can be satisfied by simply taking w t t where p t is a folded concave penalty such as the scad or the mcp fan and li zhang next we impose an assumption on the magnitude of the nonzero entries in the inverse correlation matrix assumption minimal signal strength recall that s is the true support set the minimal signal satisfies that min i j c where c is the same constant that appears in assumption assumption is rather mild in the design case can be taken to p be the order of log which diminishes quickly as n increases it is an analogue to the minimal signal strength assumption frequently assumed in nonconvex penalized regression problems fan and li zhang taking the signal strength into account we can then obtain the oracle rate of convergence main theory we now present several main theorems concerning the rates of convergence of the proposed estimator for the sparse inverse correlation and the concentration matrices the following theorem concerns the rate of convergence for the estimator b obtained from algorithm when p proposition estimator let log under assumption we have r s log d b f n with probability at least proof of proposition we collect the proof of proposition in appendix a in the supplementary material the above proposition indicates that the statistical error under the frobenius norm p for the estimator is at the order of s log which is believed to be unimprovable when convex regularization is used rothman et ravikumar et however when a sequence of convex programs is used as in our proposal the rate of convergence can be improved significantly this is demonstrated in the following theorem theorem contraction property suppose that n s log d and take such that p log under assumptions and b satisfies the following contraction property b f with probability at least krl z oracle rate b s kf z contraction t f p moreover if t log n log log d we have s b t op f n proof of theorem the proof is collected in appendix a in the supplementary material theorem establishes a contraction property each convex approximation contracts the initial estimator towards the true sparse inverse correlation matrix until it reaches p the oracle rate of convergence to achieve the oracle rate we need to solve no more than approximately log log d convex programs note that log log d grows very slowly as d increases and thus in practice we only need to solve a few convex programs to get a better estimator than existing method 
such as the graphical lasso the rate p of convergence is better than the existing literature on methods for estimating sparse inverse correlation matrices rothman et lam and fan ravikumar et by rescaling we obtain a concentration matrix estimator with a faster rate of convergence theorem faster rate in spectral norm under the same conditions in theorem we have r s log d t e op n n proof of theorem the proof is deferred to appendix a in the supplementary material the theorem above provides the optimal statistical rate for estimating sparse concentration matrices using likelihood based methods rothman et lam and fan ravikumar et the extra log d term is a consequence of estimating the marginal variances we further sharpen the obtained theory using a novel notion called sparsity pattern as defined below definition sparsity pattern for a matrix a aij we say asp asp ij is the sp sp corresponding sparsity pattern matrix if aij when aij and aij otherwise let be the sparsity pattern matrix of or our next theorem provides an improved rate of convergence using this newly defined notion of sparsity pattern theorem improved convergence rate using sparsity pattern suppose that n p s log d and take such that log let t log under assumptions and we have r b t op and n r r log d t e op km n n proof of theorem the proof is deferred to appendix b in the supplementary material theorem suggests that the rates of convergence can be bounded using the spectral norm of the sparsity pattern matrix which are sometimes much sharper than those provided in theorems and to demonstrate this observation we consider a sequence of chain graphs specified by the following sparsity pattern matrices a k mck for k id k where ak r such that the i j entry ak ij if and ak ij otherwise id k r d k d k is the identity matrix let sk be the total sparsity of mck that is sk we plot the ratio of the two rates of convergence for estimating in theorems and kmc versus s in figure from figure we can k k k see that the ratio goes to as the total sparsity increases this demonstrates that the convergence rate in theorem is indeed much sharper than that in theorem as least for the chain graphs constructed above we also observe similar but less significant improvement for graphs in figure we give an geometric illustration of the star and chain graphs chain graph kmck sk figure convergence rates using sparsity pattern matrix mck and total sparsity sk star graph chain graph figure an illustration of the star and chain graphs extension to semiparametric graphical models in this section we extend the proposed method to modeling semiparametric graphical models we focus on the nonparanormal family proposed by liu et al which is a nonparametric extension of the normal family more specifically we replace the random variable x xd t by the transformation variable f x fd xd t and assume that f x follows a multivariate gaussian distribution definition nonparanormal let f fd t be a set of monotone univariate functions and let be a correlation matrix with diag a random variable x xd t has a nonparanormal distribution x npnd f if f x f fd xd t nd we aim to recover the precision matrix the main idea behind this procedure is to exploit kendall s tau statistics to directly estimate without explicitly calculating the marginal transformation functions fj we consider the following kendall s tau statistic x i i sign xj xj xk xk n n i the kendall s tau statistic represent the nonparametric correlations between the empirical realizations of random variables xj and xk and 
is invariant to monotone ej and x ek be two independent copies of xj and xk the population formations let x ej sign xk x ek we need version of kendall s tau is given by corr sign xj x the following lemma which is taken from liu et al it connects the kendall s tau statistics to the underlying pearson correlation coefficient lemma assuming x npnd f we have sin b sbjk for the motivated by this lemma we define the following estimators s unknown correlation matrix sin j k sbjk j now we are ready to prove the optimal spectral norm rate for the gaussian copula graphical model the results are provided in the following theorem p theorem assume that n s log d and let log under assumptions b satisfies the following contraction property and b f b krl s kf f t z z with probability at least optimal rate contraction if t log b t f p n log log d we have s op n proof of theorem the proof is deferred to appendix c in the supplementary material numerical experiments we compare our proposal to the graphical lasso glasso friedman et and neighborhood selection ns meinshausen and each of these approaches learns a gaussian graphical model via an penalty on each edge to evaluate the performance across methods we define the true positive rate as the proportion of correctly identified edges in the graph and the false positive rate as the proportion of incorrectly identified edges in the graph in addition we calculate the difference between the estimated and true concentration matrix under the frobenius norm we do not compute this quantity for the ns approach since they do not estimate the concentration matrix directly for our proposal we consider t iterations with the scad penalty proposed by fan and li that takes the following form if t if otherwise where in all of our simulation studies we pick each of the methods involves a sparsity tuning parameter we applied a fine grid of tuning parameter values to obtain the curves shown in figure we consider cases with n and d with two for a p p adjacency matrix a i random graph with elements of a set to ii band graph with ai i for i d we then use the adjacency matrix a to create a matrix e as if aij eij otherwise and set e given the matrix e we set equal to emin i where emin is the smallest eigenvalue of we then standardize the matrix so that the diagonals are equal to one finally we generate the data according to x x n n we present the results averaged over data sets for each of the two simulation settings with n and p in figure random graph our proposal glasso false positive rate our proposal ns glasso our proposal ns glasso our proposal glasso our proposal glasso false positive rate false positive rate false positive rate our proposal ns glasso band graph false positive rate random graph frobenius norm frobenius norm band graph band graph false positive rate random graph false positive rate our proposal ns glasso true positive rate frobenius norm frobenius norm true positive rate true positive rate true positive rate random graph band graph our proposal glasso false positive rate figure row i true and false positive rates averaged over data sets with p for random and band graphs respectively row ii between the estimated and the true inverse covariance matrices under the frobenius norm the curves are obtained by varying the sparsity tuning parameter for each of the methods from row i of figure we see that our proposal is very competitive relative to the existing proposals for estimating gaussian graphical models in terms of true and false positive rates across all 
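The rank-based estimator described in this part (pairwise Kendall's tau mapped through the sine transform from the quoted lemma of Liu et al.) can be written down directly; a minimal sketch, assuming the usual convention of setting the diagonal to one, is given below. Function names are mine.

```python
import numpy as np
from scipy.stats import kendalltau

def npn_correlation(X):
    """Nonparanormal correlation estimate: Sigma_jk = sin(pi/2 * tau_jk).

    X is an n x d data matrix; pairwise Kendall's tau is computed on the raw
    observations, since it is invariant to the unknown monotone transforms f_j.
    """
    n, d = X.shape
    S = np.eye(d)
    for j in range(d):
        for k in range(j + 1, d):
            tau, _ = kendalltau(X[:, j], X[:, k])
            S[j, k] = S[k, j] = np.sin(0.5 * np.pi * tau)
    return S
```

Note that this entrywise plug-in matrix is not guaranteed to be positive semidefinite; in practice one relies on the downstream penalized-likelihood step (or a projection) to produce a positive definite estimate.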
simulation settings row ii of figure contains the between the estimated and the true inverse covariance matrices under the frobenius norm as a function of the false positive rate for random graph with n we see that the minimum error under the frobenius norm for our proposal is smaller than that of the graphical lasso as we increase the number of observations to n the between the minimum error for the two proposals are more apparent more interestingly the region for which our proposal has lower frobenius norm than the graphical lasso is the primary region of interest this is because an ideal estimator is one that has a low false positive rate while maintaining a high true positive rate with low error under the frobenius norm in contrast the region for which the graphical lasso does better under the frobenius norm is not the primary region of interest due to the high false positive rate we see similar results for the band graph setting conclusion and discussions we propose the graphical nonconvex optimization which is then approximated by a sequence of convex programs for estimating the inverse correlation and concentration matrices with better rates of convergence comparing with existing approaches the proposed methodology is sequential convex in nature and thus is computationally tractable yet surprisingly it produces estimators with oracle rate of convergence as if the global optimum for the penalized nonconvex problem could be obtained statistically a contraction property is established each convex program contracts the previous estimator by a until the optimal statistical error is reached our work can be applied to many topics low rank matrix completion problems quantile regression and many others we conjecture that in all of the aforementioned topics a similar sequential convex approximation can be proposed and can possibly give faster rate with controlled computing resources it is also interesting to see how our algorithm works in distributed systems is there any fundamental between statistical efficiency communication and algorithmic complexity we leave these as future research projects references banerjee el ghaoui and d aspremont a model selection through sparse maximum likelihood estimation for multivariate gaussian or binary data the journal of machine learning research cai liu and luo x a constrained minimization approach to sparse precision matrix estimation journal of the american statistical association cai ren and zhou estimating structured covariance and precision matrices optimal rates and adaptive estimation electronic journal of statistics cai liu and zhou estimating sparse precision matrix optimal rates of convergence and adaptive estimation the annals of statistics drton and maathuis structure learning in graphical modeling annual review of statistics and its application fan and li variable selection via nonconcave penalized likelihood and its oracle properties journal of the american statistical association fan liu sun and zhang for sparse learning simultaneous control of algorithmic complexity and statistical error the annals of statistics in press friedman hastie and tibshirani sparse inverse covariance estimation with the graphical lasso biostatistics ge wang ye and yin strong result for regularized lq problems with concave penalty functions arxiv preprint lam and fan j sparsistency and rates of convergence in large covariance matrix the annals of statistics sparsistency and rates of convergence in large covariance matrix estimation the annals of statistics liu han 
yuan wasserman et al semiparametric gaussian copula graphical models the annals of statistics loh and wainwright j regularized with nonconvexity statistical and algorithmic theory for local optima journal of machine learning research meinshausen and graphs and variable selection with the lasso the annals of statistics ravikumar wainwright raskutti yu et al covariance estimation by minimizing divergence electronic journal of statistics rothman bickel levina zhu et al sparse permutation invariant covariance estimation electronic journal of statistics yuan high dimensional inverse covariance matrix estimation via linear programming journal of machine learning research yuan and lin y model selection and estimation in the gaussian graphical model biometrika zhang nearly unbiased variable selection under minimax concave penalty the annals of statistics zhang analysis of convex relaxation for sparse regularization the journal of machine learning research supplementary material to graphical nonconvex optimization for optimal estimation in gaussian graphical models qiang sun kean ming tan han liu and tong zhang abstract this supplementary material collects proofs for the main theoretical results in the main text and additional technical lemmas the proofs of proposition theorems and are collected in section a section b provides the proof for theorem proofs related to semiparametric graphical models are given in section various concentration inequalities and preliminary lemmas are postponed to sections d and e respectively a rate of convergence in frobenius norm this section presents an upper bound for the adaptive estimator b in frobenius norm which in turn helps establish the scaling conditions needed to achieve the optimal spectral norm convergence rate proofs of proposition theorems and in this section we collect the proofs for proposition theorems and in order to suppress the noise at the th step it is necessary to control min i j b ij in high dimensions for this we construct an entropy set e of s and analyze the mag nitude of the entropy set at the stage e is defined as ec min n e i j i j s or ij w u for u o thus the constant in assumption is c then it can be seen that s e and thus e is an entropy set of s for any proposition follows from a slightly more general result below which establishes rate of convergence for the estimator of sparse inverse correlation matrix b proposition estimator assume that assumption holds suppose p p s take such that log d and suppose n log then with probability at least b must satisfy r s log d b ck f n b kmax then in the proof of proposition define the event j kc k event j by applying lemma q and taking e s we obtain k b f p p p if we further take log d log d then by lemma we have event j hold with probability at least the result follows by plugging the choice of theorems and follow form a slightly more general result below which chare t in spectral acterizes the rate of convergence of b in frobenius norm and that of norm p theorem assume that assumptions and suppose that s p take such that log then with probability at least b satisfies b f moreover if that t log krl z optimal rate p b s kf z f contraction p and f r r log d k s n n min min n we have k op max e t b t op k proof of theorem under the conditions of theorem combining proposition and lemma we obtain the following contraction property of the solutions b t b f next we introduce an inequality by induction analysis specifically if an n and then b f krl an s kf krl k we obtain that b krl s kf s f f k respec 
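The induction step invoked in this proof appears to iterate a contraction of the form a_{t+1} <= c + delta * a_t. Written out in generic notation (a_t, c and delta are my placeholders for the estimation error at step t, the statistical error term and the contraction factor), the unrolled bound is:

```latex
\[
a_{t+1} \le c + \delta\, a_t ,\quad 0<\delta<1
\quad\Longrightarrow\quad
a_t \;\le\; c\sum_{k=0}^{t-1}\delta^{k} + \delta^{t} a_0
     \;\le\; \frac{c}{1-\delta} + \delta^{t} a_0 .
\]
```

After a logarithmic number of iterations the geometric term \(\delta^t a_0\) falls below the statistical term, which is why t of order log(...) convex relaxations suffice to reach the optimal rate.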
in the sequel we bound krl s kf and k b f f p k tively by proposition we have k b moreover if we f p p p t b let t log n log log n then k kf p on the other side we have krl s kf op k which follows from lemma therefore combining the above results obtains us that k b t f p op k e t we apply lemma and obtain to achieve the statistical rate for taking b that e t c w c kw c w c w w b t w b t k z w w k z c w w c w c c kw b t c w c w w w k b t kw z kw k k b t b t w we now bound terms to respectively before we proceed we apply lemma and the union sum bound to obtain that for any n o n o c max d exp p kw n c exp n c d ii i where c log suppose that then we have n c p n further suppose that n log d and take log d we obtain that n c d log d and r log d c p kw w max n d therefore we have w c where we use the assumption that maxi max p c op max log since w and w are diagonal and thus commutative we note that for any two event a and b p a p a b p a b c holds therefore for any m we have r log d c p w w m max n r log d c w m w max n p c c w w w min w w w p c w w c w w w min further using lemma yields that r log d c p w w m max n p c p w min w z w z min m max r log d n by taking m min max min and letting we get under the assumption that max min o log d we have p c log d and thus therefore we obtain that w max min o p w op min max log d similarly we have the following facts b t op k c w min c op w min and w applying the above results to the terms we obtain that r r s s max log d op min k op min k n n min n r r log d s max op k op min k n n min min therefore by combining the rate for terms we obtain the final result technical lemmas s define the symmetrized bregman divergence for the loss function l as dl rl l for any matrix a r let a r be the diagonal matrix of a with diagonal entries equal to and a a be the diagonal mtrix lemma for the symmetrized bregman divergence defined above we have s dl rl rl proof of lemma we use vec a to denote the vectorized form of any matrix a then by the mean value theory there exists a such that s dl rl min r vec rl l k where t l vec by standard properties of the kronecker product and the weyl s inequality horn and johnson we obtain that min min r l finally observing that we obtain s dl rl plugging the definition of k rl k obtains us the final bound k by using localized the following lemma characterizes an upper bound of k b f analysis p lemma suppose s take e such that s e and further assume k e c kmin krl kmax let b be the solution to then b must satisfy kb kf k s kf e kf p proof of lemma we start by introducing an extra local parameter r which satp p p p isfies s r k this is possible since s and p s by assumption based on this local parameter r we construct an where t is taken such that k e k r mediate estimator e t b f e e if k kf r t otherwise applying lemma with and obtains us e f rl e rl e to bound the right hand side of the above inequality we use lemma to obtain s e dl s b tdl t rl b rl b we note that the of the norm k evaluated at consists the set of all symmetric matrices r such that ij if i j ij sign ij if i j and ij ij if i j and ij where ij is the i j entry of then by the conditions there exists a b k b such that b b b b plugging into and adding the term rl b i on both sides of we obtain b b h k k e b z kf hrl b b z h i b b t hrl b z i ii iii next we bound terms i ii and iii respectively for a set e let e c denote its complement with respect to the full index set i j i j d for term i separating to e d and e c d in which d is the set consisting the support of rl 
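For reference, the symmetrized Bregman divergence used in these lemmas can be made explicit for the Gaussian likelihood loss. Assuming the standard loss \(L(\Theta)=\operatorname{tr}(\widehat{C}\Theta)-\log\det\Theta\) (the correlation-based likelihood this paper works with; the algebra is identical with \(\widehat{\Sigma}\) in place of \(\widehat{C}\)):

```latex
\[
\nabla L(\Theta) = \widehat{C} - \Theta^{-1}, \qquad
D_L^{s}(\Theta_1,\Theta_2)
  = \bigl\langle \nabla L(\Theta_1)-\nabla L(\Theta_2),\,\Theta_1-\Theta_2\bigr\rangle
  = \bigl\langle \Theta_2^{-1}-\Theta_1^{-1},\,\Theta_1-\Theta_2\bigr\rangle .
\]
```

Since \(\nabla^2 L(\Theta)=\Theta^{-1}\otimes\Theta^{-1}\), the mean-value argument in the lemma lower-bounds \(D_L^{s}\) by \(\lambda_{\min}(\widetilde{\Theta}^{-1}\otimes\widetilde{\Theta}^{-1})\,\|\Theta_1-\Theta_2\|_F^2\) for some \(\widetilde{\Theta}\) on the segment joining \(\Theta_1\) and \(\Theta_2\), which is where the Kronecker-product and Weyl inequalities enter.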
and b of all diagonal elements and then using the matrix inequality we obtain rl b rl rl e d rl b e d f e c f b b b s d b i h b rl s d i for the last term in the above equality we have b s c b h s c i h s c e e d f e c f b and b for term ii separating the support of obtain h b e d b s c h b e to s d and s c d we b s c b s c b s c i s c plugging into and applying matrix inequality yields h b b b s d b i h b s b h k s kf k b s d i h s i k s c s c kf k s kf k b e c kf k b b s c s kf e kf where we use d in the second equality and e c s c in the last inequality b b for term iii using the optimality condition we have iii rl b plugging the bounds for term i ii and iii back into we find that k r e f k e c kf rl k rl e d f e c kf b s b f e c f f further observing the facts that k e c kf k e rl e f and tk b f we can simplify the above inequality to k ke kf k s kf p p c e min c rl k dividing both sides by k e f e d kf k s kf e kf max f p s b e d kf k c b e kf krl e kf in where we use krl e d kf k c the equality and the last inequality follows from the inequality the fact k kmax and the assumption that kmax therefore by the definition of k k k r ps ps r which implies e b r we obtain k e f from the construction of e thus b satisfies the desired error bound k recall the definition of e t we can bound k b kf s f in terms of lemma sequential bound under the same assumptions and conditions in lemma for b must satisfy k b kf s f rl proof of lemma now if we assume that for all e f we have the following where e is defined in and e c min krl using the matrix inequality we obtain p p s max s and krl s f kmax e kf therefore we have s f rl e f p p e kmax e kmax p p s where the second inequality is due to the assumption that krl kmax the error bound is given by lemma by taking and e e p b rl e f s s f f where last inequality is due to therefore we only need to prove that and hold by induction for we have w u for any u and thus s which implies that and hold for now assume that and hold at for some since i j e implies that i j s and b w ij j w u by assumption and since w x is we b must have u therefore by induction hypothesis we obtain that ij p b e f u b u f u p p s where the second last inequality follows from lemma the fact that and hold at this implies that now for such e c we have k e c kmin w u krl which completes the induction step our next lemma establishes the relationship between the adaptive regularization parameter and the estimator from the previous step lemma assume w t let ij w for some and w w i j then for the frobenius norm k kf we have w s f u f u f proof of lemma by assumption if u then w u otherwise w w u therefore the following inequality always hold w w u then by applying the k triangle inequality we obtain that s f w u u f f our last technical result concerns a contraction property namely how the sequential approach improves the rate of convergence adaptively proposition contraction property assume that assumptions and p hold assume that kmax and s then b satisfies the following contraction property b f krl s kf b f proof of proposition under the conditions of the theorem the proof of lemma yields that e c kmin where e is defined in and k thus applying lemma with b b b f kf s s f w u f u rl e f b u f f i u b f e f rl s f rl f in the next we bound term i separating the support of rl then using triangle inequality we obtain i rl kmax in terms of k b plugging the bound into yields that b rl e f w f z and e e we obtain k s on the other side by lemma we can bound k krl e e to s and e and f p 
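The adaptive weights w(|Theta_ij^(t-1)|) appearing in these lemmas are only pinned down through qualitative properties (bounded by one, decreasing, vanishing for large arguments). One concrete choice consistent with the SCAD penalty used in the simulations is w(u) = p'_lambda(u)/lambda with the Fan–Li derivative and a = 3.7; the sketch below is that choice, not necessarily the paper's exact weight, and leaving the diagonal unpenalized is my assumption.

```python
import numpy as np

def scad_weight(u, lam, a=3.7):
    """w(u) = p'_lambda(u)/lambda for the SCAD penalty: equals 1 for |u| <= lam,
    decreases linearly on (lam, a*lam), and is 0 for |u| >= a*lam."""
    u = np.abs(u)
    return np.where(u <= lam, 1.0,
                    np.maximum(a * lam - u, 0.0) / ((a - 1.0) * lam))

def adaptive_penalty(Theta_prev, lam, a=3.7):
    """Entrywise penalties lambda_ij^(t) = lam * w(|Theta_ij^(t-1)|)."""
    Lam = lam * scad_weight(Theta_prev, lam, a)
    np.fill_diagonal(Lam, 0.0)   # diagonal left unpenalized (assumed convention)
    return Lam
```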
moreover we have the following facts first we have rl e rl by the inequality from the assumption we know krl kmax plugging p these bounds into results that krl e kf krl s kf now by p following a similar argument in lemma we can bound by b e f u b u therefore term i can be bounded by krl s kf u b f plugging the upper bound for i into we obtain f b f krl s kw b u u f now observing that k kmin u thus w u w where is a matrix with each entry equals to and is defined similarly further notice that u we complete the proof b improved convergence rate using sparsity pattern we develop an improved spectral norm convergence rate using sparsity pattern in this section we collect the proof for theorem first and then give technical lemmas that are needed for the proof proof of theorem proof of theorem let us define s i j ij u where u is introij duced in let s i j ij u then lemma implies e f w u f q q e u and thus i j s for any i j e we must have b ij b ij ij therefore applying lemma and using the fact that k kmax u we obtain q p p s b b s s s f on the other side i j s implies that b ij b ij b ij ij b ij exploiting the above fact we can bound q b p ij u in terms of k b b f q s by induction on we obtain q q p b kf max since log log we must have that the right hand side of the above inequality is smaller than which implies that s and b b therefore the estimator enjoys the strong oracle property using lemma obtains us that b b b c applying lemma finishes the proof of theorem s max technical lemmas we start with the definitions of some constants for notational simplicity let and d i i i d define the oracle estimator as o b b argmin c log det d supp recall that smax maxj p i is the maximum degree lemma suppose that the weight function satisfies that w u for u defined p in assume that smax k s if b kmax we must have b and b f proof of lemma if we assume that for all e f we have the following where e is defined in and k kmin e c krl b kmax k s using lemma we obtain that k b k b max p b therefore the assumption of the lemma implies s replacing s by e in lemma and using inequality we have p b b b e e s f f f for we have w u and thus s which implies that and hold for now assume that and hold at for some since j e s implies that j s and w j j w u by assumption and since j w x is decreasing we must have obtain that b e f p b u therefore by induction hypothesis we b f u u u where the last inequality follows from the definition of u hold at implies that now for such e c we have k e c kmin w u krl b kmax which completes the induction step this completes the proof p p s this inequality with some abuse of notation we let i j and u i j the following inequality bounds the regularization parameter w i j in terms of functionals of and lemma let e f s e w for any set e s e must satisfy q w u f j s ij ij u w u p proof by triangle inequality we have k e kf k s kf we further bound k s kf if ij u then we have w ij i ij u otherwise ij ij since because w is and thus ij ij u implies w ij w ij u therefore using the cauchy schwartz inequality completes our proof define the following optimization problem b argmin b c log det d lemma let k satisfy krl b kmax and b s c kmin b b f b p o s then b must s e t b where t is chosen proof we construct an intermediate solution e e such that k kf r if k kf r t otherwise here r satisfies p b s r k b lemma implies that e b b b ds e b rl e rl b e l f then we use lemma to upper bound the right hand side of the above inequality s e b s b b b dl tdl t rl b rl b b plugging the above inequality into we 
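The definition of the oracle estimator is garbled in this extraction; a plausible reading, with S the true support, D the set of diagonal indices and \(\widehat{C}\) the sample correlation matrix, is the support-restricted maximum likelihood estimator:

```latex
\[
\widehat{\Theta}^{\,o}
  \;=\; \operatorname*{arg\,min}_{\;\Theta \succ 0,\;
        \operatorname{supp}(\Theta)\subseteq S\cup D}
        \Bigl\{\operatorname{tr}\bigl(\widehat{C}\,\Theta\bigr)-\log\det\Theta\Bigr\},
\]
```

that is, the unpenalized estimator that is told the true sparsity pattern; the strong oracle property established in this proof says the sequential estimator eventually coincides with it. Here s_max = max_j #{i : Theta*_ij != 0} is the maximum node degree.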
obtain e b b rl b f rl b e b we further control the right hand side of the above inequality by exploiting the first b and rl b s d therefore order optimality condition which is rl b b to the right hand side of and using the optimality adding and subtracting term condition obtains us that e b b b e b rl b e b f z z i ii b it suffices to bound i and ii separately for term i by therefore to bound k e f decomposing the support to s and s c then using matrix inequality we have i s f e b s f s c min vec e b s c again by using the optimality condition we has b ii rl b s c e rl b s c c s max vec e b by plugging the upper bound for i and ii back into we have b s f e e f b b s rl b s c s c min f max s c vec e b by assumption we know that k kmin krl b kmax which implies that the second b term in the right hand side of the above inequality is positive thus we have p e b b s r k b we obtain that e r s f now since f p b b s f b s by the construction of e we must have t f and thus e b recall that is the sparsity pattern matrix corresponding to p b s kmax cn for a sequence lemma if cn and k c cn then we have b max b cn and cn it suffices to show that k k proof of lemma let b max r where e r cn to show this we construct an intermediate estimator t b e b otherwise we choose t such that k e max r if k kmax r and for a matrix a let as be a matrix agreeing with a on s and having elsewhere using the two term taylor expansion we know that there exists a such that e vec rl e vec rl which implies that n vec e e where e s let e e e n vec o l e vec e e e vec e e e t define f vec e to be o ee e e ee vec e in which by the matrix expansion formula that m a f vec e reduces to a x vec e m e using triangle inequality we then obtain that f vec e max j k x etj e m ek a further applying inequality to each single term in the right hand side of the above displayed inequality we have etj e m ek m e e max sm max where we use the fact k smax k kmax therefore we obtain f vec e x e m sm max k kmax which by triangle inequality implies that k e kmax k ee vec n e e o b e b the fact kc b utilizing the kkt condition c e p we obtain k e kmax cn e m max smax k e smax k e kmax smax k e smax k e kmax kmax cn and cn smax cn r smax r which is a contradiction thus e and b satisfies the desired maximum norm bound for the spectral norm bound we utilize lemma and obtain that b the proof is finished c b max cn semiparametric graphical model proof of theorem we need the follows lemma which are taken from liu et al it provides a nonasymptotic probability bound for estimating using s lemma let c be a constant for any n log d with probability at least we have r log d npn sup c n jk the rest of the proof is adapted from that of theorem and thus is omitted d concentration inequality in this section we establish the concentration inequalities which are the key technical tools to the large probability bounds in section lemma tail bound let x xd t be a random vector with covariance such that each xi is with variance proxy then there exists constants and such that for all t with t the b satisfies the following tail probability bound associated sample covariance p ij t exp proof of lemma by the definition of the sample covariance matrix we have bij p p k k k k n xi xj n xi xj therefore we can dep k k n compose bij ij as n ij by applying the union sum bound xi xj we obtain that x n t t k k p bij t xi xj p ij ij n z z in the sequel we bound and separately for term following the argument of lemma in bickel and levina there exists constant and not depending n d 
such that x n t k k p xi xj exp ij n for all t satisfying t next we bound the term by the linear structure p of random variables we obtain that for all p p i therefore by applying lemma we obtain that is a p p random variable with norm bounded by k k k we p p give explicit bounds for the of and by the bound the p tail probability of can be bounded in the following n o p t exp for every random variable z integration by parts yields the identity ez p u du we apply this for z and obtain after change of variables p z u tp that z z n p p o p p p p t pt dt exp t dt p p p p p p which indicates that k k the gamma function is defined as t q t t p therefore we obtain e x dx similary we can bound k n k by jj j q p p k k ii jj max where max max dd define zij p p and write the taylor expansion series of the let e max expoential function we obtain e exp zij x k e z k ij k x k k k max k x max e k e where we use k k in the last second inequality exponenting and using the markov inequalty yields that ee zij p zij t p zij t p e zij e t exp t et for all t using the above result we can boudn as n n nt nt o p zij exp exp nt o combing the bounds for and taking min and min obtain us that ij p t exp t which completes the proof we then develop a large deviation bound for marginal variances lemma large deviation bound for marginal variance let x xd t p be a random vector with covariance such that each xi is subn k gaussian with variance proxy and x be n samples from x let c log then for any we must have n o b ii exp p n c ii ii k proof we write zi k k k k xi e ii n and pn k zi k zi for i let i zi zi for k therefore the function of k i is m k t for t next we control the tail probability i e ii and e ii respectively for the tail probability of e ii by of applying lemma we obtain n n o i i p exp n a n where a supt t log log similarly for any e ii as we obtain the tail probability of n n o i i p exp n b n where b supt t log after some algebra we obtain b log if b otherwise let c min a b log therefore combing the above twon inequalities o by union bound we obtain p n b ii ii i e n exp i ii n n ii ii b ii p n c note that we have thus we obtain n o exp n c our next results characterizes a large deviation bound for sample correlation matrix lemma large deviation bound for sample correlation let x xd t p be a random vector with covariance matrix such that each xi is k n with variance proxy and x be n independent and identically b pn x k x k t denote the sample covariance distributed copies of x let b c bw c denote the sample correlation matrix where w c is the diagonal and c b further let and be the i j th element of matrix with diagonal elements of b c and c respectively define min min then for min maxi we have n p exp ii o where i j b ii b jj b ij to proof of lemma we denote the sample correlation as prove the tail probability bound it suffices to prove the tail probability bound for and respectively we start with the tail probability bound for let us assume that using the basic probability argument we have p a p a b p a b c p a p b c thus for any t we obtain b ij b ii b jj b ii b jj p p b ij t t ii jj ii jj z b ii b jj p next we bound the term after some simple algebra can be bounded by b p t b p t let mini where is defined in lemma if we apply lemma with a better constant and lemma then for any in which is defined in lemma we must have b ij p b ii p p b jj jj jj n o n o exp exp t log t let min further for any min maxi by taking t and using the inequality t log t for all t such that t we obtain p n o n o n o exp n exp n 
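The objects appearing in this lemma (sample covariance, the diagonal matrix of marginal scales, and the induced sample correlation matrix), together with the rescaling step that turns an inverse-correlation estimate into a concentration-matrix estimate, are spelled out in the short sketch below; the 1/n normalization and the function names are my choices.

```python
import numpy as np

def sample_correlation(X):
    """Sample covariance, marginal standard deviations, and sample correlation."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    Sigma_hat = Xc.T @ Xc / n                 # 1/n normalization assumed
    w = np.sqrt(np.diag(Sigma_hat))           # estimated marginal scales
    C_hat = Sigma_hat / np.outer(w, w)        # W^{-1} Sigma_hat W^{-1}
    return Sigma_hat, w, C_hat

def rescale_to_concentration(Omega_hat, w):
    """Since Sigma = W C W, we have Theta = Sigma^{-1} = W^{-1} C^{-1} W^{-1};
    rescaling an inverse-correlation estimate by the marginal scales gives the
    concentration-matrix estimate (the source of the extra log d term)."""
    return Omega_hat / np.outer(w, w)
```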
exp if in the a similar fashion as before we can obtain the the following tail probability bound q b ij t p p ii jj z b ii b jj p to continue we bound the term in the next if take t min maxi q q we obtain that t thus we have p b ij p b ii p b p n o n o exp exp log n o exp where min min min by combining above two cases for min maxi we have p b exp n in a similar fashion we obtain the same tail probability bound for for min maxi thus the proof is completed lemma under the same conditions in lemma we have the following result hold r s lim lim sup p rl m and krl s kf op s max m n n n b proof of lemma it is easy to check that rl s f c s f by applying lemma and the union sum bound for any m such that m p min maxi n in which is defined in lemma we obtain r p rl m s exp m exp m log s s max n q p taking m such that log s m min maxi n and m in the above inequality obtains us that r lim lim sup p rl m s max m n n p which implies that rl s f op b lemma a concentration inequality for sample correlation matrix let c and be defined in lemma suppose n c log take q p b must log d log d in which is defined as in lemma then c satisfy b p c max b therefore applying lemma and proof it is easy to check that rl c union sum bound we obtain that for any min maxi with defined in lemma b p c exp n max where min min in which is definedqin lemma for n d by taking log d we sufficiently large such that n b kmax b kmax obtain p kc p kc the proof is completed n lemma under the same conditions in lemma we have r b b lim lim sup p c c s max m and c s m n n max op n proof of lemma the proof is similar to that of lemma and thus is omitted e preliminary lemmas in this section we state and prove the technical lemmas used in previous sections the following lemma establishes the tail bound type of the product of two random variables let k k and k k be the and defined in vershynin lemma for x and y being two random variables then the absolute value of their product y is a random variable with kx y k kxk ky k proof of lemma to show x y is it suffices to prove that the of x y is bounded by the definition of the we have kx y k sup p p y we need to use the inequality as follows e r s where f and g are two random functions if we choose f x p g y p and r s in the inequality then the right hand side of can be bounded by n o sup p p n n o o sup sup p therefore we obtain that kx y k p ky k the proof is completed s lemma let dl l l l and dl dl dl for t t with t we have that s s dl t tdl proof of lemma let q t dl t l t l rl t since the derivative of l t with respect to t is hrl t i then the derivative of q t is t rl t rl s t therefore the bregman divergence dl can written as s e e dl t rl t rl t t for t s as a by plugging t in the above function equation we have dl special case if we assume that q t is convex then t is and thus s s dl t t tdl therefore the proof is completed it remains to prove that q t is a convex function q q q for such that and we have by the property of the inner product function and using the linearity property of we have the following equality hold rl rl i rl on the other side by the convexity of the loss function l we obtain l l l l by adding and together and using the definition of function q we obtain q q q which indicates q t is a convex function thus we complete our proof lemma let ai bi be square matrices for i then we have a a the next lemma characterizes an upper bound of ka where k is any matrix norm b in terms of ka lemma let a b be invertible for any matrix norm k we have ka b ka ka ka ka we need the 
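The product lemma stated at the start of the preliminary section is the standard fact that a product of two sub-Gaussian variables is sub-exponential. In psi-norm notation it reads as follows; the absolute constant depends on the normalization (it can be taken as 1 for the Orlicz-norm convention and 2 for the moment-based definition in Vershynin's notes), and the constant in the garbled statement is not recoverable here:

```latex
\[
\| X Y \|_{\psi_1} \;\le\; C\, \| X \|_{\psi_2}\, \| Y \|_{\psi_2},
\qquad\text{via}\quad
\mathbb{E}|XY|^{p} \;\le\; \bigl(\mathbb{E}|X|^{2p}\bigr)^{1/2}
                           \bigl(\mathbb{E}|Y|^{2p}\bigr)^{1/2},
\]
```

which is exactly the Cauchy–Schwarz step used in the proof.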
following lemma for bounding the with respect to the kronecker product lemma let a and b be matrices of the same dimension then we have ka a ka b ka and min ka the proof of the above lemma can be carried out by using the definitions and thus is omitted here for simplicity for a matrix a aij we say asp asp ij is the corresponding sparsity pattern sp sp matrix if aij when aij and aij otherwise lemma let a be a matrix such that kakmax let asp be the corresponding sparsity pattern matrix then we have kasp proof of lemma let aij be the i j entry of matrix a and xj the entry of x following the definition of the spectral norm of a matrix we obtain that sup sup sup n n sup x aij xj sgn xj aij xj n n n n aij xj kasp thus the proof is completed b be a definite random matrix a a lemma let a positive definite deterministic matrix then we have b a a a b a b a min a p a p a min b and a are commutative that is aa b aa b then we have if we further assume that a p b a a a a b a p a min b a min a a b a as a b a b a a proof of lemma we first write a the property of the spectral norm that b a a b a b a b a min b a b a a a a a min a then it follows from b a a b b a and thus min a b by weyl s inequality we obtain that min a min a a b b a a thus in the event of a a min a we have min a b min a hold thus it follows from that min a b a a a b a b a min a p a p a min b and a this proves the first desired probability bound if we further assume that a b a are commutative under the event a min a we have b a a b a b a a b a a b a a p b a a a p b a a min a a therefore we prove the third result the following lemma is taken from dembo and zeitouni which leads to a p concentration bound of the empirical means n xi where xi s are random copies of x define the logarithmic moment generating function associated with x to be log mx log e exp x lemma large deviation inequality let the logarithmic moment generating function of x be defined in define the dual of x to be x sup x then for any t we have x n p xi n x n p xi n ex ex where t and t t exp exp n ex inf x and n ex inf x t references bickel and levina regularized estimation of large covariance matrices the annals of statistics dembo and zeitouni o large deviations techniques and applications vol springer science business media horn and johnson matrix analysis cambridge university press liu han yuan wasserman et al semiparametric gaussian copula graphical models the annals of statistics vershynin introduction to the analysis of random matrices arxiv preprint
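The sparsity-pattern lemma quoted here (if the entries of A are bounded by 1 in absolute value, then the spectral norm of A is at most that of its 0/1 pattern A^sp) is easy to sanity-check numerically; the snippet below is such a check, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    d = 50
    # sparse A with entries bounded by 1 in absolute value
    A = rng.uniform(-1.0, 1.0, size=(d, d)) * (rng.random((d, d)) < 0.1)
    A_sp = (A != 0).astype(float)                      # sparsity-pattern matrix
    assert np.linalg.norm(A, 2) <= np.linalg.norm(A_sp, 2) + 1e-9
```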
10
feb group kernels for gaussian process metamodels with categorical inputs and mines umr cnrs limos france alpestat france france arpajon france london school of economics england abstract gaussian processes gp are widely used as a metamodel for emulating computer codes we focus on problems involving categorical inputs with a potentially large number l of levels typically several tens partitioned in g l groups of various sizes parsimonious covariance functions or kernels can then be defined by block covariance matrices t with constant covariances between pairs of blocks and within blocks however little is said about the positive definiteness of such matrices which may limit their practical usage in this paper we exploit the hierarchy and provide a parameterization of valid block matrices t based on a nested bayesian linear model the same model can be used when the assumption within blocks is relaxed giving a flexible parametric family of valid covariance matrices with constant covariances between pairs of blocks as a we show that the positive definiteness of t is equivalent to the positive definiteness of a small matrix of size g obtained by averaging each block we illustrate with an application in nuclear engineering where one of the categorical inputs is the atomic number in mendeleev s periodic table and has more than levels introduction this research is motivated by the analysis of a computer code in nuclear engineering depending on both continuous and categorical inputs one of them having more than levels the final motivation is an inversion problem however due to the heavy computational cost a direct usage of the simulator is hardly possible a realistic approach is to use a statistical emulator or metamodel thus as a first step we investigate the metamodelling of such computer code more precisely we consider gaussian process gp regression models also called kriging models sacks et rasmussen and williams which have been successfully used in sequential strategies for uncertainty quantification see chevalier et whereas there is a flourishing literature on gp regression the part concerned with categorical inputs remains quite limited we refer to zhang and notz for a review as for continuous inputs covariance functions or kernels are usually built by combination of ones most often by multiplication or more rarely by addition deng et the question then comes down to constructing a valid kernel on a finite set which is a positive semidefinite matrix some effort has been spent on parameterization of general covariance matrices pinheiro and bates and parsimonious parameterizations of smaller classes pinheiro and bates some block form have also been proposed qian et in order to deal with a potential large number of levels however their validity was not investigated furthermore to the best of our knowledge applications in gp regression are limited to categorical inputs with very few levels typically less than guided by the application we investigate more deeply the group kernels cited in qian et al defined by block covariance matrices t with constant covariances between pairs of blocks and within blocks we exploit the hierarchy by revisiting a nested bayesian linear model where the response term is a sum of a group effect and a level effect this leads to a parameterization of t which is automatically positive definite interestingly the assumption on within blocks can be relaxed and we obtain a parameterization of a wider class of valid group kernels the positive definiteness condition of t is 
also explicited it is equivalent to the positive definiteness of the smaller covariance matrix obtained by replacing each block by its average as mentioned above this work has some connections with bayesian linear models as well as linear mixed effect models see lindley and smith smith in a hierarchical view other related works concern hierarchical gps with a tree structure for instance particular forms of group kernels are obtained in multiresolution gp models fox and dunson park and choi given two resolution levels and a spatial partition a ai of rd a parent gp on a corresponding to the lowest resolution serves as a trend for children gps on ai corresponding to the highest resolution children gps are independent conditionaly on the parent gp and have the same covariance structure with a lengthscale parameter decreasing with the diameter of ai as a result for a given resolution the covariance matrix has a block form given by a sum of nested block diagonal covariance matrices in comparison the gp corresponding to a categorical input does not assume a conditional independence between children and the block form of covariance matrices can be more general the paper is structured as follows section gives some background on gp regression with mixed categorical and continuous inputs section presents new findings on group kernels section illustrates on synthetic examples section is devoted to the application which motivated this work section gives some conclusions and perspectives for future research background and notations gps with continuous and categorical variables we consider a set of i continuous variables xi defined on a hypercubic domain and a set of j categorical variables uj with lj levels without loss of generality we assume that i and that for each j j the levels of uj are numbered lj we denote x xi u uj and w x u we consider gp regression models defined on the product space j y d lj i and written as yi w i z w i i where z and are respectively the trend the gp part and a noise term there exist a wide variety of trend functions as in linear models our main focus here is on the centered gp z w characterized by its kernel k w cov z w z i kernels qj on d can be obtained by combining kernels on and kernels on lj standard valid combinations are the product sum or anova thus if kcont denotes a kernel for the continuous variables x kcat a kernel for the categorical ones u examples of valid kernels for w x u are written product sum anova k w kcont x kcat u k w kcont x kcat u k w kcont x kcat u for consiseness we will denote by one of the operations sum product or anova the three formula above can then be summarized by k w kcont x kcat u then in turn kcont and kcat can be defined by applying these operations to kernels for continuous variables famous kernels include squared exponential or rasmussen and williams i we denote by kcont xi such kernels i i for a categorical variable notice that as a positive semidefinite function on a finite space a kernel is a positive semidefinite matrix we denote by tj the matrix of size lj corresponding to kernels for uj j j thus examples of expressions for kcont and kcat are written i kcont x kcont kcont xi kcat u tj uj j the formulation given by equations is not the most general one since kernels are not always obtained by combining ones nevertheless it encompasses the gp models used in the literature of computer experiments with categorical inputs it generalizes the kernels very often used and the sum used recently by deng et on the categorical part it also 
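The combined kernels in the displays above are simple to implement once a covariance matrix T has been chosen for each categorical input; a minimal sketch of the product form k(w, w') = k_cont(x, x') * T[u, u'] is given below. The squared-exponential choice for the continuous part and all names are illustrative, not the paper's code.

```python
import numpy as np

def k_cont(x1, x2, lengthscale=0.3):
    """Squared-exponential kernel on the continuous inputs."""
    d2 = np.sum((np.atleast_1d(x1) - np.atleast_1d(x2)) ** 2)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def k_prod(w1, w2, T, lengthscale=0.3):
    """Product kernel on w = (x, u): T is an L x L positive-semidefinite matrix
    over the L levels of the categorical input u (0-based level indices here)."""
    (x1, u1), (x2, u2) = w1, w2
    return k_cont(x1, x2, lengthscale) * T[u1, u2]

# example: 3 levels with a compound-symmetry covariance (variance 1, correlation 0.5)
T = 0.5 * np.ones((3, 3)) + 0.5 * np.eye(3)
value = k_prod((np.array([0.20]), 0), (np.array([0.25]), 2), T)
```

The sum combination simply replaces the product in the last line of k_prod by an addition.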
contains the heteroscedastic case since the matrices tj are not assumed to have a constant diagonal contrarily to most existing works zhang and notz this will be useful in the application of section where the variance of the material is level dependent remark combining kernels needs some care to obtain identifiable models for instance the product of kernels with ki xi i is a kernel depending on only one variance parameter the gp model is identifiable for this new parameter but not for the initial parameters kernels for categorical variables we consider here a single categorical variable u with levels we recall that a kernel for u is then a l by l positive semidefinite matrix kernels for ordinal variables a categorical variable with ordered levels is called ordinal in this case the levels can be viewed as a discretization of a continuous variable thus a gp y on l can be obtained from a gp yc on the interval by using a transformation f also called warping y u yc f u consequently the covariance matrix t can be written t kc f f when kc x depends on the distance then k depends on the distance between the levels distorted by f in the general case f is and defined by l parameters however a parsimonious parameterization may be preferred based on the cdf of a flexible probability distribution such as the normal or the beta we refer to mccullagh for examples in regression and to qian et for illustrations in computer experiments remark notice that usual continuous kernels have values which is a necessary condition if they are valid radial kernels in all dimensions as a consequence the kernels for ordinal variables built by warping do not allow negative correlations between levels kernels for nominal variables for simplicity we present here the homoscedastic case when t has a constant diagonal it is immediately extended to situations where the variance depends on the level by considering the correlation matrix general parametric covariance matrices there are several parameterizations of matrices based on the spectral and choleky decompositions the spectral decomposition of t is written t pdp where d is diagonal and p orthogonal standard parameterizations of p involve the cayley transform eulerian angles householder transformations or givens rotations as detailed in khuri and good and shepard et another general parameterization of t is provided by the cholesky decomposition t ll where l is lower triangular when the variance t does not depend on the level the columns of l have the same norm and represent points on a sphere in rl a spherical parameterization of l is then possible with one variance term and l angles representing correlations between levels see pinheiro and bates parsimonious parameterizations the general parametrizations of t described above require o parameters more parsimonious ones can be used up to additional model assumptions among the simplest forms the compound symmetry cs often called exchangeable covariance matrix assumes a common correlation for all levels see pinheiro and bates the cs matrix with variance v and covariance c is defined by v if t l c if this generalizes the kernel obtained by substituting the gower distance d gower into the exponential kernel corresponding to the cs covariance matrix treats equally all pairs of levels which is an important limitation especially when l more flexibility is obtained by considering groups of levels assume that the l levels of u are partitioned in g groups gg and denote by g the group number corresponding to a level then a desired 
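For the ordinal case just described, a warped kernel can be sketched as follows: the levels are embedded in (0, 1), pushed through a parsimonious monotone transform (here the normal CDF, one of the choices mentioned), and fed to a continuous kernel. The embedding, the parameter names and the squared-exponential choice are assumptions of this sketch.

```python
import numpy as np
from scipy.stats import norm

def ordinal_covariance(L, mu=0.5, sigma=0.25, lengthscale=0.5):
    """Covariance matrix for an ordinal input with L levels via warping:
    T_lm = k_c(F(l), F(m)) with F a normal-CDF warping of the level labels."""
    levels = (np.arange(1, L + 1) - 0.5) / L       # embed levels 1..L in (0, 1)
    f = norm.cdf((levels - mu) / sigma)            # warped (distorted) levels
    D = f[:, None] - f[None, :]
    return np.exp(-0.5 * (D / lengthscale) ** 2)
```

As noted in the remark, all entries of such a T are positive, so negative correlations between levels are out of reach of this construction.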
parameterization of t is given by the block matrix see qian et al v if t cg g if where for all i j g the terms ci i are correlations and ci j i j are correlations notice that additional conditions on the ci j s are necessary to ensure that t is a valid covariance matrix which is developed in the next section block covariance matrices for levels grouping we consider the framework of section where u denotes a categorical variable whose levels are partitioned in g groups gg of various sizes ng without loss of generality we assume that we are interested in parsimonious parameterizations of the covariance matrix t written in block form g g bg wg where the diagonal blocks wg contain the covariances and the blocks bg are constant matrices containing the covariances we denote g g g bg cg jng where js t denotes the s by t matrix of ones this means that the betweengroup covariances only depends on groups and not on levels we will also consider the particular case where diagonal blocks wg are cs covariance matrices with variance vg and covariance cg in this subclass the and covariances only depends on groups and not on levels when the variance term is the same for all groups we obtain block matrices of the form as a special case although the block matrices of the form may be covariance matrices they are not positive semidefinite in general in the next section we provide a proper characterization as well as a parameterization of such matrices which automatically fulfills the positive semidefinite conditions we will use the following additional notations for a given integer l il is the identity matrix of size l jl is the matrix of ones of size l is the vector of ones of size finally for a vector or a matrix m we denote by m the real number equal to the average of its coefficients a gaussian model for cs covariance matrices we first focus on the case of a cs matrix we denote by l v c v c il cjl the cs matrix with a common variance term v and a common covariance term it is that l v c is positive definite if and only if l v c for instance one can check that the eigenvalues of l v c are v l c with multiplicity eigenvector and v c with multiplicity l eigenspace l notice that a cs matrix is positive definite for a range of negative values of its correlation term then we consider the following gaussian model l where n with and are random variables from n with assumed to be independent of a direct computation shows that the covariance matrix of is the cs covariance matrix l clearly this characterizes the subclass of positive definite cs covariance matrices l v c such that c is the full parameterization including negative values of c in the range l v can be obtained by restricting the average of level effects to be zero as detailed in the next proposition proposition when and are related as in the covariance of conditional on zero average errors is a cs matrix with variance v and covariance c conversely given a cs covariance matrix c with variance v and covariance c there exists a representation such that c is the covariance of conditional on zero average errors where c and v parameterization of centered covariance matrices the usage of model to describe cs covariance matrices involves gaussian vectors that sum to zero this is linked to centered covariance matrices covariance matrices f such that f as detailed in the next proposition we further give a parameterization of centered covariance matrices proposition let f be a covariance matrix of size l then f is centered iff there exists a gaussian vector z on rl 
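The eigenvalue characterization of the compound-symmetry matrix quoted here translates into a two-line validity check; the snippet below builds CS(v, c) and verifies the condition -v/(L-1) < c < v numerically (names mine).

```python
import numpy as np

def cs_matrix(L, v, c):
    """Compound-symmetry matrix: v on the diagonal, c off the diagonal."""
    return (v - c) * np.eye(L) + c * np.ones((L, L))

def cs_is_pd(L, v, c):
    """Eigenvalues are v + (L-1)c (multiplicity 1) and v - c (multiplicity L-1)."""
    return v + (L - 1) * c > 0 and v - c > 0

L, v = 5, 1.0
print(cs_is_pd(L, v, -0.20))   # True: some negative correlation is allowed
print(cs_is_pd(L, v, -0.30))   # False: c must exceed -v/(L-1) = -0.25 here
assert cs_is_pd(L, v, -0.20) == bool((np.linalg.eigvalsh(cs_matrix(L, v, -0.20)) > 0).all())
```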
such that f cov in that case let a be a l l matrix whose columns form an orthonormal basis of l then f is written in an unique way f ama where m is a covariance matrix of size l in particular if f v il jl is a centered cs covariance matrix then m and we can choose z n vil the choice of a in prop is free and can be obtained by normalizing the columns of a l l helmert contrast matrix venables and ripley l a hierarchical gaussian model for block covariance matrices let us now return to the general case where the levels of u are partitioned in g groups it will be convenient to use the hierarchical notation indicating that belongs to the group gg then we consider the following hierarchical gaussian model g g gg where for each g the random variable represent the effect of the group g and the random variables represent the effects of the levels in this group we further assume the vector is normal n the vectors are normal n the vectors are independent the vectors and are independent as an extension of prop the next proposition and cor show that gives a parameterization of positive semidefinite matrices of the form with cs diagonal blocks under the additional assumption that the average of level effects is zero in each group more generally we obtain a large parametric family of positive semidefinite matrices of the form proposition the covariance matrix of conditional on g g has the form with for all g g g wg g g jng fg bg g jng where fg is a centered positive semidefinite matrix equal to cov therefore wg wg jng is positive semidefinite for all g conversely consider a positive semidefinite matrix t having the block form such that wg wg jng is positive semidefinite for all diagonal blocks e be the g g matrix obtained by averaging each block of then there let t exists a representation such that t is the covariance of conditional on zero average errors g g with cov e t wg wg jng corollary positive semidefinite matrices of the form with cs diagonal blocks exactly correspond to covariance matrices of in conditional on the g constraints when cov ing as a we obtain a simple condition for the validity of block covariance matrices of the form interestingly it only involves a small matrix whose size is the number of groups proposition let t be a matrix having the block form such that wg wg jng is positive semidefinite for all g then e is positive semidefinite i t is positive semidefinite if and only if t e is positive definite and the diagonal ii t is positive definite if and only if t blocks wg are positive definite for all g furthermore we have e diag wg wg jn t xtx g where x is the n g matrix x remark all the results depend on the conditional distribution thus there is some flexibility in the choice of since several matrices can lead to the same conditional covariance matrix cov remark groups of size prop is still valid for groups of size indeed if ng then is degenerate and equal to thus fg wg wg is positive semidefinite related works model shares similarities with bayesian models and linear mixed effect models see lindley and smith with gaussian priors for the effects and the centering constraints are also standard identifiability conditions in such models furthermore the particular case of cs covariance matrices corresponds to the exchangeable assumption of the corresponding random variables typically in the framework of linear modelling model could be written as yg m with additionals grand mean m and errors however if the framework is similar the goal is different in linear modelling the aim is to 
quantify the effects by estimating their posterior distribution on the other hand we aim at investigating the form of the covariance matrix of the response part or equivalently the covariance matrix of the likelihood summary and comments the results of the previous sections show that a wide class of valid block covariance matrices can be parameterized by a family of covariance matrices of smaller sizes this class is formed by positive definite matrices of the form such that wg wg jng is positive semidefinite for all g it contains the case where diagonal blocks are cs covariance matrices the algorithm is summarized below generate a covariance matrix of size for all g g if ng set fg else generate a covariance matrix mg of size ng compute a centered matrix fg ag mg a g where ag is a ng ng matrix whose columns form an orthonormal basis of ng for all g g g compute the blocks wg and blocks bg by eq in steps and the covariance matrices and mg can be general and obtained by one of the parameterizations of however some specific form such as cs matrices can also be chosen depending on the number of groups and their sizes different levels of parsimony can be obtained table summarizes some possibilities notice that it may be hard to choose the parametric setting mg in order to account for a specified constraint of the block matrix t such as homoscedasticity an alternative to is to use the economic constraint on t of prop indeed positive definiteness on t is e of size equivalent to positive definitiness of the small matrix t parametric setting mg cs ing general ing cs general general general resulting form of t wg bg cs vg cg cg vg cg cg general cg general cg number of parameters g p ng ng g pg ng g table parameterization details for some valid matrices t of the form examples before considering the application in nuclear engineering we consider two toy functions with one continuous input and one categorical input which reproduce two specificities of that application the first function mimics a situation where the output variance depends on the level of a categorical input the second one investigates level grouping when the number of levels is large example an heretoscedastic case consider the deterministic function cos f x u cos cos if u if u if u where x and u the expression of f is adapted from han et by scaling the three output curves f u according to the level of u thus the output variance clearly depends on the level as visible in figure these three curves are strongly dependent with a positive link between f x and f x and negative between f x and f x figure test function f x in black f x in red and f x in green the design points correspond to one realization of a sliced lhd the aim is to compare the accuracy of four gp models by reconstructing f with few evaluations for all of them a kernel rasmussen and williams is chosen for x the first gp model ind consists in three independent gps corresponding to the levels of u the other ones have tensor product kernels with three different covariance matrices t for u the first two ones assume a constant variance t is defined by a cs covariance structure eq or by a general spherical parameterization sph see section finally we consider a heteroscedastic spherical parameterization where the covariance matrix is defined by a general variance vector and a spherical parameterization of the correlation matrix in order to benefit from the strong link between levels we use a design that spreads out the points between levels for instance the information given by f 
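The four-step algorithm summarized above can be sketched directly in code: draw a G x G group-level covariance, build a centered within-group part F_g = A_g M_g A_g^T from normalized Helmert contrasts, and assemble the blocks W_g = Gamma_gg J_{n_g} + F_g and B_{g,g'} = Gamma_{g,g'} J. The random choices of Gamma and M_g and the helper names are mine; the final asserts check positive semidefiniteness and that averaging each block recovers the small G x G matrix of the proposition.

```python
import numpy as np

def helmert_basis(n):
    """n x (n-1) matrix whose columns form an orthonormal basis of the orthogonal
    complement of the all-ones vector (normalized Helmert contrasts)."""
    H = np.zeros((n, n - 1))
    for j in range(1, n):
        H[:j, j - 1] = 1.0
        H[j, j - 1] = -float(j)
        H[:, j - 1] /= np.linalg.norm(H[:, j - 1])
    return H

def random_spd(k, rng):
    """A random symmetric positive-definite matrix of size k (illustrative)."""
    A = rng.standard_normal((k, k))
    return A @ A.T + 1e-6 * np.eye(k)

def group_block_covariance(sizes, rng):
    """Valid block covariance matrix T from the hierarchical parameterization."""
    G, n = len(sizes), sum(sizes)
    Gamma = random_spd(G, rng)                        # group-level covariance
    edges = np.cumsum([0] + list(sizes))
    T = np.zeros((n, n))
    for g, ng in enumerate(sizes):
        sg = slice(edges[g], edges[g + 1])
        Fg = np.zeros((ng, ng))
        if ng > 1:                                    # groups of size 1: F_g = 0
            A = helmert_basis(ng)
            Fg = A @ random_spd(ng - 1, rng) @ A.T    # centered within-group part
        T[sg, sg] = Gamma[g, g] + Fg                  # W_g = Gamma_gg * J + F_g
        for h in range(g + 1, G):
            sh = slice(edges[h], edges[h + 1])
            T[sg, sh] = Gamma[g, h]                   # constant between-group block
            T[sh, sg] = Gamma[g, h]
    return T, Gamma

rng = np.random.default_rng(1)
sizes = [3, 1, 4]
T, Gamma = group_block_covariance(sizes, rng)
assert (np.linalg.eigvalsh(T) >= -1e-8).all()         # T is positive semidefinite

# averaging each block gives the small G x G matrix; it equals Gamma here
# because each F_g is centered
edges = np.cumsum([0] + sizes)
T_tilde = np.array([[T[edges[g]:edges[g + 1], edges[h]:edges[h + 1]].mean()
                     for h in range(len(sizes))] for g in range(len(sizes))])
assert np.allclose(T_tilde, Gamma)
```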
may be useful to estimate f x and f x at without computing f and f more precisely we have used a random sliced latin hypercube design slhd qian with points by level for a total budget of points parameter estimation is by maximum likelihood as the likelihood surface may be multimodal we have launched several optimizations with different starting points chosen at random in the domain model accuracy is measured over a test set formed by a regular grid of size in terms of criterion the criterion has a similar expression than but is computed on the test set p yi q pi i yi where the yi denote the observations on the test set their mean the predictions it is negative if the model performs worst than the mean positive otherwise and tends to when predictions are close to true values finally the process is repeated times in order to assess the sensitivity of the result to the design we observe that the heteroscedastic model clearly outperforms the other ones as expected the estimated variances for this model are whereas a constant variance is wrongly estimated around by cs and sph moreover we have represented in figure an estimation of the correlations between levels deduced from the parameterized covariance matrix t for representativeness we have chosen one of the designs used to generate figure such that the corresponding is the closest to the median of the values we can see that the strong dependence link of f u between the levels of u is only recovered correctly by with a poor estimation of correlation parameters for the other ones example levels grouping the second function is defined as x g x u cos p u ind cs sph figure criterion for four gp models based on repetitions of the design u with x u and p u as visible in figure there are two groups of curves corresponding to levels and with strong correlations and strong negative correlations we aim at reconstructing g with five gp models based on levels grouping the first one uses a cs covariance matrix corresponding to a single group the second one considers the two groups and the third model based on the five groups has two variants a when the correlation is constant and b in the general case the fourth model uses the spherical parameterization of t leading to groups and the last one considers an ordinal paramaterization for t the design of experiments is a slhd the remaining simulation settings are the same as in example the estimated correlation parameters are shown in figures the right correlation structure is well recovered with two groups and five groups with different correlations the model with thirteen groups involves the estimation of parameters which is hard to achieve especially with points this is visible in the erratic values of the estimated correlations values which seem not meaningful on the opposite considering only one group or five groups with a common correlation oversimplifies and fails at recovering the right correlations the ordinal cs sph figure estimated correlation parameters among levels for f for a design of experiments corresponding to a median figure test function nel recovers the two blocs of curves but can not detect negative correlation between them remark in figure we can see that the best tradeoff between prediction accuracy and parsimony is obtained with two groups whereas it reduces the number of observations by group notice the rather good performance of the ordinal model at the cost of a larger number of parameters the warping was parameterized by an affine function see section this is noticeable since 
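The Q2 criterion described here is one minus the ratio of the squared prediction error to the total variance around the test-set mean; a one-function sketch (names mine):

```python
import numpy as np

def q2(y_test, y_pred):
    """Q2 = 1 - sum((y_i - yhat_i)^2) / sum((y_i - ybar)^2), on a test set.

    Negative when the model does worse than the test mean, close to 1 when
    predictions are close to the true values."""
    y_test = np.asarray(y_test, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 1.0 - np.sum((y_test - y_pred) ** 2) / np.sum((y_test - y_test.mean()) ** 2)
```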
negative correlations are not possible see above this may be due to the larger number of levels combinations which reduces the influence of the negative correlations group groups b groups groups a groups ordinal figure estimated correlation parameters among levels for g based on a representative design of experiments design with median application in nuclear engineering position of the problem as presented in introduction this research is originally motivated by the solving of an inverse problem confronting experimental measurements in nuclear engineering and numerical simulation more precisely this analysis concerns the identification of the mass m of pu that is present in a particular waste container using a nuclear detection technique such as the gamma spectrometry knoll in that case at each energy level e m e e y sg e where y sg e is the quantity of interest provided by the gamma transmitter and e e is the attenuation coefficient which depends on the source group groups groups groups b groups ordinal figure of six gp models based on repetitions of the design number of parameters used bloxplot order ronment denoted by in practice only discrete values of e are of interest corresponding to the natural energy levels of pu e kev then based on previous studies guillot the real source environment is parameterized by the following input variables an equivalent geometric shape for the nuclear waste sphere sph cylinder cyl or parallelepiped par an equivalent material for this waste characterized by its chemical element with atomic number in the bulk density of the waste in the distance of measurement between the container and the measurement device in cm the mean width and lateral surfaces in logarithmic scale crossed by a gamma ray during the rotation of the object after normalization the characteristics of the input space can be summed up in table name of the input distance density width surface energy shape chemical element variation domain sph cyl par table description of the input variables for the nuclear application to recapture the notation of the previous sections let x and u be the vectors gathering respectively the continuous and categorical inputs and w x u for a given value of w monte carlo simulation codes as mcnp goorley et can be used to model the measured scene and approach the value of w e e the mass m can eventually be searched as the solution of the following optimization problem m w arg min ky obs m w k m w where k k is the classical euclidian norm w and y obs respectively gather the values of and y sg at the six values of e that are used for the measurements to solve it is therefore necessary to compute at a high number of points however each evaluation of the mcnp code can be extremely demanding between several minutes to several hours cpu for one evaluation thus surrogate models have to be introduced to emulate the function w w which is now investigated in the frame of gaussian process regression we refer to clement et al for the second step namely the treatment of the inversion problem model settings for pedagogical purpose a dataset of large size n has been computed with the mncp code the construction of the design of experiments was guided by the categorical inputs such that each of the combinations of levels appears times it was completed by a latin hypercube of size n to define the values of the four continuous inputs from this full dataset a training set of size n is extracted by selecting at random observations by chemical element the remaining n n points serve as 
test set y y sph y in function of the energy cyl par y in function of the geometric shape figure y in function of the energy and geometric shape model settings are now motivated by a graphical analysis in figure the output is displayed in function of the energy and the geometric shape we observe that successive energy levels correspond to close values this fact confirms that the energy is ordinal and we use the warped kernel defined by eq the influence of the geometric shape is less obvious and we have chosen an exchangeable cs covariance structure for it in figure y is displayed in function of the chemical elements ordered by atomic number two important facts are the high number of levels and heteroscedasticity for this purpose the chemical elements are divided into groups provided by expert knowledge and represented by colors this partition suggests to use a group kernel of the form where the blocks wg are cs covariance matrices in order to handle heteroscedasticity the variance of wg is assumed to depend on the group number the influence of continuous variables can be observed by panels not represented and does not reveal useful information for our purpose a kernel is set for all continuous inputs as we expect the output to be a regular function of the continuous inputs indeed for this kernel the corresponding gaussian process is two times differentiable finally three candidate kernels for w are obtained by combining the kernels of input variables defined above by sum product or anova see section y figure y in function of chemical elements ordered by atomic number results following the model settings detailed above figure panel presents the results obtained with random designs of size n and three operations on kernels furthermore we have implemented three other kernels for the chemical element in order to compare other model choices for this categorical input in the first panel we grouped all the levels in a single group in the second one we kept the kernel but forced the covariances to have a common value finally in the fourth panel we considered that the levels were ordered by their atomic number and used the warped kernel of eq with a normal transform figure of several gp models based on random designs corresponding to different model choices for the chemical element first panel single group second panel groups with a common covariance third panel groups fourth panel ordered levels total number of parameters used in the panel order prod add prod anova prod first comparing the three operations on kernels we remark that in all the panels additive kernels provide the worst results this suggests the existence of interactions between different inputs of the simulator second the anova combination produces slight improvements compared to the standard both in terms of accuracy and stability with respect to design choice now comparing the four panels we see that gathering the levels in a single group is the least efficient strategy the kernel gives very good performances especially when the covariances vary freely constraining them to be equal degrades the result surprisingly here the ordinal kernel gives the best performance indeed for this application it was not intuitive to the experts that the chemical element can be viewed as an ordinal variable simply sorted by its atomic number this is confirmed by the correlation plots of figure corresponding to a model with a median score we can see that the estimated correlations between levels seems to decrease as the difference between levels 
increases an indication that the levels may be ordered by their atomic number finally we report several results first the estimated transformation of energy levels figure is concave and flat near high values which corresponds to the behaviour observed in figure left panel in addition the last three levels lead to similar results figure this corresponds to the fact that when the energy is high the gamma ray almost always crosses the nuclear waste leading to a high value for the output second the estimated correlation among the sphere the cylinder and the parallelepiped is very high c figure this justifies considering a covariance structure for that categorical input rather than using three independent gp models for all the three levels conclusion in the framework of gp regression with both continuous and categorical inputs we focused on problems where categorical inputs may have a potentially large number of levels l partitioned in g l groups of various sizes we provided new results about parsimonious block covariance matrices defined by a few and covariance parameters a groups common betweengroup covariance b groups general figure estimated correlation parameters among the chemical element a transformation b correlations figure estimated correlation parameters for the energy figure estimated correlation parameters for the geometric shape we revisited a nested bayesian linear model where the response term is defined as a sum of a group effect and a level effect we obtained a flexible parameterization of block covariance matrices which automatically satisfy the positive definiteness conditions as a particular case we recover situations where the covariance structures are compound symmetry with possible negative correlations furthermore we showed that the positive definiteness of a given block covariance matrix can be checked by verifying that the small matrix of size g obtained by averaging each block is positive definite this criterion can be useful if the proposed block matrix has a desirable constraint such as homoscedasticity which is not directly handled by the proposed parameterization we applied these findings on several toy functions as well as an application in nuclear engineering with continuous inputs categorical inputs one of them having levels corresponding to chemical numbers in mendeleev s table in this application groups were defined by experts the results measured in terms of prediction accuracy outperform those obtained with oversimplifying assumptions such as gathering all levels in a same group on the other hand when the categorical input can be viewed as an ordinal one plugging the right order into warped kernels has lead to slightly better results in our experiments there are several perspectives for this work firstly one future direction is to find a technique to recover groups of levels this may be not an easy task due to the small number of observations available in the context of gp regression similarly if there is an order between levels can we infer it from the data secondly the trend of the gp models has been fixed to a constant more complex forms based on linear models could be explored software information and acknowledgements implementations have been done with the r packages mixgp and kergp deville et illustrations use wickham and corrplot wei and simko this research was conducted within the frame of the chair in applied mathematics oquaido gathering partners in technological research brgm cea ifpen irsn safran storengy and academia cnrs ecole centrale de 
lyon mines university of grenoble university of nice university of toulouse around advanced methods for computer experiments appendix proof of proposition the vector is a centered gaussian vector with covariance matrix il l l hence the conditional distribution of knowing is a centered gaussian vector with covariance matrix cov il l il l jl then by using the independence between and the s we deduce cov jl il jl il jl we recognize the cs covariance matrix l v c with v l and c l as a covariance matrix it is positive semidefinite furthemore we have c v and c l v l and the conditions of positive definiteness are satisfied conversely let c be a positive definite cs matrix l v c then we have v c v and we can define l v c and v from the direct sense we then obtain that the covariance matrix of is l v c proof of proposition the first part of the proposition is obtained by remarking that if f cov z then f cov z thus assuming that z is centered f is equivalent to z with probability for the second part notice that z means that z is orthogonal to thus one can write the expansion of z in the orthonormal basis l defined by denoting by t the l of coordinates we have z at this gives f cov at a cov t a and follows with m cov t to prove unicity observe that by definition a a a starting from f ama and multiplying by a on the left and by a on the right we get m a fa showing that m is unique now let f v il jl since jl l we obtain m a fa v a a a l a as a notice that resubstituting m into f ama gives aa il jl finally if z n vil then the properties of conditional gaussian vectors lead immediately to cov proof of proposition the expressions of wg and bg are obtained directly by using the independence assumptions about and the s notice that fg the covariance matrix of knowing is centered by proposition this gives wg wg jng fg which is positive semidefinite conversely let t be a positive semidefinite matrix of the form such e be that wg wg jng is positive semidefinite for all g let t e is also a the matrix obtained from t by averaging each block then t positive semidefinite matrix indeed since t is positive semidefinite it is e is the covariance matrix the covariance matrix of some vector z then t p of e z the vector obtained from z by averaging by group e zg g z thus there exists a centered gaussian vector whose covariance matrix is e now for g g define fg wg g g jng wg wg jng observe that fg and by assumption fg is positive semidefinite hence from proposition there exists a centered gaussian vector such that fg cov we can assume that are independent and and are independent finally we set by the direct sense and we obtain that t is the covariance matrix of conditional on g g proof of corollary let t be a positive semidefinite matrix of the form with cs diagonal blocks then the diagonal cs matrices are positive semidefinite leading to vg cg thus wg wg jng vg cg ing g jng is a positive semidefinite cs matrix hence by prop t is obtained from model with cov vg cg ing g jng by prop last part we can choose ing with vg cg conversely if g ing then by prop fg cov is a cs covariance matrix the result follows by prop proof of proposition the direct sense of i has already been derived in e the proof of prop furthermore inspecting that proof we see that if t is positive semidefinite then t admits the representation thus t is a covariance matrix and positive semidefinite for ii a proof is available in roustant and deville notice that we need to add the condition that wg is positive definite for all g however adding an equivalent 
condition for i namely that wg is positive semidefinite was not necessary indeed it is a consequence of the fact that e is positive semidefinite wg wg jng is positive semidefinite and that t e g g wg which implies t finally eq is direct references chevalier bect ginsbourger vazquez picheny and richet fast parallel stepwise uncertainty reduction with application to the identification of an excursion set technometrics clement saurel and perrin stochastic approach for radionuclides quantification epj web url https deng lin liu and rowe additive gaussian process for computer models with qualitative and quantitative factors technometrics deville ginsbourger and roustant kergp gaussian process laboratory url https contributors durrande r package version fox and dunson multiresolution gaussian processes in pereira burges bottou and weinberger editors advances in neural information processing systems pages curran associates url http goorley fensin and mckinney users manual version may gower euclidean distance geometry math sci guillot quantification gamma de par phd thesis blaise pascal ii france han santner notz and bartel prediction for computer experiments having quantitative and qualitative input variables technometrics issn khuri and i good the parameterization of orthogonal matrices a review mainly for statisticians review paper south african statistical journal knoll germanium detectors volume john wiley sons lindley and smith bayes estimate for the linear model with discussion part journal of the royal statistical society ser b mccullagh regression models for ordinal data journal of the royal statistical society series b methodological park and choi hierarchical gaussian process regression in sugiyama and yang editors proceedings of asian conference on machine learning volume of proceedings of machine learning research pages pinheiro and bates models in s and statistics and computing springer new york pinheiro and bates unconstrained parametrizations for variancecovariance matrices statistics and computing qian sliced latin hypercube designs journal of the american statistical association qian wu and wu gaussian process models for computer experiments with qualitative and quantitative factors technical report department of statistics university of wisconsin rasmussen and williams gaussian processes for machine learning the mit press roustant and deville on the validity of parametric block correlation matrices with constant within and between group correlations working paper or preprint may url https sacks welch mitchell and wynn design and analysis of computer experiments statistical science shepard brozell and gidofalvi the representation and parametrization of orthogonal matrices the journal of physical chemistry a smith bayes estimates in and models biometrika venables and ripley modern applied statistics with springer edition wei and simko corrplot visualization of a correlation matrix r package version wickham elegant graphics for data analysis new york isbn url http zhang and notz computer experiments with qualitative and quantitative variables a review and reexamination quality engineering
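To make the block covariance construction summarized earlier in this paper concrete, here is a minimal Python sketch. Because the equation references and formulas were stripped from this text, the block forms W_g = Γ[g,g]·J + F_g for diagonal blocks and B_{g,h} = Γ[g,h]·J for off-diagonal blocks are an assumption based on the surrounding description of the nested group/level model (a small G x G covariance Γ for the group part, plus centered within-group parts F_g = A_g M_g A_g^T); the function names are illustrative and do not come from the kergp or mixgp packages.

import numpy as np

def random_spd(k, rng):
    # draw a random symmetric positive definite matrix of size k
    a = rng.standard_normal((k, k))
    return a @ a.T + k * np.eye(k)

def centering_basis(n):
    # columns form an orthonormal basis of the subspace orthogonal to the all-ones vector
    q, _ = np.linalg.qr(np.column_stack([np.ones(n), np.eye(n)[:, : n - 1]]))
    return q[:, 1:]                      # n x (n - 1), each column orthogonal to 1

def block_covariance(group_sizes, seed=0):
    # step 1: a small G x G covariance Gamma for the group effects
    # step 2: centered within-group parts F_g = A_g M_g A_g^T (F_g = 0 when n_g = 1)
    # assembly (assumed forms): W_g = Gamma[g, g] * J + F_g,  B_{g, h} = Gamma[g, h] * J
    rng = np.random.default_rng(seed)
    gamma = random_spd(len(group_sizes), rng)
    rows = []
    for g, ng in enumerate(group_sizes):
        if ng == 1:
            Fg = np.zeros((1, 1))
        else:
            Ag = centering_basis(ng)
            Fg = Ag @ random_spd(ng - 1, rng) @ Ag.T
        row = []
        for h, nh in enumerate(group_sizes):
            block = gamma[g, h] * np.ones((ng, nh))
            if g == h:
                block = block + Fg
            row.append(block)
        rows.append(row)
    return np.block(rows), gamma

sizes = [3, 2, 4]
T, gamma = block_covariance(sizes)
print(np.linalg.eigvalsh(T).min() > 0)        # T is a valid (positive definite) covariance

# averaging each block gives a small G x G matrix; its positive definiteness is the
# validity criterion discussed in the text, and here it recovers Gamma itself
idx = np.cumsum([0] + sizes)
T_avg = np.array([[T[idx[g]:idx[g + 1], idx[h]:idx[h + 1]].mean()
                   for h in range(len(sizes))] for g in range(len(sizes))])
print(np.allclose(T_avg, gamma))

Choosing M_g (and Γ) as compound symmetry matrices instead of general covariances would give the more parsimonious forms listed in the parameterization table above; the sketch uses general matrices only to keep it short.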
10
may a survey on trapping sets and stopping sets may aiden price and joanne hall member ieee codes are used in many applications however their error correcting capabilities are limited by the presence of stopping sets and trappins sets trappins sets and stopping sets occur when specific error patterns cause a decoder to fail trapping sets were first discovered with investigation of the error floor of the margulis code possible solutions are constructions which avoid creating trapping sets such as progressive edge growth peg or methods which remove trapping sets from existing constructions such as graph covers this survey examines trapping sets and stopping sets in ldpc codes over channels such as bsc bec and awgnc index codes trapping sets stopping sets qcldpc codes margulis codes awgnc peg algorithm graph covers i ntroduction s technology advances we wish to communicate over longer distances and have the ability to stay connected even over poor communication channels while codes are one of the best ways to achieve this their performance in many cases is limited by the presence of trapping sets and stopping sets trapping sets and stopping sets can cause iterative decoding methods to fail with relatively few errors finding ways to avoid or remove trapping sets and stopping sets will further improve the already high performance of ldpc codes and bring their performance curves even closer to the shannon limit performance optmization is becoming incresingly crucial as the world moves further into the digital age an increase in the speed at which digital communication occurs through modern applications such as wifi and has drastic implications on the overall productivity of the world in gallager introduced codes ldpcs ldpc codes are a class of binary linear block codes with a sparse matrix an advantage of using ldpc codes is that they are able to provide error control which is very close to the capacity for many different channels this categorizes ldpc codes as one of few capacityapproaching codes error correction methods which can allow the noise in a channel to be set very close to its theoretical maximum while maintaining ability the performance of an error correction method is based upon two properties the performance of such a code over a channel with variable noise and the optimal bit error ratio ber of a code with sufficient ratio snr the optimal ber is known as the error floor of a code and a a price is with the science and engineering faculty queensland university of technology queensland qld australia j hall is with the school of science royal melbourne institute of technology melbourne manuscript received mo date year revised mo date year is discussed in different papers in terms of bit error rate ber frame error rate fer block error rate and symbol error rate depending on the application being addressed see for examples of such error floor analysis consideration of the error floor is one of the most important aspects of constructing a ldpc code analysis of the performance of ldpc codes over the binary erasure channel bec led to the discovery of stopping sets in the margulis construction improved upon the performance of gallager codes though a weakness in this construction led to a high error floor over the additive white gaussian noise channel awgnc compared to the performance of other constructions of the time this high error floor was due to the presence of stopping sets stopping sets over bec as described in became a well understood problem and led to the definition of trapping 
sets which are defined over awgnc and bsc in some early works trapping sets are called words trapping sets and stopping sets are an important topic worthy of a survey ii p reliminaries and n otation in order to engage with the literature on stopping sets and trapping sets an overview of the preliminaries is necessary we provide a short review of the literature surrounding ldpc codes the common transmission channels and common decoding techniques definition a binary n k d linear code c is a subspace of an vector space which is used to provide structure to a message vector for transmission over a channel in order to transmit messages over communication channels using an error correcting code we encode the message using a generator matrix definition the generator matrix g of a code c is a matrix which has dimensions k the k rows of g correspond to linearly independent code words which form a basis of one of the most important aspects of error correction is the process of decoding a matrix allows us to identify whether errors have been introduced during transmission this matrix can also be represented by a tanner graph definition a matrix h of c is a matrix which generates the nullspace of the code this means that a code word c is in the code c iff h ct where is an r null vector h has dimensions n k definition the matrix h may be represented by a bipartite graph with variable node set v and check node set this bipartite graph is denoted g h v e may where the columns of h indicate the variable nodes in v and the rows of h indicate the check nodes in for i v and j c i j e if and only if hi j this bipartite graph is known as a tanner graph with r n k check nodes and n variable nodes see fig we can also refer to individual variable nodes let the variable node be vi and the check node be c j definition if a matrix of a code is sparse then the corresponding code c is called a ldpc code we note that the classification of sparse used in the context of ldpc codes is that there are fewer ones in the matrix than there are zeros the sparse nature of ldpc codes means that decoding processes have a fast as there are fewer operations to compute when compared to a matrix two more important features of tanner graphs are neighbours and node degrees definition for a variable node vi and check node c j if i j e we say that nodes vi and c j are neighbours the degree of a node in the tanner graph is defined as the number of edges it is connected to from the node degree definition we can also define regular ldpc codes definition an ldpc code is called dv dc if each variable node v has degree dv and each check node c has degree dc we denote an ldpc code of this form a c dv dc code ldpc codes are designed to be used as methods over a variety of communication channels there are three communication channels discussed in this paper the binary erasure channel bec the binary symmetric channel bsc and the additive white gaussian noise channel awgnc though these channels handle data transmission in different ways their encoding and decoding goals are the same the upper bound on the error correcting ability of an ldpc code is determined by the minimum distance of the code in order to define the minimum distance of a code we will first define hamming weight and hamming distance definition the hamming weight w of a vector is the number of its elements the hamming weight of a binary vector is therefore the number of ones in the vector the hamming distance d between two vectors x and y is the number of places in which they differ 
written as d x y in the literature hamming weight and hamming distance are often referred to using the terms weight and distance the weight of code words affects the number of operations performed in decoding and the distance between code words affects how many errors can be corrected definition the minimum distance of a code c is defined as the smallest hamming distance between any two code words in the code encoding and verification the distance between these codewords is given as d take the following error vector the process of transforming a message vector into its associated code word is known as encoding every code word c c can be expressed as c m where m is the mesage vector the code word c has the original k information bits as well as an additional r parity bits to give the code word a length of n bits as the matrix h is the nullspace of the code we can use it as a verification method to test if a recieved vector is a code word the product h ct is denoted the syndrome s of c through for a given vector v v c iff s h vt there must be an even number of ones in the components of the product which add to give s this is known as the constraint after a code word c is transmitted through a channel the other party receives a vector if s vt then v c and so we use error correcting techniques in attempt to correct v and recover d c min d x y x y c x y the following theorem and corollary describe a code s error detection and correction abilities using minimum distance theorem a code c can detect up to s errors in any code word if d c s a code c can correct up to t errors in any code word if d c corollary if a code c has minimum distance d then c can be used to either detect up to d errors or to correct up to d errors in any code word if the minimum distance of a code is too small then it can not provide sufficient error correction this is demonstrated in example example let two code words be given as e if e is added to then the resulting code word is identical to demonstrating the importance of minimum distance the minimum distance of an ldpc code is also related to its a large code rate lowers the upper bound on the minimum distance of a code definition the code rate r c of an ldpc code is the portion of information bits sent in comparison to the entire code vector sent written as r c k n where r c the code rate and minimum distance often determine the error correcting capability of an ldpc code though the decoding algorithm plays a direct part in the time it takes to decode messages may e e a b e e c d fig a irregular tanner graph a used to demonstrate the er decoding algorithm with the received vector v e e e over bec nodes of interest are highlighted gray b shows steps and of the er algorithm s first iteration c shows the changes made in step and then step of the second iteration d shows the changes made in step again this time revealing that no further erasures exist the algorithm then terminates on step of this iteration thus successfully correcting the received vector in iterations communication channels and decoding basics the communication channel by which transmission occurs impacts the error correction algorithms that are chosen a communication channel can be modelled as a triple which contains an input alphabet an output alphabet and the probability of transition between a symbol in the input alphabet and a symbol in the output alphabet the binary erasure channel bec is one of the simplest channel models definition the binary erasure channel bec is a communication channel with two 
input symbols and and three output symbols and e the erasure symbol the bec has an erasure probability p where given an input ci the output vi is defined by the probability formulae p vi ci p and p vi e analysis of the bec significantly advanced modern understanding of error correction an example of a simple decoding process over the bec is the edge removal algorithm definition let c c n be a binary code word transmitted over the bec and v e n be the received vector the edge removal er algorithm proceeds as follows initial step the value of each received vector bit vi is assigned to each variable node i v of the tanner graph the check nodes ci c count the number of erased bits which are neighbours in the tanner graph g h if check node ci neighbours only one e symbol in v the even parity constraint uniquely determines the original value of e for that variable node repeat steps and until either all erasures have been recovered or until every check node that is a neighbour of an erased bit is a neighbour of at least two erased bits for step above if the latter occurs then the decoder has failed due to the presence of a stopping set see section iv we provide an example of the er decoding process in fig where represents a variable node and a check node this example is found in for this decoding example we use an irregular paritycheck matrix h and demonstrate the decoding of received vector v e e e using the er algorithm over the bec another communication channel is the binary symmetric channel bsc definition the binary symmetric channel bsc is a communication channel with two input symbols and and two output symbols also and the bsc has an error probability p where given an input ci the output vi is defined by the probability formulae p vi ci p and p vi an example of a decoding process over the bsc is the gallager a algorithm definition let c c n be a binary code word transmitted over the bsc and v n be the received vector the gallager a algorithm proceeds as follows initial step the value of each received vector bit vi is assigned to each variable node i v of the tanner graph after this a check node ci sends to all neighbouring variable nodes vi v j the sum mod of all of the adjacent variable nodes except for the node itself where j is the degree of check node ci each variable node vi then sends the following to their adjacent check nodes if all messages from check nodes ci other than the target check node of the message are equal then vi sends that message back otherwise it resends its prior value repeat steps and until either all variables nodes send the same values over two consecutive iterations or when a max iteration count is reached may a b c d fig a tanner graph a used to demonstrate the gallager a decoding algorithm with the received vector v we represent a being sent along an edge by a dashed line and a with a full black edge b shows step of the gallager a algorithm as well the check node calculation taken as the addition mod of all incoming message from variable nodes adjacent to each check node denoted vi ci c then shows step ci vi lastly d shows step vi ci due to the complexity of this algorithm only one full iteration has been shown though step in definition describes how decoding continues the gallager b algorithm offers improved decoding with an additional step on each loop within the algorithm for each degree j and each check node loop i there is a prechosen threshold value bi j throughout the steps involved in check node ci for each variable node v and each adjacent check 
node c if at least bi j neighbours of v excluding c sent the same information in the previous round then v sends that information to c otherwise v sends its received value to algorithm a is a special case of algorithm b where bi j independent of the round throughout the decoding procedure if the max iteration count is reached without completion the decoder has failed due to the existence of a trapping set see section v an example of the first steps of the gallager a algorithm see fig demonstrates the differences between the decoding considerations made between the bec and the bsc the most complex channel considered here is the binary input additive white gaussian noise channel expressed commonly either as the or just as the awgnc definition let x be a message vector where denotes an arbitrary length the additive white gaussian noise channel awgnc maps the input vector x to the vector x and then adds the result with gaussian white noise to give an output vector y x w where w n each code symbol y y carries with it a signal to noise ratio snr of eb and the conditional distribution of y is p pw x p y x which gives the output alphabet for the awgnc as y as in the bec and bsc we would like to have some indication of the errors that a channel is introducing to the code word a metric used for the awgnc is the log likelihood ratio llr p l ln p this describes the likelihood that x is or if l is positive then p p and thus the input estimate should be there are methods to map from y r to y and as such all decoding methods used over the bsc can be implemented on the awgnc however high performing decoding algorithms such as maximumlikelihood decoders the algorithm and the maxproduct algorithm utilize the llr information to improve decoding speed the bsc can be used for these channels as llr values are defined over the bsc though the awgnc more closely models the influence of communication channels and is favoured for high performance simulations example the bsc has a conditional llr function as the bit flipping probabilities are well understood for the outputs ln p y lbsc p ln as the noise determines the values of y in the awgnc and y r the llr on this channel is defined as l aw g nc eb the decoding algorithms used on the awgnc are far more complex than those on the bec and bsc and as such we provide an overview of various methods rather than detailed definitions and examples decoding methods used over the awgnc tend to be message passing algorithms where nodes send information to their neighbours to correct errors based on the structure of the matrix the original algorithm is an example of a flooding may ability of a code as well as the frame error ratio fer the fer is the ratio of frames or whole messages transmitted which can not be fully corrected versus the total number of frames transmitted the largest contributors to the error floor stopping sets and trapping sets a b fig a tanner graph for the irregular matrix given in example b induced subgraph of the highlighted stopping set with consistent labelling schedule where in each iteration all variable nodes and subsequently all check nodes pass new messages to their neighbours another example of a flooding schedule is the algorithm see an improved schedule where both variable nodes and check nodes send messages to each other throughout a single iteration is known by many names including serial scheduling layered scheduling and sequential scheduling these algorithms offer an improved decoding performance as information is moving through the tanner 
graph more frequently examples of decoding algorithms using this scheduling include the algorithm mpa and the belief propagation algorithm bpa bpa is widely used in ldpc code analysis and is based on the likelihood that a node takes a value given its current value and the values of nearby nodes from previous iterations error correction must be implemented differently for each channel the edge removal algorithm for example deals with erasures and thus is not suitable over the bsc errors which occur during transmission that are not corrected by the decoding algorithm form what is known as the bit error ratio ber the ratio of bits that can not be corrected versus the total number of bits transmitted in order to test the performance of ldpc codes we can simulate the transmission of messages over an increasing ratio snr and calculate the ber of a code under varying conditions as snr grows larger the ber of a code will suddenly decrease depending on the conditions of the channel and the error correcting capability of the ldpc code in use this curve is known as the waterfall region the best scenario for correcting errors is when the probability of error during transmission over a channel is negligible and when the implemented error correcting code can correct many errors definition the waterfall region eventually ends in all ber graph curves as anomalous errors cause decoders to fail even with a high snr ratio the lowest the ber becomes before levelling is called the error floor of a code the ber is a standard way to analyze the error correcting iii c ycles and g irth the decoding method we choose has direct implications for the accuracy and efficiency of decoding cycles were the first known negative characteristic of ldpc codes and were extensively studied as they impacted on the accuracy of high performance ldpc codes a cycle in a graph is a sequence of connected nodes which form a closed loop where the initial and final node are the same and no edge is used more than once the cycle length is the number of edges a cycle contains and the length of the smallest cycle in a graph is denoted as its girth if no cycles exist within the tanner graph of a paritycheck matrix then the iterative belief propagation decoding technique is always successful with sufficient iterations however if the neighbours of a node are not conditionally independent then belief propagation methods become inaccurate the inferred solution is to construct a matrix with no cycles however as discussed in section vii this is unessecary as not all cycles negatively impact the decoding efficiency of ldpc codes in fact the restriction of girth can lead to constraints on the structure of the code which further impedes the decoding efficiency the cycles which negatively impact the decoding efficiency of ldpc codes combine to form what are known as stopping sets and trapping sets these sets lead to a high error floor in otherwise efficienct ldpc code constructions throughout various communication channels and affect all high performing decoding algorithms iv s topping s ets over bec stopping sets are collections of variable and check nodes in the tanner graph of an ldpc code which greatly reduce its error correcting ability these sets cause decoding to fail when certain variable nodes are affected by errors after transmission stopping sets were first described in by di et al who were researching the average erasure probabilities of bits and blocks over the bec definition let g h be a tanner graph and v be the set of variable nodes in g 
h a stopping set s is a subset of v such that all neighbours of s are connected to s at least twice the empty set is also a stopping set and the space of stopping sets is closed under union if and are both stopping sets then so is the following lemma describes a stopping set by the performance of the ldpc code s decoding algorithm lemma let g be the generator for an ldpc code over the bec and e denote the subset of the set of variable nodes which is erased by the channel after the transmission of a message then the set of erasures which remain when the may e e a e e b e e e c d fig the effect of a stopping set on the er decoding process for the irregular tanner graph a on the right hand side of the line b shows steps and of the er algorithm s first iteration c shows the changes made in step and then step of the second iteration finally d shows that no further erasures can be corrected and thus we see that the received vector v e e e produces a scenario in the tanner graph where the er algorithm can not retrieve the original code word from definition we know that this is due to the presence of a stopping set decoder stops is equal to the unique maximal stopping set of definition is now widely accepted given a bec with erasure probability the performance of the code over the bec is completely determined by the presence of stopping sets since stopping sets have a combinatorial characterization their distributions through various tanner graphs can be analyzed rigorously definition let s denote the collection of all stopping sets in a tanner graph g h the stopping number of g h is the size of the smallest stopping set in the stopping number of a code aids in the analysis of the code s error floor it is known that the performance of an ldpc code over the bec is dominated by the small stopping sets in the graph the larger this value is the lower the error floor of the code in some cases this stopping number increases linearly with the number of variable nodes in the tanner graph this can be seen more easily using the stopping ratio definition let g h be a tanner graph with n variable nodes and stopping number the stopping ratio of a tanner graph is defined by the ratio of its stopping number to the number of variable nodes a stopping set in the matrix of an ldpc code is shown in example example let c be the code with the following check matrix h columns and in h have been highlighted as they belong to a stopping set the tanner graph for c with the stopping set highlighted is shown in fig a stopping set must be either empty or at least contain two variable nodes the stopping number of c is therefore and its stopping ratio an example showing the impact of a stopping set on the decoder is shown in fig where the edge removal decoding algorithm is used over the bec solutions to the problem of stopping sets covered in section vii involve either avoiding or removing small stopping sets in the tanner graph leaving only ldpc codes with large stopping sets while stopping sets are well defined and some solutions exist to minimize their effect of the error floor of ldpc codes the terminology does not support channels without erasure t rapping s ets over bsc awgn trapping sets much like stopping sets are also collections of variable nodes and check nodes which impede the error correcting ability of ldpc code only small elementary trapping sets impact the error floor of ldpc codes over the bsc and awgnc because of clustering the definition of trapping sets came shortly after stopping sets were defined 
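Before going further into trapping sets, a small sketch can make the stopping set condition from the previous section concrete: the condition is purely combinatorial, so it can be checked directly on a parity-check matrix, and the edge-removal (peeling) decoder can be run to watch it stall. This is a minimal Python illustration, not code from any of the cited works; the small matrix H is hypothetical (the entries of the example matrix above were lost in extraction), and the function names are ours.

import numpy as np

def is_stopping_set(H, S):
    # S (a set of column indices of H) is a stopping set when every check node
    # with at least one neighbour in S has at least two neighbours in S
    S = sorted(S)
    if not S:
        return True                       # the empty set is a stopping set
    counts = H[:, S].sum(axis=1)          # per check node: neighbours inside S
    return bool(np.all((counts == 0) | (counts >= 2)))

def peel_bec(H, erased):
    # edge-removal (peeling) decoding over the BEC: any check node that sees a
    # single erasure pins that bit down; the erasures left when no progress is
    # possible form the maximal stopping set contained in the erased positions
    erased, progress = set(erased), True
    while erased and progress:
        progress = False
        for check in range(H.shape[0]):
            hit = [v for v in erased if H[check, v] == 1]
            if len(hit) == 1:
                erased.discard(hit[0])
                progress = True
    return erased

# hypothetical (2, 3)-regular parity-check matrix, for illustration only
H = np.array([[1, 0, 1, 1, 0, 0],
              [1, 1, 0, 0, 1, 0],
              [0, 1, 1, 0, 0, 1],
              [0, 0, 0, 1, 1, 1]])

print(is_stopping_set(H, {0, 1, 2}))      # True: columns 0, 1, 2 form a stopping set
print(peel_bec(H, {0, 1, 2}))             # decoder stalls: {0, 1, 2} remains erased
print(peel_bec(H, {0, 3}))                # no stopping set inside: all erasures recovered

Running the peeling decoder on the erasure pattern {0, 1, 2} makes no progress at all, which is exactly the failure mode of the edge-removal algorithm described above, while the pattern {0, 3} is fully recovered.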
Similarly to the BEC, when decoding over the BSC and AWGNC the maximum iteration count is sometimes reached when only a small set of variable nodes is in error. Experiments with the Margulis codes led to the definition of trapping sets. Definition: let G(H) be a Tanner graph. For a received vector y of length n, we define the failure set T(y) to be the set of bits that are not eventually correct using some arbitrary iterative decoder. Decoding is successful on y if and only if T(y) is empty. Definition: if T(y) is not empty, then T(y) is a trapping set. More specifically, T is an (a, b) trapping set in H if it contains a variable nodes and the subgraph induced by T contains b check nodes of odd degree.
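A short sketch of how the (a, b) parameters of a candidate trapping set can be read off a parity-check matrix. It assumes the usual convention that b counts the odd-degree (unsatisfied) check nodes of the induced subgraph, which is consistent with the elementary trapping set description later in this section; the matrix is the same hypothetical H as in the previous sketch and the function name is illustrative.

import numpy as np

def trapping_set_profile(H, T):
    # a = |T| variable nodes; b = check nodes of odd degree in the subgraph
    # induced by T (the unsatisfied checks); the set is elementary when every
    # induced check degree is 1 or 2
    T = sorted(T)
    degrees = H[:, T].sum(axis=1)
    degrees = degrees[degrees > 0]            # keep only checks that touch T
    a = len(T)
    b = int(np.sum(degrees % 2 == 1))
    elementary = bool(np.all(degrees <= 2))
    return a, b, elementary

# the same hypothetical parity-check matrix as in the previous sketch
H = np.array([[1, 0, 1, 1, 0, 0],
              [1, 1, 0, 0, 1, 0],
              [0, 1, 1, 0, 0, 1],
              [0, 0, 0, 1, 1, 1]])

print(trapping_set_profile(H, {0, 3}))        # (2, 2, True): two odd-degree checks remain
print(trapping_set_profile(H, {0, 1, 2}))     # (3, 0, True): every induced check has even degree

A set with b = 0, such as {0, 1, 2} here, is the support of a codeword, which matches the lemma below that under maximum-likelihood decoding the trapping sets are precisely the codewords.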
shows the depth of consideration which must be made when attempting to improve the error floor of ldpc codes the problems which trapping sets and stopping sets introduce to ldpc code ares important to research and solve there do exist methods for constructing ldpc codes by avoiding or removing trapping sets and stopping sets however these methods come at the cost of restraining other properties such as code length density or error correcting ability vi t he influence of s topping s ets and t rapping s ets on ldpc c ode p erformance the original ldpc codes proposed in by gallager were construction methods which allowed for varied code rates definition a gallager code is an ldpc code constructed using a matrix with uniform row weight i and uniform column weight j the code has length n code words and has code rate r c which gives a matrix h with n columns and k rows where k n r c naive analysis indicated that failed decoding is due to received vectors containing too many errors for the decoding algorithm analysis of a range of error patterns by di et al determined that this was not always the case leading to the definition of stopping sets over the bec a variety of analyses of gallager codes have shown high performance a construction in by margulis promised an improved performance over the awgnc for each prime p let p be the special linear group whose elements consist of matrices of determinant over z p this group has k p p p elements for p the margulis code is of length n with code rate r c the rows of the matrix are indexed by the elements of p and the columns are indexed by two copies of p detailed in the following definition definition let p be generated by the following matrices b if g p is the index of a row of the matrix a one is placed in the columns corresponding to g g and gb on the left hand side of the matrix and also in the columns corresponding to g g and on the right hand side of the matrix this results in a matrix for a margulis code an example of a matrix generated using the margulis construction is shown in fig to demonstrate the may matrix for margulis code frame error rate fer fig matrix generated using the margulis construction setting p to give a code with n the blue dots represent ones in this matrix with the remaining white space representing zeros sparse nature of ldpc codes another example of a margulis matrix can be found in where p which corresponds to a code with n while the code has a higher performance than a random gallager code the error floor is still quite high this error floor was claimed to be due to words a comparison between the margulis code and a random gallager code both with n can be seen in fig definition let h be a matrix if x is a vector of weight w and hxt s where s is of weight v then x is a w v word words are different from stopping sets typical w v words contain v check nodes which are only connected to the variables nodes once the words in the margulis code are the and words the high error floors of the margulis code can be reproduced with a bit approximation to a belief propagation algorithm words account for of the error floor performance of the margulis code near code words are trapping sets trapping sets are often clustered if one trapping set is found it will often contain nodes which belong to another trapping set this makes the search for trapping sets somewhat simpler finding both stopping and trapping sets are problems which makes solutions to these sets difficult to analyse the er decoding algorithm is simple and the effect of 
stopping sets can be demonstrated easilly however decoding over the bsc or awgnc is much more complex see fig iterative decoding methods tend to have maximum iteration counts as termination conditions and as such demonstrating the effect of trapping sets is difficult to show in lieu of an example we remind the reader of the termination conditions for the gallager a algorithm this decoder terminates either when all variable nodes send the same values over two consecutive iterations or when some maximum iteration count is reached in the latter case the decoder has failed due to the existence of a trapping set db fig ber comparison between the p margulis code using the same example as presented in fig and a random gallager code both with n and decoded using mpa over the awgnc these graphs are also known in the literature as waterfall curves while we have only discussed the issues with the margulis code using specific decoding algorithms here there are many code constructions which contain trapping and stopping sets and decoding algorithms which terminate for the same reasons for further reading see further reading on stopping sets in ldpc codes include the message passing mp algorithm and the maximumlikelihood ml decoder both over the bec further reading on trapping sets include finite alphabet iterative decoders faids and constructions based on latin squares this construction offers high structure by which stopping sets and trapping sets can be analysed if constructions which avoid trapping and stopping sets exist then the error floors of the associated ldpc codes will lower significantly this would improve the speed at which almost all digital communication occurs given the already high performance of ldpc codes in modern applications including wifi and vii c urrent s olutions the simple goal is to avoid or completely remove every stopping set or trapping set from an ldpc code this is both not reasonable given the number of cycles in an ldpc construction and more importantly not necessary only small elementary trapping sets impact the error floor due to clustering if there are enough errors in transmission for a decoder to get trapped in a large trapping set then it is highly likely that it would also be trapped in a small trapping set if there are not enough errors for the decoder to become trapped in a large trapping set the received vector can either be successfully decoded or the decoder will fail due to the presence of at least one small trapping set the current solutions to trapping sets are the development of constructions which avoid small trapping sets and the removal of trapping sets from existing constructions may vi fig a subgraph tree contained in the depth l neighbourhood spreading from the variable node vi note that here represents a variable node and represents a check node a avoiding trapping sets stopping sets pose threats to the error correction of messages sent over the bec however in practice the awgnc is used and so when discussing proposed solutions we focus on the influence of trapping sets over the awgnc the peg construction is a method of constructing tanner graphs with high girth many trapping sets include small cycles so the likelihood of a small trapping set being constructed is small with a graph of high girth in order to give the definition for the peg construction some definitions are needed the peg construction method uses variable and check node degree sequences the variable node degree sequence is denoted dv where dvi is the degree of variable node vi i n 
and the parity check sequence is denoted dc where dc j is the degree of check node c j j m and the construction partitions the set of edges e to e where evi contains all edges incident on symbol node vi the k th edge incident on vi is denoted as evki where k dvi the neighbourhood of depth l for variable node vi is nvli and is defined as the set of all check nodes included in a subgraph tree spreading from variable node vi within depth this is demonstrated in fig the complement of nvli is where c is the set of check nodes the subgraph tree generated this way is constructed with vi as the root given the parameters n m and dv we define the peg construction as follows progressive algorithm peg for i to n do begin for k to dvi do begin if k edge c j vi where is the first edge incident to vi and c j is a check node that it has the lowest degree in else expand a subgraph from symbol node vi up to depth l in until the cardinality of nvli stops increasing but is less than m or but then evki i k edge c j vi where evi is the k th edge incident to vi and c j is a check node chosen from the set having the lowest degree end end end when presented with check nodes of the same degree a decision must be made the selection of a check node at random or the selection of a check node according to some order an improved construction the construction chooses at random though the deterministic nature of the ordered check node process might be of use an example of peg construction is given in fig setting n m and dv the peg construction maximises the local girth of a variable node when a new edge is added to the node after the discovery of stopping and trapping sets the peg construction was modified the peg construction is notable for its ability to create high girth ldpc codes however the number of cycles is not controlled trapping sets are formed by a combination of several cycles the peg algorithm while having a higher girth than aternate constructions contains more trapping sets thus leaving the error floor open to improvement the randpeg construction improves upon the peg algorithm by minimizing cycles at the same time as reducing the computational complexity of the peg algorithm this can be further improved by adding an objective function to avoid small trapping sets ldpc qcldpc codes which are used in many applications can be constructed using the improved randpeg algorithm the objective function used in the improved randpeg algorithm detects all and trapping sets removing all trapping sets and as many trapping sets as possible without adversely affecting the performance of the ldpc code the characterization of trapping sets is achieved in through the locations of check nodes in different levels of trees see fig the resulting construction is as follows improved randpeg algorithm rpeg for i to n do may b a c d fig a peg construction with n m and d v check nodes are chosen based on index order the first edge chosen for each variable node is chosen from the check nodes with lowest degree at random the generation of the subgraphs and subsequent edge placement are the factors which highlight this construction method in a the edge choices are simplistic as the subgraphs are of low depth this decision making continues in b until is considered where the edge choice is restricted to or due to connections in the subgraph to check nodes and these choices can be observed in both remaining figures c and d one notable choice is the n d edge decision for in d where the only remaining option once is chosen becomes which 
gives a uniform check node degree sequence begin for k to dvi do begin if k edge c j vi where is the first edge incident to vi and c j is a check node such that it has the lowest degree in else expand a subgraph from symbol node vi up to depth l in until the cardinality of nvli stops increasing but is less than m or but remove from i all check nodes that appear at least once in the tree spreading from vi this removes all check nodes that would create cycles of size for cm in do compute the number of and trapping sets that would be created if cm is selected remove all check nodes that would create trapping sets and remove check nodes which create more than the smallest number of trapping sets if evki edge cm vi where evki is the k th edge incident to vi and cm is a check node chosen from the remaining nodes in else declare a design failure end end end end end the improved randpeg construction algorithm while having a high computational complexity performs the task of avoiding trapping sets optimally for given dimensions of an ldpc code possible improvements to this construction method include lowering the computational complexity and potentially lowering the girth the removal of all cycles is unnecessary as not all cycles contribute to trapping sets the inclusion of a lower girth into a construction which also contains no small trapping sets could lead to an ldpc code with a higher decoding performance b removing stopping and trapping sets the performance of ldpc codes is constrained by the presence of cycles and trapping sets within the code s paritycheck matrix we discuss two methods of removing trapping sets the addition of a redundant equation and the use of tanner graph covers redundant equations adding a redundant equation is equivalent to adding a redundant row to the matrix this has been used in an attempt to remove the trapping sets present in the margulis code the and trapping sets in the margulis code are elementary point trapping sets point trapping sets are subsets of variable nodes that contain all errors ever to occur throughout the decoding process a redundant parity check row is identified which when added to the matrix potentially disrupts the and elementary trapping sets this row is identified through a random search which relies on information about trapped variables not available during may decoding as random searches can not be used in applied error correction a structured search was considered to be more useful the structured search identifies variable and check nodes which connect to both the and trapping sets and combines the projection of the involved nodes such that a redundant row can be added to eliminate the effect of those trapping sets the only way to disrupt the and trapping sets in the margulis code is if the projection of both the variables and the variables on the redundant equation has row weight one this can be most reliably achieved by extending trapping sets to trapping sets see fig given that the margulis code has a paritycheck matrix an elementary a b trapping set contains a fixed number of check nodes let e denote the number of check nodes connected to two variable nodes within the trapping set then e b and therefore two variable nodes and three check nodes must be added to extend a trapping set to a trapping set at most one check node can be connected to both of the added variable nodes such that are not created such an extension which avoids the creation of in the margulis code is only possible in two configurations a the two check nodes of the 
basic trapping set are connected through two additional variable nodes to one additional check node b in the second configuration the additional variable nodes do not share a check node these configurations are demonstrated in fig the existing check and variable nodes neighbouring the additional check and variable nodes are linearly combined to generate a redundant equation a structured search is then used to ensure that this projection has row weight one in both the and trapping set the addition of a redundant equation focuses on point elementary trapping sets from the margulis code where the structure of the trapping sets are well known if this method were applied to other ldpc codes the location and structure of the trapping sets within such a code are unknown the redundant rows are computationally inexpensive to compute however the code rate of the resulting ldpc code will be reduced and the extra row increases the number of operations per decoding iteration though by a negligible amount another potential problem is the success rate of this solution the addition of a redundant equation does not guarantee that the trapping set will be disrupted tanner graph covers another method capable of eliminating trapping sets is the utilization of graph covers this method constructs an ldpc code c of length given a code c of length the parity check matrix of this code is denoted h and is initialized to h h h ts ts ce expansion to ts a ce expansion to ts b fig the trapping set structure of a trapping set and its expansion configurations a left and b right for a the two expansion variables are denoted and and the check node connected to both of the variables nodes is denoted c e the unsatisfied check nodes in the original trapping set are denoted c through c and the variable nodes connected to these check nodes are denoted through the check nodes of degree one in the expansion of the trapping set are denoted c e and c e the node labels for b follow similarly the operation of changing the value of ht k and to and k and hm to is termed as edge swapping the graph covers method requires that the locations of dominant trapping sets are known the method of edge swapping is then described as follows graph covers algorithm take two copies and of the code since the codes are identical they share the same trapping sets initialize swappede dges frozene dges order the trapping sets by their critical numbers choose a trapping set in the tanner graph of with minimal critical number let denote the set of all edges in if swappede dges go to step else go to step swap an arbitrarily chosen edge e frozene dges set swappede dges swappede freeze the edges from so that they can not be swapped in the following steps set frozene dges frozene dges repeat steps to until all trapping sets of the desired size are removed possible improvements to the graph cover method are to prioritize specific edges for swapping and freezing to avoid creating trapping sets of the same critical number however experimentally all trapping sets with minimal critical number were removed using the above algorithm the graph covers method gave improved fer results for a tanner code a margulis code and a mackay code using the gallager b decoding algorithm while the decoding method was constant throughout these results the application of graph covers will optimize fer performance using an may arbitrary decoding algorithm the ldpc code c created from code c has code rate r r and minimum distance d an increase to the minimum distance of the code c gives 
higher error correcting capabilities than however a lower code rate could decrease the overall efficiency the lower row and column weight of h gives c higher fer performance than c though with a of low decoding complexity the associated with removing trapping sets are more severe than the surveyed construction methods which avoid them code length decoding speed from check nodes etc the current research goal remains the creation of a construction or modifiable construction which can either avoid or remove small elementary trapping sets without penalty to the code s error correcting ability or decoding efficiency viii c onclusion throughout this survey we have covered the literature surrounding ldpc codes communication channels and decoding techniques the negative impact cycles have on ldpc code efficiency is noted and the problem of stopping sets and trapping sets have been defined and discussed including the dominance of small elementary trapping sets over awgnc a small variety of partial solutions such as the randomized progressive algorithm and tanner graph covers are discussed the research goal remains to find constructions of ldpc codes without small trapping sets acknowledgment the authors would like to thank professor ian turner who worked closely with us throughout our research emeritus professor ed dawson and dr harry bartlett for their help in the final stages before submission dr dhammika jayalath for his help with the decoding simulations over awgnc and xuan he for his suggestions on how to present our ber data computational resources and services used in this work were provided by the hpc and research support group queensland university of technology brisbane australia a price is supported by an apa scholarship r eferences diouf declercq ouya and vasic improved peg construction of large girth codes ieee intern symp turbo codes and iterative inf proc istc pp mackay and neal near shannon limit performance of low density parity check codes electron vol no pp shannon a mathematical theory of communication bell system technical journal vol no pp ieee standard for information local and metropolitan area specific part wireless lan medium access control mac and physical layer phy specifications ieee std pp oct etsi digital video broadcasting dvb second generation framing structure channel coding and modulation systems for broadcasting interactive services news gathering and other broadband satellite applications gallager codes ire trans inf theory vol no pp bonello chen and hanzo codes and their rateless relatives ieee commun surv vol no pp richardson error floors of ldpc codes proc annual allerton conference on commun control and computing vol no pp johnson and weller codes for iterative decoding from partial geometries ieee trans vol no pp ivkovic chilappagari and vasic eliminating trapping sets in codes by using tanner graph covers ieee trans inf theory vol no pp di proietti telatar richardson and urbanke analysis of codes on the binary erasure channel ieee trans inf theory vol no pp margulis explicit constructions of graphs without short cycles and low density codes combinatorica vol no pp mackay and postol weaknesses of margulis and codes electronic notes in theoretical computer science vol pp baldi cryptography springer science business richter finding small stopping sets in the tanner graphs of ldpc codes turbo codes related topics international on source and channel coding turbocoding pp mcgregor and milenkovic on the hardness of approximating stopping and trapping sets ieee 
trans inf theory vol no pp shedsale and sarwade a review of construction methods for regular ldpc codes indian journal of comput sci vol no pp hill a first course in coding theory oxford clarendon press peterson and weldon codes mit press richardson and urbanke modern coding theory cambridge university press elias coding for two noisy channels third london symposium vol poddar low density parity check codes complexity vol shokrollahi ldpc codes an introduction digital fountain tech rep traore kant and jensen message passing algorithm and linear programming decoding for ldpc and linear block codes aalborg university pp colavolpe and germi on the application of factor graphs and the algorithm to isi channels ieee trans vol no pp sharon litsyn and goldberger an efficient schedule for ldpc decoding proc conv electric electron engineers pp hailes xu maunder and hanzo a survey of ldpc decoders ieee commun surv vol no pp sharon litsyn and goldberger convergence analysis of serial schedules for ldpc decoding turbo codes related topics international on source and channel coding turbocoding pp kschischang frey and loeliger factor graphs and the algorithm ieee tran inf theory vol no pp zhang and fossorier shuffled belief propagation decoding asilomar conf on signals systems and computers vol pp kfir and kanter parallel versus sequential updating for belief propagation decoding physica a statistical mechanics and its applications vol no pp hocevar a reduced complexity decoder architecture via layered decoding of ldpc codes signal processing systems pp casado griot and wesel informed dynamic scheduling for decoding of ldpc codes ieee international conf on pp haykin communication systems john wiley sons tian jones villasenor and wesel construction of irregular ldpc codes with low error floors ieee intern conf on commun icc vol no pp may richardson shokrollahi and urbanke design of capacityapproaching irregular codes ieee trans on inf theory vol no pp diouf declercq ouya and vasic a ldpc code design avoiding short trapping sets ieee intern symp inf theory isit pp orlitsky viswanathan and zhang stopping set distribution of ldpc code ensembles ieee trans inf theory vol no pp ripoll and barraza a new algorithm to construct ldpc codes with large stopping sets simulation vol ranganathan divsalar vakilinia and wesel design of irregular ldpc codes using algorithmic cancellation ieee intern symp inf theory pp laendner and milenkovic algorithmic and combinatorial analysis of trapping sets in structured ldpc codes intern conf on wireless networks commun and mobile computing vol pp richter and hof on a construction method of irregular ldpc codes without small stopping sets ieee intern conf on vol pp mackay good codes based on very sparse matrices ieee trans inf theory vol no pp krishnan and shankar computing the stopping distance of a tanner graph is ieee trans inf theory vol no pp sankaranarayanan chilappagari radhakrishnan and vasic failures of the gallager b decoder analysis and applications proc inf theory and applic works ucsd vol luby mitzenmacher shokrollahi and spielman efficient erasure correcting codes ieee trans inf theory vol no pp and fekri on decoding of codes over the binary erasure channel ieee trans inf theory vol no pp danjean declercq planjery and vasic on the selection of finite alphabet iterative decoders for ldpc codes on the bsc ieee inf theory works itw pp laendner and milenkovic ldpc codes based on latin squares cycle structure stopping set and trapping set analysis ieee trans vol no pp hu 
eleftheriou and arnold progressive tanner graphs ieee global telecommun vol pp hu eleftheriou and arnold regular and irregular progressive tanner graphs ieee trans on inf theory vol no pp milenkovic soljanin and whiting asymptotic spectra of trapping sets in regular and irregular ldpc code ensembles ieee trans on inf theory vol no pp venkiah declercq and poulliat design of cages with a randomized progressive algorithm ieee commun vol no pp wang yedidia and draper construction of qcldpc codes ieee intern symp turbo codes and related topics pp laendner hehn milenkovic and j huber when does one redundant equation matter ieee globecom pp tanner sridhara and fuja a class of ldpc codes proc iscta pp rosenthal and vontobel constructions of ldpc codes using ramanujan graphs and ideas from margulis proc of the allerton conference on control and computing mackay encyclopedia of sparse graph codes aiden price was awarded the qut dean s scholarship and received a bsc and s t class honours in mathematics from queensland university of technology aiden is currently studying a phd at qut in the school of mathematics under the supervision of doctor harry bartlett and emeritus professor ed dawson and is funded by the apa scholarship his research interests are coding theory and its application to digital communications and cryptography joanne hall received a bsc and mphil in mathematics from the australian national university she graduated with a phd from rmit university in under the supervision of asha rao in the information security and informatics research group dr hall spent one year as a postdoctoral research scientist at charles university in prague and four years as a lecturer at the queensland university of technology in brisbane in she has returned to rmit university as a lecturer in the school of science her research interests are algebraic and combinatorial structures and their applications in digital communication
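To make the edge-selection rule of the PEG construction quoted earlier concrete, the following minimal sketch shows one edge placement for a variable node. It is an illustration only, under assumptions of my own: a set-based adjacency representation and the helper name peg_select_check are not taken from the cited works, and ties are broken by index order here, whereas the survey notes that random tie-breaking is also used.

```python
def peg_select_check(adj_v, adj_c, vi, m):
    """Pick the check node for the next edge of variable node vi.

    adj_v[v] : set of check nodes already joined to variable node v
    adj_c[c] : set of variable nodes already joined to check node c
    m        : number of check nodes (indexed 0 .. m-1)
    """
    if not adj_v[vi]:
        # First edge: any check node of (currently) lowest degree.
        return min(range(m), key=lambda c: len(adj_c[c]))

    # Expand the subgraph tree rooted at vi level by level until the set of
    # reached check nodes stops growing, or the next level would cover all m.
    reached = set(adj_v[vi])          # depth-0 neighbourhood of vi
    frontier_vars = {vi}
    seen_vars = {vi}
    while True:
        next_vars = set()
        for v in frontier_vars:
            for c in adj_v[v]:
                next_vars |= adj_c[c]
        next_vars -= seen_vars
        seen_vars |= next_vars
        new_checks = set()
        for v in next_vars:
            new_checks |= adj_v[v]
        grown = new_checks - reached
        if not grown or len(reached) + len(grown) == m:
            break                     # complement of the current neighbourhood is the candidate set
        reached |= grown
        frontier_vars = next_vars

    # Choose, from outside the expanded neighbourhood, a check node of lowest degree;
    # this places the new edge so that the shortest cycle created is as long as possible.
    candidates = set(range(m)) - reached
    if not candidates:                # tree saturated: fall back to checks not yet joined to vi
        candidates = set(range(m)) - set(adj_v[vi])
    return min(candidates, key=lambda c: len(adj_c[c]))
```

Calling this helper d_vi times for each variable node, and adding each returned edge to the graph before the next call, reproduces the two nested loops of the PEG pseudocode quoted above.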
may an empirical analysis of approximation algorithms for the euclidean traveling salesman problem yihui he xi an jiaotong university xi an china ming xiang xi an jiaotong university xi an china heyihui mxiang abstract space and the cost function is the euclidean distance that is the euclidean distance between two cities x xd y yd is with applications to many disciplines the traveling salesman problem tsp is a classical computer science optimization problem with applications to industrial engineering theoretical computer science bioinformatics and several other disciplines in recent years there have been a plethora of novel approaches for approximate solutions ranging from simplistic greedy to cooperative distributed algorithms derived from artificial intelligence in this paper we perform an evaluation and analysis of cornerstone algorithms for the euclidean tsp we evaluate greedy and genetic algorithms we use several datasets as input for the algorithms including a small dataset a mediumsized dataset representing cities in the united states and a synthetic dataset consisting of cities to test algorithm scalability we discover that the greedy and algorithms efficiently calculate solutions for smaller datasets genetic algorithm has the best performance for optimality for medium to large datasets but generally have longer runtime our implementations is public available d x xi yi this simplification allows us to survey several cornerstone algorithms without introducing complex scenarios the remainder of this paper is organized as follows in section we briefly review the first solutions and survey variants to the tsp we describe the algorithms used in our experiment in section a description of the benchmark datasets and results of the experiment are detailed in section and explains the findings and compares the performance of the algorithms we then conclude and describe future work in section background an example tsp is illustrated in figure the input is a collection of cities in the two dimensional space this input can be represented as a distance matrix for each pair of cities or as a list of points denoting the coordinate of each city in the latter method distances are calculated using euclidean geometry a tour is shown in subfigure b although not shown in the figure each edge will have some edge weight denoting the distance between two nodes or cities due to the computational complexity of the tsp it may be necessary to approximate the optimal solution the optimal tour is shown in c for small graphs it may be possible to perform an exhaustive search to obtain the optimal solution however as the number of cities increases so does the solutions space problem complexity and running time if n is number of cities the number of possible pthe edges is i the number of possible tours is since the same tour with start point x and y appears twice once with x as the start node and once with y as the start node introduction known to be the traveling salesman problem tsp was first formulated in and is one of the most studied optimization problems to date the problem is as follows given a list of cities and a distance between each pair of cities find the shortest possible path that visits every city exactly once and returns to the starting city the tsp has broad applications including for lasers to sculpt microprocessors and delivery logistics for mail services to name a few the tsp is an area of active research in fact several variants have been derived from the original tsp in this paper we focus on 
the euclidean tsp in the euclidean tsp the vertices correspond to points in a https algorithm in optimization is a simple local search algorithm first proposed by croes in for solving the tsp the main idea behind it is to take a route that crosses over itself and reorder it so that it does not a complete local search will compare every possible valid combination of the swapping mechanism this technique can be applied to the travelling salesman problem as well as many related problems these include the vehicle routing problem vrp as well as the capacitated vrp which require minor modification of the algorithm this is the mechanism by which the swap manipulates a given route figure the tsp was first formulated in the by karl menger in vienna and harvard by the solutions for tsp began to appear the first solution was published by dantzig fulkerson and johnson using a dataset of cities in richard karp proved that the hamiltonian cycle problem was which proves that the tsp is in modern day the tsp has a variety of applications to numerous fields examples among these applications include genome sequencing air traffic control supplying manufacturing lines and optimization take route to route and add them in order to new route take route i to route k and add them in reverse order to new route take route to end and add them in order to new route algorithms return new route we now move to a discussion of the algorithms used in our evaluation first we describe an upper bound for tsp in section the traditional greedy and approaches are discussed in section and section we finally discuss the genetic algorithm in section genetic algorithm genetic algorithms ga are search heuristics that attempt to mimic natural selection for many problems in optimization and artificial intelligence in a genetic algorithm a population of candidate solutions is evolved over time towards better solutions these evolutions generally occur through mutations randomization and recombination we define a fitness function to differentiate between better and worse solutions solutions or individuals with higher fitness scores are more likely to survive over time the final solution is found if the population converges to a solution within some threshold however great care must be taken to avoid being trapped at local optima we will now apply a genetic algorithm to the tsp we define a fitness function f as the length of the tour supposed we have an ordering of the cities a xn where n is the number of cities the fitness score for the tsp becomes the cost of the tour d x y denote the distance from x to y random path finding the worst case of tsp is as hard as the best one so we uniformly generate a random path for all available edges and use this as a upper bound of optimal path benchmark for all other algorithms greedy algorithm the greedy heuristic is based on kruskals algorithm to give an approximate solution to the tsp the algorithm forms a tour of the shortest route and can be constructed if and only if the edges of the tour must not form a cycle unless the selected number of edges is equal to the number of vertices in the graph the selected edge before being appended to the tour does not increase the degree of any node to be more than the algorithm begins by sorting all edges from least weight to most heavily weighted after the edges are sorted the least edge is selected and it is added to the tour if it does not violate the above conditions the algorithm continues by selecting the next edge and adding it to the tour this process 
is repeated until all vertices can be reached by the tour the result is a minimum spanning tree and is a solution for the tsp the runtime for the greedy algorithm is o log n and generally returns a solution within of the heldkarp lower bound f a x d xi d xn the genetic algorithm begins with an initial random population of candidate solutions that is we have a set of paths that may or may not be good solutions we then move forward one time step during this time step we perform a set of probabilistic and statistical methods to select mutate and produce an offspring population with traits similar to those of the best individuals with the highest fitness runtime comparison greedy sa figure the dataset in the plane the united states cities figure runtime comparison the y axis is in log scale from we then repeat this process until our population becomes homogeneous the running time of genetic algorithms is variable and dependent on the problem and heuristics used however for each individual in the population we require o n space for storage of the path for genetic crossover the space requirement remains o n the best genetic algorithms can find solutions within of the optimal tour for certain graphs tour length comparison tour length greedy sa optimal random we benchmark our algorithms using publicly available datasets additionally to test the scalability of the algorithms we generated a synthetic dataset consisting of cities in all dataset names the numeric digits represent the number of cities in the dataset the datasets are as follows and all datasets except can be found online the and datasets represent consisting of locations of cities in the united states a visual representation of the dataset in the plane is shown in figure not all datasets have a known optimal tour when this is the case we use random path algorithm to infer a upper bound of the optimal tour experiment dataset figure tour length comparison the y axis is in log scale divided by random tour length a solution similar with the optimal for small datasets and become worse for larger datasets in terms of running time figure the best algorithm is greedy algorithm however in terms of optimal tour length of solution the best algorithm is ga this is in line with our expectations and alludes to the fact that different heuristics are better suited for different situations as shown in figure genetic algorithm performs fairly consistently in comparison to the and greedy algorithms across all datasets highlighted in figure the running time of genetic is almost linear this suggests that for larger datasets if running time is a concern then the genetic algorithm should be used figure further demonstrates that genetic algorithm maintains a smaller percent above optimal than the other algorithms from this we can see that genetic algorithm has high accuracy and better complexity than other heuristics especially for larger datasets surprisingly genetic algorithm got the optimal solution for random dataset the dataset was generated by plotting random uniformly distributed points x y in with x y as a result all distances satisfy the triangle inequality and this dataset can be classified as a euclidean tsp dataset the running time for creating the dataset is o n the output is a list of all cities represented as x y points comparison as we can see in figure the greedy is the most efficient in figure we can see that most algorithms return bryant and benjamin genetic algorithms and the traveling salesman problem department of mathematics harvey mudd 
college pages burkardt data for the traveling salesperson problem http croes a method for solving problems operations research grefenstette gopal rosmaita and van gucht genetic algorithms for the traveling salesman problem in proceedings of the first international conference on genetic algorithms and their applications pages lawrence erlbaum new jersey haque shah ejaz and xu an empirical evaluation of approximation algorithms for the metric traveling salesman problem hoffman wolfe garfinkel johnson papadimitriou gilmore lawler shmoys karp steele et al the traveling salesman problem a guided tour of combinatorial optimization wiley sons homaifar guan and liepins schema analysis of the traveling salesman problem using genetic algorithms complex systems hong a kahng and moon improved largestep markov chain variants for the symmetric tsp journal of heuristics kim shim and zhang comparison of tsp algorithms project for models in facilities planning and materials handling mucha frac for graphic tsp theory of computing systems qiu zhang and yan an adaptive markov chain monte carlo algorithm for tsp in computer science and software engineering international conference on volume pages ieee reinelt tsplib http rosenkrantz stearns and lewis ii an analysis of several heuristics for the traveling salesman problem siam journal on computing figure solution generated by genetic algorithm for dataset dataset shown in figure conclusion most of our algorithms attempt to solve the tsp in a linear fashion originating from artificial intelligence the genetic algorithm is very different compared to greedy and literature suggests that the best algorithms focus on iteration and convergence to find optimal tours something genetic algorithms attempt to achieve for example the large step markov chain relies on markov chains to find convergence of many paths to form a global optimum and several papers cite markov chains as the best known solution to tsp recent studies include using adaptive markov chain monte carlo algorithms many of these extend the metropolis algorithm a simulated annealing algorithm which attempts to mimic randomness with particles as the temperature varies this further supports our conclusion that algorithms inspired from artificial intelligence perform well for finding solutions for the tsp however these may not be suitable when a guarantee is required in this paper we surveyed several key cornerstone approaches to the tsp we selected four algorithms and tested their performance on a variety of public datasets our results suggest that genetic algorithms and other approaches from artificial intelligence are able to find a solution references arora polynomial time approximation schemes for euclidean tsp and other geometric problems in foundations of computer science annual symposium on pages ieee baojunpeng introduction to artificial intelligence m
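The route manipulation described in the algorithms section (copy the prefix in order, reverse the middle segment, copy the suffix in order) and the tour-length evaluation used as the genetic-algorithm fitness can be written down in a few lines. The sketch below is illustrative only; the function names and the four-city usage example are mine, not taken from the paper's implementation.

```python
import math

def tour_length(route, cities):
    """Total Euclidean length of the closed tour visiting cities in 'route' order."""
    total = 0.0
    for a, b in zip(route, route[1:] + route[:1]):   # wrap around to the start city
        (x1, y1), (x2, y2) = cities[a], cities[b]
        total += math.hypot(x1 - x2, y1 - y2)
    return total

def two_opt_swap(route, i, k):
    """Reconnect the tour by reversing the segment route[i..k] (inclusive)."""
    return route[:i] + route[i:k + 1][::-1] + route[k + 1:]

def two_opt(route, cities):
    """Repeat improving swaps until no 2-opt move shortens the tour.

    Recomputes the full tour length for each candidate for clarity; practical
    implementations update the length incrementally instead.
    """
    best, best_len = route, tour_length(route, cities)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for k in range(i + 1, len(best)):
                cand = two_opt_swap(best, i, k)
                cand_len = tour_length(cand, cities)
                if cand_len < best_len:
                    best, best_len = cand, cand_len
                    improved = True
    return best, best_len

# Example: four cities on a unit square; the optimal tour has length 4.
cities = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(two_opt([0, 1, 2, 3], cities))    # -> ([0, 1, 3, 2], 4.0)
```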
may doppler synthetic aperture radar interferometry a novel sar interferometry for height mapping using waveforms birsen and cagri yanik department of electrical and computer systems engineering rensselaer polytechnic institute troy ny usa corresponding author yazici abstract this paper introduces a new and novel radar interferometry based on doppler synthetic aperture radar paradigm conventional sar interferometry relies on wideband transmitted waveforms to obtain high range resolution topography of a surface is directly related to the range difference between two antennas configured at different positions is a novel imaging modality that uses continuous waves uncw it takes advantage of high resolution doppler information provided by uncws to form high resolution sar images we introduced the theory of interferometry we derived interferometric phase model and develop the equations of height mapping unlike conventional sar interferometry we show that the topography of a scene is related to the difference in doppler between two antennas configured at different velocities while the conventional sar interferometry uses range doppler and doppler due to interferometric phase in height mapping interferometry uses doppler and due to interferometric phase in height mapping we demonstrate our theory in numerical simulations interferometry offers the advantages of robust environmentally friendly operations lightweight systems suitable for platforms such as and passive applications using sources of opportunity transmitting uncw submitted to inverse problems doppler synthetic aperture radar interferometry introduction synthetic aperture radar sar interferometry is a powerful tool in mapping surface topography and monitoring dynamic processes this tool is now an integral part of wide range of applications in many disciplines including environmental remote sensing geosciences and climate research earthquake and volcanic research mapping of earth s topography ocean surface current monitoring hazard and disaster monitoring as well as defense and security related research basic principles of sar interferometry were originally developed in radio astronomy interferometric processing techniques and systems were later developed and applied to earth observation sar interferometry exploits phase differences of two or more sar images to extract more information about a medium than present in a single sar image conventional sar interferometry relies on wideband transmitted waveforms to obtain high range resolution the phase difference of two wideband sar images are related to range difference there are many different interferometric methods depending on the configuration of imaging parameters in space time frequency etc when two images are acquired from different the phase difference is related to the topography of a surface in this paper we develop the basic principles of a new and novel interferometric method based on paradigm to determine topography of a surface unlike conventional sar uses continuous waves cw to form high resolution images conventional sar takes advantage of high range resolution and due to the movement of sar antenna for high resolution imaging on the other hand takes advantage of high temporal doppler resolution provided by uncws and for high resolution imaging we develop the phase relationship between two images and show that the phase difference is related to doppler difference we approximate this phase difference as and derive the equations of height mapping for interferometry conventional 
wideband sar interferometry for height mapping requires two different interferometry provides a new degree of freedom in system design by allowing antennas to have the same but different velocities to obtain height mapping additional advantages of interferometry include the following i small lightweight inexpensive and calibrate hardware high snr and long effective range of operation all of these make interferometry a suitable modality for applications requiring high snr long range of operation and low payload platforms such as or small uninhabited aerial vehicles ii effective use of electromagnetic spectrum and environmentally friendly illumination iii passive applications may not require dedicated transmitters since existing radio doppler synthetic aperture radar interferometry frequency rf signals of opportunity often have the properties to the best of our knowledge this is the first interferometric method that is developed in paradigm we present the theory for two monostatic however the method can be easily be extended to bistatic and multistatic configurations and synthetic aperture imaging applications in acoustics the rest of the paper is organized as follows in section sar geometry and notation are defined in section wideband sar image formation layover effect and basic principles of wideband sar interferometry are described in a perspective relevant to our subsequent development in section data model image formation and layover are summarized section introduces the basic principles of interferometry and compares the results to wideband sar case section presents numerical simulations and section concludes the paper configurations and notation we consider two sar systems as shown in fig antenna antenna location of the scatter height of the scatter figure imaging geometry for an interferometric sar system with two antennas following trajectories s and s the scatterer is located at x where its height is h x and x let s and s s r denote the trajectories of the first and second antennas respectively unless otherwise stated bold roman bold italic and roman letters will denote elements in and r respectively x r and x x the earth s surface is located at x x h x where h r is the unknown height representing ground topography let v r denote target reflectivity where we assume that the scattering takes place only on the surface of the earth major notation used throughout the paper is tabulated in table doppler synthetic aperture radar interferometry table notation symbol description x x h x x location on earth s surface h x unknown height of a scatter at x v x surface reflectivity s antenna trajectory s t ri x s range of antenna center frequency of the transmitted waveforms time for the antenna li x s of the antenna b dw t s i wideband sar demodulated received signal at antenna s surface z s s c surface kiw b filtered backprojection fbp operator for wideband sar iiw b wideband sar image b x wideband interferometric phase b baseline vector in wideband sar interferometry z b c interferometric phase cone l vector from a known scatterer position to the unknown location of a scatterer component of l perpendicular to s b f lat x flattened wideband sar interferometric phase t smooth windowing function duration of t nb du s i data li z s s c surface li z s s surface s s ri z s kiu n b fbp operator for iiu n b image nb x sd interferometric phase v baseline velocity nb f lat x flattened interferometric phase doppler synthetic aperture radar interferometry wideband sar interferometry the basic principles 
of sar interferometry are described by many sources and in this section we summarize the principles and theory of sar interferometry in a notation and context relevant to our subsequent presentation of interferometry we begin with the wideband sar received signal model derive the interferometric phase model provide a geometric interpretation of the interferometric phase from which we develop the equations of height mapping wideband sar received signal model we assume that the sar antennas are transmitting wideband waveforms let ri t s denote the received signals i where s and t are the and variables respectively under the and born approximations the received signals can be modeled as z ri t s x s x s v x where ri x s s is the range of the ith antenna c is the speed of light in is the temporal frequency variable v x is the scene reflectivity function is a function of that depends on antenna beam patterns geometrical spreading factors and transmitted waveforms let where is the bandwith and is the center frequency of the transmitted waveforms we demodulate the received signals and write b dw t s t ri t s i x s x s c ri x s v x next we approximate ri x s in ei ri x s c around s as follows ri x s ri x s ri x s s ri x s i where denotes derivative with respect to s and is the time for the ith antenna ri x s x b denotes the unit vector in the direction of x and s denotes the velocity of in x th the i antenna doppler synthetic aperture radar interferometry we define li x s x s and refer to li x s as the of the ith antenna note that at the zerodoppler time the antenna is orthogonal to the antenna velocity let ai x s x s e i c ri s x finally we write the demodulated received signal as follows b dw t s i x s ai x s ri x c v x wideband sar image formation and layover many different algorithms were developed to form wideband sar images such as seismic migration backprojection and chirp scaling algorithms all of these algorithms take advantage of high range resolution provided by wideband transmitted waveforms and doppler information provided by the movement of antennas the location of a scatterer is identified by intersecting the and surfaces and the ground topography as shown in fig range sphere doppler cone velocity vector sensor position x scatterer position figure the sar image of a scatterer is reconstructed at the intersection of the sphere and cone surfaces and the height of the scatterer more precisely the image of a scatterer is formed at z satisfying the following equations s ri x s surface z s s ri x s height surface h x z z doppler synthetic aperture radar interferometry note that ri x s and ri x s are the measured range and doppler and h x is the height of the scatterer as functions of z and define the and surfaces respectively contours are defined as the intersection of the surface sphere and the ground topography without loss of generality we consider a filtered backprojection fbp type method where the received and demodulated signals are backprojected onto contours defined on a reference surface in the absence of heigh information demodulated signal is backprojected onto the intersection of the surface and a known reference surface without loss of generality we assume a flat reference surface at zero height and backproject the demodulated signals onto the following contours hirange z and s ri x s let kiw b be an fbp operator then the reconstructed image of the scatterer at x becomes w b b i iiw b k z i di i b b s qw s dw t s dtds i i b where qw is a filter that can be chosen with respect 
to a variety of criteria i from the image of the scatterer at x becomes iiw b b ri x c the magnitude of reconstructed images is a measure of target reflectivity whereas the phase of the reconstructed image depends on the true location x x h x of the scatterer however since the true height h x of the scatter is unknown and hence different than that of the reference surface the location at which the scatterer is reconstructed is different than its true location x this positioning error due to incorrect height information is known as layover fig depicts the layover effect we see that without the knowledge of ground topography additional information or measurements are needed to reconstruct the scatterers at correct locations this additional information is provided by a second antenna that has a different vantage point than the first one wideband sar interferometric height reconstruction an interferogram is formed by multiplying one of the sar images with the complex conjugate of the other sar image prior to multiplying the sar images the two intensity images b i are so that pixel locations and each corresponding to the scatterer at position x in the scene are roughly multiplying b with the complex conjugate of b we get b b b b x x c the positioning errors due to layover are different in the two sar images due to different imaging geometries doppler synthetic aperture radar interferometry figure layover in wideband sar the range sphere depicts the surface of the monostatic sar configuration since the correct height of the scatterer at location x is unknown the image of the scatterer at x is formed at on a flat surface we refer to the phase of the interferogram as the wideband interferometric phase b x x x c b provides us the where is a for the interferometric phase third measurement needed to determine the location of a scatterer in in general the range difference can be many multiples of unique phase proportional to range difference can be determined by a phase unwrapping process now consider the following surface c wb x b where defines a x is the measured interferometric phase hyperboloid with foci at and we assume that the distance between the antennas is much smaller than the ranges of the antennas to the scene and approximate this hyperboloid as follows c wb z b x where b is the baseline vector defines a cone whose vertex is the first antenna and the axis of rotation is the baseline vector we call this surface the interferometric phase cone the interferometric phase cone provides the third equation needed to locate the position of a scatterer in more precisely the location of the scatterer is given by the solution of the following equations range sphere x z x s c wb interferometric phase cone z b x doppler cone doppler synthetic aperture radar interferometry the of are measured quantities defined in terms of the true location x of the scatterer in the scene and the left the three surfaces in terms of the location of the scatterer z in the image fig geometrically illustrates the solution of these three equations in wideband sar interferometry typically the figure wideband sar interferometry provides a third algebraic equation by which the unknown location of a scatters in is determined the scatterer is located at the intersection of the sphere and the interferometric phase cone the axis of rotation of the is the velocity of the first antenna and the axis of rotation of the interferometric cone is the baseline vector extending from the first to the second antenna variation in the color 
coding of interferogram is flattened by subtracting the expected phase from a surface of constant elevation let x l then under the assumption that s x s s s where l z s l s s in other words the vector is the component of l perpendicular to s the flattened phase then becomes b x b f lat x c c r since l b where is the component of b perpendicular to z can be alternatively expressed as b f lat x c r doppler synthetic aperture radar interferometry fig illustrates the key concepts and vectors involved in the wideband interferometry figure a illustration of vectors involved in wideband interferometric phase l x where denotes the location of the ith antenna at the time denotes the of the first antenna with respect to the reference scatterer located at and is the component of l perpendicular to the wideband interferometric phase is related to the projection of the baseline vector b onto the the x of the antenna with respect to scatterer location x known vectors are shown in red and unknown vectors are shown in black data model and image formation for data model for we consider two antennas following the trajectories t i transmitting cws as shown in fig let p t t be the transmitted waveform where is the center frequency the scattered field model at the ith antenna is then given by z t e ri t t t v x dx t let and t be a smooth windowing function with a finite support t following we correlate ri t with a scaled and translated version of the transmitted signal over t as follows z unb di s ri t t t dt doppler synthetic aperture radar interferometry inserting into we obtain z t e unb s di t t v x t t t dtdx approximating t around t t t and making the approximation we write t li x t where li x x and is the velocity of the ith antenna to simplify our notation for the rest of the paper we set li x li x s s s s and ri x ri x s we next define doppler for the ith antenna fid x s li x s s c inserting and into the data model becomes z d d unb di s x s t x s x s v x dtdx where t x s is a slow varying function of t composed of the rest of the terms in we now approximate fid x s around s sid as follows fid x s fid x sid fid x s s sid d fi x s we choose sid such that x s li x sid sid sid sid ri xsid where sid is the acceleration of the ith antenna and sid is the component of sid perpendicular to the li x sid as described in we refer to sid as the time for the ith antenna d using in x s and redefining the function in t ai t x s t x s e d fi x s d we obtain the following data model for image reconstruction z d d i i unb di s x s ai t x s x sd sd v x dtdx doppler synthetic aperture radar interferometry image formation and layover similar to the wideband case we reconstruct images by backprojection as described in the forward model in shows that the data dui n b s is the weighted integral of the scene reflectivity over contours it was shown in that a scatterer located at x in the scene is reconstructed at the intersection of surface and surface and ground topography more precisely the image of a scatterer located at x in the scene is reconstructed at z satisfying the following equations c surface ldi z s s fid x s s s c surface li z s s fid x s ri z s height h x z z where the of corresponds to measurements and the defines surfaces in image parameter z the surface given by the following set s s c d hi z z r li z s s fi x s ri z s can be viewed as a continuum of intersections of cones and expanding spheres centered at the sensor location the axis of rotation for the surface is the acceleration vector of the antenna 
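A schematic version of the Doppler-SAR data model just described may help fix the notation: a single-frequency transmitted waveform p, a received signal with a monostatic round-trip delay, and a windowed correlation of the received signal with a scaled and translated copy of p. The rendering below is an assumed reconstruction; the amplitude term A_i, the factor of 2R_i/c_0, and the conjugation convention are my choices rather than the authors' exact expressions, with phi the windowing function of duration T_phi, s the slow time, and mu the Doppler scale factor.

```latex
% Schematic reconstruction with assumed notation; not the authors' exact typesetting.
\[
  p(t) = e^{\,i\omega_0 t}, \qquad
  r_i(t) \;\approx\; \int e^{\,i\omega_0\left(t - 2R_i(\mathbf{x},t)/c_0\right)}
        A_i(\mathbf{x},t)\, V(\mathbf{x})\,\mathrm{d}\mathbf{x},
\]
\[
  d_i^{\mathrm{unb}}(s,\mu) \;=\; \int r_i(t)\,
        \overline{p\!\left(\mu\,(t-s)\right)}\;\phi(t-s)\,\mathrm{d}t .
\]
```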
trajectory fig illustrates and surfaces and the reconstruction of a point scatterer by the intersection of these surfaces and ground topography the reconstruction is analogous to the wideband sar image reconstruction shown in fig in the absence of ground topography information we backproject data onto isodoppler contours on a reference surface without loss of generality we consider the following contours c d dop hi r z and li z s s fi x s where the of the equality in is the high resolution measurement provided by cw let kiu n b be an fbp operator as described in then the reconstructed image is given by unb unb di iiu n b k zi d i eit s qui n b s t dui n b s where qui n b is a filter that can be chosen as in the reconstructed image is given by d i i iiu n b n b x sd sd doppler synthetic aperture radar interferometry figure in image reconstruction a scatterer located at x in the scene is correctly reconstructed at the intersection of the and surfaces and the ground topography surface is a cone in which its vertex is the antenna location and its axis of rotation is the antenna velocity the geometry of the surface depends on the antenna trajectory figure is drawn for a linear trajectory at a constant height in the absence of topography information we see that a scatterer located at x in the scene is reconstructed at x in the image this position error in the reconstructed image is the counterpart of the layover effect observed in conventional wideband sar images fig illustrates the layover effect in however the phase of the reconstructed image is a function of the scatterer s true location x and hence includes its height information h x figure if the height of a scatter is not known it is reconstructed at an incorrect position both the correct scatterer location x and its image lie on the same isodoppler surface the doppler cone lies at the intersection of the doppler cone defined x s and the flat topography note that the phases of the reconstructed images depend on the doppler synthetic aperture radar interferometry fid x sid the duration of the windowing function and the corresponding times sid the height information is included in the however since each imaging geometry may yield different dopplerrate in the phase of each image is multiplied by a different time to equalize the effect of this multiplication factor we multiply one of the reconstructed images with itself so that the in the phase of both images are multiplied by the same factor say as a result each image becomes d i iiu n b n b x sd sd i interferometric height reconstruction similar to the wideband case we form two images iiu n b i the intensity images b and multiply one of them by the complex conjugate of the other to form an interferogram then the interferometric phase the phase function of n b x n b x is given by b x x x where sd denotes for thus the scatterer lies on the following surface c z z x x where the is the measured interferometric phase the of defines a surface that can be described as the intersections of two cones one of which has a continuously changing solid angle assuming that the distance between the antennas is much smaller than the ranges of the antennas to the scene we can approximate the of the second antenna in terms of the of the first one as follows x x x where b is the baseline vector and is the component of b perpendicular to the of the first antenna using we approximate the interferometric phase as follows c sd unb x x sd v sd x v where we refer to v as the baseline velocity we see that approximates 
the interferometric phase as a additionally shows that interferometry involves not only configuring antennas in position space but also in velocity space the larger the difference in antenna velocities in the of the first antenna the larger the interferometric phase becomes if on the other hand the velocities of the antennas are the same the second term in defines the interferometric phase surface doppler synthetic aperture radar interferometry clearly in interferometry provides the third equation needed to determine the location of a scatterer in more precisely the location of a scatterer is given by the solution of the following three equations c z x z sd sd x sd z c n b x interferometric z v z sd sd fig depicts the intersection of the three surfaces at the scatterer location in surface interferometric phase surface doppler cone velocity vector sensor position scatterer position figure determination of the scatterer location in interferometry the scatterer is located at the intersection of the doppler cone and the two surfaces interferometric phase measurement provides the third surface the interferometric phase surface similar to the wideband sar interferometry the interferometric phase can be flattened by subtracting the phase due to a scatterer with known height without loss of generality let z with s x s and x thus identifying the location of a scatterer is equivalent to determining using we see that unb unb unb lat x x where is the component of l perpendicular to sd shows that the flattened interferometric phase for interferometry is related to the projection of the unknown onto the baseline velocity vector scaled by the range of the first antenna to since v l where is the component of v perpendicular to we alternative express as follows l nb lat x fig shows the key concepts and vectors involved in interferometry doppler synthetic aperture radar interferometry figure an illustration of key concepts and vectors in interferometry l x where s denotes the ith antenna position denotes the with respect to a reference surface is the component of l perpendicular to sd s denotes the antenna velocity x denotes the of the antenna with respect to the correct target location interferometric phase is proportional to the projection of the baseline velocity vector onto known vectors are shown in red and unknown vectors are shown in black comparison of interferometry wide wideband case table ii tabulates the interferometric phase for the wideband sar and cases we compare and contrast the two interferometric phases below for wb and unb the baseline is the difference in range and difference in velocity respectively the larger the the center frequency the larger the interferometric phase in both wb and unb cases the larger the range the smaller the interferometric phase in both wb and unb cases for unb larger the the larger the interferometric phase for wb the larger the b the difference between the positions of the two antennas the larger the interferometric phase for unb the larger the v the difference between the velocities of the two antennas the larger the interferometric phase doppler synthetic aperture radar interferometry table raw and flattened interferometric phase functions for wideband sar and wideband sar interferometric phase x b h sd x v flattened interferometric phase i b s sd l d x s d d numerical experiments experimental setup we conducted numerical experiments for both wideband and our experimental setup was as follows a scene of size at resolution was imaged a single point target 
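As a schematic summary of the comparison above and of the table that follows, the two flattened phases scale as shown below. Constant factors and signs are deliberately omitted, and the notation is assumed rather than copied: b is the baseline vector, v the baseline velocity, l_perp the component of the vector from the reference scatterer perpendicular to the relevant look direction, r_1 the range of the first antenna to the reference point, omega_0 the centre frequency, T_phi the window duration, and c_0 the speed of light; as in the text, the perpendicular projection can equivalently be moved onto b or v.

```latex
% Schematic scaling only; constant factors and signs are not reproduced here.
\[
  \Phi^{\mathrm{wb}}_{\mathrm{flat}}(\mathbf{x}) \;\propto\;
      \frac{\omega_0}{c_0}\,\frac{\mathbf{b}\cdot\mathbf{l}_{\perp}}{r_1(\mathbf{x}_0)},
  \qquad
  \Phi^{\mathrm{unb}}_{\mathrm{flat}}(\mathbf{x}) \;\propto\;
      \frac{\omega_0\,T_{\phi}}{c_0}\,\frac{\mathbf{v}\cdot\mathbf{l}_{\perp}}{r_1(\mathbf{x}_0)} .
\]
```

The contrast is the central point of the section: the wideband phase is driven by a spatial baseline, while the Doppler-SAR phase is driven by a velocity baseline scaled by the coherent processing window.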
was placed at m with the origin at the scene center two antennas flying on a linear trajectory parallel to the was used with both antennas placed at from the scene center in the direction the midpoint of the linear trajectories for both antennas was aligned at y wideband first antenna was placed at height of and the second at the length of the trajectories were in length for both antennas both antennas were moving at velocity of a waveform with flat spectrum of hz bandwidth at center frequency of was transmitted from both antennas frequency samples and s samples were used for imaging doppler first antenna was placed at height of and the second at the length of the trajectories were for both antennas the first antenna was moving at velocity of and the second at a continuous waveform at center frequency of was transmitted from both antennas a window of was used for processing at each slow time fast time t samples and s samples were used for imaging wideband sar interferometry fig and fig show the reconstructed images of the point target located at m from the first and the second antenna respectively assuming a flat ground topography at height of in both fig and fig we see that there is a displacement due to layover effect in the range direction the first antenna reconstructs the target at the second antenna reconstructs the target at doppler synthetic aperture radar interferometry a b figure a wideband reconstruction of the target located at m using the first antenna assuming flat ground topography the target is reconstructed at b wideband reconstruction of the target located at m using the second antenna assuming flat ground topography the target is reconstructed at we next align the peaks in the two images and multiply the first image with the complex conjugate of the second as in to generate the interferogram the resulting interferogram is shown in fig figure the interferogram from wideband sar reconstructed images in order to reconstruct the height we use the set of equations and the doppler cone equation at point gives us that the contours are in the which in our scenario is parallel to the thus contours have constant y value at the target s y position using this fact doppler synthetic aperture radar interferometry we need only to compute the intersection of contour and interferometric phase contour fixing the y position from figs and we see that both targets are reconstructed at y position of thus we reconstruct the true target position using y for reconstruction we sampled the height in the interval m at resolution fig shows the magnitude image of x at y note that x is the measured value derived from the phase of the reconstructed image the dark blue area indicates the contour where the magnitude of the difference is minimized a b figure a image of the magnitude of x at y the contour is indicated by dark blue area where the magnitude of b x is minimized b image of the magnitude of z b x at y the interferometric phase contour is indicated by dark blue area where the b magnitude of z b x is minimized similarly fig shows the magnitude image of the difference z b c b x as before the dark blue area indicates the interferometric phase contour combining the two images fig shows the intersection of the two contours indicated by the dark blue area the white x in fig indicates the exact intersection computed and where the target is reconstructed the white o indicates the true target position it is clear that the target is reconstructed at the correct position and height we proceed similar as in 
the wideband case for the case figs and show the reconstructed image for for the first and second antennas respectively the first antenna reconstructs the target at m and the second antenna at doppler synthetic aperture radar interferometry figure image of the intersection of the contour with the interfermetric phase contour at y the exact intersection is indicated by white x the true target position is indicated by white o the target is reconstructed at the correct position and height a b figure a reconstruction of the target located at m using the first antenna assuming flat ground topography the target is reconstructed at b reconstruction of the target located at m using the second antenna assuming flat ground topography the target is reconstructed at doppler synthetic aperture radar interferometry as in the wideband case we align the peaks of the two images and multiply the first image with the conjugate of the second image to form the interferogram of the doppler images the resulting interferogram is shown in fig figure the interferogram from reconstructed images to reconstruct the height we use the set of equations given in and the points is approximated by the end of the antenna s trajectories farthest from the target position by for a linear trajectory with constant velocity true point would be where namely where the is parallel to the velocity vector the best estimate would be at a point in the trajectory farthest away from the target location fig illustrates the surface at y which is the where the target position is reconstructed and the true target s y position notice that both images reconstruct the scatterer at the correct y position the contour is given by the dark blue area as before similarly figs and illustrate the and interferometric surfaces respectively at y fig combines figs and the intersection of the three contours is indicated by white x the white o shows the true target location clearly the target is reconstructed at the correct position and height conclusions we present a novel radar interferometry based on imaging paradigm uses single frequency transmitted waveforms it has several advantages over conventional sar including simpler inexpensive hardware high snr and long effective range of operation and is suitable for use in passive radar applications we derived the interferometric phase relationship for interferometric phase depends on the difference in the velocity of the antennas as opposed doppler synthetic aperture radar interferometry a b c figure a image of the magnitude of z x at y the contour is indicated by dark blue area where the magnitude of z x is minimized b image of the t magnitude of z d d f d x at y the d d sd z d contour is indicated by dark blue area where the magnitude of t z rd z d x is minimized c image of the magnitude z v z s d d of b x at y the interferometric dopplerd rate contour is indicated by dark blue area where the magnitude of z v sd b x is minimized z d d doppler synthetic aperture radar interferometry figure image of the intersection of the and interferometric contours at y the intersection is indicated by white x the true target position is indicated by white o the target is reconstructed at the correct position and height to the range difference observed in wideband sar thus in interferometry one can reconstruct the ground topography even with the same from both antennas so long as their velocities are different furthermore we showed that the true target position is determined by the intersection of and interferometric 
surfaces this is different from conventional wideband sar in that the surfaces that determine the true target position are and interferometric surfaces we presented numerical simulations for a single point scatterer using two antennas moving in linear trajectories to verify our interferometric method we also conduct conventional wideband sar interferometric reconstruction as a comparison we show that both wideband sar and interferometry is able to accurately reconstruct the target location thus our numerical simulations show that dopplersar interferometry retains the accuracy of conventional sar interferometry while having the advantage that affords in the future we will analyze the sensitivity of height estimation with respect to other observables and parameters acknowledgement this material is based upon work supported by the air force office of scientific research afosr under award number and by the national science foundation nsf under grant no doppler synthetic aperture radar interferometry appendix approximations appendix approximation let x and y be two vectors such that then by using taylor series expansion we can make the following approximation p p s x y y y where is the unit vector x appendix approximation of under assumption let x s denote a look direction where x y z and s then by using far field expansion we can write x s s y z s z s z y s s y s z s y s z y s s y s z s s y s z z s i h z y s y s z y s s y s s x s where is the transverse z projection of z onto the plane whose normal vector is along the look direction s y therefore difference of look directions is given by x s y s where x y s doppler synthetic aperture radar interferometry references bamler r and hartl p inverse problems rogers a and ingalls r science rogers a ingalls r and rainville l the astronomical journal graham l c proceedings of the ieee zebker h a and goldstein r m journal of geophysical research solid earth goldstein r m and zebker h a nature gabriel a k and goldstein r m international journal of remote sensing gabriel a k goldstein r m and zebker h a journal of geophysical research solid earth hanssen r f radar interferometry data interpretation and error analysis vol springer rosen p a hensley s joughin i r li f k madsen s n rodriguez e and goldstein r m proceedings of the ieee cherniakov m and moccia a bistatic radar emerging technology john wiley sons isbn url http fritz t rossi c n f lachaise m and breit h interferometric processing of data ieee international geoscience and remote sensing symposium igarss ieee pp duque s p and mallorqui j geoscience and remote sensing ieee transactions on wang l and yazici b ieee trans image process wang l and yazici b geoscience and remote sensing ieee transactions on issn wang l and yazici b siam journal on imaging sciences wang l and yazici b synthetic aperture radar imaging of moving targets using ultranarrowband continuous waveforms european conf synthetic aperture radar nuremberg germany pp wang l and yazici b detection and imaging of multiple ground moving targets using ultranarrowband sar spie defense security and sensing baltimore md pp wang l and yazici b bistatic synthetic aperture radar imaging using continuous waveforms ieee radar conf kansas city mo pp issn wang l and yazici b synthetic aperture radar imaging for arbitrary flight trajectories int conf digital signal process corfu greece pp yarman c e wang l and yazici b inverse problems wang l yarman c e and yazici b ieee transactions on geoscience and remote sensing borden b and cheney m inverse 
Problems.
Wang L, Yarman C E and Yazici B. Theory of passive synthetic aperture imaging. Excursions in Harmonic Analysis, Springer.
Zebker H and Rosen P. On the derivation of coseismic displacement fields using differential radar interferometry: the Landers earthquake. Geoscience and Remote Sensing Symposium (IGARSS), Surface and Atmospheric Remote Sensing: Technologies, Data Analysis and Interpretation, IEEE.
Madsen S N, Zebker H A and Martin J. IEEE Transactions on Geoscience and Remote Sensing.
Prati C, Rocca F, Guarnieri A M and Damonti E. IEEE Transactions on Geoscience and Remote Sensing.
Rodriguez E and Martin J. Theory and design of interferometric synthetic aperture radars. IEE Proceedings (Radar and Signal Processing), IET.
Nolan C and Cheney M. IEEE Transactions on Image Processing.
Yarman C E and Yazici B. IEEE Transactions on Image Processing.
Yarman C E, Yazici B and Cheney M. IEEE Transactions on Image Processing.
Prati C and Rocca F. International Journal of Remote Sensing.
Raney R, Runge H, Bamler R, Cumming I and Wong F. IEEE Transactions on Geoscience and Remote Sensing.
Yazici B, Cheney M and Yarman C E. Synthetic aperture inversion in the presence of noise and clutter. Inverse Problems, IOP Publishing.
Wang L and Yazici B. Doppler synthetic aperture radar imaging. Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series.
bourbaki no janvier isomorphismes de graphes en temps d babai et luks oct par harald helfgott soient deux graphes n sommets isomorphes s ils le sont l ensemble des isomorphismes de peut avec une classe h du groupe sur n comment trouver et des de h le de donner un algorithme toujours en ces questions est longtemps ouvert babai a comment ces questions et d autres qui y sont en temps c en temps o exp o log n sa est en partie sur l algorithme de luks qui a le cas de graphes de introduction soient x y deux de savoir deux applications l alphabet et le domaine sont des ensembles tout groupe de permutations g sym agit sur l ensemble des de domaine sur un alphabet pour nous un groupe g ou un groupe g voudra toujours dire donner voire un ensemble de de g une classe voudra dire donner un de la classe et un ensemble de de h le de l isomorphisme de consiste x y et g s il y a au moins un de g qui envoie x sur y et si de tels isomorphismes existent les il est clair que l ensemble des isomorphismes isog x y forme une classe autg x du groupe autg x d automorphismes de x dans g c du groupe consistant dans les de g qui envoient x sur le consiste donner un algorithme qui le en temps polynomial en la taille n de voire en temps raisonnable par exemple le temps pourrait en n ce qui veut dire exp o log n o ici comme toujours o f n une par c f n pour n assez grand et c une constante et indique que la constante c de une grande partie de la motivation pour le de l isomorphisme de vient du fait que le de l isomorphisme de graphes se lui ce pour nous g s ou s g veut dire g est un de s pas forcement propre consiste si deux graphes et sont isomorphes et s ils le sont la classe de leurs isomorphismes un isomorphisme est une bijection de l ensemble de sommets de vers celui de telle que une solution permettrait par exemple de trouver une dans une base de le de l isomorphisme de graphes se en temps polynomial au de l isomorphisme de de la suivante supposons sans perte de que et ont le ensemble de sommets v alors nous pouvons comme l ensemble des paires d de v ou non suivant que nos graphes sont ou pas la xi i est comme suit pour la paire a ou a si nos graphes sont la valeur de xi a est s il y a une entre et en et dans le cas contraire soit g l image de l homomorphisme sym v sym par alors induit une bijection entre la classe des isomorphismes de et la classe isog babai le de l isomorphisme de peut en temps en le nombre d du domaine en novembre babai a une solution en temps quasipolynomial avec un algorithme explicite la de cet m a conduit trouver une erreur non triviale dans l analyse du temps mais babai a le en l algorithme la preuve est maintenant correcte corollaire babai le de l isomorphisme de graphes peut en temps en le nombre de sommets notre principale sera ba nous nous servirons aussi de la version courte nous essayerons d examiner la preuve de la la plus possible dans un de ce format en partie pour aider tout doute qui pourrait rester sur la forme actuelle du la meilleure borne connue pour le temps requis par le de l isomorphisme de graphes due luks bkl exp o n log n l usage de la joue un crucial dans la de babai comme dans la de voire dans l usage courant un choix est canonique s il est fonctoriel la situation typique pour nous sera la suivante un groupe g sym agit sur et donc sur il agit aussi sur un autre ensemble s et donc aussi sur les applications s c c est un ensemble une application s c s appelle un coloriage l ensemble c s appelle l ensemble de couleurs un choix canonique en relation g d un coloriage de 
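The reduction of Graph Isomorphism to String Isomorphism recalled in the introduction can be made concrete as follows. This is a brute-force illustration on tiny graphs (the names graph_to_string and graph_isomorphisms and the example are mine); the exhaustive search over Sym(V) is of course exactly what the algorithms surveyed here avoid.

from itertools import permutations

def graph_to_string(vertices, edges):
    """The string x over Omega = ordered pairs of distinct vertices:
    x[(u, v)] = 1 iff {u, v} is an edge."""
    edge_set = {frozenset(e) for e in edges}
    return {(u, v): int(frozenset((u, v)) in edge_set)
            for u in vertices for v in vertices if u != v}

def graph_isomorphisms(vertices, edges1, edges2):
    """All bijections of V sending graph 1 to graph 2, detected through the
    induced action on the strings over Omega."""
    x1 = graph_to_string(vertices, edges1)
    x2 = graph_to_string(vertices, edges2)
    isos = []
    for perm in permutations(vertices):
        sigma = dict(zip(vertices, perm))
        # the induced permutation of Omega sends (u, v) to (sigma(u), sigma(v))
        if all(x2[(sigma[u], sigma[v])] == x1[(u, v)] for (u, v) in x1):
            isos.append(sigma)
    return isos

# Example: the path 1-2-3 and the path 2-1-3 have exactly 2 isomorphisms.
print(len(graph_isomorphisms([1, 2, 3], [(1, 2), (2, 3)], [(2, 1), (1, 3)])))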
pour chaque x est une application qui va de aux coloriages et qui commute avec l action de en particulier un choix canonique peut un outil pour des nonisomorphismes si les coloriages c x et c y induits canoniquement par x et y ne sont pas isomorphes l un l autre par exemple s ils ont un nombre d vermeils alors x et y ne sont pas isomorphes l un l autre quand il y a des isomorphismes dans g qui envoient c x sur c y la classe isog c x c y de tels isomorphismes sert la classe d isomorphismes isog x y de x y puisque cette est un de isog c x c y la preuve assimile aussi plusieurs lors d approches au la de la consiste essayer de suivre ce qui est en essence l algorithme de luks lu si cet algorithme s c est parce qu il s est contre un quotient isomorphe alt g et est grand notre majeure consiste ce qui se passe ce la principale sera de chercher colorier d une qui canoniquement de cela limitera les automorphismes et isomorphismes possibles par exemple si la de est en rouge et l autre en noir le groupe d automorphismes possibles se sym un coloriage similaire induit par y limite les isomorphismes aux applications qui alignent les deux coloriages nous trouverons toujours des coloriages qui nous aident sauf quand certaines structures ont une grande laquelle en revanche permettra une descente plus petit cette double du groupe ou descente des plus courtes le fondements et travaux en suivant l usage courant pour les groupes de permutations nous r g pour l g r auquel g sym envoie r une et un g sym nous xg par xg r x r g par contre nous pour l ensemble des xk avec l action gauche par r r l est que ceci est non pas seulement pour une permutation mais pour toute application k k non injective nous appelons les de tuples que algorithmes de base plusieurs algorithmes essentiels se basent sur une de schreier sch il a que pour tout h d un groupe g et tout a g qui engendre g et contient des de toutes les classes de h dans g h a h est un ensemble de de l suivante est celle de sims qui a l de travailler avec un groupe de permutations g sym xn en termes d une de stabilisateurs g e gk g xk g g i k xgi xi stabilisateur de points l algorithme de algorithme description sur lu construit des ensembles ci de de gi tels que cj engendre gi pour tout i n le temps pris par l algorithme est o a est l ensemble de de g qui nous est la fonction filtre prend o n de temps et tout g pour lequel elle est satisfait g ac ca c c est la valeur de ci la de la bien n n l algorithme nous pourrons toujours supposer que nos ensembles de sont de taille o le temps pris par l algorithme est donc o algorithme construction d ensembles ci fonction schreiersims a x a engendre g sym xn assure engendre gi et ci gi est injectif n ci e pour tout i n tantque b choisir g b arbitraire et l enlever de b i filtrer g ci si e alors ajouter ci s s b b cj retourner ci fonction filtrer g ci retourne i tel que gi g requiert ci gi et ci gi injectif n assure g ci sauf si i n e pour i jusqu n si ci tel quel xhi xgi alors sinon retourner i retourner n e nous supposons que l ensemble de initial le groupe g du est de taille o nc c une constante le temps pris par la utilisation de l algorithme est donc o nmax une fois les ensembles ci construits il devient possible d accomplir plusieurs essentielles rapidement exercice montrer comment accomplir les suivantes en temps polynomial un groupe g sym n a si un g sym est dans b un homomorphisme g sym et un sousgroupe h sym h c fhl soit h g avec g h no un test qui en temps polynomial si un g g appartient h astuce travailler avec g h 
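The filter (sifting) routine at the heart of the Schreier–Sims method described above can be sketched as follows, with permutations stored as tuples of images and the base taken to be the points 0, 1, 2, ... in order. The names and conventions are mine; C[i] contains permutations fixing the points 0, ..., i-1 pointwise, one representative per coset of the stabilizer of 0, ..., i inside the stabilizer of 0, ..., i-1.

def compose(p, q):
    """(p*q)[x] = q[p[x]]  (apply p first, then q)."""
    return tuple(q[p[x]] for x in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for x, y in enumerate(p):
        inv[y] = x
    return tuple(inv)

def sift(g, C):
    """Filter g through the tables C[0], ..., C[n-1].  Returns (i, residue),
    where the residue fixes 0, ..., i-1; if i == n the residue is the identity
    and g was already expressible through the tables."""
    n = len(g)
    for i in range(n):
        match = next((h for h in C[i] if h[i] == g[i]), None)
        if match is None:
            return i, g                   # new coset representative at level i
        g = compose(g, inverse(match))    # now g fixes 0, ..., i
    return n, g                           # g reduced to the identity

# Tiny example on 3 points: tables containing only the identity sift nothing.
e = (0, 1, 2)
C = [{e}, {e}, {e}]
print(sift((1, 0, 2), C))   # -> (0, (1, 0, 2)): a new representative at level 0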
la place de g ici comme toujours veut dire trouver un ensemble de et un groupe nous est si un tel ensemble nous est l algorithme de le stabilisateur de points g xk pour xk arbitraires par contre nous ne pouvons pas demander un ensemble de d un stabilisateur d ensemble g xk g g xgk xk pour g xi arbitraires faire ceci serait le de l isomorphisme orbites et blocs soit comme toujours un groupe de permutations g agissant sur un ensemble le domaine est l union disjointe des orbites xg g g de ces orbites peuvent en temps polynomial en ceci est un exercice simple la se celle simple elle aussi de trouver les composantes connexes d un graphe supposons que l action de g soit transitive il y a donc une seule orbite un bloc de g est un b b tel que pour g h g quelconques g h g h soit b b soit b b la collection b g g g de blocs pour b partitionne l action de g est primitive s il n y a pas de blocs de taille autrement elle s appelle imprimitive un de blocs est minimal si l action de g sur lui est primitive voyons comment si l action de g est primitive et s il ne l est pas comment trouver un de blocs de taille en la nous obtiendrons un de blocs minimal en temps polynomial nous suivons lu qui cite pour a b distincts soit le graphe avec comme son ensemble de sommets et l orbite a b g g g comme son ensemble d la composante connexe qui pour o a est la taille de l ensemble de de g qui nous est nous omettrons toute mention de cette taille par la suite puisque comme nous l avons dit nous pouvons la garder toujours sous pour paraphraser lu il faut avouer qu un tel pourrait s appeler maximal la taille des blocs est maximale leur nombre est minimal contient a et b est le bloc le plus petit qui contient a et b si est connexe alors le bloc est l action de g est imprimitive ssi est non connexe pour un a arbitraire et au moins un b dans ce nous obtenons un bloc qui contient a et b et donc tout un de blocs de taille un dernier mot si g sym nous disons que g est transitif voire primitif si son action sur l est luks le cas de groupes avec facteurs d ordre luks a comment le de l isomorphisme de graphes en temps polynomial dans le cas de graphes de le ou valence d un sommet dans un graphe non est le nombre d qui le contiennent il ceci au de le groupe d automorphismes de dans le cas d un groupe g tel que tout facteur de composition de g c tout quotient dans une suite principale de g est le processus de et loin d trivial ne nous concerne pas ici voyons comment luks ce cas du de l isomorphisme de nous suivrons la notation de ba si les viennent de lu soient k sym et la l ensemble d isomorphismes partiels k est k x y k x x y x l ensemble d automorphismes partiels k x est isok x x k est donc l ensemble de toutes les permutations g k qui envoient x sur y au moins en juger par ce qui peut se voir par la nous travaillerons en avec k de la forme h laisse invariante en tant qu ensemble il est clair que pour k sym et sym x y isok x y x y x y x y il est aussi clair que si g est un de sym et est invariant sous g alors autg x est un de g et pour tout sym x y est soit vide soit une classe droite de la forme autg x sym soient invariant sous pour autg x et tels que x y g x y x y x y iso g la est une application de babai appelle la de la l suivant n utilise pas la de groupes simples bcp soit g sym un groupe primitif soit n si tout facteur de composition de g est d ordre k alors nok ici comme d habitude ok une qui seulement de luks lu soient un ensemble et x y deux soit un groupe g sym tel que tout facteur de composition de g est d ordre il est 
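The orbit computation referred to above reduces, as noted, to finding the connected components of the graph joining each point x to its images under the generators. A minimal sketch (names mine), with permutations again stored as tuples of images:

def orbits(n, gens):
    """Orbits of the group generated by gens on {0, ..., n-1}."""
    seen, result = set(), []
    for start in range(n):
        if start in seen:
            continue
        orbit, queue = {start}, [start]
        while queue:
            x = queue.pop()
            for g in gens:
                y = g[x]
                if y not in orbit:
                    orbit.add(y)
                    queue.append(y)
        seen |= orbit
        result.append(sorted(orbit))
    return result

# Example: <(0 1), (2 3 4)> acting on 5 points has orbits {0,1} and {2,3,4}.
print(orbits(5, [(1, 0, 2, 3, 4), (0, 1, 3, 4, 2)]))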
possible de isog x y en temps polynomial en n preuve cas g non transitif soit stable sous l action de alors par il de calculer g x y une classe que nous notons et or pour x y pour y y isog x y nous de isog puis par sims le stabilisateur de points g de la x y pour se le groupe d isomorphismes dans un groupe entre deux de longueur comme n et prend du temps o tout va bien la est au lecteur cas g transitif soit n le stabilisateur d un de blocs minimal pour g donc est primitif par le mok m est le nombre de blocs or pour tels que g isog x y n x y ison x y ison x par et comme les orbites de n sont contenues dans les blocs qui sont de taille ison x yi yi se par la les groupes d isomorphismes de m paires de de longueur nous avons donc le la solution de mok pour des de longueur le pas consiste faire l union de classes en nous avons une description de chaque ison x yi soit comme l ensemble vide soit comme une classe droite du groupe h autn x dont nous avons une description c un ensemble de alors ison x yi isog x y a i nous aurions pu quelques appels en travaillant toujours avec des isomorphismes partiels mais cela a peu d importance qualitative vrai dire bcp thm est plus que ceci par exemple des facteurs arbitraires non sont admis cela donne une du relations partitions configurations soit c couleurs un ensemble que nous pouvons supposer disons de rouge violet une relation sur un ensemble est un r une structure relationnelle est une paire x ri pour chaque i c ri est une relation sur si les ri sont tous non vides et partitionnent nous disons que x est une structure de partition dans ce nous pouvons x par une fonction c c qui assigne chaque l indice i de la relation ri laquelle il appartient nous disons que c est la couleur de un isomorphisme entre deux structures x ri et est une bijection qui envoie ri pour chaque il est possible de construire un foncteur qui envoie chaque structure x sur une structure de partition x sur qui plus est iso x y iso x y la est triviale nous la algorithme pour montrer ce qu indexer veut dire cela nous permet de ne pas utiliser plus de min couleurs n tout en gardant leur en termes des couleurs originales c le temps pris pour calculer x est o k nous ne nous occupons pas des d de la collection de tuples i mais il peut s agir tout simplement d une liste lexicographiquement dans ce cas k est dans la i serait avec du hachage ce qui n est que l art de bien organiser une algorithme d une structure de relation indexeur fonction k c ri i pour a i c ri c indexeur i a retourner i c retourne c c c est l ensemble d indices de i i explique c en termes de c fonction indexeur i a i est une collection si a n est pas dans i alors ajouter a i retourner indice de a dans i un une relation d sur k i j ssi xi xj le m s s un ensemble consiste en les applications s s avec la composition comme une structure de partition x c est dite si a pour tous si c c alors b il y a un homomorphisme de m k m c tel que pour tout m k c c pour tout alors par exemple pour k a veut dire que la couleur de sait si ou pas dans le sens si nous connaissons c alors nous savons si ou pas de la b nous indique que la couleur de les couleurs de et nous pouvons un foncteur qui envoie chaque structure de partition x sur une comme pour le fait que x est un de x implique que iso x y iso x y la pour calculer est similaire celle pour calculer algorithme au lieu d assigner la couleur i c ri nous lui assignons la couleur c k il est de voir que x est le le plus grossier d une structure de partition x qui est une de la que x est le le 
plus grossier d une structure x qui est une structure de partition soit x c c c une structure de partition pour l k nous c l c comme suit c l c xl xl xl la structure de partition x l c l est dite le squelette de x la vide sera viride exercice tout squelette d une est une ici le fait que l axiome b dans la de soit valable pour non injectif est crucial pour x c une structure de partition et la induite x est la structure par la restriction de c il est clair que si x est une alors x l est aussi il ne faut pas confondre une structure de partition partition structure avec ce que nous appellerons un colored partition un d un ensemble est un coloriage de d une partition de chaque classe de couleur une classe de couleur est l ensemble de sommets d une couleur un est dit admissible si chaque ensemble b dans chaque partition est de taille pour un est un admissible tel que pour chaque un est une structure plus que le coloriage qu il mais moins que la structure que nous obtiendrions si nous donnions chaque de chaque partition une couleur un automorphisme ou isomorphisme d un doit les couleurs de mais pourrait permuter les ensembles de la taille qui appartiennent la partition d une couleur comme les ensembles de taille ne peuvent il est clair que nous pouvons supposer sans perte de que toute couleur est en ensembles de la taille nous ajoutons ceci la de partir de maintenant configurations pour z et i k nous z comme suit z z xk une x c est une ayant la suivante il y a une fonction c k c telle que pour c k et j c arbitraires et tout tel que c j z c z ki i k j les valeurs j sont nombres d intersection de une est dite classique si k remarque les classiques ont introduites par higman hi les premiers exemples du type schurien une est schurienne si elle est la partition de dans ses orbites orbitales sous l action d un groupe g sym si une classique n a que deux couleurs une pour x x x et l autre pour son la est dite une clique ou triviale exercice tout squelette d une est encore une fois l axiome b des joue un exercice soient x c une et une classe de couleurs en relation au coloriage induit par c sur alors la induite x est une ici c est un cas de b qu il faut utiliser la couleur c xn les couleurs c c xn puisque c xi c xi xi soient l k et nous colorions comme suit pour c en une structure de partition k l exercice soit x c une structure de partition soit l alors a est un du coloriage du squelette x b si x est l est aussi il est clair que de plus est canonique en relation ce qui veut dire que x commute avec l action sur du stabilisateur dans sym des points xl une c est dite si la couleur c x x x de tout sommet x est la une classique est dite primitive si elle est et les graphes gr x y x y c x y r pour toute couleur r telle que c x y r pour au moins une paire x y avec x y sont tous connexes elle est dite uniprimitive si elle est primitive et non triviale nous n avons pas besoin de si ces graphes son connexes dans le sens propre savoir il y a un chemin de tout sommet tout autre respectant l orientation ou dans le sens faible sans compter l orientation le fait que c soit classique et implique que r x y x y gr est de x pourquoi ce qui implique que toute composante faiblement connexe de gr est connexe exercice exercice soit x c une classique uniprimitive il n y a aucun ensemble b tel que la restriction de x b soit une clique solution si les de la grande clique sont blanches soit noir une autre couleur d de x et soit g gnoir or pour un graphe g non vide avec comme ensemble de sommets il est impossible qu il y ait un 
ensemble b tel que la du graphe b soit vide pourquoi exercice soit c une classique a soit rk une de couleurs alors si xk sont tels que c xk le nombre de tels que c xi ri pour tout i k seulement de rk b pour toute couleur r toute composante connexe de gr est de la taille solution esquisse en a le cas k vaut par la de prouvez les cas k par induction pour prouver b utilisez a le raffinement canonique la de un foncteur qui envoie une x c une x comme x sera un de x nous aurons iso x y iso x y l algorithme qui calcule est sur une de weisfeiler et leman wl il s agit d une de si dans une aucun ne se produit c si les classes d du nouveau coloriage ci sont les que celles de l ancien coloriage alors a aucun ne se produira dans le futur b le coloriage est voir la du aussi lehman mais ba indique que le auteur leman deux transformations naturelles l l peuvent ne pas l inverse l une de l autre si le coloriage c a r couleurs du il est clair qu il ne peut que r fois alors r sont pour produire une en particulier si l indexation est faite en temps logarithmique et le vecteur dans le pas de l algorithme est comme un vecteur creux puisque son nombre d est au plus le temps pris par l algorithme est o k log en outre ba une borne plus forte les algorithmes de type autrefois comme une approche plausible au de l isomorphisme de graphes depuis cfi evp il est clair qu il ne se pas ils sont quand un outil la version ici est due et iml algorithme pour les fonction weisfeilerleman k c c c c pour i jusqu ii pour j z z rj j k k ci indexeur ii indexeur est comme dans l algorithme ci indices de ii retourner ii ii donne du sens graphes hypergraphes et designs en blocs nous savons qu un graphe est une paire v a v est un ensemble sommets et a est une collection de paires d de v voire de de v avec deux si le graphe est non un graphe non est dit si le de tout sommet est le un graphe est dit si le sortant v w v v w a et le entrant v w v v w a sont de pour v ils sont la constante un graphe biparti est un triplet a avec a un graphe biparti est si le est de et le est de exercice soit x c une classique a soient deux classes de couleur et soit vert une couleur d en alors le graphe biparti gvert est nous omettons les mots entrant et sortant puisqu il est qu il s agit du entrant dans le cas de et du sortant dans le cas de b soit y et li y x c x y i soient lin bis et terre trois couleurs d alors pour llin y et lbis y le graphe biparti rterre est exercice soit x une classique soient deux classes de couleur soient vert une couleur d en et rouge une couleur d en soient bm les composantes connexes de grouge en le graphe biparti y m d comme suit x y d ssi x y gvert pour au moins un y bi alors y est solution notez que pour y bi et x x y est vert pour au moins un y bi ssi il existe x xm tels que xi est rouge pour i m et xm y est vert concluez par l exercice que tous les sommets en m ont le en y de analogue montrez que pour x et y bi tels que x y est rouge le nombre de z bi tels que x z est rouge ne pas de x y ou notons ce nombre alors le de tout v est son en x par q par a il ne donc pas de un graphe biparti est complet en tant que graphe biparti si a un graphe biparti qui n est ni vide ni complet est non trivial un hypergraphe h v a consiste en un ensemble v sommets et une collection a de de v avec des un hypergraphe est dit si u pour tout a a il est dit de r si tout v v appartient exactement r ensembles a dans a l hypergraphe complet sur v est v a v u chaque ensemble a est une fois un coloriage des de l hypergraphe complet est une application 
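A minimal sketch of the classical (2-dimensional) Weisfeiler–Leman refinement described above, in which the indexing of the new colour vectors plays the role of the indexer routine mentioned earlier. The function name and the small 4-cycle example are mine; the colour of a pair (x, y) is repeatedly replaced by (old colour, multiset over z of the pairs (c(x, z), c(z, y))) until the partition into colour classes stops getting finer.

def weisfeiler_leman(n, colour):
    """colour: dict mapping (x, y) -> initial colour for all x, y in range(n).
    Returns the stable refined colouring, with colours renamed to small integers."""
    while True:
        signatures = {}
        for x in range(n):
            for y in range(n):
                wedge = sorted((colour[(x, z)], colour[(z, y)]) for z in range(n))
                signatures[(x, y)] = (colour[(x, y)], tuple(wedge))
        # index: give each distinct signature a small integer name
        names = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colour = {p: names[signatures[p]] for p in signatures}
        if len(set(new_colour.values())) == len(set(colour.values())):
            return new_colour          # no class was split: the colouring is stable
        colour = new_colour

# Example: refine the configuration of a 4-cycle (diagonal, edge, non-edge).
edges = {(0, 1), (1, 2), (2, 3), (3, 0)}
edges |= {(y, x) for (x, y) in edges}
init = {(x, y): (0 if x == y else (1 if (x, y) in edges else 2))
        for x in range(4) for y in range(4)}
print(sorted(set(weisfeiler_leman(4, init).values())))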
de a v u un ensemble c un block design bde de v u est un hypergraphe avec v sommets et de r tel que toute paire de sommets distincts est contenue dans exactement blocks un block design a la mais avec et la condition additionnelle d un hypergraphe la peut de la si un block design est incomplet si u notons b le nombre d d un bde proposition de fisher f pour tout block design incomplet b il est de voir que cette est vraie pour les designs les blocks designs admettent une un design v u est un hypergraphe v a avec v sommets tel que tout t v de taille t est contenu dans exactement ici t et nous toujours b si fisher le statisticien ici design vient d experimental design proposition rchw pour tout design v u et tout s min nous avons b vs de johnson un d association est une classique c c telle que c x y c y x y il s agit donc d un sens du mot qui n a rien voir avec les de la soient s et r un de johnson j r s c est par ss s s c est un ensemble r la relation ri est bien l ensemble ri c i notons que nous avons implicitement un foncteur de la d ensembles avec r la de de johnson ceci est un foncteur plein autrement dit les seuls automorphismes de j r s sont ceux qui sont induits par sym identification de groupes et de il est une chose de que deux groupes g h sont isomorphes et une autre de construire un isomorphisme de explicite entre eux cette implique au moins de donner les images gr de gr de voyons un cas particulier qui nous sera crucial nous aurons un groupe de permutation g sym et nous saurons qu il est isomorphe au groupe abstrait altm comment construire un isomorphisme si m n est pas trop petit en relation n il est connu que g doit isomorphe que le groupe altm un groupe de permutation de la forme alt k m qui n est autre m agissant sur l ensemble sk s k k est un ensemble m en d autres termes il existe une bijection sk et un isomorphisme g alt tels que g g le consiste construire sk et g alt calculables en temps polynomial avec ces nous suivons bls soient l orbitale la plus petite de g hors la diagonale soit l orbitale la plus grande nous supposerons que babai nomme les groupes alt k m groupes de johnson par analogie avec les de johnson puisque alt k n est qu un de altm ne pas appeler ce dernier groupe de m ramerrez m k ce qui revient dire que n n est pas trop grand en relation alors sk k rk sk pour x y b x y z x z y z ceci est l ensemble de tous les z tels que z intersecte x mais pas y soit c x y r z r z x y alors c x y s sk s s sk s x s y s sk i s i est l de x qui n est pas dans y soit la collection c x y x y sans nous pouvons calculer et comparer c x y pour x y et calculer et indexer tout en temps polynomial nous calculons aussi en temps polynomial l action de g sur induite par l action de g sur ceci g alt il y a une bijection naturelle j qui commute avec l action de g elle envoie c x y i i est l de tel que c x y s sk i s il est clair que pour c x y ssi j c x y ainsi nous obtenons la bijection sk par satisfait g g les applications sont donc celles que nous nous avons construit un isomorphisme explicite entre g et alt notons que cette nous permet de construire un isomorphisme explicite entre d un un d association qu on sait isomorphe un de johnson j m k et de l autre ce si m k alors n est si grand que m no log n en ce cas nous pouvons enlever le groupe g c dans l application qui nous un quotient de brutale comme dans le cas de la preuve du luks nous pourrions aussi nous passer de la supposition m k au de quelques complications en ce qui suit en particulier ne serait pas rk comme dans sinon un 
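The Johnson scheme J(m, k) just defined can be generated directly. In this sketch (names mine) the colour of a pair of k-subsets (T1, T2) is |T1 \ T2|, so that colour 0 is the diagonal; this is the association scheme left invariant by the induced action of Sym(m).

from itertools import combinations

def johnson_scheme(m, k):
    """Vertices: the k-element subsets of an m-element set.
    Colour of (T1, T2): |T1 \\ T2|."""
    vertices = [frozenset(c) for c in combinations(range(m), k)]
    colour = {(a, b): len(a - b) for a in vertices for b in vertices}
    return vertices, colour

# Example: J(5, 2) has 10 vertices and colours {0, 1, 2}.
verts, col = johnson_scheme(5, 2)
print(len(verts), sorted(set(col.values())))   # -> 10 [0, 1, 2]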
autre rj la principale input g sym x y de g transitif aligner coupe coupe ou johnson coupe j de m coupe ou relations rels non altm blocs k non oui m petit non oui k oui cas trivial non non locaux non non n oui alt g primitif lem me des designs non oui oui une couleur domine output isog x y fonction weisfeiler leman oui x y relations sur oui pullback premiers pas la de luks g transitif non n non oui altm oui m petit oui les premiers pas de la sont ceux de la preuve du luks en particulier si g sym n est pas transitif nous exactement comme dans le cas non transitif de la preuve du bien qu il soit possible que n ne que la marche puisque son est aussi dans ce cas nous n avons qu subdiviser le selon les orbites de supposons que g soit transitif nous savons que nous pouvons trouver rapidement un de blocs minimal r bi i r bi par nous trouvons aussi en temps polynomial le n g des g g tels que big bi pour tout i le groupe h agit sur au lieu du bcp nous utiliserons une de la des groupes finis simples cgfs elle a pour la fois par cameron puis par cam ma soit h sym r un groupe primitif r est plus grand qu une constante absolue alors soit a r r soit b il y a un m h tel que r se subdivise en un de m blocs sur lequel k k m agit comme un groupe altm m en plus h m la borne h m r se de m r r m s s ms r et h m s s est un dans il est possible bls de trouver en temps polynomial le normal m et les blocs de l action de nous avons vu au comment explicitement l action de m avec celle de alt k m par ailleurs l algorithme de nous permet de calculer en temps polynomial et donc nous dit aussi si nous sommes dans le cas a si c est le cas nous pour nous le logarithme en base et non pas le logarithme log log l dans cam ma est plus fort il toute l action de h sur vrai dire le groupe m s s m est isomorphe en tant que groupe de permutation alt k m s nous avons r k comme dans le cas transitif de la preuve du nous ainsi le r r instances du pour des de longueur si nous sommes dans le cas b nous toujours par le h m instances du avec m la place de h par l et comme dans l isoh x y isom x s est un de des classes de m dans si m c log n c est une constante m mm mc log n c log n m donc ici comme dans le cas a nous nous permettons de k comme dans le cas transitif de la preuve du nous obtenons une r c log n o log n instances du pour des de longueur ceci est tout fait consistant avec l objectif d avoir une solution en temps quasipolynomial en n ou en temps no log n il reste savoir que faire si nous sommes dans le cas suivant il y a un isomorphisme alt c log n c une constante ici nous avons i g par la de m dans la g et cela ii n par le stabilisateur des blocs dans la partie b du ce cas nous occupera pour le reste de l article babai indique comment enlever la de cgfs cette soient g et n comme avant avec g transitif alors est un groupe primitif agissant sur l ensemble de blocs si un groupe de permutations sur un ensemble r est tel que son action sur l ensemble des paires d distincts de r est transitive le groupe est dit doublement transitif or un de pyber qui ne pas de cgfs nous dit qu un tel groupe est soit alt r soit sym r soit d ordre log si est alt r ou sym r nous sommes dans le cas que nous discuterons d ici jusqu la si est doublement transitif mais n est ni alt r ni sym r nous pouvons comme dans le cas transitif de la preuve du puisque r o log r r babai propose aussi un traitement alternatif plus et supposons donc que n est pas doublement transitif alors la schurienne qu elle induit n est pas une clique en nous pouvons donner cette la 
et reprendre le de l argument ce la structure de l action de alt stabilisateurs orbites et quotients alternants nous aurons besoin de plusieurs sur les g altk ils joueront un crucial dans la des locaux dans la version originale ba ils ont aussi dans le par bls dans cet lem me soit g sym primitif soit g altk un avec k max alors est un isomorphisme prouver ce lem me est peu un exercice en des groupes il faut utiliser baps prop pour le cas de socle et la conjecture de schreier pour le cas de socle non la conjecture de schreier est un mais un dont la preuve son tour de cgfs par contre pyber py a une preuve du lem me qui n utilise pas cgfs avec une condition plus stricte k max c log c constante la de cgfs a donc de la preuve du principal soit g sym soit g symk un homomorphisme dont l image contient altk alors x est dit atteint si gx ne contient pas altk lem me soit g sym soit g altk un avec k max est la taille de la plus grande orbite de a si g est transitif tout x est atteint b au moins un x est atteint preuve esquisse a ceci du lem me si g est primitif ou si k ker pour k le stabilisateur d un de blocs minimal il reste le cas de k altk surjectif en lem me pour ki arbitraires k ks et un k s s simple il doit y avoir un i tel que se factorise comme suit k ki s un en utilisant ce lem me pour les restrictions ki de k aux orbites de k nous passons une orbite ki et par induction b soient les orbites de g et soit gi la restriction de g par le lem me en a il doit y avoir un i tel que se factorise en g gi altk un alors par a gx gi x altk pour tout x la proposition suivante jouera un crucial au proposition soient g sym transitif et g altk un soit u l ensemble des non atteints a supposons que k max est la taille de la plus grande orbite de alors g u altk b supposons que k si est une orbite de g qui contient des atteints alors chaque orbite de ker contenue dans est de longueur rappelons que g u g g xg x u stabilisateur de points preuve a il est facile de voir que g u en tant qu ensemble alors g u g et donc g u or altk supposons que g u e alors se factorise comme suit g altk puisque g u est le noyau de g ici est un et donc par le lem me b il existe un x u tel que x altk or x gx altk parce que x est dans u c non atteint contradiction b comme contient des atteints et est une orbite de g tout de est atteint soit n ker x la longueur de l orbite xn est xn n nx n n gx ngx gx g gx g ngx gx altk gx or tout propre de altk est d indice donc xn le cas de grande g primitif oui k oui cas trivial non oui pullback le cas de g primitif nous pouvons supposer que g est isomorphe en tant que groupe de permutation alt k m puisque nous avons les autres cas au en passant un groupe non primitif m le cas non primitif sera au comme nous l avons vu au nous pouvons construire une bijection entre et l ensemble sk des avec k d un ensemble cette bijection induit un isomorphisme g alt si k alors est en bijection avec et g altn altm nous sommes donc dans le cas trivial le groupe autg x consiste en les de altn qui permutent les lettres de x de la couleur et isog x y est non vide ssi x et y ont exactement le nombre de lettres de chaque couleur si aucune lettre n est ni en x ni en y nous ajoutons la condition que la permutation de n qui induit x y soit dans altn alors soit g primitif k deux sont des jumeaux par rapport un objet si la transposition le laisse invariant il est clair que les jumeaux forment des classes d et que pour toute telle classe d c tout sym c laisse l objet invariant notre objet sera la x ou y sont des jumeaux par rapport 
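The trivial case mentioned above (G the full alternating group acting on the positions of the strings) admits a direct test for whether the set of isomorphisms is non-empty: compare the number of letters of each colour, with a parity check in the special case where no letter is repeated, since then the aligning permutation is unique. A minimal sketch, with my own function names:

from collections import Counter

def permutation_parity(perm):
    """Parity (0 even, 1 odd) of a permutation given as a list of images."""
    seen, parity = set(), 0
    for start in range(len(perm)):
        if start in seen:
            continue
        length, x = 0, start
        while x not in seen:
            seen.add(x)
            x = perm[x]
            length += 1
        parity ^= (length - 1) & 1
    return parity

def iso_alt_nonempty(x, y):
    """True iff some even permutation of the positions sends string x to y."""
    if Counter(x) != Counter(y):
        return False
    if max(Counter(x).values()) >= 2:
        return True   # swapping two positions carrying the same letter fixes x
    # every letter occurs once: the aligning permutation is unique
    position_in_y = {letter: i for i, letter in enumerate(y)}
    perm = [position_in_y[letter] for letter in x]
    return permutation_parity(perm) == 0

print(iso_alt_nonempty("abc", "acb"))    # False: the unique map is a transposition
print(iso_alt_nonempty("aabc", "acab"))  # True: the repeated letter absorbs parity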
x si pour tout i x i x i nous pouvons donc facilement et en temps polynomial les classes d en dites classes de jumeaux et s il y a une classe d c de taille examinons cette puisque nous devrons l exclure la classe c de taille est unique et donc canonique si x a une telle classe et y ne l a pas ou si les deux ont de telles classes mais de tailles alors x et y ne sont pas isomorphes si x y ont des classes de jumeaux cx cy de la taille nous choisissons alt tel que cx cy nous supposons m en y par nous notre au cas cx cy l exemple le plus simple de ce que babai appelle aligner nous avons cx et cy alors soit c cx cy la partition c c de induit une partition de ssi k j de c et j de il est de montrer que j kj pour j k donc pour j nous avons notre celui de isoh x y h alt c ici le besoin de prendre un stabilisateur d ensemble savoir alt c ne pose aucun souci nous engendrons h en prenant des de deux de alt c alt deux de alt c alt et un alt de la forme c si le nombre de est moindre et la discussion se notre se celui de isoh x pour y et h alt c alt c i comme c est une classe de jumeaux pour x tout de alt c laisse x invariant si alors isoh x y soit alors nous avons notre celui de isoh rappelons que h agit sur avec des orbites de longueur nous donc comme dans le cas non transitif de la de luks preuve du thm des aux de johnson x y relations sur weisfeiler leman lem me des designs une couleur domine oui coupe ou johnson non discutons maintenant le cas de g primitif et plus g isomorphe alt k m k maintenant nous pouvons supposer que nos x y n ont pas de classes de jumeaux de taille les outils principaux que nous lem me des designs nous seront utiles voire essentiels aussi dans le cas de g imprimitif nous avons une bijection entre les de et s k pour x nous avons donc une structure relationnelle x ri sur xk ri si xk sont tous et x i est l de qui correspond xk nous appliquons x le foncteur qui fait d elle une structure de partition puis le foncteur encore qui nous donne une et le foncteur par nous obtenons ainsi un x cx c qui est une comme sont des foncteurs l assignation de cx x est canonique elle nous sera donc utile si cx et cy ne sont pas isomorphes sous l action de altm alors x et y ne sont pas isomorphes sous l action de alt k m non plus nous obtiendrons une classique de canonique partir de cx lem me des designs soit cette nouvelle sera non triviale soit nous obtiendrons un coloriage canonique sans couleur dominante ce qui nous permettra de le un certain nombre de pour des plus courtes comme dans l algorithme de luks supposons alors que nous disposons d une classique non triviale de canonique x la nous donnera l un ou l autre de ces deux soit un canonique de soit un de johnson de canonique dans dans un cas comme dans l autre avoir une telle structure canonique limite fortement l ensemble d isomorphismes et automorphismes possibles nous pourrons g un avec dans le cas du ou m dans le cas de johnson est pour une lem me des designs une x c c et un une couleur i est dite si c i pour valeurs de la classe de couleurs c i est elle aussi dite dominante par contre si pour toute couleur i la classe c i est de taille le coloriage est dit un comme avant deux sont des jumeaux par rapport une structure x ici une sur si aut x proposition lem me des designs soit x c c une k soit supposons qu il n y a aucune classe de jumeaux dans avec alors au moins une des options suivantes est vraie a il existe k tels que n a pas de couleur b il existe k tels que a une couleur c et c n est pas une clique la notation a dans les sections en 
particulier le est tout simplement un coloriage de lem me lem me de la grande clique soit x c une classique soit c une classe de couleurs avec si x c est une clique alors c est une classe de jumeaux preuve supposons que c n est pas une classe de jumeaux il y a donc un x et une couleur disons azur telle que c x y est de cette couleur pour au moins un y c mais pas pour tous comme x c est une clique x appelons la couleur de c carmin et celle de x bronze soit b l ensemble des de couleur bronze il s agit de construire un block design qui contredise l de fisher prop ab y c by azur pour b comme x b et c xy azur pour au moins un y c et c xy la couleur de y tous les de ab sont carmin par la de x et la des nombres d intersection def azur bronze et donc ne pas de comme nous l avons dit au donc c pour tout b montrez de similaire que pour v c la taille de b b v ab b b c bv azur ne pas de comme x c est une clique c v v est de la couleur pour tous v v c v v appelons cette couleur montrez que b b v v ab azur alors c ab est un block design incomplet en par l de fisher or nous savons que b c et b c contradiction preuve du lem me des designs prop supposons que pour chaque k a une couleur c et en plus si k c est une clique nous arriverons une contradiction soit c c vide comme c est trop grande pour un ensemble de jumeaux donc il existe u v c u v tels que uv aut x soit de longueur minimale r entre les satisfaisant c c par cette et les dans la yr sont tous distincts en les permutant nous pouvons assurer que u v et sans perte de que soit i u v yr u soit ii u yr dans le cas i nous choisissons r et voyons que u v dans le cas ii nous choisissons et obtenons u v cx v u nous aurons donc une contradiction avec notre supposition une fois que nous aurons que u v c le fait que u v c s ensuivra de l c c cette son tour se par du fait que pour de longueur k et z z c c z pourquoi vrai nous sommes en train de supposer que c est une clique et que donc par le lem me de la grande clique tous les de c sont des jumeaux en en particulier pour u c z u zu ne pas de puisque le coloriage de sommets en est un de celui en par la de la il s ensuit que soit c c z soit c c soit c z comme les deux sont exclues nous appliquons le lem me des designs avec la x x est par x de la au de la section nous parcourons tous les tuples possibles k jusqu trouver un tuple pour lequel la ou la conclusion du lem me des designs est vraie si la conclusion est vraie nous cx et sautons la section si la conclusion est vraie nous passons au ayant c c est la couleur de coupe ou johnson nous avons une classique non triviale c nous rappelons que ceci est un coloriage c du graphe complet sur tel que a les sommets ont leur couleur propre couleur diagonale b les x y x y ne sont pas toutes de la couleur c la couleur c x y de l x y c y x et d l axiome de se nous voudrions trouver des structures qui canoniquement de et qui contraignent son groupe d automorphismes il est raisonnable de s attendre ce que de telles structures existent par le si le groupe d automorphismes est transitif soit il est imprimitif et donc il laisse une partition invariante soit il est d alt k m k qui laisse invariant un de johnson soit il est petit et donc le stabilisateur de quelques points aura des orbites petites et ainsi nous donnera un coloriage sans couleur dominante le est de trouver de telles structures et de le faire canoniquement si n est pas primitif la est facile soit r la couleur non diagonale la plus rouge telle que le graphe gr x y x y c x y r est connexe par l exercice ceci donne une 
partition de dans des ensembles de la taille coupe ou johnson soit x c une classique uniprimitive soit en temps nous pouvons trouver soit un de soit un de johnson sur et un h sym avec sym h log tel que le voire le est canonique en relation le groupe h sera comme un stabilisateur de points en la valeur dans l est assez arbitraire toute valeur serait valable une valeur proche les constantes implicites preuve choisissons un x arbitraire donnons chaque y la couleur de c x y ce coloriage est canonique en relation gx s il n y a aucune classe de couleur c de taille la partition triviale de chaque classe nous donne un de et nous avons supposons par contre qu il y ait une classe de couleur disons clin de taille comme la relation rlin de cette couleur est non c y z lin ssi c z y lin le de rlin ou de toute autre relation est de exercice soient x z tels que c x z lin et soit y tel que c x y c z y lin appelons c x y bis et c z y terre le graphe biparti a avec sommets clin cbis et rterre le graphe est non vide par et par l exercice par et le nombre de y tels que c y w est d une couleur est de donc il est toujours n pour lin appliquant ceci terre et nous voyons que le a est et donc comme le graphe n est pas complet nous appliquons donc la proposition a avec notons que nous travaillerons donc avec un graphe biparti a la sera d essayer soit de rendre plus petit par au moins un facteur constant soit de trouver des structures en lui soit ces structures nous permettront de quand soit elles nous aideront ou trouver un de johnson assez grand sur tout d abord nous devrons borner la en c voire les jumeaux il y a deux raisons ceci si nous une structure assez riche en cela impliquerait peu ou rien sur si beaucoup d de se connectent de la si est petit nous colorierons chaque sommet de par son ensemble de voisins en ceci nous donnera un coloriage canonique en relation g or dans ce coloriage deux sommets en auront la couleur ssi ils sont des jumeaux donc si aucune classe de jumeaux en n a nous aurons un exercice soit a un graphe biparti et non trivial alors aucune classe de jumeaux en n a plus de solution nous assurons que en prenant le s il est soient le des sommets en et s une classe de jumeaux en montrez que et donc exercice soit a un graphe biparti sans jumeaux en soient montrez que pour au moins un i il n y a aucune classe de jumeaux en dans le graphe ci a ci exercice soit x c une soient deux classes de couleurs en soit brun une couleur d en a alors pour x y la couleur c x y si x et y sont des jumeaux dans le graphe biparti gbrun proposition coupe ou johnson biparti ou una partita a poker soit x a un graphe biparti avec et tel qu aucune classe de jumeaux en n ait plus de alors nous pouvons trouver en temps soit un de soit un de johnson sur et un h g g sym sym avec g h log tel que le voire le est canonique en relation la condition sur les classes de jumeaux ici remplie avec la place de la de la preuve du thm l exercice en ce qui concerne le temps de la nous expliciterons quelques qui pourraient ne pas ce qui sera le le plus est l indice g h le groupe h sera comme un stabilisateur de points nous devons bien le nombre de points que nous stabilisons esquissons la de la preuve ce que nous voulons est une la proposition nous pouvons produire une classique sur partir du graphe x tout simplement en utilisant ce qui demande de la ruse est de garantir que la restriction x la classe de couleurs dominante s il y a une soit non triviale pour obtenir une sur nous noterons que le graphe x induit une relation sur d est au plus 
le de la d de si telle chose existe sinon les nous donnent une partition de si la relation est triviale dans le sens de contenir toutes les d distincts dans nous obtenons un de johnson si elle est non triviale mais contient beaucoup de jumeaux elle nous donne une de descendre un plus petit s il n y a pas beaucoup de jumeaux nous utilisons le lem me des designs par un lem me standard sur les designs pour obtenir une classique sur ce qui trouver preuve si c c est une constante nous colorions chaque v par ce coloriage est canonique en relation h e autrement dit il n est pas canonique du tout peu importe trivialement c log nous pouvons donc supposer que si log disons alors par la discussion nous obtenons un de et donc un de ce coloriage est canonique en relation un h d indice log log nous pouvons donc supposer que log notre est d les jumeaux nous divisons dans ses classes de jumeaux et colorions chaque v par son nombre de jumeaux et par son dans le graphe a nous obtenons un de sauf s il y a un entier tel que l ensemble des sommets v sans jumeaux et de est de taille supposons que cela est le cas comme et qu il n y a pas de jumeaux en nous voyons que nous pouvons supposer que en a par son si soit h a l hypergraphe dont les sont les voisinages en a des sommets dans elles sont toutes contenues en l hypergraphe est comme il n y a pas de jumeaux dans il n y a pas d identiques si h est l hypergraphe complet alors peut avec le de johnson scoppia in un pianto angoscioso e abbraccia la testa di johnson supposons alors que h n est pas complet nous voudrions avoir un coloriage canonique sur pour un d l l log log tel que les de ne soient pas tous jumeaux si nous d et colorions vd vd h en et tout le reste en gris supposons par contre que soit d nous colorions vd en gris si les vi ne sont pas tous distincts dans le cas contraire nous donnons la couleur h h vd h cette de coloriage peut faite en temps de l ordre de log log d si les tuples avec vd distincts n avaient pas tous la couleur nous aurions un design d avec donc par la proposition pour s comme log et peut plus grand s qu une constante log log s s ce qui donne une contradiction donc pour arbitraire les tuples avec vd distincts n ont pas tous la couleur en d autres termes les de ne sont pas tous jumeaux en relation notre nouvelle structure s il y a une classe s de jumeaux de taille alors par l exercice au moins un des deux graphes s a s s a s n a aucune classe de jumeaux dans comme pour nous appliquons la proposition un de ces deux graphes disons celui sur s si les deux sont valables et terminons que la taille de est descendue seulement mais tous nos choix ont canoniques des si l on veut donc gratuits nous n avons perdu que du temps pour de temps ce qui est acceptable alors nous avons un coloriage de en relation auquel il n y a aucune classe de jumeaux en de taille nous appliquons les foncteurs weisfeilerleman ce coloriage puis nous utilisons le lem me des designs prop avec nous trouvons les d ou d dans l de la proposition par force brute en temps proportionnel nous les et nous imposons que h ce qui a un de dans le sens g si nous sommes dans le premier cas du lem me de designs pas de couleur dominante nous cueillons les classes de couleur en par la plus rouge la en comme une longueur d onde jusqu avoir une union des classes s avec ceci marche s il n y a aucune classe de taille si telles classes existent nous s comme la classe la plus grande de ce type nous appliquons l exercice et obtenons un graphe a remplissant les conditions de notre proposition avec 
s ou s et donc donc nous appliquons la proposition ce graphe la marche il est important ici que puisque nous avons encouru un dans l indice restons donc dans le cas du lem me des designs nous avons un coloriage de avec une classe de couleurs c telle que et une classique y non triviale sur nous un graphe avec des sommets est de couleur et vient d par le lem me des designs les seront non pas seulement celles en a en noir mais aussi les entre les de dans les couleurs par nous appliquons les et ce graphe et obtenons une x la x est un de si elle n a pas de couleur nous notre celui pour a comme avant nous pouvons appliquer la proposition un tel graphe sans changer parce que la marche ici aussi parce que il est important que soit plus petit que par un facteur constant puisque le jusqu maintenant dans l index g h est supposons donc qu il y a une classe de couleurs dans x elle doit un de c car la restriction de x n est pas une clique si elle l la restriction de y l aurait aussi et cela est impossible par l exercice nous pouvons supposer qu il existe une classe de couleurs en qui satisfait sinon nous avons un de et pouvons le fait que implique que nous pouvons supposer aussi que les de x en ne sont pas toutes de la couleur si elles l il y aurait une classe de jumeaux en dans le graphe dont x est un dans ce cas par l exercice nous aurions une a et nous pourrions en utilisant la proposition de ainsi nous avons tout la proposition nous l appliquons x nous obtenons soit un de soit un graphe biparti avec tel qu aucune classe de jumeaux en n a plus que nous pouvons supposer que parce que dans le cas contraire nous avons obtenu un de alors nous pouvons alors faire de la nous appliquons la proposition avec la place de a la se pas plus que o log pas puisque si la taille de ou de en dessous de pour la valeur originale de alors nous avons obtenu un de comme nous l avons vu coupe ou johnson biparti utilise coupe ou johnson son tour coupe ou johnson se coupe ou johnson biparti pour un graphe biparti a avec de taille au plus une de la taille du original proposition coupe ou johnson soit x c une avec des classes de couleurs de sommets supposons que ni ni est une fonction constante alors nous pouvons trouver en temps soit un de ou un graphe biparti a vi ci tel que toute classe de jumeaux en contient au plus et un y tel que le voire le graphe biparti est canonique en relation gy g sym sym il va de soi que dire que est constant dire que x est une clique preuve si la restriction x une clique alors par pour toute couleur en pourpre disons les voisinages dans gpourpre des sommets en nous donneraient un block design et sur le design est incomplet parce que c n est pas monochrome sur l de fisher nous donne que en contradiction avec nos suppositions donc n est pas une clique si x n est pas primitive la plus rouge de ses relations non connexes nous donne un canonique de par l exercice nous pouvons donc supposer que x est primitif nous avons deux cas x primitive et x imprimitive supposons d abord que x est imprimitive la relation non connexe la plus rouge dans x nous donne une partition de dans des ensembles bm m tous de la taille nous avons donc une structure en et nous l utiliserons soit pour soit pour par un facteur constant le premier pas consiste montrer qu il n y a pas de jumeaux dans comme notre est la couleur d une en sait si ses sommets sont des jumeaux en relation ex donc s il y avait des jumeaux dans en relation nous aurions soit qu une des couleurs d en donne une relation non connexe ce qui contredit le fait que 
x est uniprimitive soit que tous les de sont des jumeaux en relation en dans ce dernier cas par l exercice serait monochrome ce qui n est pas le cas en conclusion il n y a pas de jumeaux dans en relation notre intention est d appliquer l exercice pour obtenir un graphe biparti m avec m nous devons seulement faire attention ce que ce graphe ne soit pas trivial soit dk le de tout w dans le graphe biparti gk pour une couleur k gk consiste en les de couleur par l ex le dk ne pas de si dk pour tout k nous un w non canonique et obtenons un de en assignant la couleur c x w au sommet x supposons donc qu il y a une couleur que nous appellerons violet telle que dviolet s il y a un i m tel qu il n y a aucune classe de plus que jumeaux dans en relation bi nous un y bi d un tel i non canoniquement ainsi cet i de cette nous obtenons une au graphe biparti bi gviolet bi supposons que cela n est pas le cas donc pour chaque i il existe une classe ti de jumeaux en relation bi telle que pour chaque w bi les de w tout v ti sont de la couleur alors elles doivent violettes soit vert une couleur d en qui ne soit pas violet alors le graphe x m d dans l exercice n est pas vide comme vi i est violet pour tout v ti x n est pas complet non plus comme x est il n y a aucune classe de jumeaux en en relation m avec ex nous avons donc tout un graphe biparti x du type que nous maintenant le cas de x primitive fixons un y arbitraire non canoniquement nous pouvons supposer qu il y a une couleur disons violet telle que dviolet puisque sinon les couleurs des qui connectent les de avec y nous donneraient un de lviolet y x c x y violet donc soit bleu une couleur d en x telle que le de gbleu est positif et une telle couleur existe parce que x n est pas une clique s il y a plusieurs couleurs comme cela nous choisissons la plus bleue d entre elles alors lbleu y satisfait le graphe biparti gviolet w u est par l exercice il est non vide parce que pour tout u u et donc lviolet u s il complet nous aurions lviolet u pour tout u comme y u ceci impliquerait que lviolet u or cela voudrait dire que y et u sont des jumeaux dans le graphe gviolet par le argument qu avant sur l exercice la de x et le fait que ne soit pas monochrome impliquent qu il n y a pas de jumeaux dans en relation au graphe gviolet donc gviolet n est pas complet par l exercice nous obtenons qu aucune classe de jumeaux dans gviolet n a plus de nous avons donc le dans la preuve originale de babai ce point ce qui suit est un argument alternatif par lui col rumore sordo di un galoppo lorsque cet article en train d il est plus concis et que l argument d origine en plus d correct avant la preuve faisait deux fois ou plus recours la proposition ce qui faisait l indice g h de catastrophique et une couleur domine non de aligner g transitif etc de m le cas sans couleurs dominantes nous sommes dans le cas dans lequel un coloriage cx c n a pas de couleur dominante ici cx est l image d une structure x sous un foncteur f qui commute avec l action de h h alt xi le fait que cx n a pas de couleur dominante nous servira pour trouver ou ses isomorphismes possibles en h pour trouver ou des isomorphismes en tout h alt nous n avons qu travailler avec un ensemble de s des classes de dans h et faire l union de cx cyi pour yi isoh x y x yi x yi cx cyi ceci est similaire l en le de la est par s si le coloriage cx n est pas une permutation en du coloriage cyi alors cx cyi supposons par contre qu il y a au moins un tel que cx nous disons que aligne cx et cyi il est trivial de trouver or x yi x i cx 
comme cx n a pas de couleur dominante ceci est assez contraignant ce que nous voulions appliquons cette au cas de g primitif que nous sommes en train de discuter il y a une bijection s k donc cx induit un coloriage p ki ki i ki k nous sommes dans une situation similaire celle de la du mais en mieux il est facile de montrer que comme aucune classe de couleur de c plus de aucune classe de couleur de plus de nous alors comme dans le cas intransitif de la preuve de luks thm ce qui le n d isomorphisme de pour des de longueur et de longueur totale le dernier pas lifting consiste trouver des de g qui induisent une bijection ceci est trivial le cas du maintenant un de d un ensemble de sommets ce sera canoniquement savoir en tant que l image d une structure x sous un foncteur tout comme le coloriage au nous pouvons supposer que le a une classe de couleurs c dominante puisque dans le cas contraire nous pouvons passer au nous voulons savoir quels de alt respectent le ceci nous aidera contraindre les isomorphismes de x tout comme en par la de c est en ensembles de la taille les seules permutations en c qui sont permises sont celles qui respectent la partition le groupe qui respecte la partition est isomorphe nous avons donc notre un avec avoir ce nous travaillons comme dans le sur les autres classes de couleurs deux nous si les partitions des deux ont le nombre d ensembles de la taille pour chaque couleur puis nous alignons les deux et exactement comme pour le de l automorphisme le cas du de johnson soit un de johnson sur un ensemble de sommets ou deux de johnson j mi ki ki mi sur des ensembles de sommets de la taille nous avons vu au comment explicitement avec les ensembles de taille ki d un ensemble de taille mi si et nos structures ne sont pas isomorphes si et nous une bijection entre et et alignons les deux structures nous avons notre un avec m la place de la situation nous est donc plus favorable que dans le cas du nouveau nous laissons la au lecteur une petite confession le cas de g primitif que nous venons de de traiter pourrait exactement comme le cas de g imprimitif que nous examinerons maintenant la motivation du traitement pour g primitif est aucune peine n est perdue puisque toutes les techniques que nous avons nous seront essentielles dans le cas imprimitif le cas imprimitif nous avons une application surjective explicite g alt g sym est un groupe de permutation m nous pouvons supposer que c log n c arbitraire l application se factorise comme suit g alt n est le stabilisateur d un de blocs et alt est un isomorphisme nous devons isog x y x y sont des nous avons le cas n e nous attaquerons le de locale pour t nous arriverons obtenir un soit du fait que autgt x contient alt t de soit du contraire ici gt le groupe g g t g t nous calculerons tous ces pour t d une taille k si le nombre de de est grand nous aurons que autg x contient un grand groupe alternant ce qui restera faire sera une version de la du dans le cas contraire les formeront une structure dont la est nous pourrons donc appliquer le lem me des designs suivi de comme avant il y a aussi quelques autres cas particuliers mais ils nous des ce qui est aussi bien les certificats locaux d automorphismes un local pour t est soit une paire pas plein w m t m t sym t m t alt t donc pas plein et autw gt x m t soit une paire plein k t k t autgt x et k t alt t le local de x de canonique il est clair qu un plein voire pas plein garantit que autgt x est alt t voire ne l est pas si t est en tant que tuple son de l ordre de t seulement dans le sens 
de ne pas en le groupe e sym disons a une apparence si nous le regardons du point de vue de l ordre ou de l ordre nous construisons le par une au de chaque pas w et a w est le groupe autw gt x la w sera invariante sous a w au tout de la w et a w gt nous pouvons calculer gt comme dans l exercice en temps k k chaque pas nous ajoutons w tous les atteints par a w voir puis nous mettons a w jour selon le nouveau w nous nous si a w alt t ou si w ne plus ce qui veut dire qu aucun de w n est atteint par a w il est clair qu il y aura la dans le cas de nous retournons plein w a w dans le cas de nous retournons plein a w il est clair que le stabilisateur des points a w est contenu ou dans la nomenclature de babai global fait autgt x sym non pas seulement dans autw gt x mais aussi dans autgt x puisqu il tous les points de w nous savons que a w alt t par la proposition sous la condition que max si a w alt t est facile nous n avons qu en utilisant si deux arbitraires de alt t sont en a w de la il est simple de quels sont atteints par a w nous calculons a w x pour chaque x par et toujours par si a w x alt t ceci prend du temps polynomial en il reste voir comment mettre jour a w a w nous w pour l ancienne valeur de w tout de a w est dans a w et donc a w autw a w x comme dans l w w x x iso autw x aut x n a w n est le noyau de et parcourt des des k classes de n en a w nous pouvons trouver rapidement un a w pour tout sym par la proposition nous donne que toute orbite de n contenue en w l ensemble d atteints par a w est de longueur en par la mettre a w jour se k de de iso pour des de longueur comme le nombre d est la fait appel k fois pour des de longueur ceci comme la routine qui prenait k de temps est acceptable pour k log nous choisirons comparaisons de une de la cidessus nous permet d la relation entre deux locaux pour deux soient x t t pour s t soit xs la x i si i s xs i glauque si i glauque nous voulons calculer isogt t xw xw gt t est la classe des de g qui envoient l ensemble t t et w est la valeur de w quand la est t la place de t pour nous suivons la de la suivante nous mettrons jour dans chaque non pas seulement a w mais aussi la classe a w d isomorphismes en gt t de xw w comment le faire de analogue w w w w ison x x ison x x n est le noyau de et parcourt des des k classes de n contenues en a w comme w est par a et donc par n le fait que envoie w sur w ou non seulement de la classe de n laquelle appartient la classe iso dans la expression de est vide si w w comme avant toute orbite de n contenue en w est de longueur et le se k appels par pour des de longueur par ailleurs si t et t nous sont comme tuples t t il est facile de i x t t isog t t t xw g t t t est la classe des de g qui envoient le tuple t t en nous n avons qu puis utiliser pour les de qui envoient t t dans le bon ordre l des certificats coupe coupe locaux coupe ou relations relations de leman johnson de lem me des designs m etc oui pas de couleur dominante pullback en suivant la du pour une x nous trouvons des locaux pour chaque t de taille k k est une constante c log c log et k soit f autg x le groupe par les pleins k t soit s le support de f c l ensemble des de qui ne sont pas par tout de f notre objectif est de les isomorphismes isog x de x une autre puisque les sont canoniques l assignation de f et s une l est aussi donc si nous arrivons deux cas en suivant la pour x et pour les deux ne sont pas isomorphes cas mais aucune orbite de f n est de longueur alors nous colorions chaque de par la longueur de l orbite qui le contient ceci est 
un coloriage canonique soit aucune classe de couleurs n est de taille soit une classe de couleurs de taille est en ensembles de la taille dans un cas comme dans l autre nous passons une cas et une orbite de f est de longueur cas alt f nous sommes dans le cas de grande nous comme au jusqu au point nous devons isoh x y y est g et h alt k alt et soient g des arbitraires sous de deux de alt alt g par nous savons que les classes x i sont non vides puisque alt f comme k n a pas d orbites de longueur nous pouvons ces deux classes par des appels pour des de longueur et longueur totale elles engendrent auth x encore par le fait que alt f la classe isoh x y sera non vide ssi isok x y est non vide nous pouvons cette classe par des appels comme puisque k n a pas d orbites de longueur si elle est non vide nous obtenons la isoh x y auth x isok x y cas alt f soit d l entier maximal avec la que f est c f agit transitivement sur l ensemble des dtuples d distincts de par cgfs d si nous ne voulons pas utiliser cgfs nous avons la borne classique d log choisissons arbitrairement le reste de notre traitement de ce cas sera donc seulement canonique en relation g g g g xi xi i d ce qui comme nous le savons n est pas un voir le du la restriction du groupe f est transitive sur mais elle n est pas doublement transitive donc la schurienne qui lui correspond n est pas une clique nous livrons cette tout comme la du pour comparer les qui correspondent deux x nous alignons leurs classes d abord si elles ne sont pas de la taille ou si une nous donne le cas et l autre pas les ne sont pas isomorphes les isomorphismes seront donc contenus dans le stabilisateur h g de l ensemble facile comme vers la du puisque est surjective nous pouvons remplacer par l application g g de h alt puis nous construisons les comme et comparons ce que nous donne tout la nous nous occupons du de il s agit comme d habitude d appels pour des de longueur et longueur totale cas nous en alignant les supports s pour les x et en par g g tout comme dans le cas b nous allons une relation avec peu de jumeaux pour la donner au lem me des designs regardons la de toutes les et sont une action de g sur est et g est aussi nous la regardons depuis longtemps puisque nous devons comparer les couleurs sur des induites par des pour si ces sont isomorphes cette nous des couleurs par des classes d deux paires x t t t t s k sont si l ensemble des isomorphismes en est non vide nous colorions t dans le coloriage de s k correspondant x par la classe d de x t ici t est un sans si t a des elle est en gris pour x aucune classe de jumeaux en ne peut avoir k s il existait un tel ensemble avec k il contiendrait un ensemble t avec k et tous les ordres t de t auraient la couleur ceci voudrait dire que l ensemble des isomorphismes en serait non vide pour n importe quels ordres t t de t en autgt xw contiendrait des donnant toutes les permutations possibles de t ceci nous donnerait une contradiction puisque t contenu en n est pas plein alors pourvu que k nous avons un coloriage de k sans aucune classe de jumeaux avec nous pourrons donc appliquer le lem me des designs une application de habituels mais calculer ces coloriages les classes d sont par contre il n y a aucun besoin de les calculer tout ce dont nous aurons besoin pour comparer des structures qui viennent de x y sera d capables de comparer deux tuples t sur la par x ou y et t sur la par x ou y et dire si elles sont de la couleur en d autres termes nous devrons calculer au tout de la pour toute paire t t k et pour les paires de 
x x x y y y l ensemble d isomorphismes en ce que nous savons faire les couleurs sont donc dans la pratique des dans un index que nous enrichissons et auquel nous faisons durant nos nous invoquons donc le lem me des designs suivi par et le reste de la fine dell opera le lecteur peut que les informations jusqu ici temps pris par des type de sont assez pour donner une borne du type exp o log c pour le temps de l algorithme qui le de l isomorphisme de ceci donne une borne exp o log n c pour le de l isomorphisme de graphes avec n sommets avec un peu plus de travail il devient clair que dans un cas comme dans l autre c nous donnons les dans l appendice l exposant c est plus petit que celui d origine il est devenu possible quelques et que j ai capable d apporter remerciements je remercie vivement babai bajpai bartholdi dona kowalski kantor puccini pyber rimbaud et pour des corrections et suggestions en particulier babai a beaucoup de mes questions et m a aussi fourni des versions ou de plusieurs sections de ba en particulier les et sont sur ces nouvelles versions je voudrais aussi remercier ladret et le dret pour un grand nombre de corrections d ordre typographique et linguistique appendice analyse du temps d quelques sur la principale tout moment nous travaillons avec un groupe transitif g sym qui s agit sur un de blocs b bi i bi bi disjoints nous notons n le noyau de l action sur vrai dire nous aurons toute une tour de de blocs bk bi est un de b bk le le moins au il n y a qu un dont les blocs bi sont tous de taille et dont le noyau n est trivial nous voudrions que l action de g sur b soit primitive donc si elle ne l est pas nous ajoutons la tour un minimal tel que bk soit un de nous b n sera le noyau du nouveau si est petit bo log b b cas a du cameron nous notre plusieurs instances du avec n la place de chacune de ces instances se en plusieurs instances une pour chaque orbite de chaque orbite de n est contenue dans un bloc de les intersections de avec les blocs de nous donnent une tour de de blocs pour si nous sommes dans le cas b du nous passons b instances du avec m g g m b la place de nous passons un nouveau b blocs et l ajoutons la tour comme son nouveau dernier b de m k niveau nous notons n le noyau de l action de m sur b alors alt k m nous g par m et b b n n donc nous avons un isomorphisme de altm nous sommes dans le cas principal que babai attaque ses une de altm soit un groupe intransitif sans grandes orbites soit un produit m soit un groupe nous quelque peu nous pourrions avoir disons un produit agissant sur une orbite de grande taille m et ce peut b seulement si m g voir la note de pied de page dans l du dans ce le passage de g m est bien gratuit d autres groupes sur des petites orbites ou plusieurs produits agissant sur des petites orbites dans le cas intransitif sans grandes orbites nous comme dans la preuve de luks la aura plus que dans luks mais au manque de grandes orbites le gain dans la est aussi plus grand dans le cas de m nous la dans le cas de qui correspond un dans des ensembles de taille r de la couleur nous avons une action primitive de alts sur un de s blocs de taille nous passons alors cette action et ces blocs sans oublier les blocs b auxquels nous retournons plus tard avoir de travailler sur alts il est clair que ce type de altk en un nombre d qui n est pas et temps examinons le temps total d de l algorithme qui trouve les isomorphismes entre deux les pas individuels sont peu aucun ne plus de no log n de temps notre attention doit se porter avant tout sur la dans la 
une est toujours d une descente soit vers des plus courtes soit vers un groupe plus petit ou au moins dans des tranches plus par une tour de de blocs ayant plus de niveaux dans le premier type de descente le groupe reste le ou est par une restriction de dans le cas la longueur des reste la nous pouvons aussi avoir un des deux cas tant mieux le groupe devient plus petit et les se raccourcissent aussi la descente la moins et moins avantageuse est celle du cas intransitif de la de luks il pourrait arriver que g ait deux orbites sur n une de longueur n et une de longueur ceci serait compatible avec une borne polynomiale sur le temps pourvu que le temps pris avant la descente soit polynomial n nc pour c d autres types de descente sont plus mais aussi plus avantageux nous descendons des de longueur ou ou de altm m par exemple il est clair qu il est impossible de descendre plus qu un nombre logarithmique de fois de cette il est crucial de ne pas oublier qu un peut dans une perte de si nos choix ne sont canoniques qu en relation un h de notre groupe g le de leur application sera par g h voir alors le de chaque le cas intransitif de luks est comme nous l avons vu compatible avec une borne polynomiale alors sur le cas g agit de primitive sur un de blocs soit n le noyau si nous sommes dans le cas a du ou dans le cas b mais avec m c log n nous faisons appel o log n instances de la principale pour des de longueur m ceci est consistant avec une borne totale du type exp o log n c c nous pouvons donc nous concentrer sur le cas il existe un isomorphisme alt m c log la du rend cet isomorphisme explicite le premier pas est la de locaux avec comme objectif la d une relation sur si g est primitif une telle relation est trivial voir le du il y a nk locaux k log n disons nous devons les calculer et aussi comparer toute paire de le premier pas du calcul d un savoir le calcul de gt prend un temps no k plus o o k d autres calculs prennent moins de temps l usage de la par contre est relativement lourd nous faisons appel la principale k fois pour des de longueur ceci se passe pour chaque ensemble t de taille k c nk fois la pour comparer des paires de est analogue nous faisons donc appel la principale o fois pour des de longueur dans chacun de ces appels notre tour de stabilisateurs est notre groupe est un groupe transitif la restriction de n une de ses orbites n n au est un d un a w de pour deux de blocs bi notons ri le nombre de blocs de bi dans chaque bloc de il est clair que ce nombre n augmente pas quand nous passons la restriction d un de g par exemple n une de ses orbites examinons maintenant l des locaux il y a trois cas dans le premier le temps de calcul additionnel est peu trivial et nous obtenons une soit un groupe intransitif sans grandes orbites soit un produit sur une grande orbite et d autres groupes sur des orbites plus petites ici l analyse devient nous devons prendre en non seulement la taille du domaine mais aussi le groupe qui agit sur lui plus nous devons borner le nombre de fois que notre tour bk pourrait ou raccourci encore ceci sera par x ri nous supposons que nous avons des de la tour donc ri notons f n r le temps d de la principale pour des de longueur n et pour une tour de de blocs pour g telle que le est une de fait r par au moins un coloriage sans aucune grande classe de couleurs assure une descente vers des de longueur nous devrons aussi inclure un facteur de log prenant en le temps requis pour nos comparaisons de paires de locaux donc dans le cas que nous examinons f n r est par x no k f 
r f r f ni r o k log n p ni n et ni pour i ou no k f r x i f ni r o k log n ni n et ni pour i puisque k log n ceci est consistant avec f n r exp o r log n c pour c ou avec f n r exp o log n log r pour et par exemple le cas a un similaire un facteur constant le cas et sont dans les deux cas nous arrivons construire une relation avec d dans le cas et d k log n dans le cas puis nous appelons suivi du lem me des designs pour des et prend un temps d mo d le lem me des designs garantit l existence d un tuple d avec certaines nous cherchons un tel tuple par force brute ce qui prend un temps o md ce qui est plus important est que ce choix n est pas canonique donc le temps d de tout ce qui reste est par md mo log n prend un temps o md ici nouveau nous faisons des choix qui ne sont pas canoniques ils imposent un facteur de mo log m sur tout ce qui suit le de est soit un ce qui implique une un produit du type des plus courtes soit un de johnson ce qui implique une donc soit x f ni r f n r no k o f r mo log n f r p p ni n et ni pour i ou f n r no k o f r mo log n x i f ni r p ni n et ni pour i ici m nous pourrions travailler avec une borne moins mais cela nous servirait peu donc les et sont consistantes avec f n r exp o r log n c pour c faire ce type de comparaisons l avance nous aide mais ne pas les faire l avance ne changerait pas l ordre du temps asymptotiquement comme r n nous concluons que le temps total d de la pour les isomorphismes entre deux de longueur n est f n r eo log n ba babai graph isomorphism in quasipolynomial time disponible en ligne sur babai graph isomorphism in quasipolynomial time extended abstract dans proc acm stoc babai lectures on graph isomorphism university of toronto dept of computer science notes bcp babai cameron et on the orders of primitive groups with restricted nonabelian composition factors journal of algebra bkl babai kantor et luks computational complexity and the of simple groups dans proc ieee focs bls babai luks et seress permutation groups in nc dans proc acm stoc baps babai et saxl on the number of elements in simple groups lms comput and math cfi cai furer et immerman an optimal lower bound on the number of variables for graph combinatorica cam cameron finite permutation groups and simple groups bull london math soc evp evdokimov et ponomarenko on highly closed celular algebras and highly closed isomorphisms electr comb f fisher an examination of the possible solutions of a problem in incomplete blocks ann of eugenics fhl furst hopcroft et luks algorithms for permutation groups dans proc ieee focs hi higman finite permutation groups of rank math z iml immerman et lander describing graphs a approach to graph canonization dans complexity theory retrospective in honor of juris hartmanis on the occasion of his birthday springer lu luks isomorphism of graphs of bounded valence can be tested in polynomial time of comput and sys sci ma on the orders of primitive groups algebra py pyber a analysis of babai s quasipolynomial gi algorithm disponible en ligne sur pyber on the orders of doubly transitive permutation groups elementary estimates of combin th a rchw et wilson on osaka j math sch schreier die untergruppen der freien gruppen abh math semin univ hambg sims graphs and permutation groups math z sims computational methods for permutation groups dans computational problems in abstract algebra pp pergamon oxford sw x sun et wilmes faster canonical forms for primitive coherent dans proc acm stoc wl harald helfgott mathematisches institut bunsenstrasse allemagne 
4
Modified Recursive Cholesky (RChol) Algorithm: an explicit estimation of correlation matrices. Vanita Pawar, Krishna Naik Karamtot.

Cholesky decomposition plays an important role in finding the inverse of correlation matrices, as it is a fast and numerically stable method for linear system solving, inversion and factorization compared to singular value decomposition (SVD), QR factorization and LU decomposition. As different methods exist to find the Cholesky decomposition of a given matrix, this paper presents a comparative study of the proposed RChol algorithm and the conventional methods. The RChol algorithm is an explicit way to estimate the modified Cholesky factors of a dynamic correlation matrix.

Cholesky decomposition is a fast and numerically stable method for linear system solving, inversion and factorization compared to singular value decomposition (SVD), QR factorization and LU decomposition. A wireless communication system depends heavily on inversion of the correlation matrix, and such a system involves a very large matrix inversion. An outdoor wireless channel changes dynamically for a mobile user: in the narrowband case the channel is considered constant over a symbol duration, whereas in the broadband case it changes within a symbol period. Such a channel gives the channel matrix and the correlation matrix a special structure. To exploit this special structure, a novel modified recursive Cholesky (RChol) algorithm is introduced. The proposed RChol algorithm is a computationally efficient algorithm to compute the modified Cholesky factors of a known as well as an unknown covariance matrix. In this paper we present a comparative study of the conventional Cholesky algorithms and the RChol algorithm, to show the importance of the proposed algorithm in a highly dynamic wireless communication setting.

I. SYSTEM MODEL

In a wireless communication system a number of transmit and/or receive antennas are used to improve the diversity of the system. The channel H between transmitter and receiver takes a different form depending on the number of antennas used at the transmitter and the receiver side: a scalar for SISO, a vector for SIMO, and a matrix for MIMO. Let y(n) be the received signal with K transmit antennas, L multipath components and channel noise v(n), represented as

  y(n) = \sum_{k=1}^{K} \sum_{l=0}^{L} h_k(n,l) s_k(n-l) + v(n).

Let y_N(n) be the received vector obtained by stacking N successive received samples, y_N(n) = [y(n), y(n-1), ..., y(n-N+1)]^T, and let the transmitted symbol vector be s_N(n) = [s(n), s(n-1), ..., s(n-N+1)]^T. Then y_N(n) can be represented in matrix form as y_N(n) = H_N(n) s_N(n) + v_N(n), and the correlation matrix of y_N is R_N(n) = E[y_N(n) y_N^H(n)], with entries R_N^{ij}(n) = E[y(n-i) y^*(n-j)]. The correlation matrices R_N(n) and R_N(n+1) at time instants n and n+1 can then be written out entrywise as R_N(n) = [R_N^{ij}(n)]_{i,j=1}^{N} and R_N(n+1) = [R_N^{ij}(n+1)]_{i,j=1}^{N}.

II. CHOLESKY DECOMPOSITION

The correlation matrix is a complex matrix, and the pseudoinverse of R can be computed from its Cholesky factors: if the lower triangular matrix L is the Cholesky factor of the correlation matrix R, so that R = LL^H, then the inverse of R can be computed as R^{-1} = L^{-H} L^{-1}. The section below details the conventional Cholesky algorithms and the RChol algorithm.

A. Cholesky decomposition (gaxpy version)

The Cholesky decomposition factorizes a complex Hermitian (or real symmetric) matrix into the product of a lower triangular matrix and its Hermitian transpose, R = LL^H, where L is a lower triangular matrix and L^H is its Hermitian transpose. R
must be a positive definite and this method needs square root operation algorithm steps compute r at each time instant n find the square root of diagonal element of r modify each column of r equate lower triangular part of r to l repeat steps to for each time instant algorithm cholesky decomposition r llh initialization r r q r order updates on r s for k to n h r k n k r k n k r k n r r k n k r k n k q r k k end that levinson recursion may be used to derive the lattice recursion for computing qr factors of data matrices and lattice recursion can be used to derive the schur recursion for computing cholesky factors of a toeplitz correlation matrix the detail algorithm is given in algorithm the schur algorithm like previously mentioned algorithm computes all n inner product to compute matrix r for initialization algorithm steps compute r at each time instant n initialize first column of r to the first column of cholesky factor h compute rest column recursively from columns of r repeat step to for each time instant algorithm schur algorithm r llh initialization for k l tril r t n n n n r n t n n n r n h b modified cholesky algorithm r ldl to avoid square root operation a modified cholesky algorithm is used which avoids square root operation by introducing a diagonal matrix d in between cholesky factors the modified cholesky algorithm does not require r to be a positive definite matrix but it s determinant must be nonzero r may be rank deficient to a certain degree d may contain negative main diagonal entries if r is not positive semidefinite algorithm steps compute r at each time instant n modify each column of r equate the strictly lower part of matrix r to with ones on the main diagonal equate main diagonal of r with the main diagonal of d repeat step to for each time instant algorithm modified cholesky decomposition r ldlh initialization r r r order updates on r s for k to n for v i r k if i r i i r n i if i end v k r k k r v r k k v k r n k r n k r n v v k end d diag daig r l tril r recursive cholesky algorithm the shcur algorithm rschur hhh the schur algorithm recursively compute the columns of the lower triangular matrix h form matrix it is shown in order updates on h s for k to n hk kref zm k k kref zm hk k k scaling factors k kref k hk hk k k note here notation is followed same as in and h represents vector the rchol algorithm r it is clear from above equation and equation that rn n can be represented from submatrix of rn n to utilize such special structure of correlation matrices we propose a modified recursive cholesky algorithm to compute the cholesky factors recursively this algorithm is modification of schur algorithm mentioned above the more general approach consists of using the schur algorithm to induce recursion for columns of dynamic this algorithm does not need n inner products to compute the correlation matrix the cholesky factors are computed explicitly such that let then can be computed as algorithm steps initialize first the first column of cholesky factor a as compute second column recursively from n and n substitute n n n to n n repeat step to for each time instant in the schur algorithm columns of cholesky factors at time instant n are computed recursively from the correlation matrix at that instant whereas in the rchol algorithm first two columns of cholesky factors at time instant n is computed recursively from previous cholesky factor and submatrix of that cholesky factors are updated recursively from previous cholesky factor at time instant n conventional cholesky 
algorithms mentioned here were introduced for ordinary matrices, whereas the proposed algorithm is well suited for block matrices, and the simulations are shown for that case only.

Algorithm 4: Recursive Cholesky update (RChol), R = LDL^H. Initialization: the first columns A_1(n) and D_1(n) are initialized from the factors at time instant n-1. Order updates on the A's: for k = 1, ..., N the columns A_k(n) and D_k(n) are updated recursively from the previous columns together with the quantities Z_m and I_m. Scaling factors: the coefficients k_ref(n) are updated from k_ref(n-1) at each step.

III. SIMULATION RESULTS

To compare the proposed RChol algorithm with the Schur algorithm, we compared the results of both algorithms with the theoretical results. The first set of figures shows the ratio and the difference of the computed and theoretical factors when the correlation matrix is unknown, which has application in blind channel and/or data estimation. Panels (a) and (b) show that the maximum error for the RChol algorithm is much smaller, the error of the Schur algorithm being several times that of the RChol algorithm; panels (c) and (d) compare the corresponding maximum ratios for the two algorithms.

Fig. (panels a-h): comparison of the RChol algorithm and the Schur algorithm for the unknown and the known correlation matrix R: (a), (e) proposed algorithm, difference; (b), (f) Schur algorithm, difference; (c), (g) proposed algorithm, ratio; (d), (h) Schur algorithm, ratio.

The second set of figures shows the ratio and the difference of the factors when the correlation matrix is known. Panels (e) and (f) compare the maximum error of the two algorithms, and panels (g) and (h) compare the maximum ratios. From the figures it can be concluded that the Schur algorithm is best suited when the correlation matrix is known, but it leads to large error propagation through the columns when R is unknown and cannot be applied to blind channel estimation; conversely, the RChol algorithm is best suited for blind channel estimation and reduces the error propagation through the columns.

IV. CONCLUSION

Conventional methods of Cholesky factorization require the correlation matrix, which requires computing inner products, while the recursive modified Cholesky (RChol) algorithm is an explicit way of recursively calculating the factors of the matrices without estimating the correlation matrix. It requires a smaller number of iterations, which avoids error propagation through the column updates. The RChol algorithm is chiefly useful for calculating the factors, and hence the inverse, of a matrix, which is applicable to CDMA, OFDM and other wireless communication systems.

V. Pawar and K. Naik are with DIAT, Pune, India (vanietaapawar).

REFERENCES
[1] Golub and Van Loan, Matrix Computations.
[2] Pawar and Krishna Naik, "Blind multipath time varying channel estimation using recursive Cholesky update," AEU Int. J. Electron. Commun.
[3] Hunger, "Floating point operations in matrix-vector calculus," technical report.
[4] Rialan and Scharf, "Fast algorithms for computing QR and Cholesky factors of Toeplitz operators," IEEE Trans.
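To make the two conventional factorizations of Section II concrete, the following is a minimal NumPy sketch (not taken from the paper) of the gaxpy-style Cholesky factorization R = LL^H and the square-root-free modified factorization R = LDL^H, together with the inversion step R^{-1} = L^{-H} L^{-1} mentioned above. The function names, the randomly generated test correlation matrix and all variable names are illustrative assumptions; the paper's RChol recursion itself is not reproduced here.

```python
import numpy as np

def cholesky_llh(R):
    """Gaxpy-style Cholesky factorization of a Hermitian positive-definite R: R = L @ L^H."""
    R = np.array(R, dtype=complex)
    n = R.shape[0]
    L = np.zeros_like(R)
    for k in range(n):
        # subtract the contribution of the already-computed columns, then scale
        s = R[k:, k] - L[k:, :k] @ L[k, :k].conj()
        L[k:, k] = s / np.sqrt(s[0].real)      # needs a positive pivot (square root)
    return L

def cholesky_ldlh(R):
    """Square-root-free modified factorization: R = L @ D @ L^H with unit-diagonal L."""
    R = np.array(R, dtype=complex)
    n = R.shape[0]
    L = np.eye(n, dtype=complex)
    d = np.zeros(n)
    for k in range(n):
        d[k] = (R[k, k] - (L[k, :k] * d[:k]) @ L[k, :k].conj()).real
        for i in range(k + 1, n):
            L[i, k] = (R[i, k] - (L[i, :k] * d[:k]) @ L[k, :k].conj()) / d[k]
    return L, np.diag(d)

# Illustrative correlation matrix R = E[y y^H], estimated here from random complex data.
rng = np.random.default_rng(0)
Y = rng.standard_normal((4, 200)) + 1j * rng.standard_normal((4, 200))
R = (Y @ Y.conj().T) / Y.shape[1]

L = cholesky_llh(R)
L1, D = cholesky_ldlh(R)
Rinv = np.linalg.inv(L).conj().T @ np.linalg.inv(L)   # R^{-1} = L^{-H} L^{-1}

print(np.allclose(L @ L.conj().T, R))         # True
print(np.allclose(L1 @ D @ L1.conj().T, R))   # True
print(np.allclose(R @ Rinv, np.eye(4)))       # True
```

The LDL^H variant avoids the square root and only needs a nonsingular (not necessarily positive definite) matrix, which is the property the modified algorithms above rely on.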
0
may the euclidean criterion for irreducibles pete clark abstract we recast euclid s proof of the infinitude of prime numbers as a euclidean criterion for a domain to have infinitely many atoms we make connections with furstenberg s topological proof of the infinitude of prime numbers and show that our criterion applies even in certain domains in which not all nonzero nonunits factor into products of irreducibles introduction this article has its genesis in a graduate vigre research group taught by paul pollack and me in fall introduction to the process of mathematical research rather than concentrating on a fixed topic preselected by us the goal was to guide students through the process of selecting and performing research on their own one technique we tried to inculcate is exploitation of the relation between theorems and proofs a good theorem has several proofs and you will know two proofs are different when can be used to prove further theorems the other can not in our first meeting pollack and i presented seven proofs of euclid s proposition there are infinitely many prime numbers my first proof suppose given a domain r that is not a field in which each nonzero nonunit factors into irreducibles and whenever x r is a nonzero nonunit then x is not a unit then there is at least one irreducible element and given irreducibles fn by factoring fn we get a new irreducible element it was pointed out that this argument though correct does not imply euclid s result x is a problem some salvages were suggested in z it is enough to replace fn by fn if necessary here we present a general fix a euclidean criterion for a domain to have infinitely many nonassociate irreducibles and explore its consequences we soon find ourselves on a scenic tour of century mathematics as we engage with work of jacobson furstenberg and among others acknowledgments thanks to all members of the introduction to mathematical research uga vigre group conversations with saurabh gosavi noah robert samalis lee troupe and lori watson were helpful my group coleader paul pollack made key contributions first he emphasized that the euclidean criterion automatically yields pairwise comaximality second theorem was inspired by p thm and though i came up with the statement i could prove it only in various special cases the proof included here is his i am grateful to two anonymous referees for their careful reports in particular example was suggested by the first referee date may pete clark the euclidean criterion a primer on factorization in domains by a ring we will mean a commutative ring with a multiplicative identity we denote the set of nonzero elements of r by an element x r is a unit if there is y r such that xy we denote the group of units of r by for a subset s of a ring r we denote by s the ideal of r generated by as is standard we write xn for xn ideals i and j in r are comaximal if i j elements a b r are comaximal if a and b are comaximal a b an indexed family of ideals ii is pairwise comaximal if ii ij r for all i j and similarly for pairwise comaximal elements a domain is a nonzero ring in which x y xy for x y r we say x divides y and write x y if there is c r such that cx y elements x and y are associates if y ux for some u an element x of a domain is irreducible if it is a nonzero nonunit and x yz implies y or z a prime element p r is an element p for which p is a prime ideal thus a nonzero nonunit p is prime if and only if p ab p a or p b an atom in a domain r is a principal ideal x generated by an irreducible element x 
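The following standard illustration (not drawn from this paper) may help fix the preceding definitions: it exhibits an irreducible element that is not prime, and records the units, irreducibles and atoms of Z.

```latex
% A standard illustration of the definitions above (not from the paper).
% In Z[sqrt(-5)] the element 2 is irreducible: the norm N(a + b*sqrt(-5)) = a^2 + 5b^2
% is multiplicative and never takes the value 2, so 2 has no nontrivial factorization.
% Yet 2 is not prime:
\[
  2 \mid 6 = (1+\sqrt{-5})(1-\sqrt{-5}), \qquad 2 \nmid 1+\sqrt{-5}, \qquad 2 \nmid 1-\sqrt{-5}.
\]
% In Z itself the units are +-1 and the irreducibles are +-p for p a prime number;
% since p and -p are associates, the atoms of Z are exactly the ideals (p):
\[
  \mathbb{Z}^{\times} = \{\pm 1\}, \qquad
  \{\text{atoms of } \mathbb{Z}\} = \{(p) : p \text{ a prime number}\}.
\]
```

With this example in hand, the distinction made next between counting irreducibles and counting atoms (irreducibles up to associates) should be transparent.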
thus two irreducibles of a domain r determine the same atom if and only if they are associate it is more common in the literature for the terms atom and irreducible to be fully synonymous but this minor distinction is convenient for our purposes usually we will count to count irreducibles in a domain up to associates but sometimes we will want to count irreducibles a furstenberg domain is a domain r in which every nonzero nonunit has an irreducible an atomic domain is a domain r in which for every nonzero nonunit x r there are irreducible elements fn such that x fn a unique factorization domain ufd is an atomic domain such that if fm gn are irreducibles such that fm gn then m n and there is a bijection m n such that fi for all i prime elements are irreducible in general the converse is false an atomic domain is a ufd iff every irreducible is prime thm the terminology can be confusing in light of the definition of a prime number p as a positive integer not divisible by any n p this means p is irreducible in z but euclid showed p ab p a or p b from this one can easily show the fundamental theorem of arithmetic z is a ufd a principal ideal domain pid is a domain in which each ideal is generated by a single element every pid is a ufd it follows from the euclidean algorithm that z is a pid a domain is a domain in which every finitely generated ideal is principal a ring is noetherian if all of its ideals are finitely generated noetherian domains are atomic prop thus a pid is precisely a noetherian domain a dedekind domain is a domain in which each nonzero proper ideal factors uniquely into prime ideals a domain is dedekind iff it is noetherian of dimension at most one every nonzero prime ideal is maximal and integrally closed every element of the fraction field which satisfies a monic polynomial with coefficients in r lies in r thm working in a domain rather than a general ring confers certain advantages explanation for the terminology comes in the euclidean criterion for irreducibles fact a every nonzero ideal in a ring contains a nonzero principal ideal b if r is a domain and x r gives a bijection from r to c thus for every nonzero ideal i of a domain r we have i d for nonzero ideals i and j of r i j contains ij and thus is nonzero the euclidean criterion a ring r satisfies condition e if for all x there is y r such that yx in other words if x then x by fact this is equivalent to if i is a nonzero ideal of r then i though we will defer consideration of this restatement until later on example a the ring z satisfies condition e indeed so for x take y if x is positive and y if x is negative then yx so yx b for any domain r the polynomial ring r t satisfies condition e indeed r t so for any x r t take y c r z i satisfies condition e indeed z i i so this is geometrically clear for any x z i if we multiply it by a y with large enough then yx will be much more than unit away from any point on the unit circle proposition a domain r with r satisfies condition e proof for x the map r r given by y yx is an injection thus r r so it can not be that r and here we go theorem the euclidean criterion let r be a domain not a field satisfying condition e a there is an infinite sequence an of pairwise comaximal nonunits b if r is also furstenberg it admits an infinite sequence fn of pairwise comaximal irreducibles thus fn is a sequence of distinct atoms in proof a by induction on let r be a nonzero nonunit having chosen an pairwise comaximal by condition e there is y r such that an clearly ai r for all i b by 
induction on since r is furstenberg and not a field it has an irreducible having chosen pairwise comaximal irreducibles fn by condition e there is y r such that x fn is a nonzero since nonunit so x has an irreducible factor for all i n we have y fj fi y so fi are comaximal finally if x and y are pairwise comaximal irreducibles then x y r and x y x y r so we must have x y here are two applications of the euclidean criterion the first two are immediate theorem a for any domain r r t has infinitely many atoms b in particular let d be a ufd and let r d tn then r is a ufd satisfying condition e so r has infinitely many nonassociate prime elements c the gaussian integers z i have infinitely many atoms since z i is a pid there are infinitely many nonassociate prime elements pete clark theorem let r be a furstenberg domain not a field such that r then r has infinitely many atoms theorem let r be a furstenberg domain let i be the set of all irreducible elements of then i is either empty if r is a field or infinite otherwise proof assume i and fix f i if is finite theorem yields infinitely many atoms if is infinite then uf u is an infinite subset of i supplement irreducibles in residue classes we switch from an ancient theorem to matters of contemporary interest if we ask for infinitely many primes satisfying certain additional conditions here is a result along these lines relatively modest over z but of a general algebraic nature lemma let a b c be elements of a ring if a b r and c a b then a c b c proof let d r be such that cd a b then a c a cd a a b a b lemma let r be a domain not a field satisfying condition e for any at b r t with a there is x r such that ax b is a nonzero nonunit proof put p t at b if b take any nonzero nonunit x if b by condition e there is x r such that ax so p x b ax is a nonzero nonunit if b r is a nonzero nonunit take x the proof of the following result was suggested to me by paul pollack theorem let r be an atomic domain satisfying condition e let i be a nonzero ideal of r and let h be a proper subgroup of then there are infinitely many pairwise comaximal irreducibles f such that the class of f modulo i lies in proof let r r be the quotient map let r be such that r h and let r be such that i inductively assume that we have pairwise irreducibles fn of r such that fi fi i r for all i and such that r fi let p t fn r t we need to include the base case n and in this case fn by lemma there is x r such that y fn is a nonzero nonunit so we get an irreducible factorization y gs with s then r r gs r y r h so gj i for all j and there is at least one gj say such that r now can not be associate to any fi if so and hence also fi would divide if this contradicts the irreducibility of fi if not this contradicts fi the euclidean criterion for irreducibles moreover y fn mod so y hence also finally since fn fn mod we have fn r so by lemma we have fn r so fi r for all i thus we may take completing the induction when r z we get for any proper subgroup h z there are infinitely many prime numbers p such that mod n moreover in this classical case one can run the argument with positive integers only and so get rid of the annoying this is a special case of dirichlet s theorem on primes in arithmetic progressions it is an observation of granville unpublished by him but reproduced in p thm that this case can be proved in an elementary euclidean way the special case of trivial h for all n there are infinitely many primes p mod n is older and better known it is also simpler just consider n this case 
does not use that z is a ufd but granville s argument does the most auspicious replacement for coprimality arguments is by comaximality and that is what we ve done here a topological interlude furstenberg s lemma in this section we will give several proofs of the following result theorem let r be a furstenberg domain with at least one and only finitely many irreducibles fn then a we have b more precisely there is a nonzero ideal i of r such that i theorem is the contrapositive of part b of the euclidean criterion without the information on comaximality the proofs that we give here are inspired by the famous paper of furstenberg the essential core of his argument is the observation that in z the set of elements not divisible by any prime number is notice that has nothing to do with the natural ordering of z that underlies most of the classical proofs of euclid s theorem in fact the property of z being used is that z is a furstenberg domain lemma furstenberg s lemma t a a domain r is a furstenberg domain iff f irreducible r f b in a furstenbergtdomain with at least one and only finitely many irreducibles fn we have r fi the proof is virtually immediate and is left to the reader following furstenberg let r be a domain by fact for each x r the family c x x i i is a nonzero ideal of r pete clark is closed under finite intersections so c x is a system of neighborhood bases for a topology on r let us call it the adic topology in which u r is open iff for all x u there is a nonzero ideal i with x i u by fact every nonempty open has cardinality proof of theorem let r be a furstenberg domain with at least one and only finitely many irreducibles fn then each fi is open hence its complement r fi being a union of cosets of fi is also open by furstenberg s lemma t r fi is open since we have more precisely i for some nonzero ideal of following let r be a domain and let be the field of two elements for an ideal i of r a function f r is if f x y f x for all x x and y lemma let r be a domain and let i in be nonzero ideals of a if and f r is it is also b if for all i n fi r is ii then the pointwise product fn r is in c if f r is then for all x r we have y r f y f x proof a this is immediate t from the definition t b certainly fn is ii and ii in apply part a c choose a nonzero i then f x f x and proof of theorem step for i n let r be the characteristic function of fi put n y each is fi hence so too is and thus is fn t moreover is the characteristic function of r fi step since x r x r part a step more precisely fn so fn part b following mercer let r be a domain call a subset x r lovely if it is of the form x i for x r and a nonzero ideal i of r if it is a coset of a nonzero ideal call a subset x r pleasant if it is a union of lovely subsets if i is a nonzero ideal of r then i is a union of cosets of i hence pleasant if x y r are pleasant sets and x x y there are nonzero ideals i j of r such that x i x and x j y by fact i is a lovely subset of x containing x so x is pleasant by fact every nonempty pleasant subset has cardinality proof of theorem let r be a furstenberg domain with at least one and only finitely many irreducibles fn by furstenberg s lemma is the finite intersection of complements of nonzero ideals so is pleasant since we have more precisely i for some nonzero ideal of the euclidean criterion for irreducibles debriefing the three proofs given above are generalizations of the proofs of euclid s theorem given by furstenberg and mercer the latter two works take the detopologization of furstenberg s 
proof as their goal our presentation of the argument of differs superficially from mercer s we chose the words lovely and pleasant precisely because they do not have a commonly understood technical mathematical meaning had we said basic and open then the reader s attention would have been drawn to the fact that since the basic sets are closed under finite intersections they form the base of a topology mercer s exposition takes pains to point out that the underlying fact here is just that finite intersections of unions are unions of finite intersections of course this is a basic logical principle conjunctions distribute over disjunctions and conversely like many basic logical principles it is completely innocuous when used in context as in our version of the argument that the pleasant sets form a topology on r is no more and no less than a crisp enunciation of the facts we need to check in the first part of the proof i find it quite striking and pleasant that the facts can be enunciated in this way but i must now agree with those who have claimed that there is no essential topological content in furstenberg s the use of periodic functions involves slightly more packaging but of a standard kind it is well known that the boolean ring of subsets of r can be represented as the ring maps r with pointwise addition and multiplication we recommend wikipedia and glaymann as references glaymann develops this correspondence and applies it to prove such identities as c a b in a manner intended to be used in the high school classroom this is an interesting snapshot of the new math near its zenith the ubiquitous theorem here is a result that complements theorem it is not deep but it will play a recurring role for us as a common intersection of various constructions and themes the first proof that we give follows the topological conceit of this section we will give other simpler proofs later on theorem let r be a domain not a field with only finitely many maximal ideals mn then a we have b more precisely there is a nonzero ideal i of r such that i proof we endow r with the topology for which for x r c x x m m is a maximal ideal of r is a neighborhood subbase at x that is u r is open iff for all x u there is a subset j n such that x mi x mi does not claim a topological proof of the infinitude of the primes but rather a topological proof of the infinitude of the primes pete clark t fact gives mi so every nonempty open has cardinality each r mi being a union of cosets of mi is also open therefore n r mi is open since we have t r r more precisely t there is a subset j n such that mi r and thus also mi supplement further topologies on a domain here is a common generalization of theorems and let j be a family of nonzero t ideals of a domain r t and suppose there are in j such that r ii then ii so in particular look again at theorem instead of taking j to be the family of all nonzero ideals we could take j fn and endow r with the unique translationinvariant topology with j as a neighborhood subbase at this coarsens the adic t so that being open yields the sharper conclusion fi in particular fn r we are back to a version of euclid s argument the adic topology on z is not very interesting as a topological space it is countably infinite metrizable totally disconnected and without isolated points hence homeomorphic to the euclidean topology on q in golomb proved euclid s theorem using the topology on with base the arithmetic progressions an b n for coprime a b golomb s topology makes into a countably infinite 
connected hausdorff space which is already interesting in a domain r that is not a field we may consider the golomb topology with neighborhood base at x r given by c x x i i is a nonzero ideal with x i r in this topology every maximal ideal is closed so in a domain that is not a field with only finitely many maximal ideals mn is open and thus contains i for some nonzero ideal i we get another proof of theorem the golomb topology is never hausdorff in fact however the induced topology on can be it is for z we leave further exploration to the reader connections with ideal theory for a ring r we denote by maxspec r the set of all maximal ideals of comaximal ideals lemma let in be a sequence of pairwise comaximal proper ideals in a ring then maxspec r is infinite proof for n let mn be a maximal ideal containing in if for we had then r contradiction in particular part a of the euclidean criterion implies that a domain that is not a field and that satisfies condition e has infinitely many maximal ideals thus we get another proof of theorem but by no means our last adic topology on a domain is always hausdorff but in a furstenberg domain with finitely many irreducibles this new topology is not the euclidean criterion for irreducibles euclid meets jacobson now is the time to examine the more explicitly statement of condition e for all nonzero ideals i we have i some readers will now see or will have already seen the connection with the jacobson radical but we will not assume a prior familiarity in fact we will use the euclidean criterion to motivate a discussion of this and other concepts proposition prop for a ring r let m j r r the jacobson radical of for x r the following are equivalent i x j r ii for all y r yx proof i ii by contraposition suppose there is y r such that z yx then z lies in some maximal ideal if also x m then yx m and thus also z yx m contradiction so x does not lie in m and thus x j r ii i again by contraposition suppose that there is a maximal ideal m such that x then m m x so m x it follows that there is m m and y r such that m yx thus x m so is not a unit we get immediately corollary a ring r satisfies condition e iff j r this gives a third proof of theorem if r has only finitely many maximal ideals mn then n n y mi mi j r apply corollary a ring with zero jacobson radical is called some questions and some answers we now raise some natural questions and answer them question in part b of the euclidean criterion must we assume that r is a furstenberg domain question a semiprimitive domain not a field has infinitely many maximal ideals must a domain with infinitely many maximal ideals be semiprimitive question let r be a furstenberg domain a if r is not semiprimitive can it still have infinitely many atoms b can r have finitely many maximal ideals and infinitely many atoms jacobson semisimple or pete clark example the ring z of all algebraic integers is not a furstenberg domain in fact it is an antimatter domain there are no irreducibles whatsoever if z is an algebraic integer then so is z so we can always factor z z z moreover z is not a field for all integers n if n z then z q z contradiction if i is a nonzero ideal of z then the constant coefficient of the minimal polynomial of a nonzero element i is a nonzero integer in i it follows that if j z then there is n that is contained in every m maxspec z choose a prime number p n then p is not a unit in z otherwise z q z so there is at least one maximal ideal mp of z containing in fact the set of maximal ideals of z containing p 
has continuum cardinality then mp n p z contradiction so the answer to question is yes a semiprimitive domain that is not a field can have no irreducibles whatsoever the following result answers questions and for dedekind domains and shows that the euclidean criterion is in principle completely efficacious in determining whether a dedekind domain has infinitely many atoms theorem for a dedekind domain r that is not a field the following are equivalent i r is semiprimitive ii r has infinitely many maximal ideals iii r has infinitely many atoms proof we know i ii in any domain ii i in a dedekind domain any nonzero element is contained in only finitely many maximal ideals so in fact for any infinite subset m maxspec r t we have m i iii dedekind domains are noetherian hence furstenberg domains so the euclidean criterion applies iii i by contraposition a dedekind domain with finitely many maximal ideals is a pid thm and in a pid there is no distinction between maximal ideals principal ideals generated by prime elements and atoms question let k be a number field with ring of integers zk the set of prime numbers is an infinite sequence of pairwise comaximal nonunits of zk so as is well known zk has infinitely many prime ideals and thus is semiprimitive when k q or is imaginary quadratic the finiteness of k leads to a direct verification of condition e is there a similarly direct verification for all k this is a question we will leave to the reader to address proposition let r be a noetherian domain of dimension at most one nonzero prime ideals are maximal if maxspec r is infinite then r is semiprimitive and thus has infinitely many pairwise comaximal irreducibles proof if r is not semiprimitive then every maximal ideal m of r is a minimal prime ideal of r since r is noetherian so is r and a noetherian ring has only finitely many minimal prime ideals thm a jacobson ring is a ring in which every prime ideal is the intersection of the maximal ideals containing it since in a domain is prime a jacobson domain the euclidean criterion for irreducibles must be semiprimitive any quotient of a jacobson ring is again a jacobson ring if r is a jacobson ring and s is a commutative finitely generated then s is a jacobson ring thm so theorem a a jacobson furstenberg domain that is not a field has infinitely many pairwise comaximal irreducibles b let f be a field and let p be a prime but not maximal ideal of f tn then the ring r f tn a coordinate ring of an integral affine variety of positive dimension has infinitely many pairwise comaximal irreducibles c a domain r that is finitely generated over z and not a field has infinitely many pairwise comaximal irreducibles to sum up if we want to see a domain that has infinitely many maximal ideals but is not semiprimitive it can not be finitely generated over a field and if noetherian it must have a nonzero prime ideal that is not maximal this cues us up for the following example which gives a negative answer to question example consider the ring z t of formal power series with integral coefficients it is not hard to show that z t is an atomic domain in fact z t is a noetherian ufd thm since t z t the jacobson radical j z t contains t and is thus nonzero since j z t the hypotheses of the euclidean criterion do not apply nevertheless there are infinitely many pairwise comaximal prime elements namely the prime numbers hence there are infinitely many maximal ideals here we could have replaced z with any pid with infinitely many maximal ideals thus the answer to question 
is yes moreover a nonsemiprimitive domain can have infinitely many comaximal irreducibles example let k be a field recall that k x y is a ufd and let k k x y be its fraction field let r be the subring of k x y consisting of rational functions f x y g x y that when written in lowest terms have g then r is itself a ufd factorization in r proceeds as in k x y except that the prime elements p x y k x y such that p become units in r in which an element x y is a unit iff f thus m fg x y f is the unique maximal ideal so j r m and r is very far from being semiprimitive nevertheless it has infinitely many prime elements y xn in more geometric language the irreducibles are the irreducible curves in the affine plane passing through thus the answer to question is yes however there is more to say the preceding example can be vastly generalized using the following striking result theorem let r be an atomic domain with finitely many atoms then a r has only finitely many prime ideals b r is noetherian c every nonzero prime ideal of r is maximal proof a in an atomic domain r whenever a prime ideal p of r contains a nonzero element x we may factor x fr into irreducibles and thus see that p contains some irreducible element f dividing x thus given any set of generators of a prime ideal p we can replace it with a set of irreducible generators in a set of generators pete clark of an ideal replacing each element by any one of its associates does not change the ideal generated and thus if we have only finitely many nonassociate irreducibles we can only generate finitely many prime ideals b it follows from the proof of part a that every prime ideal of r is finitely generated by a famous result of cohen thm all ideals are finitely generated this is an instance of the prime ideal principle of c if not there are prime ideals since r is noetherian this implies there are infinitely many prime ideals between and cor a domain is an atomic domain with finitely many atoms the work does not give a complete classification we are left with the case of a noetherian domain r with finitely many nonzero prime ideals all of which are maximal if r is a dedekind domain then by theorem there are only finitely many atoms so the remaining case is when r is not integrally closed in its fraction field in which case the integral closure r is a dedekind domain with finitely many prime ideals cor one might expect that this forces r to be this need not be the case example let k be a field and consider the subring r k k k t p n of the formal power series ring k t for f an t k t we define v f to be the least n such that an then v is a discrete valuation on k t and the only nonzero prime ideal of k t is t f r v f in particular k t is a pid so is the isomorphic subring k and is a generating set for r as a k so by standard pid structure theory every ideal of r canp be generated by two elements thus r is noetherian hence atomic n for f and thus an t r we have f r x an tn is the unique maximal ideal of we will give a complete description of the atoms of first we claim that f r is irreducible iff v f indeed a nontrivial factorization f xy involves v x v y hence v f conversely if v f then f is a nontrivial factorization since k every irreducible is associate to one of the form x an tn v f case or one of the form x an tn v f case associate elements have the same valuation so certainly no irreducible p of the first type is associate to an irreducible of the second type we claim that an tn p p is associate to bn tn iff and an tn is associate to p bn tn 
iff this can be done by direct computation the euclidean criterion for irreducibles so and there is a unique choice of leading to an bn for all n the v f case is similar thus there are precisely k atoms and r is iff k is finite example cor for a prime power q and d e the ring r fq te fqd t is a domain with exactly d d q irreducibles none of which are one nonzero prime ideal and exactly e prime unless d e the paper was mostly forgotten for many years until the breakthrough work of anderson and mott gave a complete characterization of cohenkaplansky domains in fact they give characterizations here is one theorem for an atomic domain r the following are equivalent i r is a domain ii r is noetherian of dimension at most one nonzero prime ideals are maximal has finitely many prime ideals the integral closure r of r is finitely generated as an maxspec r maxspec r and for all nonprincipal ideals m maxspec r is finite example let k be a field of characteristic different from or and consider the localization of k x y y x at x y the localization of k x y y at x y the localization of k x y y at x y then is always it is a dedekind domain with one maximal ideal is never maxspec maxspec is iff k is finite euclid beyond atomicity in the case of an atomic domain the part of the euclidean criterion that yields infinitely many maximal ideals is much weaker than the theorem however there is life beyond atomic domains example let hol c be the ring of entire functions f c for f hol c put z f z c f z if f g hol c then z f and z g are countable sets hence so is z f g z f z g so f g thus h c is a domain the map c z gives a bijection from c to the atoms of hol c an element f hol c is a unit iff z f and a nonzero nonunit f is a finite product of atoms iff z f is finite and nonempty so hol c is not atomic consider f z sin z but it is furstenberg if f is a nonzero nonunit then f vanishes at some c and thus is divisible by the irreducible element z moreover hol c satisfies condition e if f hol c then there is w c such that f w let g z f w then gf w so gf hol c thus the euclidean criterion applies in hol c theorem let be cardinal numbers there is a domain r satisfying all of the following properties i r is a domain every finitely generated ideal is principal ii r has exactly atoms each of which is a maximal ideal pete clark iii r has exactly maximal ideals iv r has exactly nonzero prime ideals v r is an atomic domain iff vi r is a furstenberg domain iff vii r is semiprimitive iff we postpone the proof of theorem in order to discuss its significance by taking and we get furstenberg domains with any number of irreducibles and any number max nonzero prime ideals in particular a furstenberg domain can have any finite positive number of irreducibles and any infinite number of prime ideals so the theorem does not extend from atomic domains to furstenberg domains for any and we get a semiprimitive furstenberg domain that is not an atomic domain now we come to the proof of theorem which requires somewhat more specialized results a completely presentation would require more space than we want to devote here so we will make use of the material of fs ch ii and iii and our treatment will be at the level of a detailed sketch let r be a domain with fraction field to x k we attach the principal fractional ideal x ax a r when x r this coincides with the usual notion of a principal ideal for x y k we have x y iff there is u such that y ux the principal fractional ideals of k form a commutative group under pointwise multiplication we 
have x y xy we call this the group of divisibility of r and denote it g r it is partially ordered by reverse inclusion that is for x y k we put x y iff y x this order reversal is actually rather familiar for x y k we write x y xy r and then we have x y if x y to contain is to divide let gi be an indexed family of nonzero totally ordered commutative groups and let g gi be the direct sum endowed with the pointwise partial ordering x y iff xi yi for all i i let g gi be projection onto the ith coordinate by the theorem fs thm there is a domain r and an isomorphism g r g of partially ordered commutative groups see fs example let v be the l ite k k gi then the maximal ideals of r are precisely mi x r v x for i i thus no element of lies in infinitely many maximal ideals so r is semiprimitive iff i is infinite an atom in a partially ordered commutative group is a minimal positive element this is a direct generalization of our previous use of the term if r is a domain the minimal positive elements of the group of divisibility g r are precisely the principal fractional ideals x for an irreducible element x for every atom x g there is i i such that xi is an atom of gi and xj for all j i and conversely all such elements give atoms of since each gi is totally ordered it has at most one atom the least positive element of gi if such an element exists it follows that r is furstenberg iff each gi has a least positive element similarly a nonzero nonunit x r factors into irreducibles iff v x g is a sum of atoms iff for all i i gi has a least positive element ai and vi r nai for some n thus r is an atomic domain iff each gi z the domain r is each nonzero prime ideal is contained in a unique maximal ideal fs loc the nonzero prime ideals contained in mi correspond the euclidean criterion for irreducibles bijectively to the proper convex subgroups of gi a subset y of a totally ordered set x is convex if for all x y z x if x z y then also y y we will take each gi to be a lexicographic product of copies of subgroups of r indexed by an ordinal then the convex subgroups of of gi are precisely where is the set of all elements of gi with zero for all j so there are nonzero prime ideals in mi we will take a family of nonzero totally ordered commutative groups gi parameterized by i this gives us maximal ideals and r is semiprimitive iff we are left to choose the groups gi in terms of and so as to attain the other assertions we define an ordinal if is finite it is the positive integer if is infinite it is the successor ordinal to what matters in this case is that is a set of cardinality and with a largest element there are cases if we take r to be a pid with nonzero prime ideals if and min we take gi z for all i we take to be the cartesian product of copies of z indexed by endowed with the lexicographic ordering then has a least positive element the element that is in all factors but the last and in the last factor so all gi have least elements and r is a furstenberg domain moreover so z and r is not an atomic domain it has nonzero prime ideals if we take to be the cartesian product of copies of z indexed by for i we take gi z and for i we take gi supplement rings with infinitely many maximal ideals let us briefly consider the case of an arbitrary commutative ring though others have done so see it is beyond our ambitions to pursue a factorization theory in the presence of zero divisors but we can still ask for criteria under which there are infinitely many maximal ideals in this more general context j r is no longer 
sufficient j c c and there are only two maximal ideals nevertheless both euclid and jacobson have a role to play proposition prop let i be an ideal of r contained in the jacobson radical then for all x r if the image of x in is a unit then x is a unit in particular the natural map is surjective proof if the image of x in is a unit then there is y r such that xy mod i xy i j r thus for every maximal ideal m of r xy m so we can not have x so x lies in no maximal ideal of r and thus x theorem dubuque let r be an infinite ring if r then maxspec r is infinite proof we will show by induction on n that for all n r has n maximal ideals base case since r is infinite it is nonzero and thus it has a maximal ideal induction step let mm be maximal ideals and put m y mi case suppose i r then i moreover i j r so by proposition is surjective it follows qn that r r by the chinese remainder theorem hence there is an pete clark injection putting the last two sentences together we conclude r and thus since is a field and r is infinite finally this gives the contradiction n y r i i r case so there is x i r let be a maximal ideal containing x for all i n we have x i mi so x x mi so is an n st maximal ideal of r completing the induction step a special case of theorem appears in k exc for a ring r consider the quotient r the maximal ideals of r correspond to the maximal ideals of r containing j r that is to the maximal ideals of thus r is semiprimitive thus we can replace any ring with a semiprimitive ring without changing its maxspec however this jacobson semisimplification need not carry domains to domains q if r is a domain with n maximal ideals mn then r here is a generalization theorem a for a ring r the following are equivalent i r has only finitely many maximal ideals ii r is a finite product of fields iii r has only finitely many ideals iv r is artinian there are no infinite descending chains of ideals b a semiprimitive ring with finitely many maximal ideals has finitely many ideals proof a i ii if the maximal ideals of r are mn then by the chinese remainder theorem thm we have r n mi n y ii iii iv immediately iv i maximal ideals of r correspond bijectively to maximal ideals of and an artinian ring has only finitely many maximal ideals thm b this follows from part a but what about primes our take on euclid s argument has been as a criterion for the existence of irreducibles the distinction evaporates in a ufd a pid with only finitely many prime ideals is a ufd with only finitely many principal prime ideals it turns out that the converse is also theorem let r be a ufd not a field with only finitely many atoms then r is a pid with finitely many prime ideals and r is known to the experts see the euclidean criterion for irreducibles proof a ufd with finitely many nonassociate prime elements is a domain so maxspec r is finite and r by theorem by theorem every nonzero prime ideal of r is maximal the proof of theorem shows every nonzero prime ideal p contains a prime element since p is maximal we have p p thus every prime ideal is principal so r is a pid thm this is another case of the prime ideal principle let us now move away from ufds from example we deduce theorem let be a cardinal there is a noetherian domain r with exactly one nonzero prime ideal exactly irreducibles and no prime elements proof let k be a field of cardinality k q by example r k is a noetherian domain with one nonzero prime ideal m and irreducibles since m is not principal r has no prime elements showed that an atomic domain that is neither 
a field nor a ufd must have at least atoms their argument is a nice one we must have at least one nonprime irreducible since is not prime it is properly contained in some prime ideal p which must therefore contain a nonassociate irreducible since p is not a unit and therefore it is divisible by an irreducible which can not be associate to either or finally we consider dedekind domains question let r be a dedekind domain with infinitely many prime ideals must r have infinitely many atoms in an important classical case the answer is yes as most number theorists know theorem for each number field k the ring of integers zk has infinitely many nonassociate prime elements proof step for any number field l the number of rational primes that split completely in l is infinite this is a special case of the chebotarev density theorem which however can be proved in a more elementary way as was shown in using some basic algebraic number theory which we omit here it comes down to showing that for every nonconstant polynomial f z t the set of prime numbers p dividing f n for some n z is infinite if f this is trivial if f let pk be the prime divisors of f we allow k and let be any finite set of primes not dividing f for i k let ai be such that pai i f and pai i f for n consider xn f n pakk then for all i k pai i xn and for all j qj xn so the set of n for which xn is not divisible by some prime other than pk is finite step a prime ideal p of a number field is principal iff it splits completely in the hilbert class field k of so every prime ideal p of k lying above any one of the infinitely many prime numbers p that split completely in k is principal looking at the above argument one wonders were we working working too hard perhaps some simple argument gives a general affirmative answer to question in fact question was answered negatively by claborn example pete clark the construction is impressively direct start with a dedekind domain a that is not a pid let p be the set of prime elements of r and pass to r a the prime ideals of r are precisely the nonprincipal prime ideals of a which remain nonprincipal in r this construction also appears in a work of samuel thm and is therein attributed to nagata cf lemma for a dedekind domain a write cl a for its ideal class group the quotient of the monoid of nonzero ideals of a under the equivalence relation i j iff there are with i j in the setting of the construction r is the localization of a at the multiplicative subset generated by the prime elements we have that cl r cl theorem let be an infinite cardinal there is a dedekind domain r with exactly atoms and no prime elements proof we will use some properties of elliptic dedekind domains for more details see let k be an algebraically closed field of characteristic and cardinality and put r k x y y x then r is a dedekind domain and by the nullstellensatz the nonzero prime ideals of r are all of the form p x y y x for pairs k such that in other words they are the points on the projective elliptic curve e y z xz excluding the point at infinity o moreover by the theorem since o the prime ideal p is not principal thus r is a dedekind domain with maxspec r r and without prime elements because r is dedekind every ideal can be generated by two elements thm this together with the fact that dedekind domains are atomic domains implies that for all p maxspec r there are irreducibles pp qp such that p pp qp thus if is the number of irreducibles of r we have maxspec r r so since is infinite so is and thus references fs anderson 
and Mott, Cohen–Kaplansky domains: integral domains with a finite number of irreducible elements, J. Algebra.
Anderson and Valdes-Leon, Factorization in commutative rings with zero divisors, Rocky Mountain J. Math.
Clark, Commutative algebra, lecture notes, http
Clark, Elliptic Dedekind domains revisited, Enseign. Math.
Cohen and Kaplansky, Rings with a finite number of primes, Trans. Amer. Math. Soc.
Claborn, Dedekind domains and rings of quotients, Pacific J. Math.
Cohn, Unique factorization domains, Amer. Math. Monthly.
Coykendall and Spicer, Cohen–Kaplansky domains and the Goldbach conjecture, Proc. Amer. Math. Soc.
Cass and Wildenberg, Math bite: a novel proof of the infinitude of primes, revisited, Mathematics Magazine.
Dubuque, http
[FS] Fuchs and Salce, Modules over non-Noetherian domains, Mathematical Surveys and Monographs, American Mathematical Society, Providence, RI.
Furstenberg, On the infinitude of primes, Amer. Math. Monthly.
Glaymann, Characteristic functions and sets, Mathematics Teacher.
Golomb, A connected topology for the integers, Amer. Math. Monthly.
Kaplansky, Commutative rings, Allyn and Bacon, Boston, Mass.
Lam and Reyes, A prime ideal principle in commutative algebra, J. Algebra.
Mercer, On Furstenberg's proof of the infinitude of primes, Amer. Math. Monthly.
Nagata, A remark on the unique factorization theorem, J. Math. Soc. Japan.
Pollack, Not always buried deep: a second course in elementary number theory, American Mathematical Society, Providence, RI.
Poonen, http
Samuel, Lectures on unique factorization domains, notes by Pavman Murthy, Tata Institute of Fundamental Research Lectures on Mathematics, Bombay.
Zafrullah, http
0
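The entire-functions example above packs both the Furstenberg property and Condition (E) into a single sentence, so it is worth unpacking. The following is a sketch of that verification, reconstructed from the surviving prose rather than quoted from the paper; in particular it assumes that Condition (E) is the property the example appeals to, namely that for every nonzero f there exists g such that 1 + gf is not a unit.

Let $f \in \operatorname{Hol}(\mathbb{C})$ be a nonzero nonunit. Since the units are exactly the nowhere-vanishing functions,
\[
f \ \text{a nonzero nonunit} \;\Longrightarrow\; \exists\, z_0 \in \mathbb{C} \ \text{with} \ f(z_0)=0 \;\Longrightarrow\; (z-z_0) \mid f,
\]
and each $z - z_0$ is an atom, so every nonzero nonunit has an irreducible divisor and $\operatorname{Hol}(\mathbb{C})$ is a Furstenberg domain. For Condition (E): given $f \neq 0$, choose $w \in \mathbb{C}$ with $f(w) \neq 0$ and put $g = -1/f(w)$; then
\[
(1 + gf)(w) = 1 - \frac{f(w)}{f(w)} = 0,
\]
so $1 + gf$ vanishes somewhere and is therefore a nonunit, as the criterion requires. Finally, $\sin z$ has the infinite zero set $\pi\mathbb{Z}$, so by the stated characterization it is not a finite product of atoms, confirming that $\operatorname{Hol}(\mathbb{C})$ is not atomic.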
sufficient conditions for the tightness of shannon s capacity bounds for channels lin fady and jan sufficient conditions for determining in closed form the capacity region of memoryless channels twcs are derived the proposed conditions not only relax shannon s condition which can identify only twcs with a certain symmetry property but also generalize other existing results examples are given to demonstrate the advantages of the proposed conditions index information theory channels capacity region inner and outer bounds channel symmetry i ntroduction finding the capacity region of discrete memoryless channels twcs in form is a open problem the difficulty lies in the causality of transmission since the senders are allowed to generate channel inputs by adapting to previously received channel outputs in shannon gave an uncomputable expression for the capacity region another expression using directed information was given in the capacity region of twcs is known only for some special channels such as twcs with additive white gaussian noise determinisitc twcs twcs with discrete additive noise and injective twcs thus shannon s inner and outer bounds still play an important role in characterizing the capacity region in the literature shannon s symmetry condition and a condition established by chaaban varshney and alouini cva are two known sufficient conditions under which shannon s inner and outer bounds coincide thus directly characterizing the capacity region shannon s condition focuses on a certain symmetry structure for the channel transition probabilities while the cva condition focuses on the existence of independent inputs which achieve shannon s outer bound although the two conditions can be used to determine the capacity region of a large class of twcs it is of interest to establish new conditions for wider families of channels in this paper four sufficient conditions guaranteeing that shannon s inner and outer bounds coincide are derived similar to the cva condition our conditions identify independent inputs which achieve shannon s outer bound based on the approach that a twc can be viewed as two channels the authors are with the department of mathematics and statistics queen s university kingston on canada emails fady linder the author was with the department of mathematics and statistics queen s university kingston on canada she is now with contextere ottawa on canada email lin this work was supported in part by nserc of canada user twc user fig block diagram of transmission with state two of the derived results are shown to be substantial generalizations of the shannon and cva conditions moreover our simplest condition can be easily verified by observing the channel marginal distributions the rest of this paper is organized as follows in section ii the system model and prior results are reviewed new conditions for finding the capacity region are provided in section iii a discussion of the connections between the new conditions and prior results is given in section iv along with illustrative examples concluding remarks are given in section ii p reliminaries in a communication system as shown in fig two users want to exchange their own messages and via n uses of a twc here the messages and are assumed to be mutually independent and uniformly distributed on and respectively where n and n are integers for j let xj and yj respectively denote the finite channel input and output alphabets for user j the joint distribution of the inputs and outputs of a memoryless twc is governed by the 
channel transition probability a channel code for a twc is defined as follows definition an n code for a twc consists of two message sets and two sequences of encoding functions n and n with n and for n n and two decoding functions and when messages and are encoded the channel inputs at time n are only functions of the messages mj for j but all the other channel inputs are generated by also adapting to the previous channel outputs yj via xj n fj n mj for j and n n after receiving n channel outputs user j reconstructs mi as gj mj yjn for i j with i j and the probability of decoding error n is defined as pe pr or based on this performance index we define achievable rate pairs and the capacity region definition a rate pair is said to be achievable if there exists a sequence of n codes such that n limn pe the capacity region c of a twc is the closure of the convex hull of all achievable rate pairs to date a computable expression for the capacity region of general memoryless twcs has not been found in shannon established inner and outer bounds for the capacity region let r denote the set of rate pairs with i and i where the joint distribution of all random variables is given by then the capacity region of a discrete memoryless twc with transition probability is inner bounded by ci co and outer bounded by co co r r where co denotes taking the closure of the convex hull in general ci and co are different but if they coincide then the exact capacity region is obtained and independent inputs can be used to achieve any point of the capacity region we note that there exist other improved bounds for twcs however those bounds are either restricted for the particular case of the binary multiplier twc or expressed with auxiliary random variables which do not match our approach we next review the shannon and cva conditions that imply the coincidence of ci and co for a finite set a let a a a be a permutation bijection and for any two symbols and in a let a a denote the transposition which swaps and in a but leaves the other symbols unaffected moreover let px z y px py z denote a probability distribution defined on finite sets x y and z we define two functionals for conditional entropies h px z py z x px z x z py z z log x z y py z z and px py z x px x py log x y py p where py z py z z in particular if p for any x we let pz z x px z x z and define px pz py z x x y p px x qy log qy where qy z py z z pz z note that given any we have h yj h pyj h and h where pyj is a marginal of the channel probability and j furthermore for any which can be factorized as we have h and h finally let p xj denote the set of all probability distributions on xj for j proposition shannon s symmetry condition for a memoryless twc with transition probability we have c ci co if for any pair of distinct input symbols there exists a pair of permutations on and which depend on and such that for all proposition cva condition for a memoryless twc with transition probability we have c ci co if for any h pyj does not depend on for given and there exists p such that and we remark that proposition describes a channel symmetry property with respect to the channel input of user but an analogous condition can be obtained by exchanging the roles of users and also the invariance of h pyj in proposition in fact imposes a certain symmetry constraint on the channel marginal distribution pyj in the literature a twc with independent additive noise is an example that satisfies both the shannon and cva conditions iii c onditions for the t ightness of s 
hannon s i nner and o uter b ounds in this section we present four results regarding the tightness of shannon s inner and outer bounds we adopt the viewpoint that a channel consists of two channels with state for example the channel from user to user is governed by the marginal distribution derived from the channel probability distribution where and are respectively the input and the output of the channel with state let px and py be probability distributions on finite sets x and y to simplify the presentation we define i px py x x y px x py log p py px py which is the mutual information i x y between input x governed by px and corresponding output y of a channel with transition probability py a useful fact is that i is concave in the first argument when the second argument is fixed moreover the conditional mutual information i and i can be expressed as i and i respectively by viewing a twc as two channels with state each of the following four theorems comprises two conditions one for each direction of the transmission by symmetry these theorems are also valid if the roles of users and are swapped for simplicity we will use i k xi yj and h k yj to denote the conditional mutual information and conditional entropy evaluated k under input distribution for i j with i j for k k k pxi pxi the conditional entropy h k yi proof given any let is evaluated under the marginal distribution pyi yi p k xj pxj xj pyi xi yi xi theorem for a given memoryless twc if both of the following conditions are satisfied then ci co i there exists px p such that for all we have arg maxpx i px ii i does not depend on for any fixed p proof for any let px p by the same argument as in we obtain via i that i i moreover k px where px is given by i in light of i we have i x i x x max i i x i moreover i x i x i x i x i x i i where holds by the invariance assumption in ii holds since the functional i is concave in the first argument and is obtained from the invariance assumption in ii combining the above yields r r px which implies that co ci and hence ci co theorem for a given memoryless twc if for any both of the following conditions are satisfied then ci co i there exists px p such that for all we have arg maxpx i px ii h does not depend on given and and the common maximizer px in i also satisfies i h h px px h px px px h px h h i where and follow from the definitions in section ii and is due to condition ii consequently and r r px hence co ci so that ci co theorem for a given memoryless twc if both of the following conditions are satisfied then ci co i i does not depend on for any fixed p ii i does not depend on for any fixed p proof from conditions i and ii we know that i has a common maximizer px for all and i has a common maximizer px for all for any p by the same let px argument as in we conclude that i i and i i px thus r r px which yields ci co similar to the cva condition complex computations are often inevitable for checking the above conditions we next present a useful condition which needs little computational effort let resp denote the marginal transition probability matrix obtained from resp whose columns and rows are indexed according to a fixed order on the symbols in and resp and theorem for a given memoryless twc if both of the following conditions are satisfied then ci co i the matrices are column permutations of each other ii the matrices are column permutations of each other since the proof is similar to the second part of the proof of theorem in the next section the details are omitted iv d iscussion and 
e xamples a comparison with other conditions as already noted the relationship between propositions and is unclear as examples that satisfy the shannon condition but not the cva condition seem hard to construct in this section we show that theorems and in fact generalize the shannon and cva results respectively to see this it suffices to show that the shannon and cva conditions imply the conditions in theorems and respectively theorem a twc satisfying shannon s symmetry condition in proposition must satisfy the conditions in theorem proof for a twc satisfying the condition of proposition the optimal input probability distribution that achieves capacity is of the form for some p this result implies that condition i of theorem is satisfied because a common maximizer exists for all x and is given by px to prove that condition ii is also satisfied we consider the two marginal matrices and for some fixed and show that these matrices are column permutations of each other and hence i i the former claim is true because where is obtained by marginalizing on both sides of and follows from the definition of transposition the second claim can be verified by a direct computation on i with the above result straightforwardly and hence the details are omitted remark example in the next subsection demonstrates that a twc that satisfies the conditions in theorem may not satisfy shannon s symmetry condition in proposition since the common maximizer is not necessarily the uniform input distribution hence theorem is a more general result than proposition theorem a twc satisfying the cva condition in proposition must satisfy the conditions in theorem proof suppose that the condition of proposition is satisfied to prove the theorem we first claim that for j h yj h yj for all and given arbitrary pairs and with consider the two probability distributions if a and b a b otherwise and a b if a and b otherwise noting that we have h yj h yj h px px pyj h px px pyj h yj h yj for some p and define px since h yj does not depend on is in fact a maximizer for h note px may not be unique but any that the maximizer px choice works for our purposes now for by the cva condition there exists p such that on for fixed h yj does not depend on next we show that condition i of theorem holds by constructing the common maximizer from the cva condition for each let arg maxpx i px arg maxpx h h where and are due to the definitions of and respectively follows from the cva condition the claim is proved since and holds since p h yj h yj and h yj does not depend px is the maximizer set since px for h we have px h x h x x max h h thus px px x h x h since h h for each we obtain h h achieves the same value of h as px for all consequently is a common maximizer and thus condition i of theorem is satisfied moreover since the common maximizer is provided by the cva condition condition ii of theorem automatically holds remark example below shows that a twc that satisfies the conditions in theorem does not necessarily satisfy the condition in proposition because our conditions allow h to depend on for given hence theorem is more general than proposition b examples we next illustrate the effectiveness of our conditions via two examples in which the twc in example satisfies the conditions of theorems and the capacity region is rectangular the twc in example satisfies the conditions of theorem and and has a capacity region however neither of the constructed twcs satisfy the shannon or the cva conditions example consider the twc with the corresponding 
channel marginal distributions are given by for this twc shannon s symmetry condition in proposition does not hold since there are no permutations on and which can result in furthermore since h hb and h hb where hb denotes the binary entropy function h depends on for given thus the cva condition in proposition does not hold either however by theorem shannon s inner and outer bounds coincide since resp can be obtained by permuting the columns of resp since the conditions in theorem imply the conditions in theorem and the conditions in theorem further imply the conditions in theorem the conditions of theorems and are also satisfied moreover the optimal input distribution for this twc can be obtained by searching for the common maximizer for each of the two channels via the algorithm yielding px px thus the capacity region is achieved by the input distribution px px p c finally we note that this twc also satisfies the conditions of theorem in which the first condition is already implied by the conditions of theorem to verify the second condition we consider fig the capacity region of the twc in example using the same arguments as in example one can easily see that this twc satisfies neither the shannon nor the cva conditions however it satisfies the conditions in theorem since a common maximizer exists for the channel from users to px and condition ii trivially holds to verify that this channel also satisfies the conditions in theorem the same argument as in the previous example is used finally by considering all input distributions of the form px p the capacity region of this channel is determined as shown in fig c onclusions in this paper four conditions on the coincidence of shannon s capacity inner and outer bounds were derived these invariance conditions were shown to generalize existing results thus enlarging the class of twcs whose capacity region can be exactly determined numerical examples illustrate the applications of the new conditions in situations where prior results do not apply r eferences shannon communications channels in proc berkeley symp math stat chicago il usa jun pp massey causality feedback and directed information in proc int symp information theory its applic waikiki hi usa pp kramer directed information for channels with feedback dissertation swiss federal institute of technology zurich han a general coding scheme for the channel ieee here for all h trans inf theory vol no pp hb and h hb cheng and devroye networks when adaptation is thus h does not depend on useless ieee trans inf theory vol no pp mar given together with the substitutions song alajaji and linder adaptation is useless for two discrete into we then obtain that and px p channels in proc ieee int symp inf theory barcelona spain jul pp p p p px p p x x y x x y x chaaban varshney and alouini the capacity of injective therefore the second condition of theorem holds channels in proc ieee int symp inf theory aachen germany jun pp example consider the twc with schalkwijk the binary multiplying channel a coding scheme that operates beyond shannons inner bound region ieee trans inf theory vol no pp schalkwijk on an extension of an achievable rate region for the binary multiplying channel ieee trans inf theory vol no pp may zhang berger and schalkwijk new outer bounds to capacity regions of channels ieee trans inf theory vol no pp may where two channel marginal distributions are hekstra and willems dependence balance bounds for single output channels ieee trans inf theory vol no pp
7
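Shannon's inner bound in the preceding row is assembled from the rate pairs I(X1; Y2 | X2) and I(X2; Y1 | X1) evaluated at independent input distributions, so a given product input can be probed numerically. The sketch below is only an illustration of that computation: the helper name rate_pair and the binary additive-noise channel are my own placeholder choices, not the channels of Examples 1 and 2.

import numpy as np

def rate_pair(p, px1, px2):
    # p[x1, x2, y1, y2] = P(Y1=y1, Y2=y2 | X1=x1, X2=x2); inputs are independent.
    joint = px1[:, None, None, None] * px2[None, :, None, None] * p
    eps = 1e-15

    def cmi(pabc):
        # I(A; C | B) from a joint distribution with axes ordered (a, b, c).
        pb = pabc.sum(axis=(0, 2))
        pab = pabc.sum(axis=2)
        pbc = pabc.sum(axis=0)
        num = pabc * pb[None, :, None]
        den = pab[:, :, None] * pbc[None, :, :]
        return float((pabc * np.log2((num + eps) / (den + eps))).sum())

    r1 = cmi(joint.sum(axis=2))                     # I(X1; Y2 | X2)
    r2 = cmi(joint.sum(axis=3).transpose(1, 0, 2))  # I(X2; Y1 | X1)
    return r1, r2

# Hypothetical binary TWC with independent additive noise:
# Y1 = X2 xor Z1 and Y2 = X1 xor Z2, with P(Z1 = 1) = 0.1 and P(Z2 = 1) = 0.2.
q1, q2 = 0.1, 0.2
p = np.zeros((2, 2, 2, 2))
for x1 in range(2):
    for x2 in range(2):
        for z1 in range(2):
            for z2 in range(2):
                pz = (q1 if z1 else 1 - q1) * (q2 if z2 else 1 - q2)
                p[x1, x2, x2 ^ z1, x1 ^ z2] += pz

print(rate_pair(p, np.array([0.5, 0.5]), np.array([0.5, 0.5])))
# Prints approximately (0.278, 0.531), i.e. (1 - h_b(0.2), 1 - h_b(0.1)) bits.

That this uniform product input already attains the corner of the capacity region is consistent with the remark above that two-way channels with independent additive noise satisfy both the Shannon and the CVA conditions.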
on of pseudovarieties of the form v d costa nogueira teixeira feb february abstract this paper deals with the reducibility property of semidirect products of the form v d relatively to graph equation systems where d denotes the pseudovariety of definite semigroups we show that if the pseudovariety v is reducible with respect to the canonical signature consisting of the multiplication and the then v d is also reducible with respect to keywords pseudovariety definite semigroup semidirect product implicit signature graph equations reducibility introduction a semigroup resp monoid pseudovariety is a class of finite semigroups resp monoids closed under taking subsemigroups resp submonoids homomorphic images and finite direct products it is said to be decidable if there is an algorithm to test membership of a finite semigroup resp monoid in that pseudovariety the semidirect product of pseudovariets has been getting much attention mainly due to the decomposition theorem in turn the pseudovarieties of the form where d is the pseudovariety of all finite semigroups whose idempotents are right zeros are among the most studied semidirect products for a pseudovariety v of monoids lv denotes the pseudovariety of all finite semigroups s such that ese v for all idempotents e of we know from that v d is contained in lv and that v d lv if and only if v is local in the sense of tilson in particular the equalities sl d lsl and g d lg hold for the pseudovarieties sl of semilattices and g of groups costa teixeira cmat dep e universidade do minho campus de gualtar braga portugal jcosta mlurdes nogueira cmat escola superior de tecnologia e instituto de leiria campus morro do lena alto vieiro leiria portugal costa nogueira teixeira it is known that the semidirect product operator does not preserve decidability of pseudovarieties the notion of tameness was introduced by almeida and steinberg as a tool for proving decidability of semidirect products the fundamental property for tameness is reducibility this property was originally formulated in terms of graph equation systems and latter extended to any system of equations it is parameterized by an implicit signature a set of implicit operations on semigroups containing the multiplication and we speak of for short given an equation system with rational constraints a pseudovariety v is relatively to when the existence of a solution of by implicit operations over v implies the existence of a solution of by over v and satisfying the same constraints the pseudovariety v is said to be if it is with respect to every finite graph equation system the implicit signature which is most commonly encountered in the literature is the canonical signature ab consisting of the multiplication and the for instance the pseudovarieties d g j of all finite j semigroups lsl and r of all finite semigroups are in this paper we study the property of semidirect products of the form this research is essentially inspired by the papers see also where a stronger form of was established for lsl we prove that if v is then v d is in particular this gives a new and simpler proof though with the same basic idea of the of lsl and establishes the of the pseudovarieties lg j d and r combined with the recent proof that the problem for lg is decidable this shows that lg is a problem proposed by almeida a few years ago this also extends part of our work in the paper where we proved that under mild hypotheses on an implicit signature if v is relatively to pointlike systems of equations systems of equations of 
the form xn then v d is pointlike as well as in we use results from where various kinds of of semidirect products with an pseudovariety were considered more specifically we know from that a pseudovariety of the form v dk is when v is where dk is the pseudovariety defined by the identity xk xk as s v d k v dk we utilize this result as a way to achieve our property concerning the pseudovarieties v the method used in this paper is similar to that of however some significant changes inspired by had to be introduced in order to deal with the much more intricate graph equation systems preliminaries the reader is referred to the standard bibliography on finite semigroups namely for general background and undefined terminology for basic definitions and results about combinatorics on words the reader may wish to consult on of pseudovarieties of the form v d words and pseudowords throughout this paper a denotes a finite set called an alphabet the free semigroup and the free monoid generated by a are denoted respectively by and the empty word is represented by and the length of a word w is denoted by a word is called primitive if it can not be written in the form un with n two words u and v are said to be conjugate if u and v for some words a lyndon word is a primitive word which is minimal in its conjugacy class for the lexicographic order on a word on a is a sequence w an n of letters of a indexed by also written w the set of all words on a will be denoted by and we put the set is endowed with a semigroup structure by defining a product as follows if w z then wz is already defined words are right zeros finally if w is a word and z bn is a finite word then wz is the word wz bn a word w of the form v uuuv with u and v is said to be ultimately periodic in case v the word w is named periodic for a periodic word w if u is a primitive word then it will be called the root of w and its length will be said to be the period of for a pseudovariety v of semigroups we denote by v the relatively free semigroup generated by the set a for each semigroup s and each function a s there is a unique continuous homomorphism v s extending the elements of v are called pseudowords or implicit operations over a pseudovariety v is called when the subsemigroup v of v generated by a is finite in which case v v and effectively computable recall that for the pseudovariety s of all finite semigroups s is identified with the free semigroup the elements of s will then be called infinite pseudowords a pseudoidentity is a formal equality of pseudowords s over we say that v satisfies the pseudoidentity and write v if for every continuous homomorphism s s into a semigroup s v which is equivalent to saying that pv pv for the natural projection pv s pseudoidentities over v dk for a positive integer k let dk be the pseudovariety of all finite semigroups satisfying the identity xk xk denote by ak the set of words over a with length k and by ak the set w k of words over a with length at most we notice that dk may be identified with the semigroup whose support set is ak and whose multiplication is given by u v tk uv where tk w denotes the longest suffix of length at most k of a given finite or word then the dk are pseudovarieties such that s d k dk moreover it is that d is isomorphic to the semigroup for each pseudoword s we denote by tk the unique smallest word of ak such that dk tk simetrically we denote by ik the smallest word of ak such costa nogueira teixeira that kk ik where kk is the dual pseudovariety of dk defined by the identity xk 
y xk let be the function that sends each word w to the sequence of factors of length k of w in the order they occur in we still denote by see and lemma its unique continuous extension s s this function is a homomorphism with the meaning that it verifies the conditions i w for every w ak ii tk ik for every throughout the paper v denotes a trivial pseudovariety of semigroups for any pseudowords s it is known from theorem that v dk ik ik tk tk and v implicit signatures and by an implicit signature we mean a set of pseudowords over s containing the multiplication in particular we represent by the implicit signature ab usually called the canonical signature every profinite semigroup has a natural structure of a via the natural interpretation of pseudowords on profinite semigroups the of s generated by a is denoted by it is freely generated by a in the variety of generated by the pseudovariety s and its elements are called over s to a directed multi graph e v e with vertex set v edge set e and edges we associate the system of all equations of the form e with e e let s be a finite semigroup s s be the continuous homomorphism respecting the choice of generators and s be an evaluation mapping such that we say that a mapping s is a of with respect to when and v for all u v furthermore if s for an implicit signature then is called a v the pseudovariety v is said to be relatively to the system if the existence of a of with respect to a pair entails the existence of a v of with respect to the same pair we say that v is if it is relatively to for all finite graphs of v d let v be a given trivial pseudovariety the purpose of this paper is to prove the of the pseudovariety v so we fix a finite graph and a finite semigroup s and consider a v s of the system with respect to a pair where s is an evaluation mapping such that s and s s is a continuous homomorphism respecting the choice of generators we have to construct a v d s of with respect to the same pair on of pseudovarieties of the form v d initial considerations suppose that g is such that u with u since and are supposed to be v of the system with respect to we must have and so in particular g as the homomorphism s s is arbitrarily fixed it may happen that the equality g holds only when g u in that case we would be obliged to define g u since we want to describe an algorithm to define that should work for any given graph and solution we will then construct a solution verifying the following condition g suppose next that a vertex v v is such that d with u that is suppose that pd because is an arbitrary graph it could include for instance an edge e such that v and the labeling could be such that u since d is a subpseudovariety of v d is a of with respect to hence as by condition we want to preserve finite labels it would follow in that case that d v u v and thus that d v this observation suggests that we should preserve the projection into d of labelings of vertices v such that pd with u more generally we will construct the v d in such a way that the following condition holds v pd z with u and z pd v pd let max u and u for some g be the maximum length of finite labels under of elements of to be able to make some reductions on the graph and solution described in section we want to verify the extra condition below where l is a integer to be specified later on section v with u al v with simplifications on the solution we begin this section by reducing to the case in which all vertices of are labeled by infinite e pseudowords under suppose first that there 
is an edge v w such that uv and ue with uv and ue so that uv ue drop the edge e and consider the restrictions and of and respectively to the graph e then is a of the system with respect to the pair assume that there is a v d of with respect to verifying condition then v uv and w uv ue let be the extension of to obtained by letting e ue then is a v d of with respect to by induction on the number of edges labeled by finite words under beginning in vertices also labeled by finite words under we may therefore assume that there are no such edges in now we remove all vertices v of labeled by finite words under such that v is not the beginning of an edge thus obtaining a graph as above if is a v d of costa nogueira teixeira then we build a v d of by letting coincide with on and letting v for each vertex v so we may assume that all vertices of labeled by finite words under are the beginning of some edge e suppose next that v w is an edge such that u and with u and s notice that since it is an infinite pseudoword can be written as with both and being infinite pseudowords drop the edge e and the vertex v in case e is the only edge beginning in v and let be a new vertex and w be a new edge thus obtaining a new graph let and be the labelings of defined as follows and coincide respectively with and on and then is a v of the system with respect to the pair assume that there is a v of with respect to verifying conditions and in particular since l is chosen to be greater than with let be the extension of to obtained by letting e and v u in case v as one can easily verify is a v d of with respect to by induction on the number of edges beginning in vertices labeled by finite words under we may therefore assume that all vertices of are labeled by infinite pseudowords under suppose at last that an edge e is labeled under by a finite word u an where n and ai denote and vn in this case we drop the edge e and for each ei vi to the graph let i n we add a new vertex vi and a new edge be the graph thus obtained and let and be the labelings of defined as follows and coincide respectively with and on e for each i n vi ai ei ai vi vi and ei ei hence is a v of the system with respect to the pair suppose there exists a v d of with respect to verifying condition let be the extension of to obtained by letting e u then is a v d solution of with respect to by induction on the number of edges labeled by finite words under we may further assume that each edge of labeled by a finite word under is in fact labeled by a letter of the alphabet borders of the solution the main objective of this section is to define a certain class of finite words called borders of the solution since the equations of we have to deal with are of the form e these borders will serve to signalize the transition from a vertex to the edge for each vertex v of denote by dv the projection pd of into d and let dv v v we say that two words are confinal if they on of pseudovarieties of the form v d have a common prefix y that is if and for some words as one easily verifies the relation defined for each by dv dv if and only if and are confinal is an equivalence on for each we fix a word and words zv for each vertex v with dv such that dv zv moreover when dv is ultimately periodic we choose of the form with u a lyndon word and fix zv not having u as a prefix the word u and its length will be said to be respectively a root and a period of the solution without loss of generality we assume that has at least one root otherwise we could easily modify the graph and 
the solution in order to include one we fix a few of the integers that will be used in the construction of the v d they depend only on the mapping and on the semigroup definition constants ns l e and q we let ns be the exponent of s which as one recalls is the least integer such that sns is idempotent for every element s of the finite semigroup s lcm u is a root of l max v v e be an integer such that e ns and for each word w ae there is a factor e of w for which is an idempotent of notice that for each root u of e and uns is an idempotent of s q l for each positive integer m we denote by bm the set bm tm am is a if is a periodic word then the element y tm of bm will be said to be periodic with root u and period for words bm we define the gap between and as the positive integer g min n u and for some v u or u and notice that g g proposition consider the constant q introduced in definition there exists qq n such that for all integers m qq the following conditions hold costa nogueira teixeira a if and are distinct elements of bm then g q b if y is a element of bm then g y y q proof suppose that for every qq n there is an integer m qq and elements and of bm such that g q hence there exist a strictly increasing sequence mi i of positive integers and an integer r q such that g ymi ymi i is constant and equal to moreover since the graph is finite we may assume that ymi tmi and ymi tmi for every i and some and it then follows that u or u for some word u ar hence and are confinal words whence and are the same therefore for every m and have the same length and are suffixes of the word and so and are the same word this proves already a now notice that u meaning that is the periodic word this shows b and completes the proof of the proposition we now fix two more integers definition constants m and k we let m be an integer such that m is a multiple of and m is greater than or equal to the integer qq of proposition and notice that m q k m q the elements of the set bm will be called the borders of the solution we remark that the borders of are finite words of length m such that by proposition for any two distinct occurrences of borders and in a finite word either these occurrences have a gap of size at least q between them or and are the same periodic border y in this case y is a power of its root u since m is a multiple of the period and g y y is getting a v dk as v dk is a subpseudovariety of v d is a v dk of with respect to the given pseudovariety v was assumed to be so by corollary v dk is too therefore there is a v dk s of with respect to the same pair moreover as observed in remark one can constrain the values g of each g with respect to properties which can be tested in a finite semigroup since the prefixes and the suffixes of length at most k can be tested in the finite semigroup kk dk we may assume further that g and have the same prefixes and the same suffixes of length at most we then denote ig ik g ik and tg tk g tk for each g notice that by the simplifications introduced in section if is a finite word then g is an edge and is a letter ag and so ig tg ag otherwise ig and tg are on of pseudovarieties of the form v d length k words in particular condition holds that is e for every edge e such that is a finite word on the other hand lemma ii of which is stated only for edges can be extended easily to vertices so that g can be assumed to be an infinite pseudoword for every g such that is infinite thus in particular v is an infinite pseudoword for all vertices notice that for each vertex v there 
exists a border yv of such that the finite word yv zv is a suffix of on the other hand by definitions and l q and k m q so as m tv xv yv zv and v tv for some infinite and some word xv with q basic transformations the objective of this section is to introduce the basic steps that will allow to transform the v dk into a v d the process of construction of from is close to the one used in to handle with systems of pointlike equations both procedures are supported by basic transformations of the form ak aj ai aj ak which replace words of length k by those procedures differ in the way the indices i j are determined in the pointlike case the only condition that a basic transformation had to comply with was that j had to be minimum such that the value of the word ak under is preserved in the present case the basic transformations have to preserve the value under as well but the equations e impose an extra restriction that is not required by pointlike equations indeed we need to verify in particular and e e so somewhat informally for a word ak that has an occurrence overlapping both the factors and e of the pseudoword e the introduction of the factor ai aj by the basic transformation should be done either in or in e and not in both simultaneously the borders of the solution were introduced to help us to deal with this extra restriction informally speaking the borders will be used to detect the passage from the labeling under of a vertex to the labeling of the edge e and to avoid that the introduction of ai aj affect the labelings under of or consider an arbitrary word w an an integer m m n will be called a bound of w if the factor w m am of w is a border where m m the bound m will be said to be periodic or according to the border w m is periodic or not if w admits bounds then there is a maximum one that we name the last bound of in this case if is the last bound of w then the border w will be called the last border of notice that by proposition and the choice of m if and are two bounds of w with then either q or w and w are the same periodic border costa nogueira teixeira let w ak be a word of length notice that since k m q if w has a last bound then is the unique bound of we split the word w in two parts lw the of w and rw the of w by setting l w as and rw ak where s the splitting point of w is defined as follows if w has a last bound then s otherwise s in case w has a periodic last bound the splitting point s will be said to be periodic then s is not periodic in two situations either w has a last border or w has not a last border the factorization w lw rw will be called the splitting factorization of we have s m q so by definition of e there exist integers i and j such that s e i j s and the factor e ai aj of lw verifies we begin by fixing the maximum such j and for that j we fix next an integer i and a word ew ai aj called the essential factor of w as follows notice that if the splitting point s is periodic and u is the root of the last border of w then uns is idempotent and the of w is of the form lw uns hence in this case j s and we let ew uns thus defining i as j ns suppose now that the splitting point is not periodic in this case we let i be the maximum integer such that ai aj is idempotent the word w can be factorized as w ew rw where we then denote by w b the following w b ew rw aj ai aj ak and notice that b moreover e and so m e q e it is also convenient to introduce two derived from w b w aj ai aj w ai aj ak this defines two mappings ak s that can be extended to s as done in although 
they are not formally the same mappings used in that paper because of the different choice of the integers i and j we keep the same notation since the selection process of those integers is absolutely irrelevant for the purpose of the mappings that is with the above adjustment the mappings maintain the properties stated in the next lemma presents a property of the that is fundamental to our purposes lemma for a word w of length k let ak and be the two factors of w of length if w ak and w then x for some word x in particular on of pseudovarieties of the form v d proof write bk with bi let and be the splitting points of and respectively whence and to prove that there exists a word x such that x we have to show that under this hypothesis we then deduce that is an occurrence of the essential factor in which proves that assume first that has a last bound in which case by definition m if m then the last border of occurs in one position to the left relatively to hence is a bound of and so has a last bound such that it follows in this case that and suppose now that m since m by definition the condition holds trivially in this case suppose now that has not a last bound then moreover either does not have a last bound or k is its last bound in both circumstances k whence this concludes the proof of the lemma in the conditions of the above lemma and as in we define s s as the only continuous monoid homomorphism which extends the mapping s and let the function s s is a continuous homomorphism since it is the composition of the continuous homomorphism with the continuous homomorphism we remark that a word w an of length n k has precisely r n k factors of length k and w an an where for each p r ep is the essential factor ewp aip ajp of the word wp ap and fp ajp p r above for each p r we have replaced each expression with since indeed these expressions represent the same more generally one can certainly replace an expression of the form xn with xn using this reduction rule as long as possible w can be written as w called the reduced form of w where q r nq r fnp for p q and is fnq if nq r and it is the empty word otherwise costa nogueira teixeira definition of the v d we are now in conditions to describe the procedure to transform the v dk into the v d the mapping s is defined for each g as g g g g where for each i s is a function defined as follows first of all we let that is that is that g is indeed a for every g follows from the fact that g is a and transforms into see next for each vertex v consider the length k words iv ik v ik and tv tk v tk we let v iv and v tv where the mappings and were defined in note that by tv xv yv zv moreover the occurrence of yv shown in this factorization is the last occurrence of a border in tv hence the rtv of tv is precisely zv therefore one has v iv eiv and v tv zv consider now an arbitrary edge suppose that is a finite word then is a letter ae and e is also ae in this case then e ae because is a homomorphism since we want e to be ae we then define for instance e ae and e suppose at last that and so also e is an infinite pseudoword we let e te and notice that e indeed as is a v dk of it follows from that te tk e tk the definition of e is more elaborate let v be the vertex and consider the word tv ie this word has r k factors of length suppose that tv ie is and consider its reduced form tv ie notice that tv ie for some words hence there is a unique and with and index m q such that tv m m m m m a then tv ie where enm fm and fm enq and we let e note that the word fq is ajr 
whence er ie the next lemma is a key result that justifies the definition of the on of pseudovarieties of the form v d lemma let e be an edge such that is infinite then with the above notation v and so tv ie v e moreover e ie proof we begin by recalling that tv ie and tv ie where ep is the essential factor ewp aip ajp of the word wp ap and fp ajp for each note also that ie er is a suffix of and is idempotent so to prove the equality e ie it suffices to show that e we know from that tv xv yv zv with q so xv yv ah am and zv am ak for some h q there are two cases to verify case yv is a border consider the factor wh ah of tv ie by the choice of m and k the prefix yv is the only occurrence of a border in wh hence m is the last bound of wh and so its splitting point it follows that wh yv is the splitting factorization of wh therefore as one can verify for an arbitrary p h there is only one occurrence of a border in wp precisely yv and the splitting factorization of wp is wp ap yv zv whence ep with jp m h and so fp for p so the prefix of tv ie reduces to consider now the factor hence either does not have a last bound or k is its last bound in both situations the splitting point of is k and its splitting factorization is therefore one deduces from lemma that for every p h r the occurrence aip ajp of the essential factor ewp in wp is in fact an occurrence in the suffix am of tv ie since yv m and l it follows that k yv zv m l h whence is a suffix of ie and so k ip jp for all p h r this means in particular that the is introduced at the suffix ie of tv ie hence ajh ak and its reduced form is ak v which proves that and v are the same moreover from k ip one deduces that the word ep is a suffix of ajp which proves that e case yv is a periodic border let u be the root of yv then since m was fixed as a m multiple of yv umu where mu if the prefix yv is the only occurrence of a border in wh then one deduces the lemma as in case above so we assume that there is another occurrence of a border y in wh hence by proposition and the choice of m and k y is precisely yv furthermore since u is a lyndon word and k m q with q m wh yv ud for some positive integer d and some word such that u is not a prefix of notice that since u is not a prefix of zv by definition of this word zv is a proper prefix of u on the other hand wh ud yv and the occurrence of yv shown in costa nogueira teixeira this factorization is the last occurrence of yv in wh thus wh ud yv is the splitting factorization of wh therefore w ch ud yv uns and eh uns more generally for any p h yv is a factor of wp and it is the only border that occurs in wp hence the splitting point of wp is periodic and ep uns moreover as one can verify m h and the prefix of tv ie is u d and so analogously to case it reduces to ud since zv is a proper prefix of u and d k jh this allows already deduce that the reduced form of is uns zv v thus concluding the proof of the first part of the lemma now there are two possible events in which case e is trivially verified or m q either m q and and the was not eliminated in the reduction process of tv ie this means that the splitting point of the word is not determined by one of the occurrences of the border yv in the prefix of tv ie then as in case above one deduces that k ip for each p r and so that e in both cases v and e ie hence the proof of the lemma is complete notice that as shown in the proof of lemma above if a vertex v is such that yv is a periodic border with root u then v uns zv so the definition of the mapping on vertices 
assures condition proof that is a v d this section will be dedicated to showing that is a v d of with respect to the pair verifying conditions and we begin by noticing that g is a for every g indeed as observed above each g is a that both g and g are too is easily seen by their definitions let us now show the following properties proposition conditions and hold proof as is a v dk of with respect to and so the equality holds to deduce that holds it suffices to establish the equality consider first a vertex v then v iv eiv and v tv zv in this case the equality v v is a direct application of proposition where the authors proved that ik tk for every pseudoword moreover by definition of the therefore and v are of the form and v with u al and so condition holds on of pseudovarieties of the form v d consider next an edge e if e is a finite word ae then e e e e ae ae e whence e e holds trivially moreover since e in this case and every vertex is labeled under by an infinite pseudoword it follows that condition holds suppose at last that e is infinite and let v then e te on the other hand by lemma e ie hence by and since is a homomorphism e e e e ie e te this ends the proof of the proposition e consider an arbitrary edge v w of to achieve the objectives of this section it remains to prove that v d satisfies v e since is a v dk of v dk satisfies v e hence by iv ik v e ik w iw and tk v e tk w tw thus v iv eiv eiw iw w and w tw zw as shown in the proof of proposition it then follows that v d satisfies v e w and so v d v v e w w w w on the other hand from the fact that is a homomorphism one deduces v e v tv e v tv ie e suppose that e is an infinite pseudoword in this case te tw whence e moreover by lemma tv ie v e therefore by conditions and v d satisfies v e assume now that e is a finite word whence e ae a and e ae since is a of d v ae w and thus dv ae dw hence the words dv and dw are confinal and so hence dv zv dw zw and yv yw tk where is the of dv and dw it follows that zv ae zw and tk tv ae tw in this case v e v tv ae on the other hand tv ae ak tw is a word of length k and so tv ae tv ae is of the form tv ae f the splitting factorizations of tv and tw are respectively tv xv yv zv and tw xw yw zw since yv yw it follows that etv etw suppose that zv ae zw in this case it is clear that f so that tv ae since v ends with it then follows that v e v therefore v v e w v v w on the other hand w tw zw zv ae v ae so by one has that v d satisfies v ae v v v ae v v w suppose now that zv ae zw in this case one deduces from the equality zv ae zw that is a periodic word let u be its root so that etv uns and since by definition u is a primitive word which is not a prefix of zv nor a prefix of costa nogueira teixeira zw we conclude that zv ae u and zw in this case f u whence tv ae u then v e v u v u therefore v v e w v v u w moreover u w zw u uns uns u zv ae v ae therefore using one deduces as above that v d satisfies v ae we have proved the main theorem of the paper theorem if v is then v d is this result applies for instance to the pseudovarieties sl g j and since the problem for the pseudovariety lg of local groups is already solved we obtain the following corollary corollary the pseudovariety lg is final remarks in this paper we fixed our attention on the canonical signature while in we dealt with a more generic class of signatures verifying certain undemanding conditions theorem is still valid for such generic signatures but we preferred to treat only the instance of the signature to keep the proofs clearer and a 
little less technical references almeida finite semigroups and universal algebra world scientific singapore english translation almeida finite semigroups an introduction to a unified theory of pseudovarieties in semigroups algorithms automata and languages coimbra world scientific pp almeida and azevedo on regular implicit operations mathematica almeida azevedo and teixeira on finitely based pseudovarieties of the forms v d and v dn j pure appl algebra almeida costa and teixeira semidirect product with an pseudovariety and tameness semigroup forum almeida costa and zeitoun tameness of pseudovariety joins involving r monatsh math almeida and steinberg syntactic and global semigroup theory a synthesis approach in algorithmic problems in groups and semigroups lincoln ne trends math boston boston ma pp on of pseudovarieties of the form v d almeida and steinberg on the decidability of iterated semidirect products and applications to complexity proc london math soc almeida and zeitoun tameness of some locally trivial pseudovarieties comm algebra ash inevitable graphs a proof of the type ii conjecture and some related decision procedures int algebra comput auinger and steinberg on the extension problem for partial permutations proc amer math soc costa reducibility of joins involving some locally trivial pseudovarieties comm algebra costa and nogueira complete reducibility of the pseudovariety lsl int algebra comput costa nogueira and teixeira the word problem for over the pseudovariety of local groups submitted preprint available at http costa nogueira and teixeira pointlike reducibility of pseudovarieties of the form v d int algebra doi to appear preprint available at http costa and teixeira tameness of the pseudovariety lsl int algebra comput eilenberg automata languages and machines vol b academic press new york krohn and rhodes algebraic theory of machines i prime decomposition theorem for finite semigroups and machines trans amer math soc lothaire algebraic combinatorics on words cambridge university press rhodes undecidability automata and pseudovarieties of finite semigroups int algebra comput rhodes and steinberg the of finite semigroups a new approach springer monographs in mathematics steinberg a delay theorem for pointlikes semigroup forum straubing finite semigroup varieties of the form v d j pure appl algebra costa nogueira teixeira and weiss graph congruences and wreath products j pure appl algebra tilson categories as algebra an essential ingredient in the theory of monoids j pure appl algebra
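The argument in the excerpt above leans on a few elementary operations on finite words: a word w = a_1 ... a_n of length n >= k has exactly r = n - k + 1 factors of length k; the splitting factorization t_v = x_v · y_v · z_v cuts t_v at the last occurrence of a designated factor y_v; and when y_v is periodic one passes to its root u. The sketch below is a minimal editor's illustration of only these generic operations, in Python. It is not the paper's mappings, and the paper's notion of border is a specific technical one that is not reproduced here; the function names and the sample word are our own.

```python
# Generic word operations used informally in the argument above (editor's sketch,
# not code from the paper): length-k factors, splitting at the last occurrence of
# a designated factor y, and the primitive root of a periodic word.

def length_k_factors(w: str, k: int) -> list[str]:
    """Return the r = len(w) - k + 1 factors of length k, in order of occurrence."""
    n = len(w)
    assert n >= k >= 1
    return [w[p:p + k] for p in range(n - k + 1)]

def split_at_last_occurrence(t: str, y: str) -> tuple[str, str, str]:
    """Write t = x + y + z where the shown occurrence of y is its last occurrence in t."""
    i = t.rfind(y)
    if i < 0:
        raise ValueError("y does not occur in t")
    return t[:i], y, t[i + len(y):]

def primitive_root(w: str) -> str:
    """Shortest u with w = u^m; this is the root taken when a factor is periodic."""
    n = len(w)
    for d in range(1, n + 1):
        if n % d == 0 and w[:d] * (n // d) == w:
            return w[:d]
    return w

if __name__ == "__main__":
    w = "abaababa"
    print(length_k_factors(w, 3))               # 6 = 8 - 3 + 1 factors of length 3
    print(split_at_last_occurrence(w, "aba"))   # ('abaab', 'aba', '')
    print(primitive_root("abab" * 3))           # 'ab'
```

The splitting helper only mirrors the shape of the factorization t_v = x_v y_v z_v; in the paper the factor y_v is determined by the border structure of t_v, which requires the full machinery developed there.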
4
mar linearly related polyominoes viviana ene herzog takayuki hibi abstract we classify all convex polyomino ideals which are linearly related or have a linear resolution convex stack polyominoes whose ideals are extremal gorenstein are also classified in addition we characterize in combinatorial terms the distributive lattices whose ideals are extremal gorenstein or have a linear resolution introduction the ideal of inner minors of a polyomino a polyomino ideal is generated by certain subsets of of an m x of indeterminates such ideals have first been studied by qureshi in they include the ladder determinantal ideals of which may also be viewed as the ideal of a planar distributive lattice it is a challenging problem to understand the graded free resolution of such ideals in ene rauf and qureshi succeeded to compute the regularity of such ideals sharpe showed that the ideal x of all of x is linearly related which means that x has linear relations moreover he described these relations explicitly and conjectured that also the ideals of it x are generated by a certain type of linear relations this conjecture was then proved by kurano in the case that the base field over which it x is defined contains the rational numbers lascoux gives the explicit free resolution of all ideals of unfortunately the resolution of it x in general may depend on the characteristic of the base field indeed hashimoto showed that for t min m n the second betti number of it x depends on the characteristic on the other hand by using squarefree divisor complexes as introduced by bruns and the second author of this paper it follows from theorem that for t is independent of the characteristic in this paper we use as a main tool squarefree divisor complexes to study the first syzygy module of a polyomino ideal in particular we classify all convex polyominoes which are linearly related see theorem this is the main result of this paper in the first section we recall the concept of polyomino ideals and show that the polyomino ideal of a convex polyomino has a quadratic basis the second section of the paper is devoted to state and to prove theorem as mentioned before the proof heavily depends on the theory of squarefree divisor complexes which allow to compute the betti numbers of a toric ideal to apply this theory one observes that the polyomino ideal of a convex polyomino mathematics subject classification key words and phrases binomial ideals linear syzygies polyominoes the first author was supported by the grant uefiscdi may be naturally identified with a toric ideal the crucial conclusion deduced from this observation formulated in corollary is then that the betti numbers of a polyomino ideal is bounded below by the betti numbers of the polyomino ideal of any induced subpolyomino corollary allows to reduce the study of the relation of polyomino ideals to that of a finite number of polyominoes with a small number of cells which all can be analyzed by the use of a computer algebra system in the last section we classify all convex polyominoes whose polyomino ideal has a linear resolution theorem and all convex stack polyominoes whose polyomino ideal is extremal gorenstein theorem since polyomino ideals overlap with ideals it is of interest which of the ideals among the ideals have a linear resolution or are extremal gorenstein the answers are given in theorem and theorem it turns out that the classifications for both classes of ideals almost lead to the same result polyominoes in this section we consider polyomino ideals this 
class of ideals of was introduced by qureshi to this end we consider on the natural partial order defined as follows i j k l if and only if i k and j the set together with this partial order is a distributive lattice if a b with a b then the set a b c a c b is an interval of the interval c a b with b a is called a cell of the elements of c are called the vertices of c and a is called the left lower corner of the egdes of the cell c are the sets a a a a a a and a a let p be a finite collection of cells and c d then c and d are connected if there is a sequence of cells of p given by c cm d such that ci is an edge of ci for i m if in addition ci cj for all i j then c is called a path connecting c and d the collection of cells p is called a polyomino if any two cells of p are connected see figure the set of vertices of p denoted v p is the union of the vertices of all cells belonging to two polyominoes are called isomorphic if they are mapped to each other by a composition of translations reflections and rotations figure a polyomino we call a polyomino p row convex if for any two cells c d of p with left lower corner a i j and b k j respectively and such that k i it follows that all cells with left lower corner l j with i l k belong to similarly one defines column convex polyominoes the polyomino p is called convex if it is row and column convex the polyomino displayed in figure is not convex while figure shows a convex polyomino note that a convex polyomino is not convex in the common geometric sense figure a convex polyomino now let p be any collection of cells we may assume that the vertices of all the cells of p belong to the interval m n fix a field k and let s be the polynomial ring over k in the variables xij with i j the ideal of inner minors ip s of p is the ideal generated by all xil xkj xkl xij for which i j k l v p furthermore we denote by k p the if p happens to be a polyomino then ip will also be called a polyomino ideal for example the polyomino p displayed in figure may be embedded into the interval then in these coordinates ip is generated by the the following result has been shown by qureshi in theorem theorem let p be a convex polyomino then k p is a normal macaulay domain the proof of this theorem is based on the fact that ip may be viewed as follows as a toric ideal with the assumptions and notation as introduced before we may assume that v p m n consider the homomorphism s t with xij si tj for all i j v p here t k sm tn is the polynomial ring over k in the variables si and tj then as observed by qureshi ip ker it follows that k p may be identified with the edge ring of the bipartite graph gp on the vertex set sm tn and edges si tj with i j v p with this interpretation of k p in mind and by using we obtain proposition let p be a convex polyomino then ip has a quadratic basis proof we use the crucial fact proved in that the toric ideal which defines the edge ring of a bipartite graph has a quadratic basis if and only if each with r has a chord by what we explained before a after identifying the vertices of p with the edges of a bipartite graph is nothing but a sequence of vertices of p with ik jk and jk for k r such that ik and jk for all k r and k a typical such sequence of pairs of integers is the following here the first row is the sequence of the first component and the second row the sequence of the second component of the vertices ai this pair of sequences represents an it follows from lemma that there exist integers s and t with t s r and t s s such that either is it 
or it is suppose that is it since is js and js are vertices of p and since p is convex it follows that it js this vertex corresponds to a chord of the cycle similarly one argues if it is lemma let r be an integer and f r z a function such that f i f j for i j r and f r f then there exist s t r such that one has either f s f t f s or f s f t f s proof let say f f since f r f there is q r with f f f q f q let q then since q r one has f f r f f r let q r and f q f since f q f f f q it follows that there is s q with f s f q f s let q r and f q f then one has f q f f q the case of f f can be discussed similarly we denote the graded betti numbers of ip by ip corollary let p be a convex polyomino then ip for j proof by proposition there exists a monomial order such that in ip is generated in degree therefore it follows from corollary that in ip for j since ip in ip see for example corollary the desired conclusion follows the first syzygy module of a polyomino ideal let p be a convex polyomino and let fm be the minors generating ip in this section we study the relation module ip of ip which is the kernel of the l homomorphism m sei ip with ei fi for i the graded module ip has generators in degree and no generators in degree as we have seen in corollary we say that ip or simply p is linearly related if ip is generated only in degree let fi and fj be two distinct generators of ip then the koszul relation fi ej ei belongs ip we call fi fj a koszul relation pair if fi ej fj ei is a minimal generator of ip the main result of this section is the following theorem let p be a convex polyomino the following conditions are equivalent a p is linearly related b ip admits no koszul relation pairs c let as we may assume m n be the smallest interval with the property that v p m n we refer to the elements m n and m n as the corners then p has the shape as displayed in figure and one of the following conditions hold i at most one of the corners does not belong to v p ii two of the corners do not belong to v p but they are not opposite to each other in other words the missing corners are not the corners n m or the corners m n iii three of the corners do not belong to v p if the missing corners are m n and m n which one may assume without loss of generality then referring to figure the following conditions must be satisfied either m and or n and as an essential tool in the proof of this theorem we recall the squarefree divisor complex as introduced in let k be field h nn an affine semigroup and k h the semigroup ring attached to it suppose that hm nn is the unique minimal set of generators of we consider the polynomial ring t k tn q h j in the variables tn then k h k um t where ui tj i and where hi j denotes the jth component of the integer vector hi we choose a presentation s k xm k h with xi ui for i the kernel ih of this homomorphism is called the toric ideal of we assign a zn to s by setting deg xi hi then k h as well as ih become zn graded thus k h admits a minimal zn f with l fi s k h in the case that all ui are monomials of the same degree one can assign to k h the structure of a standard graded by setting deg ui for all i the degree of h with respect to this standard grading will be denoted given h h we define the squarefree divisor complex as follows is the simplicial complex whose faces f ik are the subsets of n such that h uik divides th n in k h we denote by k the ith reduced n simplicial homology of a simplicial complex proposition with the notation and assumptions introduced one has tori k h k h k in 
particular k h dimk k let h be a subsemigroup of h generated by a subset of the set of generators of h and let s be the polynomial ring over k in the variables xi with hi generator of h furthermore let the zn free s of k h then since the s is a flat s s is a zn free of n inclusion k h k h induces a z complex homomorphism f s tensoring this complex homomorphism with k where m is the graded maximal ideal of s we obtain the following sequence of isomorphisms and natural maps of zn tors k h k tors k h k hi s k hi k hi k i i for later applications we need corollary with the notation and assumptions introduced let h be a subsemigroup of h generated by a subset of the set of generators of h and let h be an element of h with the property that hi h whenever h hi then the natural space homomorphism torsi k h k h torsi k h k h is an isomorphism for all i proof let be the squarefree divisor complex of h where h is viewed as an element of h then we obtain the following commutative diagram tori k h k h tori k h k h y y k k the vertical maps are isomorphisms and also the lower horizontal map is an isomorphism simply because due to assumptions on this yields the desired conclusion let h nn be an affine semigroup generated by hm an affine subsemigroup h h generated by a subset of hm will be called a homological pure subsemigroup of h if for all h h and all hi with h hi h it follows that hi h as an immediate consequence of corollary we obtain corollary let h be a homologically pure subsemigroup of then torsi k h k torsi k h k is injective for all i in other words if is the minimal zn free s of k h and f is the minimal zn free of k h then the complex homomorphism f induces an injective map f in particular any minimal set of generators of syzi k h is part of a minimal set of generators of syzi k h moreover ih ih for all i and j we fix a field k and let p m n be a convex polyomino let as before s be the polynomial ring over k in the variables xij with i j v p and k p the of the polynomial ring t k sm tn generated by the monomials uij si tj with i j v p viewing k p as a semigroup ring k h it is convenient to identify the semigroup elements with the monomial they represent given sets is and jt of integers with ik m and jk n for all k we let h be the subsemigroup of h generated by the elements sik tjl with ik jl v p then h a homologically pure subsemigroup of note that h is also a combinatorially pure subsemigroup of h in the sense of a collection of cells p will be called a collection of cells of p induced by the columns is and the rows jt if the following holds k l v p if and only if ik jl v p observe that k p is always a domain since it is a of k p the map v p v p k l ik jl identifies ip with the ideal contained in ip generated by those of i p which only involve the variables xik jl in the following we always identify ip with this subideal of ip if the induced collection of cells of p is a polyomino we call it an induced polyomino any induced polyomino p of p is again convex consider for example the polyomino p on the left side of figure with left lower corner then the induced polyomino p shown on the right side of figure is induced by the columns and the rows p figure obviously corollary implies corollary let p be an induced collection of cells of then ip ip for all i and j and each minimal relation of ip is also a minimal relation of ip we will now use corollary to isolate step by step the linearly related polyominoes lemma suppose p admits an induced collection of cells p isomorphic to one of those 
displayed in figure then ip has a koszul relation pair proof we may assume that v p by using cocoa or singular to compute ip we see that the minors fa and fb form a koszul relation pair of ip thus the assertion follows from corollary a b figure p corollary let p be a convex polyomino and let m n be the smallest interval with the property that v p m n we assume that m n if one of the vertices m m n or n does not belong to v p then ip has a koszul relation pair and hence ip is not linearly related proof we may assume that v p then the vertices of the interval do not belong to v p since m n is the smallest interval containing v p there exist therefore integers i and j with i m and j n such that the cells i i and j j belong to then the collection of cells induced by the rows i i and the columns j j is isomorphic to one of the collections p of figure thus the assertion follows from lemma and corollary corollary shows that the convex polyomino p should contain all the vertices m m n and n in order to be linearly related thus a polyomino which is linearly related must have the shape as indicated in figure the number is also allowed to be in which case also in this case the polyomino contains the corner a similar convention applies to the other corners in figure all for corners n m and m n are missing the convex polyomino displayed in figure however is not linearly related though it has the shape as shown in figure thus there must still be other obstructions for a polyomino to be linearly related now we proceed further in eliminating those polyominoes which are not linearly related lemma let p be a convex polyomino and let m n be the smallest interval with the property that v p m n if p misses only two opposite corners say and m n or p misses all four corners n m and m n then ip admits a koszul pair and hence is not linearly related n m n m figure possible shape figure not linearly related proof let us first assume that and m n do not belong to v p but n and m belong to v p the collection of cells induced by the rows m m and the columns n n is shown in figure all the light colored cells some of them or none of them are present according to whether or not all some or none of the equations m and n hold for example if m and n then the light colored cells and belong and the other two light colored cells do not belong to it can easily be checked that the ideal displayed in figure has a koszul relation pairs in all possible cases and so does ip by corollary next we assume that none of the four corners n m and m n belong to in the following arguments we refer to figure in the first case suppose and then the collection of cells induced by the columns m and the rows n is the polyomino displayed in figure which has a koszul relation pair as can be verified by computer thus p has a koszul relation pair a similar argument applies if or next assume that or by symmetry we may discuss only then we may assume that and figure we choose the columns and the rows n then the induced polyomino by these rows and columns is if if and if see figure in all three cases the corresponding induced polyomino ideal has a koszul relation pair and hence so does ip figure lemma let p be a convex polyomino and let m n be the smallest interval with the property that v p m n suppose p misses three corners say n m m n and suppose that m and n or m and or n and then ip has a koszul relation pair and hence is not linearly related proof we proceed as in the proofs of the previous lemmata in the case that m and n we consider the collection 
of cells p induced by the columns m and the rows n this collection of cells p is depicted in figure it is easily seen that ip is generated by a regular sequence of length which is a koszul relation pair in the case that m and we choose the columns m m and the rows the polyomino p induced by this choice of rows and columns has two opposite missing corners hence by lemma it has a koszul pair the case n and is symmetric in both cases the induced polyomino ideal has a koszul relation pair hence in all three cases ip itself has a koszul relation pair figure proof of theorem implication a b is obvious implication b c follows by corollary lemma and lemma it remains to prove c a let p be a convex polyomino which satisfies one of the conditions i iii we have to show that p is linearly related by corollary we only need to prove that ip viewing k p as a semigroup ring k h it follows that one has to check that ip for all h h with the main idea of this proof is to use corollary let h with jq siq tiq for q and i minq iq k maxq iq j minq jq and maxq jq therefore all the points hq lie in the possible degenerate rectangle q of vertices i j k j i k if q is degenerate that is all the vertices of q are contained in a vertical or horizontal line segment in p then ip since in this case the simplicial complex is just a simplex let us now consider q if all the vertices of q belong to p then the rectangle q is an induced subpolyomino of therefore by corollary we have ip iq the latter equality being true since q is linearly related next let us assume that some of the vertices of q do not belong to as p has one of the forms i iii it follows that at most three verices of q do not belong to consequently we have to analyze the following cases case exactly one vertex of q does not belong to without loss of generality we may assume that k p which implies that k m and in this case any relation in degree h of p is a relation of same degree of one of the polyominoes displayed in figure one may check with a computer algebra system that all polyominoes displayed in figure are linearly related hence they do not have any relation in degree actually one has to check only the shapes a b and d since the polyomino displayed in c is isomorphic to that one from b hence ip case two vertices of q do not belong to we may assume that the missing vertices from p are i and k hence we have i k m and in this case any relation in degree h of p is a relation of same degree of one of the polyominoes displayed in figure a c note that the polyominoes b and c are isomorphic one easily checks with the computer that all these polyominoes are linearly related thus ip case finally we assume that there are three vertices of q which do not belong to we may assume that these vertices are i k and k j in this case any relation in degree h of p is a relation of same degree of the polyomino displayed in figure d which is linearly related as one may easily check with the computer therefore we get again ip a b c d figure a b c d figure polyomino ideals with linear resolution in this final section we classify all convex polyominoes which have a linear resolution and the convex stack polyominoes which are extremal gorenstein theorem let p be a convex polyomino then the following conditions are equivalent a ip has a linear resolution b there exists a positive integer m such that p is isomorphic to the polyomino with cells i i i i i m proof b a if the polyomino is of the shape as described in b then ip is just the ideal of of a it is that the ideal of of such a 
matrix has a linear resolution indeed the complex whose chain maps are described by matrices with linear entries provides a free resolution of the ideal of maximal minors of any matrix of indeterminates see for example page a b we may assume that m n is the smallest interval containing v p we may further assume that m or n the few remaining cases can easily be checked with the computer so let us assume that m then we have to show that n suppose that n we first assume that all the corners n m and m n belong to v p then the polyomino p induced by the columns m and the rows n is the polyomino which is displayed on the right of figure the ideal ip is a gorenstein ideal and hence it is does not have a linear resolution therefore by corollary the ideal ip does not have a linear resolution as well a contradiction next assume that one of the corners say is missing since ip has a linear a linear resolution ip is linearly related and hence has a shape as indicated in figure let and be the numbers as shown in figure and let p the polyomino of p induced by the columns and the rows a where a if and a if if and we let p to be the polyomino induced by the columns and the rows in any case p is isomorphic to that one displayed on the left of figure since ip is again a gorenstein ideal we conclude as in the first case that ip does not a have linear resolution a contradiction as mentioned in the introduction polyomino ideals overlap with ideals of planar lattices in the next result we show that the ideal of any lattice has linear resolution if and only if it is a polyomino as described in theorem with methods different from those which are used in this paper the classification of ideals with linear resolution was first given in corollary let l be a finite distributive lattice pp a element of l is an element l which is not a unique minimal element and which possesses the property that for all l let p be the set of elements of we regard p as a poset partially ordered set which inherits its ordering from that of a subset j of p is called an order ideal of p if a j b p together with b a imply b j in particular the empty set of p is an order ideal of p let j p denote the set of order ideals of p ordered by inclusion it then follows that j p is a distributive lattice moreover birkhoff s fundamental structure theorem of finite distributive lattices proposition guarantees that l coincides with j p let l j p be a finite distributive lattice and k l k l the polynomial ring in variables over the ideal il of l is the ideal of k l which is generated by those binomials where l are incomparable in it is known that il is a prime ideal and the quotient ring k l is normal and moreover k l is gorenstein if and only if p is pure a finite poset is pure if every maximal chain totally ordered subset of p has the same cardinality now let p be a finite poset where i j if and l j p a linear extension of p is a permutation id of n n such that j j if a descent of id is an index j with ij let d denote the set of descents of the of l is the sequence h l where hi is the number of permutations of n with i thus in particular it follows from that the hilbert series of k l is of the form we say that a finite distributive lattice l j p is simple if l has no elements and with such that each element l satisfies either or in other words l is simple if and only if p possesses no element for which every p satisfies either or theorem let l j p be a simple finite distributive lattice then the ideal il has a linear resolution if and only if l is of 
the form shown in figure figure proof since il is generated in degree it follows that il has a linear resolution if and only if the regularity of k l is equal to we may assume that k is infinite since k l is we may divide by a regular sequence of linear forms to obtain a a with reg a reg k l whose coincides with that of reg k l since reg a max i ai see for example exercise it follows that il has a linear resolution if and only if the of l is of the form h l q where q is an integer clearly if p is a finite poset of figure then for each linear extension of p thus il has a linear resolution conversely suppose that il has a linear resolution in other words one has for each linear extension of p then p has no clutter a clutter of p is a subset a of p with the property that no two elements belonging to a are comparable in p since l j p is simple it follows that p contains a clutter hence dilworth s theorem says that p c c where c and c are chains of p with c c let and let c and c be minimal elements of p let c and c be maximal elements of p since l j p is simple it follows that and thus there is a linear extension of p with thus il can not have a linear resolution hence either or as desired a gorenstein ideal can never have a linear resolution unless it is a principal ideal however if the resolution is as much linear as possible then it is called extremal gorenstein since polyomino ideals are generated in degree we restrict ourselves in the following definition of extremal gorenstein ideals to graded ideals generated in degree let s be a polynomial ring over field and i s a graded ideal which is not principal and is generated in degree following we say that i is an extremal gorenstein ideal if is gorenstein and if the shifts of the graded minimal free resolution are p p p where p is the projective dimension of i with similar arguments as in the proof of theorem we see that i is an extremal gorenstein ideal if and only if i is a gorenstein ideal and reg and that this is the case if and only if is and the of is of the form h l q where q is an integer in the following theorem we classify all convex stack polyominoes p for which ip is extremal gorenstein convex stack polyominoes have been considered in in that paper qureshi characterizes those convex stack polyominoes p for which ip is gorenstein let p be a polyomino we may assume that m n is the smallest interval containing v p then p is called a stack polyomino if it is column convex and for i m the cells i i belong to figure displays stack polyominoes the right polyomino is convex the left is not the number of cells of the bottom row is called the width of p and the number of cells in a maximal column is called the height of figure stack polyominoes let p be a convex stack polyomino removing the first k bottom rows of cells of p we obtain again a convex stack polyomino which we denote by pk we also set let h be the height of the polymino and let kr h be the numbers with the property that width pki width furthermore we set for example for the convex stack polyomino in figure we have and with the terminology and notation introduced the characterization of gorenstein convex stack polyominoes is given in the following theorem theorem qureshi let p be a convex stack polyomino of height then the following conditions are equivalent a ip is a gorenstein ideal b width pki height pki for i according to this theorem the convex stack polyomino displayed in figure is not gorenstein because width and height an example of a gorenstein stack polyomino is shown 
in figure figure a gorenstein stack polyomino combining theorem with the results of section we obtain theorem let ip be convex stack polyomino then ip is extremal gorenstein if and only if p is isomorphic to one of the polyominoes in figure figure extremal convex stack polyominoes proof it can be easily checked that ip is extremal gorenstein if p is isomorphic to one of the two polyominoes shown in figure conversely assume that ip is extremal gorenstein without loss of generality we may assume that m n is the smallest interval containing v p then theorem implies that m suppose first that v p n n then by theorem of ene rauf and qureshi it follows that the regularity of ip is equal to since ip is extremal gorenstein its regularity is equal to thus n next assume that v p is properly contained in n n since ip is linearly related corollary together with theorem imply that the top row of p consists of only one cell and that n n v p let p be the polyomino induced by the rows n and the columns n then p is the polyomino with v p n n by applying again theorem it follows that reg n corollary then implies that reg ip reg ip n and since reg ip we deduce that n if n then is the ideal of of a which has betti numbers and since p is an induced polyomino of p and since ip is extremal gorenstein corollary yields a contradiction up to isomorphism there exist for n precisely the gorenstein polyominoes displayed in figure they are all not extremal gorenstein as can be easily checked with cocoa or singular for n any gorenstein polyomino is isomorphic to one of the two polyominoes shown in figure this yields the desired conclusion figure gorenstein polyominoes of width the following theorem shows that besides of the two polyominoes listed in theorem whose polyomino ideal is extremal gorenstein there exist precisely two more ideals having this property theorem let l j p be a simple finite distributive lattice then the joinmeet ideal il is an extremal gorenstein ideal if and only if l is one of the following displayed in figure figure proof suppose that l j p is simple and that k l is gorenstein it then follows that p is pure and there is no element p for which every p satisfies either or since h l q no clutter is contained in p suppose that a clutter a is contained in p if none of the elements belonging to a is a minimal element of p then since l j p is simple there exist at least two minimal elements hence there exists a linear extension of p with a contradiction thus at least one of the elements belonging to a is a minimal element of p similarly at least one of the elements belonging to a is a maximal element let an element x a which is both minimal and maximal then since p is pure one has p a let a with a p where is a minimal element and is a maximal element let be a maximal element with and a minimal element with then neither nor belongs to a if is either minimal or maximal then there exists a linear extension of p with a contradiction hence can be neither minimal nor maximal then since p is pure there exist with and with such that is a clutter hence there exists a linear extension of p with a contradiction consequently if p contains a clutter a then p must coincide with a moreover if p is a clutter then h l and il is an extremal gorenstein ideal now suppose that p contains no clutter a with let a chain c with be contained in p let be the minimal elements of p and the maximal elements of p with and since l j p is simple and since p is pure it follows that there exist maximal chains and such that for i then one 
has a linear extension of p with d a contradiction hence the cardinality of all maximal chains of p is at most however if the cardinality of all maximal chains of p is equal to then h l thus il can not be an extremal gorenstein ideal if the cardinality of all maximal chains of p is equal to then p is the posets displayed in figure for each of them the ideal il is an extremal gorenstein ideal h l h l h l figure references garsia and stanley an introduction to partially ordered sets in ordered sets i rival ed springer netherlands pp bruns herzog semigroup rings and simplicial complexes j pure appl algebra cocoateam cocoa a system for doing computations in commutative algebra available at http decker greuel pfister singular a computer algebra system for polynomial computations http dilworth a decomposition theorem for partially ordered sets annals of math eisenbud commutative algebra with a view toward algebraic geometry graduate texts in mathematics springer ene a qureshi rauf regularity of ideals of distributive lattices electron combin hashimoto determinantal ideals without minimal free resolutions nagoya math j herzog hibi monomial ideals graduate texts in mathematics springer herzog srinivasan a note on the subadditivity problem for maximal shifts in free resolutions to appear in msri arxiv hibi algebraic combinatorics on convex polytopes carslaw publications glebe australia hibi distributive lattices affine semigroup rings and algebras with straightening laws in commutative algebra and combinatorics nagata and matsumura eds adv stud pure math amsterdam pp kurano the first syzygies of determinantal ideals algebra lascoux syzygies des determinantales adv in math ohsugi herzog hibi combinatorial pure subrings osaka j math bf ohsugi hibi koszul bipartite graphs adv in appl math qureshi ideals generated by collections of cells and stack polyominoes algebra schenzel uber die freien extremaler ringe algebra sharpe on certain polynomial ideals defined by matrices quart j math oxford sharpe the syzygies and of certain ideals defined by matrices proc london math soc viviana ene faculty of mathematics and computer science ovidius university bd mamaia constanta romania and simion stoilow institute of mathematics of the romanian academy research group of the project bucharest romania address vivian herzog fachbereich mathematik campus essen essen germany address takayuki hibi department of pure and applied mathematics graduate school of information science and technology osaka university toyonaka osaka japan address hibi
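The combinatorial definitions recalled in the excerpt above, cells identified with their left lower corners, connectedness through shared edges, and row and column convexity, are easy to test mechanically. The following is a minimal sketch for that purpose, not code from the paper: the encoding of a collection of cells by left lower corners follows the text, while the function names and the small examples are our own.

```python
# Editor's sketch: test whether a finite collection of cells, given by the left lower
# corners (i, j) of its cells, is a polyomino and whether it is row/column convex,
# following the definitions recalled in the excerpt above.

from collections import deque

def is_polyomino(cells: set) -> bool:
    """True if the cells are connected through edge-adjacent cells."""
    if not cells:
        return False
    start = next(iter(cells))
    seen, queue = {start}, deque([start])
    while queue:
        i, j = queue.popleft()
        for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if nb in cells and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return seen == cells

def is_row_convex(cells: set) -> bool:
    """For cells (i, j) and (k, j) with i < k, every (l, j) with i < l < k must be a cell."""
    rows = {}
    for i, j in cells:
        rows.setdefault(j, []).append(i)
    return all(max(v) - min(v) + 1 == len(set(v)) for v in rows.values())

def is_column_convex(cells: set) -> bool:
    """Same test as row convexity, applied column by column."""
    cols = {}
    for i, j in cells:
        cols.setdefault(i, []).append(j)
    return all(max(v) - min(v) + 1 == len(set(v)) for v in cols.values())

def is_convex_polyomino(cells: set) -> bool:
    return is_polyomino(cells) and is_row_convex(cells) and is_column_convex(cells)

if __name__ == "__main__":
    L_shape = {(1, 1), (2, 1), (3, 1), (1, 2)}   # an L-shaped polyomino
    print(is_convex_polyomino(L_shape))           # True
    gapped = {(1, 1), (3, 1), (2, 2)}             # a gap in the bottom row
    print(is_polyomino(gapped), is_row_convex(gapped))  # False False
```

The L-shaped example is convex in the sense used in the paper, row and column convex, even though it is not convex as a region of the plane, which matches the remark made in the text.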
0
a class of msr codes for clustered distributed storage sohn beongjun choi and jaekyun moon jan kaist school of electrical engineering email jmoon distributed storage models real data centers where and repair bandwidths are different in this paper msr codes achieving capacity of clustered distributed storage are designed focus is given on two cases and where is the ratio of the available and repair bandwidths n is the total number of distributed nodes and k is the number of contact nodes in data retrieval the former represents the scenario where communication is not allowed while the latter corresponds to the case of minimum bandwidth that is possible under the minimum storage overhead constraint for the case two types of locally repairable codes are proven to achieve the msr point as for n k an explicit msr coding scheme is suggested for the situation under the specific of condition of n i ntroduction distributed storage systems dsss have been deployed by various enterprises to reliably store massive amounts of data under the frequent storage node failure events a failed node is regenerated repaired by collecting information from other survived nodes with the regeneration process guided by a predefined network coding scheme under this setting dimakis et al obtained the expression for the maximum reliably storable file size denoted as capacity c as a function of given system parameters the node capacity and the bandwidth required for repairing a failed node the capacity analysis in underscores the following key messages first there exists a network coding scheme which utilizes the resources and enables a reliable storage of a file of size c second it is not feasible to find a network coding scheme which can reliably store a file larger than c given the available resources of in subsequent research efforts the authors of proposed explicit network coding schemes which achieve the capacity of dsss these coding schemes are optimal in the sense of efficiently utilizing resources for maintaining the reliable storage systems focus on the clustered nature of distributed storage has been a recent research direction taken by several researchers according to these recent papers storage nodes dispersed into multiple racks in real data centers are seen as forming clusters in particular the authors of the present paper proposed a system model for clustered dsss in that reflects the difference between and bandwidths in the system model of the file to be stored is coded and distributed into n storage nodes which are evenly dispersed into l clusters each node has storage capacity of and the data collector contacts arbitrary k out of n existing nodes to retrieve the file since nodes are dispersed into multiple clusters the regeneration process involves utilization of both and repair bandwidths denoted by and respectively in this proposed system model the authors of obtained the expression for the maximum reliably storable file size or capacity c of the clustered dss furthermore it has been shown that network coding exists that can achieve the capacity of clustered dsss however explicit constructions of network coding schemes for clustered dsss have yet to be found this paper proposes a network coding scheme which achieves capacity of the clustered dss with a minimum required node storage overhead in other words the suggested code is shown to be a msr code of the clustered dss this paper focuses on two important cases of and n k where represents the ratio of to repair bandwidths the former represents the 
system where crosscluster communication is not possible the latter corresponds to the minimum value that can achieve the minimum storage overhead of where m is the file size when it is shown that appropriate application of locally repairable codes suggested in achieves the msr point for general n k l settings with the application rule depending on the parameter setting for the case an explicit coding scheme is suggested which is proven to be an msr code under the conditions of l and n there have been some previous works on code construction for dss with clustered storage nodes but to a limited extent the works of suggested a coding scheme which can reduce the repair bandwidth but these schemes are not proven to be an msr code that achieves capacity of clustered dsss with minimum storage overhead the authors of provided an explicit coding scheme which reduces the repair bandwidth of a clustered dss under the condition that each failed node can be exactly regenerated by contacting any one of other clusters however the approach of is different from that of the present paper in the sense that it does not consider the scenario with unequal and repair bandwidths moreover the coding scheme proposed in is shown to be a regenerating mbr code for some limited parameter setting while the present paper deals with an msr code an msr code for clustered dsss has been suggested in but this paper has the data retrieval condition different from the present paper the authors of considered the scenario where data can be collected by contacting arbitrary k out of n clusters while data can be retrieved by contacting arbitrary k out of n nodes in the present paper thus the two models have the identical condition only when each cluster has one node the difference in data retrieval conditions results in different capacity values and different msr points in short the code in and the code in this paper achieves different msr points a data collector dc retrieves the original file m by contacting arbitrary k out of n nodes this property is called the mds property the clustered distributed storage system with parameters n k l is called an n k l dss in an n k l dss with given parameters of capacity c is defined in as the maximum data that can be reliably stored the expression for c is obtained in theorem of aiming at reliably storing file m the set of pair values is said to be feasible if c m holds according to corollaries and of the set of feasible points shows the optimal relationship between and as illustrated in fig in the optimal curve the point with minimum node capacity is called the msr point explicit regenerating codes that achieve the msr point are called the msr codes according to theorem of node capacity of the msr point satisfies if if mbr point fig the optimal relationship between and in the clustered distributed storage modeled in a given file of m symbols is encoded and distributed into n nodes each of which has node capacity the storage nodes are evenly distributed into l clusters so that each cluster contains ni nodes a failed node is regenerated by obtaining information from other survived nodes ni nodes in the same cluster help by sending each while nodes in other clusters help by sending each thus repairing each node requires the overall repair bandwidth of node capacity repair bandwidth msr point ii backgrounds and n otations ni n ni cluster cluster cluster fig representation of clustered distributed storage n l ni divides b similarly write a b if a does not divide b for given k and ni we define k 
c ni m mod k ni k qni q b for vectors we use lower case letters for a given vector a the transpose of a is denoted as at for natural numbers m and n m the set ym yn is represented as yi for a matrix g the entry of g at the ith row and j th column is denoted as gi j we also express the nodes in a clustered dss using a representation in the structure illustrated in fig n l j represents the node at the lth row and the j th column finally we recall definitions on the locally repairable codes lrcs in as defined in an n k r represents a code of length n which is encoded from k information symbols every coded symbol of the n k r can be regenerated by accessing at most r other symbols as defined in an n r d m takes a file of size m and encodes it into n coded symbols where each symbol is composed of bits moreover any coded symbol can be regenerated by contacting at most r other symbols and the code has the minimum distance of note that is the minimum storage overhead to satisfy the mds property as stated in thus n k is the scenario with minimum communication when the minimum storage overhead constraint is imposed here we introduce some useful notations used in the paper for a positive integer n n represents the set n for natural numbers a and b we use the notation a b if a iii msr c ode d esign for in this section msr codes for is designed under this setting no communication is allowed in the node repair process first the system parameters for the msr point are examined second two types of locally repairable codes lrcs suggested in are proven to achieve the msr point under the settings of ni k and ni k respectively parameter setting for the msr point we consider the msr point which can reliably store file the following property specifies the system parameters for the case proposition consider an n k l clustered dss to reliably store file the msr point for is m m ni where q is defined in this point satisfies mds mds proof see appendix a mds precoding b code construction for ni k we now examine how to construct an msr code for the ni k case the following theorem shows that a locally repairable code constructed in with locality r ni is a valid msr code for ni theorem msr code construction for ni k let c be the n r d m explicitly constructed in for locality r ni consider allocating coded symbols of c in a n k l dss where r ni nodes within the same repair group of c are located in the same cluster then the code c is an msr code for the n k l clustered dss under the conditions of and ni proof see appendix a fig illustrates an example of the msr code for the and ni k case which is constructed using the lrc in in the n k l clustered dss scenario the parameters are set to cluster cluster b allocation of coded symbols into n nodes fig msr code for with ni k n k l the construction rule follows the instruction in while the concept of the repair group in can be interpreted as the cluster in the present paper authors of while the present paper proves that this code also achieves the msr point of the n k l clustered dss in the case of and ni ni m k q k c code construction for ni k thus each storage node contains symbols while the n k l clustered dss aims to reliably store a file of size m this code has two properties exact regeneration and data reconstruction any failed node can be exactly regenerated by contacting ni nodes in the same cluster contacting any k nodes can recover the original file j xi i j of size m the first property is obtained from the fact that yi yi and si yi yi form a mds code for i the 
second property is obtained as follows for contacting arbitrary k nodes three distinct coded symbols having superscript one and three distinct coded symbols having superscript two can be obtained for some and from fig the information suffice to recover similarly the information suffice to recover this completes the proof for the second property note that this coding scheme is already suggested by the here we construct an msr code when the given system parameters satisfy ni the theorem below shows that the optimal n k q ni designed in is a valid msr code when ni k holds theorem msr code construction for ni k let c be the constructed in for n k q and ni consider allocating the coded symbols of c in a n k l dss where r ni nodes within the same repair group of c are located in the same cluster then c is an msr code for the n k l dss under the conditions of and ni proof see appendix b fig illustrates an example of code construction for the ni k case without loss of generality we consider case parallel application of this code multiple times achieves the msr point for general n where n is the set rs code mds mds cluster cluster cluster mds cluster mds a encoding structure proposition the msr point for n k is m m n ni ni k k this point satisfies n k and m k n k proof see appendix b allocation of coded symbols into n nodes fig msr code for with ni k case n k l the encoding structure follows from the instruction in which constructed lrc this paper utilizes n k q ni lrc to construct msr code for n k l clustered dss in the case of with ni of positivie integers in the n k l clustered dss with the code and system parameters are n k q ni m k q k c from proposition the code in fig satisfies the exact regeneration and data reconstruction properties any failed node can be exactly regenerated by contacting ni nodes in the same cluster contacting any k nodes can recover the original file xi i of size m note that yi in fig is a set of coded symbols generated by a code and this statement also holds for yi this proves the first property the second property is directly from the result of which states that the minimum distance of the lrc is d note that the lrc is already suggested by the authors of while the present paper proves that applying this code with n k q ni achieves the msr point of the n k l dss in the case of and ni iv msr c ode d esign for we propose an msr code for in clustered dsss from and recall that is the minimum value which allows the minimum storage of first we obtain the system parameters for the msr point second we design a coding scheme which is shown to be an msr code under the conditions of n and l parameter setting for the msr point the following property specifies the system parameters for the n k case without a loss of generality we set the repair bandwidth as b code construction for n k l k here we construct an msr code under the constraints of n and l since we consider the n case the system parameters in proposition are set to n k k m k construction suppose that we are given m k source symbols mi j i j k moreover let the encoding matrix k k k gk gk gk j be a k k matrix where each encoding gi is a k k matrix for j k node n j stores mj and node n j stores pj where mi mi k t pi pi k t k x j mtj gi remark the code generated in construction satisfies the followings a every node in cluster contains k message symbols b every node in cluster contains k parity symbols note that this remark is consistent with which states under this construction we have the following theorem which specifies 
the msr construction rule for the n k l with n k theorem msr code construction for if all square of g are invertible the code designed by construction is an msr code for n k l k with n k proof see appendix the following result suggests an explicit construction of an msr code using the finite field corollary applying construction with encoding matrix g set to the k k cauchy matrix achieves the msr point for an n k l a finite field of size suffices to design proof the proof is directly from theorem and the fact that all of a cauchy matrix has full rank as stated in moreover the cauchy matrix of size n n can be cluster cluster n n n n cluster cluster parities are inaccessible messages are accessible fig repairing a failed node in proposed msr code example for n k l fig msr example for n k l constructed using a finite field of size according to an example of msr code designed by construction is illustrated in fig in the case of n k l this coding scheme utilizes a cauchy matrix using the finite field gf with the primitive polynomial x the element c in gf is denoted by the decimal number of abc where is the primitive element for example is denoted by in the generator matrix when n k l the system parameters are m from proposition which holds for the example in fig here we show that the proposed coding scheme satisfies two properties exact regeneration of any failed node and recovery of m message symbols by contacting any k nodes exact regeneration fig illustrates the regeneration process suppose that node n containing the message fails then node n transmits symbols and nodes n and n transmit symbol each for example and respectively then from the received symbols of and matrix g we obtain thus the contents of the failed node can be regenerated by where the matrix inversion is over gf note that the exact regeneration property holds irrespective of the contents transmitted by n and n since the encoding matrix is a cauchy matrix all submatrices of which are invertible data recovery first if dc contacts two systematic nodes the proof is trivial second contacting two parity nodes can recover the original message since g is invertible third suppose that dc contacts one systematic node and one parity node for example n and n then dc can retrieve message symbols and parity symbols using the retrieved symbols and the information on the encoding matrix g dc additionally obtains thus dc obtains which completes the data recovery property of the suggested code c onclusion a class of msr codes for clustered distributed storage modeled in has been constructed the proposed coding schemes can be applied in practical data centers with multiple racks where the available bandwidth is limited compared to the bandwidth two important cases of and n k are considered where represents the ratio of available to repair bandwidth under the constraint of zero repair bandwidth appropriate application of two locally repairable codes suggested in is shown to achieve the msr point of clustered distributed storage moreover an explicit msr coding scheme is suggested for n k when the system parameters satisfy n and l the proposed coding scheme can be implemented in a finite field by using a cauchy generator matrix a ppendix a p roof of t heorem we focus on code c the explicit n r d m constructed in section v of this code has the parameters r k where r is the repair locality and d is the minimum distance and other parameters n m have physical meanings identical to those in the present paper by setting r ni the code has node capacity 
of n r d n k m ni m m m ni k k where the last equality holds from the ni k condition and the definition of q in we first prove that any node failure can be exactly regenerated by using the system parameters in according to the description in section of any node is contained in a unique corresponding repair group of size r ni so that a failed node can be exactly repaired by contacting r ni other nodes in the same repair group this implies that a failed node does not need to contact other repair groups in the exact regeneration process by setting each repair group as a cluster note that each cluster contains ni nodes we can achieve moreover section of illustrates that the exact regeneration of a failed node is possible by contacting the entire symbols contained in r ni nodes in the same repair group and applying the xor operation this implies which result in m ni ni combined with and from and we can conclude that code c satisfies the exact regeneration of any failed node using the parameters in now we prove that contacting any k nodes suffices to recover original data in the clustered dss with code c applied note that the minimum distance is d n k from thus the information from k nodes suffices to pick the correct codeword this completes the proof of theorem a ppendix b p roof of t heorem we first prove that the code c has minimum distance of d which implies that the original file of size m can be recovered by contacting arbitrary k nodes second we prove that any failed node can be exactly regenerated under the setting of recall that the constructed in has the following property as stated in theorem of lemma theorem of the code constructed in has locality and optimal minimum distance d d e when note that we consider code c of optimal n k q ni since ni divides n lemma can be applied the result of lemma implies that the minimum distance of c is d n k q ni since we consider the ni k case we have k qni m m ni from inserting into we have ni q m d n k q ni n k q q n k cluster cluster fig code construction for ni k case where the second last equality holds since m ni from thus this proves that contacting arbitrary k nodes suffices to recover the original source file now all we need to prove is that any failed node can be exactly regenerated under the setting of system parameters specified in proposition according to the rule illustrated in the construction of code c can be shown as in fig first we have m k q source symbols xi to store reliably by applying a t k q code to the source symbols we obtain zi where t l ni then we partition zi symbols into l groups where each group contains ni symbols next each group of zi symbols is encoded by an ni ni code which result in a group of ni symbols of yi finally we store symbol yni in node n l j by this allocation rule yi symbols in the same group are located in the same cluster assume that n l j the j th node at lth cluster containing yni symbol fails for l l and j ni from fig i we know that ni symbols of yni stored in lth cluster can decode the ni ni code for group thus the contents of yni can be recovered by retrieving symbols from nodes in the the lth cluster the same cluster where the failed node is in this proves the ability of exactly regenerating an arbitrary failed node the regeneration process satisfies moreover note that the code in fig has m k q source symbols since parameters obtained in and are consistent with proposition we can confirm that code c is a valid msr point under the conditions and ni a ppendix c p roof of t heorem recall that the 
code designed by construction allocates systematic nodes at cluster and parity nodes at cluster as illustrated in fig moreover recall that the system parameters for n k l k with n k are k cluster cluster fig code construction for n k l k dss when n k from proposition and the definition of first we show that exact regeneration of systematic nodes in the first cluster is possible using k in the n k l k dss with construction we use the concept of the projection vector to illustrate the repair process for l l l let vi j be the lth projection vector assigned for n j in repairing n i similarly let vi j be the projection vector assigned for n j in repairing n i assume that the node n i containing mi mi k t fails l then node n j transmits k symbols mtj vi j t while node n j transmits symbol pj vi j for l simplicity we set vi j el and vi j ek where ei is the kdimensional standard basis vector with a in the ith coordinate and s elsewhere this means that node n j transmits k symbols mj mj k t it contains while n j transmits the last symbol it contains the symbol pj k thus the newcomer node for regenerating systematic node n i obtains the following information mi mj s j k i s k pj k we now show how the newcomer node regenerates mi mi k t using information mi recall that the parity symbols and message symbols are related as in the following k equations g pk mk obtained from and among these k parity symbols k parity symbols received by the newcomer node can be expressed as k gk k pk k subtracting the constant known values from results in gk gk gk ik ik mi k yk gk gk gk ik where mk k where the matrix in is generated by removing k k rows from since we are aware of k message symbols of mj s j k i s k and the entries of g matrix yl pl k k k x x glk mj s for l k note that the matrix in can be obtained by removing k k columns from the matrix in since every square of g is invertible we can obtain mi mi k t which completes the proof for exactly regenerating the failed systematic node second we prove that exact regeneration of the parity l nodes in the second cluster is possible let j be the lth projection vector assigned for n j in repairing n i similarly let j be the projection vector assigned for n j in repairing n i assume that the parity node n i fails which contains pi pi k t then node l n j transmits k symbols ptj j while node n j transmits symbol mtj j for simplicity we l set j el and j ek this means that node n j transmits k symbols pj pj k t it contains while n j transmits the last symbol it contains the symbol mj k thus the newcomer node for regenerating parity node n i obtains the following information pi pj s j k i s k mj k we show how the newcomer node regenerates pi pi k t using the information pi among k parity symbols in k k parity symbols received by the newcomer node can be expressed as k k k g m g g mk k pk gk gk j where gi is defined in construction note that is a k k k matrix which is generated by removing lth rows from g for l i k i k ik since we know the values of k message symbols mj k and the entries of g matrix subtracting constant known values from results in where is generated by removing lth columns from for l k k similarly is generated by removing lth rows from m for l k k thus is an invertible k k k k matrix so that we can obtain which contains mj s j k s k since pi contains every message symbol mj s j s k we can regenerate pi pi k t using this completes the proof for exactly regenerating the failed parity node finally we prove that m k message symbols can be obtained by 
contacting arbitrary k nodes in this proof we use a slightly modified notation for representing message and parity symbols for j s k the message symbol mj s and the parity symbol pj s are denoted as m and p respectively then is expressed as g suppose that the data collector dc contacts e nodes from the cluster and k e nodes from the cluster for e k then dc obtains k k e parity symbols and ke message symbols since there exists total of m k message symbols the number of message symbols that dc can not obtain is m ke k k e let the parity symbols obtained by dc be pik and the message symbols not obtained by dc be mjk then the known parities can be expressed as pik if m ni where is a k matrix generated by taking the k lth columns from for l jt since is invertible k we obtain the unknown message symbols mji this completes the proof m q ni k k q where q and m are defined in and when m ni can be expressed as k where the last equality is from thus from and we have ni holds for the m ni case similarly using and we can confirm that holds for the m ni case inserting into we obtain m m ni since ni for from we obtain k q which completes the proof b proof of proposition we consider the case without losing generality this implies that n k according to the definition now we observe the expressions for and from corollary of the msr point for n k is illustrated as m m k k where from corollary of the msr point for is given else m ni a proof of proposition m a ppendix d p roof of p ropositions k q mk k where is a k k e k matrix obtained by taking lth k rows from g for l it since we know ke message symbols and the elements of g subtracting the known constant values from results in mjk by ni q ni qni ni qni m where are defined in this paper does not review the explicit form of the definitions but shows how looks like from the proof of lemma of we have n k ni n ni ni from the definition of si in and the setting of k combining and result in note that in can be expressed as n ni ni n ni ni n k where the last equality holds due to combining and we obtain m k which result in m k n k using in we have n this completes the proof r eferences dimakis godfrey wu wainwright and ramchandran network coding for distributed storage systems ieee transactions on information theory vol no pp rashmi shah kumar and ramchandran explicit construction of optimal exact regenerating codes for distributed storage in communication control and computing allerton annual allerton conference on ieee pp cadambe jafar maleki ramchandran and suh asymptotic interference alignment for optimal repair of mds codes in distributed storage ieee transactions on information theory vol no pp ernvall codes between mbr and msr points with exact repair property ieee transactions on information theory vol no pp sohn choi yoon and j moon capacity of clustered distributed storage in ieee international conference on communications icc may sohn choi yoon and j moon capacity of clustered distributed storage corr vol online available http prakash abdrashitov and the storage vs repairbandwidth for clustered storage systems arxiv preprint hu li zhang lee zhang zhou and feng optimal repair layering for data centers from theory to practice arxiv preprint papailiopoulos and dimakis locally repairable codes ieee transactions on information theory vol no pp tamo papailiopoulos and dimakis optimal locally repairable codes and connections to matroid theory ieee transactions on information theory vol no pp tebbi chan and sung a code design framework for distributed storage in 
information theory workshop itw ieee ieee pp sahraei and gastpar increasing availability in distributed storage systems via clustering arxiv preprint bernstein matrix mathematics theory facts and formulas second edition princeton university press online available http shah rashmi kumar and ramchandran explicit codes minimizing repair bandwidth for distributed storage in information theory itw cairo ieee information theory workshop on ieee pp suh and ramchandran mds code construction using interference alignment ieee transactions on information theory vol no pp
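Illustrative sketch for the Cauchy-matrix construction and corollary above (the two-cluster case with, as far as it can be read from the text, n = 2k systematic-plus-parity nodes). The whole argument for exact regeneration and data reconstruction rests on one property: every square submatrix of the Cauchy encoding matrix G is invertible, and a field of size 4k suffices. The Python sketch below is not the paper's implementation; it assumes the toy size k = 2, GF(2^3) with primitive polynomial x^3 + x + 1 (the flattened example only says GF(2^3), so the exact polynomial is an assumption), an arbitrary choice of the 4k = 8 distinct field elements, and function names of my own. It builds the 4 x 4 Cauchy matrix and verifies the full-rank property of all its square submatrices by brute force.

from itertools import combinations

PRIM = 0b1011  # x^3 + x + 1; assumed primitive polynomial for GF(2^3)

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^3) with reduction by PRIM."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:          # degree reached 3 -> reduce modulo PRIM
            a ^= PRIM
    return r

def gf_inv(a):
    """Brute-force inverse in the 8-element field (fine at this size)."""
    return next(x for x in range(1, 8) if gf_mul(a, x) == 1)

def cauchy(xs, ys):
    """Cauchy matrix C[i][j] = 1 / (x_i + y_j); '+' is XOR in GF(2^m)."""
    return [[gf_inv(x ^ y) for y in ys] for x in xs]

def is_invertible(mat):
    """Gauss-Jordan elimination over GF(2^3); True iff full rank."""
    m = [row[:] for row in mat]
    n = len(m)
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c]), None)
        if piv is None:
            return False
        m[c], m[piv] = m[piv], m[c]
        inv = gf_inv(m[c][c])
        m[c] = [gf_mul(inv, v) for v in m[c]]
        for r in range(n):
            if r != c and m[r][c]:
                f = m[r][c]
                m[r] = [m[r][j] ^ gf_mul(f, m[c][j]) for j in range(n)]
    return True

k = 2                                   # toy size: G is a 2k x 2k = 4 x 4 matrix
xs, ys = [0, 1, 2, 3], [4, 5, 6, 7]     # 4k = 8 distinct field elements suffice
G = cauchy(xs, ys)

for size in range(1, 2 * k + 1):
    for rows in combinations(range(2 * k), size):
        for cols in combinations(range(2 * k), size):
            sub = [[G[r][c] for c in cols] for r in rows]
            assert is_invertible(sub), "a square submatrix is singular"
print("all square submatrices of the 4x4 Cauchy matrix over GF(8) are invertible")

This is the same invertibility that the repair argument above uses: the newcomer node subtracts the known symbols and inverts the k x k (here 2 x 2) submatrix formed from the received symbols, and any k contacted nodes give an invertible system for data recovery.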
7
nov designing and pattern matching algorithms gilles didier and laurent tichit cnrs centrale marseille marseille france march abstract given a pattern w and a text t the speed of a pattern matching algorithm over t with regard to w is the ratio of the length of t to the number of text accesses performed to search w into we first propose a general method for computing the limit of the expected speed of pattern matching algorithms with regard to w over iid texts next we show how to determine the greatest speed which can be achieved among a large class of algorithms altogether with an algorithm running this speed since the complexity of this determination makes it impossible to deal with patterns of length greater than we propose a polynomial heuristic finally our approaches are compared with pattern matching algorithms from both a theoretical and a practical point of view both in terms of limit expected speed on iid texts and in terms of observed average speed on real data in all cases the algorithms are outperformed introduction we focus on algorithms solving the online string matching problem which consists in reporting all and only the occurrence positions of a pattern w in a text t online meaning that no of the text is allowed as one of the oldest problems addressed in computer science it has been extensively studied we refer to for a comprehensive list and an evaluation of all the pattern matching algorithms developed so far by the authors count more than algorithms have already been proposed among which more than a half were published during the last ten years this fact sounds quite paradoxical since the algorithm which is optimal in terms of worst case analysis dates back to a possible explanation is that there is wide gap between the worst case complexity of algorithms and their computation times on real data for instance there are pattern matching algorithms with worst case complexities which perform much better than on english texts basically the average case analysis is way more suited to assess the relevance of a pattern matching algorithm from a practical point of view the average case analysis of some pattern matching algorithms notably and has already been carried out from various points of view we provide here a general method for studying the limit average behavior of a pattern algorithm over iid texts more precisely following we consider the limit expectation of the ratio of the text length to the number of text accesses performed by an algorithm for searching a pattern w in iid texts this limit expectation is called the asymptotic speed of the algorithm with regard to w under the iid model the computation of the asymptotic speed is based on machines which are structures able to simulate the behavior of a pattern matching algorithm while searching the pattern the underlying idea is the same as in and can be seen as a generalization of the string matching automaton in the companion paper didier provided a theoretical analysis of the asymptotic speed of pattern matching algorithms over iid texts in particular he showed that for a given pattern w the greatest asymptotic speed among a large class of pattern matching algorithms is achieved by a machine in which the states are essentially subsets of positions of such machines are called strategies below we provide here a brute force algorithm computing the fastest strategy for a given pattern w and the frequencies of an iid model the algorithm is based on an original structure associated to the pattern w and called its position 
lattice which gives a full representation of the overlap relations between the subsets of positions of since the brute force algorithm can not be applied on patterns of length greater than because of its very high we propose a polynomial in which the polynomial order k may be chosen by the user the fastest and approaches are finally compared with several pattern matching algorithms from a theoretical point of view by computing their limit expected speeds with regard to various patterns and iid models from a practical point of view by computing their average speeds over two sources an english text and a dna sequence in both cases the fastest and with k large enough approaches outperform the algorithms the software and the data used to perform the tests are available at https the rest of the paper is organized as follows section presents the notations and recalls some concepts and results from it is followed by two sections which introduce the central objects of this work the strategies and the position lattice of a pattern in particular we provide an algorithm computing the position lattice of a given pattern section shows how to use the position lattice of a pattern to obtain the fastest strategy with regard to this pattern and an iid model in section we provide a polynomial heuristic allowing to compute fast strategies section presents the results of various comparisons between pattern matching algorithms the and each time it is possible the fastest strategy the results are discussed in the last section notations and definitions notations and general definition for all finite sets s p s is the power set of s and is its cardinal an alphabet is a finite set a of elements called letters or symbols a word a text or a pattern on a is a finite sequence of symbols of a we put for the length of a word words are indexed from v we write v i j for the subword of v starting at its position i and ending at its position j v i j vi vj the concatenate of two words u and v is the word uv for any length n we note an the s set of words of length n on a and a the set of finite words on a a an unless otherwise specified all the texts and patterns considered below are on a fixed alphabet a a pattern matching algorithm takes a pattern w and a text t as inputs an reports all and only the occurrence positions of w in for all patterns w we say that two pattern matching algorithms are if for all texts t they access exactly the same positions of t on the input w t matching machines and the generic algorithm for all patterns w a machine is q o f where q is a finite set of states o q is the initial state f q is the subset of states q n is the function which is such that for all q f q q a q is the transition state function q a n is the shift function by convention the set of states of a matching machine always contains a sink state which is such that for all symbols x a x and x the order of a matching machine q o f is defined as q the machines carry the same information as the deterministic arithmetic automatons defined in the generic algorithm takes a machine and a text t as inputs and outputs positions of t algorithm input a machine q o f and a text t output all the occurrence positions of w in t q p o while p do if q f and q q then print occurrence at position p q p q q p q q algorithm the generic algorithm each component of a machine makes sense in regard to the way it is used by the generic algorithm the states in f are those which lead to report an occurrence of the pattern at the current position if the of the 
pattern matches the corresponding position in the text line of algorithm the condition q for all q f in the definition of machines is technical and used in a machine is valid if for all texts t the execution of the generic algorithm on the input t outputs all and only the occurrence positions of w in since one has to check all the positions of the pattern w before concluding that it occurs somewhere in a text the order of a valid machine is at least we claim that for all the pattern matching algorithms developed so far and all patterns w there exists a machine which is such that for all texts t the generic algorithm and the pattern matching algorithm access exactly the same positions of t on the inputs t and w t respectively for instance figure displays a machine which accesses the same positions as the naive algorithm while searching abb expansion standard matching machines we present here a transformation on matching machines which split their states according to the text positions read from the current position during an execution of the generic algorithm the main point of this transformation is that the average complexity of matching machines such obtained may then be computed through algebraic methods sections and for all n n rn is the set of subsets h of n a verifying that for all i n there exists at most one pair in h with i as first entry for all h rn we put f h for the set comprising all the first entries of figure machine of the naive algorithm the check are displayed below all states and edges from states si are labelled with si x for all symbols x the transition associated to a match is the pairs in h namely f h i a with i x h for all k n and h rn the of h is k h u k y u y h with u k the subset of rn obtained by subtracting k from the first entries of the pairs in h and by keeping only the pairs with first entries the full memory expansion of a machine q o f is the machine obtained by removing the unreachable states of f defined as q o q h q q h x q x f f q h x q x q x h q x if a q a h if x q a h q x q x h if q x h b b b a a a b b a figure full memory expansion of the machine of figure by construction at all iterations of the generic algorithm on the input t if the current state and position are q h and p respectively then the positions of j p j f h are exactly the positions of t greater than p accessed so far the second entries of the corresponding elements of h give the symbols read for all texts t the generic algorithm access the same positions of t on the inputs t and t let us remark that the full memory expansion of the full memory expansion of a matching machine is equal to its full memory expansion up to a state isomorphism a machine is standard if each state q of appears in a unique of its full memory expansion or equivalently if it is equal to its full memory expansion for instance the machine of figure is not standard since the matching machine of figure is a full memory expansion it is standard for all states q of a standard matching machine we put q for the second entry of the unique of in which q appears we implemented a basic algorithm computing the expansion q o f of a machine q o f in o time we have a but the size of q may vary a lot with regard to the matching considered a machine is compact if it contains no state q which always leads to the same state formally q o f is compact if there is no q q such that one of the following assertions holds there exists a symbol x with x y x and y for all symbols for all symbols x and y we have both x y and x y basically a 
machine performs useless text accesses in it is shown that any machine can be turned into a compact and faster machine iid and markov models an independent identically distributed iid model aka bernoulli model is fully specified by a probability distribution on the alphabet x is the probability of the symbol x in the model such a model will be simply referred to as below under the probability of a text t is y t ti a markov model m over a given set of states q is a where is a probability distribution on q the initial distribution and associates a pair of states q q with the probability for q to be followed by q the transition probability under a markov model m the probability of a sequence s of states is pm s y si theorem let q o f be a machine if a text t follows an iid model and is standard then the sequence of states parsed by the generic algorithm on the input t follows a markov model proof whatever the text model and the machine the sequence of states always starts with o with probability we have o and q for all q o the probability q q that the state q follows the state q during an execution of the generic algorithm is equal to if there exists a symbol x such that q x q and q x q if the relative position q was already checked with x occurring at it x x otherwise x q x q independently of the previous states asymptotic speed let m be a text model and a be an algorithm the speed of a under m is the limit expectation under m of the ratio of the text length to the number of text accesses performed by a namely by putting aa t for the number of text accesses performed by a to parse t and pm t for the probability of t with regard to m the asymptotic speed of a under m is asm a lim x pm t aa t the asymptotic speed asm of a machines is that the generic algorithm with as first input from theorem of the asymptotic speed of a standard machine q o f under an iid model exists and is given by x e q where are the limit frequencies of the states of the markov model associated to and in theorem and q x if q x q p e q q x otherwise x x computing the asymptotic speed of a pattern matching algorithm with regard to a pattern w and an iid model is performed by following the stages below we get a machine which simulates the behavior of the algorithm while looking for w figure the transformation of the algorithms presented in section and a few others see our github repository into machines given w has been implemented we obtain the expansion of figure section we compute the limit frequencies of the markov model associated to and in theorem this mainly needs to solve a system of linear equations of dimension we finally obtain the asymptotic speed of the algorithm from these limit frequencies and by using equation the most stage is the computation of the limit frequencies which has o time complexity where the number of states of the full memory expansion is smaller than a strategies for all sets i n and k n we define the of i as i k i k i and i k a s q o f is a machine such that q p and q o f s q figure two with the same conventions as in figure q is such that for all s q s s and s s q q a is such that for all states s and all symbols x s x min k s x if s k and wj for all j s k min k s x if s k and wj for all j s k q a q is such that for all s q and all symbols x s x s s s x figure shows two which differ notably in the of state proposition a is a standard compact valid and machine proof by construction a is standard compact and the validity of a follows from theorem of proposition there is a which achieves the 
greatest asymptotic speed among all the machines of order if s f otherwise proof the corollary of implies that there exists a machine which achieves the greatest asymptotic speed among those of order and which is standard compact valid in which all the states are relevant such that they may lead to a match without any positive shift such that there is no pair of states q q with q q and q q let us verify that a machine q o f of order satisfying the properties above is isomorphic to a since it verifies in particular the properties and its set of states q is in bijection with a subset of p let us identify all states q of q with f q its corresponding element of p since is standard compact and of order we do not have q moreover since is standard we have s x s i s x for all s q last by construction if s x min k s x if s k and wj for all j s k min k s x if s k and wj for all j s k if s f otherwise then is not valid and if s x min k s x if s k and wj for all j s k min k s x if s k and wj for all j s k if q f otherwise then s x is not relevant position lattices w w the position lattice of a pattern w is the l w q w w w where by putting s for s q w p the set made of all the subsets of positions of w but w is a map from s a to w is a map from s a to q w for all s q w for all s q w figure position lattice of the pattern abb vertices represent the states of l abb for all states s there is an outgoing edge for all pairs i x with i s and x a this outgoing edge is labeled with abb abb i i x is colored according to i and goes to i x where for all s q w all i s and all x a we have w i x min k s x if s k and wj for all j s k min k s x if s k and wj for all j s k and w i x s i w i x w in particular if x wi and then we have i x and w i x s i let us remark that since max s for all s q w we have for all w i s and all x a s i thus i x which is consistent w with the definition of w the edges of l w are the pairs s i x for all s q w all i s and all x a see figure if otherwise remark the position lattice of w contains states and edges remark let s be a state of q w i and j be two positions in s such that i j and x and y be two symbols of a we have w w i x j w i x y w i x w w i x w w j y j w i x y i w j y x w j y and w w j y i w j y x by considering the particular case where x wi we get w i j y w i j y w w j y w w j y i w j y wi w j y and i w j y wi let precw be the table indexed on a and in which for all positions i of w and all symbols x of a the entry precw i x is defined as max j i wj x if j i wj x precw i x null otherwise for instance the table precabb is a b null lemma let s be a state of q w i a position in s and x a symbol of a if x wi then w w if then i x and i x where b is the length of the longest proper suffix of w which is a prefix of w w w otherwise i x s i and i x if x wi a if s then precw i x w i x w i x i precw i x if precw i x null otherwise if precw i x null otherwise b if s then for all s we have w i x w w w i x w i x i x w w w w i x w i x w i x proof the only case which does not immediately follow from the definition of l w is when x wi and s which is given by remark the relation on q w is defined as follows for all sets s and in q w we have s if one of the following properties holds s and min s difference of s and s where s is the symmetric s the relation defines a total order on q w we write s for s and s lemma let s be a state of q w with i a position in s and x a symbol w of a if x wi then max s i x proof under the assumption that we have min s min s max s w by construction the fact that x wi 
implies that max s i x if we have w min s max s i max s i x w w then max s i x thus max s i x w otherwise we have max s i x but since necessarily w w min max s i x max s i x w we get again max s i x theorem algorithm computes the position lattice of the pattern w in o time by using the same amount of memory proof let us first show that algorithm determines the shifts and the transitions of the state s before those of the state if and only if s the loop at lines computes the shifts and the transitions of next the loop at lines computes the shifts and the transitions of the singletons from to the last loop lines determines the shifts and the transitions of the states corresponding to the subsets of increasing cardinals from to b length of the longest proper suffix of w which is also a prefix for x a do last x null for i to do last wi i for x a do if last x null then w w i x i last x i x last x else w w i x i i x for i to do for j to i do for x a do w w w w i j x j x w i j x wi w i j x j x w w w j x i j x wi for j i to do for x a do if x wj then if then w w i x b i x b else w w i j x i j x i j else w w w j i wi x i j x w w i j x wi w w wi w j i wi x for to do for j to do s j j repeat s s s s s for i s do for x a do if x wi then if then w w i x b i x b else w w i x i x s i else w w w w s i x ws i x i x w i x s w i x w w w i x s s i x ws j while j and s j j do j j if j then s j s j for k j to do s k s k until j algorithm computation of the position lattice value b is the last entry of the partial match table of the kmp algorithm its computation takes a time linear with inside the last loop the way in which the next subset is computed from the current subset s both of cardinal ensures that s lines for all iterations i of the loop at lines and all symbols x we have last x precw i x at the beginning of the inner loop line from lemma w w cases and the transitions i x and the shifts i x for all positions i of w and all symbols x are correctly computed at the end of the loop the loop at lines computes the shifts and the transitions from the singleton states for all pairs of positions i j and all symbols x determining w w i j x and i j x is performed by distinguishing between two cases w if i j then j x i and its shifts and transitions were already computed formula of remark gives us those of i lines if i j we distinguish between two subcases according to the symbol x considered if x wj then the shift and the transition state are given in w lemma case otherwise we remark that since i j x is positive w we have that i j x min k x if j k and wi w this implies that i j x w w w wi j x we have wi s w thus both the shifts and the transitions of the state i wi are computed before s lines the last loop lines computes the shifts and the transitions of the states corresponding to the subsets of cardinals to for all states s with all positions i s and all symbols x wi the corresponding w w shift and transition i x and i x are computed from the shifts and w transitions of the state max s i x following lemma cases in algo w rithm we put for s max s lemma ensures that max s i x s w thus that the shifts and transitions of max s i x are computed before those of for all states s with all positions i s the shift and w w transition i wi and i wi are given in lemma case the time complexity is o k k k loop lines o we do not use more memory than needed to store the lattice which is from remark o the fastest determining the fastest which from proposition has the greatest asymptotic speed among all the machines of order may 
be performed by computing the asymptotic speed of all the and by returning the fastest one in order to enumerate all the let us remark that they are all contained in the position lattice of w in the sense that the set of states of a is included in that of the position lattice w all the q o f verify s x s x and w s x s x for all s q and all symbols reciprocally to any map from q w to such that s s for all states s q w there corresponds the unique s q o f for which the function coincides with on q finally our brute force algorithm takes as input a pattern w and an iid model computes the position lattice of w enumerates all the maps such that s s for all states s q w for each gets the corresponding by keeping only the states of q w reachable from with the function computes the asymptotic speed of all the under returns the with the greatest speed the time complexity of the brute force algorithm is y o k k where the first factor stands for the number of functions and the second one for the computation of the asymptotic speed of a which needs to solve a linear system of size equal to the number of states which is o its memory space complexity is what is needed to store the position lattice of under its current implementation the brute force determination of the fastest is unfeasible for patterns of length greater than a polynomial heuristic there are two points which make the complexity of the brute force algorithm given in section that high the size of the position lattice which is exponential with the length of the pattern determining the fastest strategy in the position lattice which needs a time exponential with its size our heuristic is based on two independent stages each one aiming to overcome one of these two points both of them start from the general idea that since for any current position of the text the probability that no mismatch occurs until the nth text access decreases geometrically with n the first relative positions accessed by a strategy or more generally by a pattern algorithm are those which have the greatest influence on its asymptotic speed sublattices a sufficient condition for a sublattice u q w to contain a is that w for all s u there exists at least a position i s with i x u for all x a a sublattice u verifying this condition will be said to be complete figure displays four complete sublattices extracted from the position lattice of abb figure let us introduce some additional notations here for all sets s of positions the prefix of s is defined as p s max i s j s for all j i and its rest is r s s p s for all positive integers n the sublattice of w is the sublattice u of q w which contains all and only the subsets of q w with a rest containing less than n positions the subsets of the form p x with p and by construction the sublattice of w is complete it contains o states and o transitions we adapted algorithm to compute the sublattice of w in o time with the same amount of memory space expectation we are now interested in a fast way for finding an efficient in a given complete sublattice for all integers and all states s of a sublattice u the expectation of s is defined as the greatest shift expectation one could possibly get in steps in u by starting from s conditioned on starting from s while parsing a text following an iid model namely the expectation is computed following the recursive formula w s for all w es s max s x w x w i x es w i x w where tr s i s i x u for all x a the expectation of a complete sublattice u is well defined and can be computed in o t time 
where t is the number of transitions of the sublattice u and by using o memory space a b c d figure four complete sublattices extracted from the position lattice of abb sublattice a resp b leads to the strategy where the is always the smallest resp the greatest relative position unchecked sublattice c resp d leads to the strategy at the top resp at the bottom of figure we finally extract a from u by setting the of all states s u to w arg max w i x es w i x i s x the combines the two approaches above in order to compute a in a time polynomial with the length of the pattern being given an order k we start by computing the sublattice of w thus in o time in order to select a from the sublattice we next compute the k expectation of all its states and extract a as described just above this computation is performed in o time since the number of transition of the sublattice is o by using o memory space let us remark that the order of the expectation does not have a priori to be strongly related to the order k of the sublattice on which it is computed by experimenting various situations we observed that considering an order greater than k generally does not improve much the performances whereas the strategies obtained from with smaller than k may be significantly slower the returns a in o time by using o memory space we insist on the fact that the generally does not return the fastest strategy even if k however we will see in the next section that it performs quite well in practice evaluation we shall compare the approaches introduced in sections and with selected pattern matching algorithms the comparison is performed first from a theoretical point of view by computing their asymptotic speeds under iid models and second in practical situations by measuring their average speed over real data the average speed with regard to a pattern w of an algorithm or a matching machine on a text t is the ratio of to the number of text accesses performed by the algorithm to search w in we are also interested in to what extent taking into account the frequencies of the letters of an iid model or a text for determining the fastest and the kheuristic strategies actually improves their asymptotic or their average speeds to this purpose we compute the fastest and the strategies from the uniform iid model next we test their efficiency in terms of asymptotic speed under a iid model and in terms of average speeds on data with frequencies of letters pattern matching algorithms more than forty years of research have already led to the development of dozens algorithms we selected the ones below for our evaluation naive quicksearch tvsbs a algorithm in which shifts are given by a badcharacter rule taking into account the two letters at distances and from the current position of the text ebom a version of the backward oracle matching algorithm which also uses a bad rule hashq which implements the algorithm on blocks of length q by using efficient hashing techniques our tests are performed with q fjs which combines the ideas of and sunday algorithms algorithms to are classics the last four ones were chosen for being known to be efficient on short patterns and small alphabets a situation in which the determination of the fastest strategy is feasible let us remark that the order of the machine associated to tvsbs is equal to thus greater than that of the fastest strategy that we compute the transformation into matching machines was implemented for a few other pattern matching approaches for instance the sa algorithm the 
algorithm based on bitwise operations or the automaton since the asymptotic and average speeds of these two algorithms are exactly whatever the pattern the model and the text there is no point in displaying them results we shall evaluate the pattern matching algorithms presented in section the and and the fastest strategy each time it is possible ra tt st fa st e ris tic eu ris tic eu ris tic v eu as hq h o eb m s t sb fj s h or sp o ol rc h ris ui q ck se a t hm or ra t nu t k ris m or ai ve n aaaa aaab aaba aabb abaa abab abba abbb baaa baab baba babb bbaa bbab bbba bbbb table asymptotic speeds for the patterns of length on a b under the uniform model asymptotic speed the asymptotic speeds are computed for texts and patterns on the binary alphabet a b table displays the asymptotic speeds for all the patterns of length on iid texts drawn from the uniform distribution as expected the strategy computed with the brute force algorithm last column is actually the fastest but the speeds of the and are very close the algorithms are outperformed by all our approaches even by the for all the patterns we observe that the naive and algorithms have asymptotic speeds always smaller than one can not expect them to be faster since by construction they access all the positions of a text at least once in the following we will not display their speeds nor that of quicksearch for they are always smaller than at least one of the other preexisting algorithms the full tables can easily be by using our software table displays the asymptotic speeds with regard to the same patterns as table but under the iid model this table shows the asymptotic speeds of the and the fastest strategies computed with regard to an uniform iid model the columns starting with the strategies such obtained are not optimized according to the letter probabilities of the model they may be used as general purpose approaches while the strategies obtained from the model probabilities will be called adapted below overall eu ris tic eu ris tic fa st es t u eu r ist ic st ic ris t u ni ni fa st e ist ic eu ic eu r ris t eu ni u u ni eb o m h as hq t v sb s ol fj s po or s h aaaa aaab aaba aabb abaa abab abba abbb baaa baab baba babb bbaa bbab bbba bbbb table asymptotic speeds for the patterns of length on a b under the iid model our methods are faster than the algorithms with a few exceptions horspool is faster than the for two patterns ending with the rare letter a aaaa and baaa and ebom is faster than the for searching baba the and the fastest strategies computed with regard to an uniform iid model have asymptotic speeds smaller than their counterparts obtained from the actual probabilities of the text model here highly unbalanced nevertheless the uniform approaches still perform quite well notably better than the algorithms except for the uniform and the same patterns as above considering longer patterns leads to similar observations table shows the asymptotic speeds obtained for random patterns of length the outperforms all the others approaches the fastest strategy can not be computed for this length the uniform is slower than algorithms such ebom or hashq but both the uniform and overall perform better than the algorithms though they are slightly slower for a few patterns average speed our data benchmark consists in the wigglesworthia glossinidia genome known for its bias in nucleotide composition of a t and the bible in english from table displays the average speeds of patterns randomly picked from the data let us remark that we are now 
dealing with real texts which are not iid in particular the fastest strategy could possibly be outperformed this is ris tic eu r is eu tic r is t i h eu ris tic eu ris tic h u u u ni ni h eu ris tic eu eb o as hq m s sb v t s fj ni l oo or sp h babbbaabab ababbbbbab aaabaaaaba bbbabbabab bbabaabbab baabbaaaaa abbbababbb baabbbabba baabbaabab bbbbababbb ic fa st es h eu ris tic eu ris tic eu ris tic fa st es t u ni ni u eu r ist ist ic eu r h u ni u ni hq as h o m eb sb s v t s fj atat tatg aaat tccc caat aacc acta tatc gtga gatt he m to usal le t hem are at d f th r th fede h or sp o ol eu r ist ic table asymptotic speeds for some patterns of length on a b drawn from the uniform distribution under the iid model table average speeds for some patterns of length picked from the benchmark data the wigglesworthia glossinidia complete genome and the bible in english fj s t eb h as hq u ni eu u ni ris tic eu u ni ris tic eu h eu tic ris tic eu ris tic eu ris tic tccttatgtaaaatataaatgtagcaattt aaaagaaccccggcgaggggagtgaaatag aattttcaactaatattaaaccacgttctg aaaggtccattaagtattactatcacagca agatttgcgtgatttaaaataatcatctaa ataggaaaagattggattaaactagatatg at the mount called the mount ith israel to wit with all t esus going up to jerusalem too them as they were able to hea o in osee i will call them my things are come upon thee the syria that dwelt at damascus full of darkness if therefor e it for there is no other sa g syria is confederate with e s o m v sb sp oo l h or tggataaaaatttgttattaccatatctat cttctttaattatgttttctatttcttttt gttctatttgttggagatttaaaataatta tcctactttaacctctaaatgtcccttatt table average speeds for some patterns of length picked from the benchmark data the wigglesworthia glossinidia complete genome and the bible in english not observed on the benchmark data the and uniform and adapted are faster than the algorithms for all the patterns whereas the is sometimes slightly outperformed by horspool horspool is almost as fast as our approaches on the bible while being sometimes significantly outperformed on the wigglesworthia glossinidia genome the average speeds are overall greater on the bible than on the dna sequence in both cases we do not observe a wide performance gap between the uniform and the adapted approaches though our benchmark data are far from following an uniform iid model let us remark that the and have almost the same performances both in the uniform and the adapted cases table shows the averages speeds with regard to patterns of length the average speeds on the bible are about twice those on the wigglesworthia glossinidia genome one actually expects the speed to be greater in average on texts with large alphabets since the less likely the match between two symbols the greater the shift expectation per iteration again the uniform or adapted outperforms the algorithms the speeds of the and of the differ in a greater amount than with patterns of length for the wigglesworthia glossinidia genome and to a smaller extent for the bible discussion in practical situations and though they do not take into account the letter frequencies the uniform and the uniform fastest strategy perform generally almost as well as their adapted counterparts the greatest difference observed is for the patterns of length on the wigglesworthia glossinidia genome table and is relatively small we do observe a notable amount of difference for the quite extreme case of the asymptotic speed under the iid model but even for these frequencies the uniform approaches show greater asymptotic speeds than any of the 
selected algorithms the has very good results whatever the pattern or the text there is no situation for which the performances of the are far from the best on the contrary the performance ranking of the algorithms depends heavily on the patterns and on the texts or the model for instance horspool may perform very well even almost optimally for some patterns and texts or models while its speed may completely plummet in other situations the question of selecting the most efficient order of still deserves further investigations a basic answer could be the greater the better but we should take into consideration that an higher order of heuristic comes with an increased computational cost after some experiments we observed that the asymptotic speed the tends to stop improving beyond a certain rank for instance the difference in average speed between the and for patterns of length both on the genome and on the bible probably does not justify the computational cost of the while it is worth to use the rather than the for searching patterns of length in the bible not that much for the wigglesworthia glossinidia genome the best for the order of the depends on the pattern notably its length and on the text features in particular the alphabet size and the letter frequencies it is certainly possible to obtain efficient heuristic with a lower computational cost than for the since in standard situation the length of the text is much greater than that of the pattern there is no real reason for considering only pattern matching algorithms with linear of the pattern in the extreme case where the texts are arbitrarily long with regard to the patterns any whatever its computation time would be beneficial as soon as it improves the overall speed authors contributions gilles didier provided the initial idea led the software development and wrote all the manuscript but the section evaluation laurent tichit collaborated on the software development ran the tests and wrote the section evaluation both authors read edited and approved the final manuscript references allauzen crochemore and raffinot efficient experimental string matching by weak factor recognition in combinatorial pattern matching pages springer and gonnet a new approach to text searching communications of the acm and average running time of the algorithm theoretical computer science barth an analytical comparison of two string searching algorithms information processing letters boyer and moore a fast string searching algorithm communications of the acm charras and lecroq handbook of exact string matching algorithms king s college publications cormen leiserson and rivest introduction to algorithms mit press didier optimal pattern matching algorithms http faro and lecroq efficient variants of the algorithm international journal of foundations of computer science faro and lecroq the exact online string matching problem a review of the most recent results acm comput mar franek jennings and smyth a simple fast hybrid patternmatching algorithm in combinatorial pattern matching pages springer guibas and odlyzko string overlaps pattern matching and nontransitive games journal of combinatorial theory series a horspool practical fast searching in strings software practice and experience karp and rabin efficient randomized algorithms ibm journal of research and development knuth morris jr and pratt fast pattern matching in strings siam journal on computing mahmoud smythe and analysis of heuristic random struct algorithms marschall herms kaltenbach and rahmann 
probabilistic arithmetic automata and their applications trans comput biol bioinformatics marschall and rahmann probabilistic arithmetic automata and their application to pattern matching statistics in ferragina and landau editors combinatorial pattern matching volume of lecture notes in computer science pages springer berlin heidelberg marschall and rahmann exact analysis of horspools and sundays pattern matching algorithms with probabilistic arithmetic automata in dediu fernau and editors language and automata theory and applications volume of lecture notes in computer science pages springer berlin heidelberg marschall and rahmann an algorithm to compute the character access count distribution for pattern matching algorithms algorithms and szpankowski complexity of sequential pattern matching algorithms in luby rolim and serna editors randomization and approximation techniques in computer science volume of lecture notes in computer science pages springer berlin heidelberg smythe the heuristic with markovian input random struct algorithms sunday a very fast substring search algorithm communications of the acm thathoo virmani sai lakshmi balakrishnan and sekar tvsbs a fast exact pattern matching algorithm for biological sequences current science tsai average case analysis of the algorithm random struct algorithms july wu manber a fast algorithm for searching tech report cs university of arizona yao the complexity of pattern matching for a random string siam journal on computing
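A small illustration of the matching-machine interface and the generic algorithm described above (states, initial state o, final states F, next-position function alpha, transition delta, shift gamma). The Python sketch below is a reconstruction, not the authors' code from their repository: the loop bound p <= |t| - |w| and the exact reporting condition (q in F and the probed text symbol equal to w[alpha(q)]) are read off the flattened pseudocode and should be treated as assumptions, and the function names are mine. It builds the machine that simulates the naive algorithm and counts text accesses, so the average speed used in the evaluation, |t| divided by the number of text accesses, can be measured directly.

def naive_machine(w):
    """Matching machine (o, F, alpha, delta, gamma) simulating the naive
    algorithm; the states are 0..m-1, state i meaning that the first i
    positions of the current window have already matched w."""
    m = len(w)
    o, F = 0, {m - 1}
    alpha = lambda q: q                                   # relative position probed
    def delta(q, x):                                      # next state
        return q + 1 if (x == w[q] and q < m - 1) else 0
    def gamma(q, x):                                      # window shift
        return 0 if (x == w[q] and q < m - 1) else 1
    return o, F, alpha, delta, gamma

def run_generic(machine, w, t):
    """Generic algorithm: report occurrences and count text accesses,
    so that average speed = len(t) / accesses."""
    o, F, alpha, delta, gamma = machine
    q, p = o, 0
    occ, accesses = [], 0
    while p <= len(t) - len(w):
        i = alpha(q)
        x = t[p + i]
        accesses += 1
        if q in F and x == w[i]:
            occ.append(p)
        q, p = delta(q, x), p + gamma(q, x)
    return occ, accesses

w, t = "abb", "ababbabb"
occ, acc = run_generic(naive_machine(w), w, t)
print(occ, len(t) / acc)   # occurrences [2, 5]; speed 8/12, below 1 as expected

As the tables above suggest, the naive machine's measured speed stays below 1 because it probes every text position at least once.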
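For comparison with the baselines in the evaluation, here is a textbook Horspool search instrumented with the same text-access counter. It is a generic sketch (the benchmarked implementation in the paper may differ in details such as the order of comparisons), but it shows how the bad-character shift reduces the number of accesses relative to the naive machine on the same input.

def horspool(w, t):
    """Textbook Horspool: probe the rightmost window symbol, compare the rest
    right to left, then shift by the bad-character rule. Returns occurrence
    positions and the number of text accesses (speed = len(t) / accesses)."""
    m, n = len(w), len(t)
    shift = {c: m - 1 - i for i, c in enumerate(w[:m - 1])}  # last-occurrence rule
    occ, accesses, p = [], 0, 0
    while p <= n - m:
        c = t[p + m - 1]
        accesses += 1
        if c == w[m - 1]:
            j = m - 2
            while j >= 0:
                accesses += 1
                if t[p + j] != w[j]:
                    break
                j -= 1
            if j < 0:
                occ.append(p)
        p += shift.get(c, m)                                 # default shift is m
    return occ, accesses

w, t = "abb", "ababbabb"
occ, acc = horspool(w, t)
print(occ, len(t) / acc)   # same occurrences as the naive machine, fewer accesses (8 vs 12)

On this input the measured speed rises from 8/12 for the naive machine to 8/8 for Horspool, which mirrors the kind of per-pattern speed differences reported in the tables above.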
8
intrinsic point of interest discovery from trajectory data matthew piekenbrock derek doran dept of computer science engineering research center wright state university dayton oh usa dept of computer science engineering research center wright state university dayton oh usa dec abstract this paper presents a framework for intrinsic point of interest discovery from trajectory databases intrinsic points of interest are regions of a geospatial area innately defined by the spatial and temporal aspects of trajectory data and can be of varying size shape and resolution any trajectory database exhibits such points of interest and hence are intrinsic as compared to most other point of interest definitions which are said to be extrinsic as they require trajectory metadata external knowledge about the region the trajectories are observed or other information spatial and temporal aspects are qualities of any trajectory database making the framework applicable to data from any domain and of any resolution the framework is developed under recent developments on the consistency of nonparametric hierarchical density estimators and enables the possibility of formal statistical inference and evaluation over such intrinsic points of interest comparisons of the pois uncovered by the framework in synthetic truth data to thousands of parameter settings for common poi discovery methods show a marked improvement in fidelity without the need to tune any parameters by hand acm reference format matthew piekenbrock and derek doran intrinsic point of interest discovery from trajectory data in proceedings of acm conference washington dc usa july conference pages doi introduction the development and deployment of location acquisition systems have enabled large scale capturing of movement or trajectory data from people cars and other objects technologies like global positioning systems gps global system for mobile communications gsm wide area motion imagery wami and identification rfid allow organizations and governments to collect and exploit trajectory patterns in many scenarios more recent initiatives uber s and ibm s smarter programs have even made such data available to the either the public or city planning experts at large with the rise in importance of this https https permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page copyrights for components of this work owned by others than acm must be honored abstracting with credit is permitted to copy otherwise or republish to post on servers or to redistribute to lists requires prior specific permission a fee request permissions from permissions conference washington dc usa acm doi data comes prevalent use of geographic information systems gis and related platforms such as and other related use cases of gis information have also emerged for surveillance and service lbs applications in many of these applications trajectory data is exploited for knowledge acquisition tasks the integration of movement patterns to uncover patterns of life over a region to expand situational awareness in crises and to support the value added by a lbs application in many of these knowledge acquisition tasks the notion of a location or point of interest poi is foundational to understanding the entirety of the common space in which the data are observed for example mapping 
systems must know the position and geometry of locations for navigation and automated guidance control purposes in lbs applications the pois and metadata such as their popularity are necessary to provide useful location recommendations because pois are not available from raw trajectory data captured by location acquisition systems they are often extrinsically defined by gazetteers such as google places foursquare geonames or openstreetmap yet external sources of location data present many difficulties when faced with the problem of understanding how a given trajectory dataset relates to the underlying geographical area where it was observed for example many gazetteers store varying types of either poi metadata or poi relational data allowing information to present a source of bias furthermore relying on gazetteers explicitly defines the set of pois that exist in a given geographical region when there is disagreement on this definition analysis becomes difficult furthermore with pois defined a priori one is faced with the problem of fitting observed trajectory data to models defined by such pois many of which may or may not be relevant to the given data at hand for example it may be desirable for a gathering movement trajectory data following a public event to discover bottleneck congestion areas like parking lots roads or sidewalk segments for the purpose of traffic analysis in this situation it would more useful to discover pois directly from the data itself during the event but such geographical pois may not be available in a gazetteer to over come these challenges this paper investigates the poi discovery problem in the most generic context possible we ask given only trajectory data without access to gazetteers can we infer subregions within a geospace that are interesting enough to call it a poi we seek intrinsic pois which are pois recoverable without the use of a gazetteer are completely defined by observed movement patterns and can be used for any application and at any scale from movements within a building to movements across an entire city region to make such a definition meaningful https https we build off recent theoretical work in clustering and introduce a statistically rigorous definition of a poi applicable to trajectory data of any and even mixed resolution the definition follows from a recent minimax analysis of the consistency of hierarchical density superlevel set estimators we use this definition to present a framework for extracting intrinsic pois yields an optimal unsupervised solution without ad hoc a comparative analysis is performed on realistic simulations involving both vehicle and pedestrian traffic validation results show marked improvements in fidelity against several state of the art sota algorithms of interest to the authors the simulation settings the resulting traffic data the validation code and the framework itself is all completely reproducible and open source available point of interest discovery this section provides preliminary information about the poi discovery problem and provides context and definitions for this work it then formally defines a poi and subsequently an intrinsic poi and the framework their discovery preliminaries we consider a trajectory database of discrete spatial data having at least the of attributes object id spatial component temporal component this minimal amount of information implies a trajectory for an object of the form t pn where pn are chronologically ordered spatial coordinates in the geographical sense these spatial 
components are often defined by a latitude and longitude pair but in practice could be from any coordinate system such representations require trajectory pattern mining techniques or techniques that seek to mine common spatiotemporal patterns across trajectories to assert significance over areas where trajectory patterns emerge mined patterns in trajectories are often referred to as mobility patterns characterizing some specific trajectory quality of interest such as heading stopping rate velocity rotation curvature or shape such mobility patterns exhibit properties that make the formal retrieval of significant areas challenging for example if the timespan of an observed trajectory is long the processes driving the mobility pattern may be road traffic that changes due to construction or congestion effects due to time of day shifts in the work schedule there may also be paths of objects that are transient some areas are never traveled to more than once furthermore the spatial components in trajectory data often have a high degree of autocorrelation breaking assumptions of independence a variety of models have been proposed to handle thee situations largely focusing on estimating individual trajectory statistics under these assumptions this includes for examples adaptive kalman we see this as a necessary form of usability an important feature to have in the modern clustering era it is that having several sensitive parameters results in combinatorial explosion of the parameter space of an algorithm resulting in the need for the user to use one or more methods to arrive at a solution that befits the application for review filters for vehicle navigation models and trajectory path uncertainty models the knowledge mined from individual trajectories says little of the macroscopic patterns driving such trajectory observations rather than focusing on the statistics of individual trajectories collective models preprocess the trajectory data to extract characteristics across a swath of trajectories such preprocessing is desirable as it discards highly autocorrelated data representing redundant information in favor of aggregating trajectory positions into observations of significance examples of this preprocessing scheme include extracting semantically enriched points that intersect known geographical regions aggregating trajectory positions as stay points using supplied spatial temporal thresholds or processing trajectory data into groups using some convex combination of spatial temporal and semantic similarity kernels in a collective model we refer to the important or semantically meaningful data samples aggregated from trajectory points as exemplar positions or simply exemplars definition exemplar consider a sample of n discrete points x n rd that constitute a trajectory t define an aggregation function p x n rd that maps any subset of points a trajectory segment in t to a set of exemplar positions rd the aggregation function of choice depends on the intent of the analysis for example consider an urban environmental study that defines as a mapping of some isolated trajectory segment pk pk pk to the mean coordinate of the segment if the speed of the object traveling from pk to pk exceeds a certain threshold groups of these exemplar positions may determine highemission zones in a city alternatively if the traffic is made of pedestrians such groups may represent tourist attraction areas the popularity of which are useful for lbs applications it is not difficult to find this type of trajectory 
preprocessing in geospatial applications and the grouping of them is foundational to countless tasks in trajectory mining we generalize this preprocessing step by referring to it as exemplar an important aspect of exemplar extraction is to choose an aggregation function that befits the intent of the analyst and thus satisfies a study s interpretation of this is inevitably and the proposed framework is agnostic to the specific form of aggregation used thus it is irrelevant to bestow a particular interpretation of what interesting means we consider a more concrete and practical definition using a popular type of aggregation in section defining a point of interest under the premise that exemplars represent meaningful aggregations of observations from a trajectory data source it is natural to define a poi as a region of exemplars we seek a definition of such regions with a statistical rather than heuristic foundation as a means of reflecting the naturally occurring structure within the data towards this end we define a poi as a contiguous high density region of exemplars to formalize this definition we follow the notation of chaudhuri et al let x be a subset of rd and define a path as a function p s where s x also denote the equivalence relation c figure illustrating the cluster tree hierarchy and its interpretation of pois consider an estimated density right panel of exemplar positions extracted from trajectories in a geospace middle bottom panel a poi is a geospatial region inhabited by exemplar positions at some density threshold with the number of the pois extracted depend on this scale parameter setting left panel higher limits a poi to being specific and small and could cause pois to be manifested by random noise or be overfitted to a particular set of observations low defines pois as very broad areas of low exemplar position density the cluster tree hierarchy left panel summarizes the set of exemplar positions representing a poi at every density threshold thus capturing the entire collection of pois over a common area middle panel upper layers as connected where xcy iff p x and p y then c partitions s into connected components or clusters each component represents an area of high density and is called a high density cluster definition high density clusters for a density function f on rd consider a partitioning x f x for some where is called the level or high density threshold parameter then all maximally connected components in this set are high density clusters at density level we relate this formal definition to the trajectory mining domain with the following definition of a point of interest defined over a extracted set of exemplars definition point of interest given a set of m exemplars and a fixed scale or resolution each high density cluster of such exemplars forms a point of interest at the density level the sets of high density clusters across all values of forms a hierarchy often referred to as the cluster tree of the density f a hierarchical definition of locations is common and matches the intuitive interpretation of a poi for example not only may a particular restaurant in a mall food court be a poi but the food court itself may also be considered a poi as well as entire mall may be yet another poi the cluster tree conceptualization formalizes a poi as a maximally connected set of exemplars falling along a higher density area implying such areas are significant and that such connected exemplars may be related a visualization of hierarchical pois and a dendrogram of the 
corresponding cluster tree is provided in figure the middle figure demonstrates a view of what a set of trajectories might look like with the colored dots in the left and middle figures representing exemplars the right figure demonstrates a density estimate of the positions of these exemplars that is when these exemplars are very close to each other they re said to have a high density and are thought to be related constituting a scale of the density depends on the analysis at hand a sufficiently low density threshold will designate every exemplar as one poi from this definition it may seem that any arbitrary density estimator may be used to find clusters simply estimate the density of every point by kernel density estimation kde and then iterate through all possible values of that create distinct clusters yet not every estimation will produce the same kernels and kernel bandwidths may result in a completely different hierarchy of clusters and by extension a different set of pois from the cluster tree perspective the ideal kernel fn is one that is uniformly consistent supx fn x f x as n from a given sample x n in this case a model could be fitted with the appropriate kernel and bandwidth parameter and the would kde furnish a continuous surface from which a cluster tree and its clusters can be derived the main issue is that the set of all clusters is not easy to compute for typical density estimates of f and generally require a significant amount of memory to store this computational inefficiency limits usability for large trajectory datasets often observed over wide geographical areas and over long periods of time from the applied perspective many approaches find pois by variants of hierarchical clustering to find groups of exemplars this has proved useful for problems but they are largely heuristic it is common for most clustering algorithms to have unstated or unknown statistical properties precluding the possibility of formal inference the framework we introduce therefore examines clustering methods as they are designed to infer a cluster tree without facing the computational hurdles of kdes a desirable property of any density estimator is some notion of in hartigan establsihed a reasonable definition often referred to as hartigan consistency definition hartigan consistency let cfn be the set of all clusters from the cluster tree for any sets a x let an respectively denote the smallest set of cfn containing a x n respectively x n cfn is consistent if whenever a and are different connected components of x f x for some p an is disjoint from as n this consistency definition essentially requires that two disjoint clusters from the unknown population density a and will also be disjoint components in a given empirical cluster tree an and given enough samples n the proposed framework for poi discovery is developed and implemented from the first computationally tractable and provably consistent algorithm that satisfies hartigan consistency as analyzed by chaudhuri et al to be discussed in the next section having a nonparametric model satisfying this notion of consistency is important as it transforms the unsupervised problem of poi discovery into a formal statistical estimation problem not only enabling analysis driven by data but requiring minimal assumptions regarding the nature of the data such a relation enables methods of formal statistical inference allowing one to quantify uncertainty to create hypothesis tests to discern true pois as opposed to false pois resulting from random noise or 
artifacts of low sample sizes, or to create notions of confidence in estimation.

consistent cluster tree estimation. we next motivate a recent cluster tree estimator and discuss its relationship and applicability to poi discovery for the proposed framework. recall that an empirical estimate of the cluster tree applied over exemplars represents a hierarchy of pois; viewed from this perspective, what we propose can be seen as an extension of chaudhuri et al.'s work on the cluster tree to a trajectory mining context. consider using sl (single linkage) clustering, an agglomerative scheme that creates a hierarchical representation of clusters using the minimum pairwise distance $d$ between all points, as a tool for clustering exemplars. beginning with every exemplar $x$ as a singleton, sl iteratively merges exemplars into clusters according to the linkage function
$$d(C_i, C_j) = \min\{\, d(x_i, x_j) : x_i \in C_i,\ x_j \in C_j \,\}.$$
sl clustering is often criticized due to its tendency to create excessive chaining, wherein two clusters which may have been seen as generally unrelated are amalgamated by chance at a distance threshold that does not reflect the true dissimilarity between the resulting clusters. hartigan proved sl is a consistent estimator of the cluster tree for densities in $\mathbb{R}$ (for $d = 1$) and is not consistent for any $d \geq 2$, implying that any sl cluster that contains all the sample points in $A$ will also contain nearly all sample points in $A'$, in probability, as $n \to \infty$. this is reflected in the geospatial sense as well. (recall that an estimator $\hat{\theta}_n$ whose value is a point estimate of $\theta$ is consistent if, as more samples are collected, $\hat{\theta}_n$ converges in probability to the true value of the parameter, $\operatorname{plim}_{n \to \infty} \hat{\theta}_n = \theta$; the condition is related to the thin bridge between any two population modes: fractional consistency was shown for sl if, for any pair $A$ and $A'$, the ratio of $\inf\{f(x) : x \in A \cup A'\}$ to $\sup\{\inf\{f(x) : x \in P\} : P \text{ a path from } A \text{ to } A'\}$ is sufficiently large.)

figure: sl excessive chaining example. the bottom panel denotes a possible clustering using sl when pedestrians were found to stop between buildings.

consider the case where exemplars represent aggregated stops within a set of trajectories, a case that we will also consider later in section on experiments. if an area is observed long enough, such exemplars should naturally form areas of high density where people stop frequently, such as within buildings. in such cases it may be useful to categorize exemplars within their respective pois; this is done in a supervised way in applications that extract semantic information (see, for example, the cited work). however, it is also possible that there exist a few stops just outside of such buildings, which sl has a tendency to chain together; an example of this is shown in the figure above. this discovery motivated efforts to modify sl, not only to reduce this chaining and make sl more robust, but also to achieve at least hartigan consistency for $d = 2$ and beyond. the first provably consistent estimator, which we consider in this effort, is a generalization of sl referred to as robust single linkage (rsl).

robust single linkage. let $X$ be a subset of $\mathbb{R}^d$, let $\|\cdot\|$ denote the norm, and let $B(x, r)$ be a closed ball of radius $r$ around the point $x$. the rsl algorithm is given in the listing below.

robust single linkage algorithm:
1. for each $x_i$, set $r_k(x_i) = \inf\{r : B(x_i, r) \text{ contains } k \text{ data points}\}$.
2. as $r$ grows from $0$ to $\infty$:
   (a) construct a graph $G_r$ with nodes $\{x_i : r_k(x_i) \leq r\}$; include the edge $(x_i, x_j)$ if $\|x_i - x_j\| \leq \alpha r$.
   (b) let $\mathbb{C}_{f_n}(r)$ be the connected components of $G_r$.

the rsl algorithm has two free parameters, $k$ and $\alpha$, which need to be set; sl is equivalent to rsl with the setting $k = 2$, $\alpha = 1$, and can be efficiently computed by the minimum spanning
tree mst computed over all pairwise distances rsl scales these distances by a constant factor and only reduces to the mst if the components are restricted from connecting satisfying x i r k x i r within the mst computation chaudhuri et al found that rsl is hartigan consistent and established rates of convergence for all with the optimal rate of convergence with the setting finding intrinsic points of interest using a consistent cluster tree estimator such as rsl on a set of exemplars creates a hierarchical representation of pois however a nested set of multiple solutions is not always desirable and a flat solution where each point is assigned a single label may be preferred a traditional approach in hierarchical clustering cutting the empirical cluster tree at a given density threshold value yields a set of clusters cfn c c cm that form m pois a possible flat solution however the choice of forces all pois to be of the same scale requires the user to know which granularity to choose a priori affecting the size and kinds of pois discovered for example a small may define shops in a mall as pois while a larger may define the mall itself as a poi it may not be known ahead of time what granularity level is relevant furthermore it is reasonable to expect that relevant pois exist at multiple levels of granularity such that a sprawling city park and a small restaurant could both constitute a poi thus it would useful to have some sensible notion of cluster quality that can be used and optimized as an objective function to discover pois that are not dependent on the analyst s choice of and are strongly intrinsic to the geospace itself are intrinsic pois to capture pois of any scale and hence satisfy our notion of an intrinsic pois we first recall that clusters are contiguous relatively dense areas of the data space separated by contiguous and relatively areas defined over a working definition of density over a set of exemplars from a statistical point of view we can think of a cluster as a set of points with high density around some neighborhood or volume of the support et al quantify this using a functional called excess of mass definition excess of mass for a ci cfn for some value of the excess of mass of ci is given by e ci f x ci dx x i where ci represents the lowest density level ci appears initially this measure seems like a reasonable definition of the quality of a clustering within the cluster tree estimate considering the definition of a cluster from equation where a cluster exists along a mode of local maximum of the underlying density it s far too likely that a estimation may empirically find a mode at a given point x if the data is sparse allowing an arbitrarily low probability associated x to be classified a more interesting result would be to associate a cluster with a region that exhibits relatively high probability over a neighborhood see et al for visualization along with a more in depth description of this functional however as campello et al remark this measure exhibits monotonic behavior in any direction varying the in the hierarchy and instead propose an alternative local measure of cluster quality definition relative excess of mass for a ci cfn for some value of the relative excess of mass of ci is given by e r ci x ci ci dx x i where x ci min f x ci is the density level beyond which x is no longer part of ci and ci is the highest density beyond which ci either becomes disconnected creating separate components or disappears creating singleton clusters it is important to note 
that relative excess of mass is defined in terms of values associated with a specific cluster as opposed to a specific clustering this implies that an optimal clustering with respect to the relative excess of mass estimate may not occur at a fixed global density threshold but rather as a result of several local density thresholds applied to the hierarchy intuitively if a given cluster ci contains many points that have high density relative to ci such a cluster will exist across several thresholds of and is thus robust to fluctuations in the scale of analysis for this reason the relative excess of mass can be thought of as a measure of cluster stability across different density levels which we posit reflects an intrinsic poi that is innately defined by the dataset independent of density level such intrinsic pois are thus defined as follows let be an indicator equal to if cluster ci cfn represents an intrinsic poi and otherwise assign values to these indicators such that the following is maximized m maximize j e r ci subject to i m exactly one per disjoint branch where the per disjoint branch constraint means that the indicator function equals exactly once for all clusters in each path from a leaf node to the root of the cluster tree the optimization of this objective function is beyond the scope of this paper we refer to campello et al s cluster extraction method for general cluster hierarchies to solve this optimization as it was developed alongside an estimator very similar to rsl is capable of producing an optimal result at several density levels and accounts for the density thresholds at which points become noise fall along densities below a given threshold experiments and discussion we next evaluate the proposed framework for intrinsic poi discovery because intrinsic pois do not rely on gazetteers and may manifest themselves in unknown locations evaluation on real data validated against ground truth external knowledge such as imported location data from sources such as openstreetmap or google places is not feasible a common approach to evaluate clusterings when ground truth is absent is to use an internal cluster validity index cvi cvis include common indices like the silhouette score the dunn index and the criterion see arbelaitz for an overview of these techniques and references therein recent work recommends validation using multiple cvis as they each score different aspects of a clustering such as the ratio of to distances sum of squares distance to centroid or scores based on similarity we do not believe scores are informative for intrinsic poi evaluation as most of these cvis operate on unrealistic assumptions symmetry or convexity of cluster shape a notion of minimal variance the existence of a centroid or medoid contrary to these widespread concepts we do not assume that a cluster of exemplars representing an intrinsic poi will maintain some or shape indeed there are a number of features within a geographical area that may be considered pois yet inevitably exhibit arbitrary shapes buildings parks gathering areas etc and manifest at varying densities a busy intersection that is small and concentrated in exemplar density a parking lot that is large and more uniform in density following the advice of guyon and luxburg et al we evaluate the efficacy of the framework in the context of its we use an external validation where truth can be defined a priori over simulated data enabling a direct evaluation of intrinsic pois against truly interesting regions while ensuring the latent 
patterns in the generated data mimic the real geospatial dynamics of cars and pedestrians over a region generating synthetic data to generate synthetic data for evaluation we turn to the simulation of urban mobility sumo software sumo is an open source traffic simulation system capable of generating trajectories of many objects of multiple modalities car truck person plane given a shapefile that defines avenues for travel a road network a map of footpaths within a university campus or the floor plan of a mall or large building sumo is able to generate trajectories following the avenues provided default parameter settings generate traffic and trajectories in ways that satisfy their measured physical properties have been shown to be incredibly accurate we use sumo to generate two simulations of both pedestrian and vehicular traffic under different geographical areas an urban region having a mixture of vehicle and pedestrian traffic the area surrounding the ohio state university osu and a suburban area where pedestrian traffic is more prominent the area surrounding wright state university wsu details about the simulation the simulation data used in this paper and the code that produced the resulting evaluation are all publicly available and reproducible the rsl cluster tree framework itself is part of a larger open source effort by the simulation configuration sumo requires every object to have a trip defined by departure and destination nodes which sumo refers to as junctions junctions are connected by edges representing a possible travel path given a file containing the trip definitions of every object sumo dynamically generates routes or sequences of edges the object travels along to get from departure junction a to its destination junction b we leave nearly all simulation parameters at their default settings only modifying simulation length and arrival parameters binomially distributed arrivals to generate pedestrian and vehicle demand because pedestrian traffic within unrestricted and indoor areas may constitute intrinsic pois in a realistic setting and because sumo can generate only outdoor pedestrian traffic we extended sumo to simulate indoor pedestrian traffic as well figure illustrates how this extension interplays with vehicular traffic generated by shapefiles denoting the location of buildings are first loaded into sumo the peach colored regions in figure inlet then within the shapefile a random number of junctions are generated within the building and registered to nearby edges such as sidewalks if a generated track is labeled as a pedestrian and its trip includes a junction contained within the building region figure lower right inlet the pedestrian undergoes a random walk within the junctions generated in the building this random walk is emulated by choosing a random ordered subset of the generated junctions for a random amount of see the following for simulation details for review the following package for review see for review see figure view of extending sumo to support indoor pedestrian traffic shapefiles defining buildings are loaded into sumo and registered as junction if pedestriantrack visits an attached junction during a trip the simulator chooses ordered random set of junctions to follow within the building exiting after a random period of time time the pedestrian visits these interior junctions and then travels to an exit junction attached to the building polygon continuing along the original outdoor route generated by sumo defining truth recall that intrinsic pois 
are inferred by exemplars representing the specific mobility pattern of interest with both vehicular and more realistic pedestrian demand generated the next step in data generation is to define an aggregation function to extract meaningful exemplars to give a concrete of the proposed framework we align our experiment with much of the applied literature related to this topic and extract exemplars representing the stay points of an object a stay point is a position where objects have stopped or significantly slowed down extracting such points from simulated sumo data is trivial as the true speed of any traveling object is known at any given time for pedestrian traffic we extract trajectory points where pedestrians stopped moving for vehicular traffic we extract either a the points where the vehicles stopped moving or b the slowest point in a vehicle braking sequence using sumos exported braking signals whichever is available from these stay points exemplars we next establish a mapping between each exemplar and its presence within a true intrinsic poi allowing external validation since exemplars represent object stopped moving a natural definition of an intrinsic poi is an assignment defined by the mechanism causing such objects to stop specifically we define a building that pedestrians stop within as a true intrinsic poi as this is very natural and useful grouping we follow a similar pattern for vehicular traffic assigning exemplars a common label if stopped at identical intersections stop signs or stop light or other junctions this mechanistic assignment of creating true intrinsic pois has the benefit of not only being tractable in the sense that sumo provides this information directly but also being semantically meaningful in the sense that the mechanisms encouraging objects to stop moving are intrinsic to the geospace experimental design to evaluate the fidelity of the pois extracted by the proposed framework under multiple settings we run sumo simulations over the osu and wsu geospaces with parameter settings reflecting differences between the two regions these settings are shown in table sumo simulation parameters region build veh ings ped region size sim length osu hours wsu hours table the osu geospace covers a smaller area has an equal mix of vehicles and pedestrians and nearly three times as many buildings the osu geospace also has a larger number of roadways and traffic intersections where intrinsic pois involving vehicles may materialize being within the main campus the wsu geospace has a larger proportion of pedestrian traffic with few roadways for vehicles to traverse and smaller number of buildings pedestrians may visit figures a and a show what this sumo generated pois labeling creates for the osu and wsu campus areas respectively qualitatively examination of these clusters appear to be reasonable labels of intrinsic pois for example the clusters representing true pois across osu in figure a finds buildings surrounding the osu oval quad particular locations on the ring road around the quad which tend to be busy osu intersections for both vehicles and pedestrians and parking lots around the osu recreation builds west of the oval to represent pois across wsu in figure a the truth pois represent each of the major buildings around the campus with particularly complex separate areas of movement in wsu s large student union the yellow points in the large building in the lower left part of the figure evaluation measures as discussed at the beginning of this section the unsupervised 
nature of intrinsic poi discovery make it difficult to carry out a meaningful evaluation of poi discovery methods using internal not requiring truth labels validation measures instead we consider a multifaceted approach an external quantitative evaluation of whether the intrinsic pois discovered aligns with sumo generated pois using the adjusted rand index ari and qualitative evaluation of the quality of the intrinsic pois our approach unearths as compared to the true intrinsic pois as defined above the of indices were chosen due to their transparency and whereas the traditional ri measures the proportion of pairwise agreements between two partitions the ari also adjusts the score based on the expected value of agreements under the null hypothesis that the agreements were completely random and thus is what we report algorithms compared we further compare the fidelity of the intrinsic pois extracted by the proposed framework against other clustering algorithms commonly used for poi discovery from trajectories we either downloaded the implementation of or implemented ourselves a number of these algorithms for comparison aside from rsl the selected methods includes the algorithms dbscan and optics the widespread hierarchical algorithms single linkage sl average linkage al and wards criterion wl along with the algorithms and clara these algorithms were chosen due to their relevance to this problem availability known success in the clustering world parameter settings clustering algorithms generally require in order to fit to a given data set but the number and semantics of these parameters often changes with the algorithm used leaving comparisons between parameter settings difficult although most hierarchical algorithms carry no free parameters to create a hierarchical set of solutions they do require either a threshold value h or the exact number of clusters to extract k to be specified to extract a flat clustering similarly and clara also require k to be specified a priori because the k parameter has the same interpretation in multiple algorithms we will use k to refer to the number of clusters extracted density based algorithms have multiple parameters with interpretations compared to the aforementioned algorithms for example dbscan requires a minimum cluster size parameter minpts and a distance or scale threshold to be set optics often cited as extension to dbscan is an ordering algorithm a parameter setting for be be used to extract either a flat cluster extraction or a simplified hierarchy using a either the a distance threshold or a threshold respectively the cluster extraction is reported here rsl requires the setting of and k the former relating to scaling the connection radii used to connect components and the latter to the saliency of cluster estimates note that in rsl k is more similar to the minpts parameter in that it is a minimum neighborhood parameter the number of clusters is automatically determined by optimized the defined relative excess of mass functional from section each algorithm reflects a large set of possible solutions over its parameter setting choosing a single parameter setting for evaluation would represent a source of possible bias rather we employ a more comprehensive approach by comparing a wide range of parameter settings for each algorithm to define these ranges let seq x y s denote the sequential range operator skipping s values in the sequence of integers from x to y for example seq n n and seq n i i n i n for the hierarchical clustering algorithms sl al 
and wl the number of flat clusters extracted k is varied in the range seq nt where nt is the number of true pois assigned by sumo we see this as a reasonable strategy as it gives a better view of how multiple levels extracted from the hierarchy matched the data set as well as how well the merge criterion or linkage function collectively captures the true pois in the geospace we use the same range to vary k for the and clara algorithms the density based methods dbscan and optics are evaluated by first varying minpts and then for each value of minpts by varying the scale parameters and respectively recall minpts relates to a minimum neighborhood value that constitutes a cluster thus and to allow the testing to be tractable we set minpts to reflect the possible sizes of the pois along the quantiles qnt seq corresponding to the the number of exemplars per poi in the sumo truth data the distance thresholds for dbscan and optics are also varied along the quantiles seq of the pairwise distances computed over the data set since all methods mark points that fall in areas of the data set not sufficiently dense as noise according to a scale to severe overfitting if not guided with a measure like densitybased solutions were deemed only valid if at least of the data is classified with a label finally the rsl also contains a true pois b inferred intrinsic pois figure intrinsic poi comparison osu c true pois d inferred intrinsic pois figure intrinsic poi comparison wsu two parameters a k value and an parameter we use chaudhuri et al s analysis to determine how to set these rsl was shown to have optimal rates of convergence when so we leave it at that constant value similarly the rate only holds for k at least as large as d log n where d is the dimensionality of the data set d in this case after varying through the small set of k values in a similar fashion as was performed for dbscan and optics k qnt d log n in total and cluster configurations were performed for the osu and wsu simulations respectively totalling reported configurations validation testing and discussion qualitative comparison to truth figures and compare the intrinsic pois discovered by our framework agains the simulation s true pois recall that points with low density are discarded as noise not shown direct comparison of the true pois defined by the simulation show clear similarities over the osu simulation in figure b the framework recovers intrinsic pois within buildings no matter its shape the density or closeness to other buildings it also recovers intrinsic pois over parking lots and street intersections around the osu oval some buildings are decomposed into a collection of individual intrinsic pois for example the easternmost large campus building by the northeast corner of the oval contains three separate intrinsic pois one at its entrance by the road another in the center of the building and a third at its back entrance although these labels may not match what sumo assigned they are in some sense more natural it s quite possible for large buildings to have dense isolated areas of people movement looking at the intrinsic pois over the wsu dataset in figure b we find each building in general is recovered as an intrinsic poi and align in shape compared to the shape of the true pois from figure a we also note that the framework determines that some movement within buildings but covering very small areas were found not be significant enough be an intrinsic poi large buildings also showed further decomposition like the osu simulation 
for example in the wsu student union large building in the lower left corner of the figure the framework defines intrinsic poi s at the center and two back exits from the building quantitative comparison to other approaches the proposed intrinsic poi framework measured ari scores of on the osu data set and for the wsu data set note that this is not the maximum ari of any rsl solution but the ari of the solution found using the highest predefined notion of stability determined completely without any knowledge of the surrounding geographical area rsl performed consistently in terms of having low variability compared to other algorithm with overall high similarity to the semantically driven sumo assigned locations figure shows the distribution of the ari for the algorithms we compared our method against with the parameter settings discussed in section the orange line corresponds with the ari of the proposed framework which compares favorably with best possible settings of other algorithms note that although dbscan like others performed well with very specific configurations of minpts and the settings of these parameters is often not very intuitive in unsupervised scenarios where the truth is unknown and thus external measures like ari can not be computed of the hierarchical algorithms we see the impact of the linkage criterion used and how they are influenced by the shape of the true clusters for example sl clustering performed fairly well on the well separated wsu data set but substantially lower on the more osu data set this is reflected in al as well was able capture much of the true clustering structure with the right parameter settings figure the distribution of adjusted rand index ari scores of various clustering algorithms after varying free parameters the orange line corresponds to the ari of the proposed framework for the wsu simulation with ari scores of however exhibited degraded performance when the pois were less separated in osu data set optics with a few specific parameter settings performed well on the wsu data set however again suffered on the more osu data set related research the trajectory field has been largely progressed by extensive and intensive individual efforts nonetheless conceptual models have been proposed for how to deal with the patterns within trajectories and how to relate such patterns to geographical areas of interest for various purposes one such model postulates that trajectories and their spatiotemporal patterns are essentially driven by the semantics the application associates with trajectory itself and have contributed significantly to the stops and moves of trajectories smot family of classification algorithms where the premise of the analysis is that by partitioning trajectory data into a labeled set of stop and move segments one can take then annotate these segments with semantic information derive specific mobility patterns and as a result discover interesting locations alvares et al developed to find interesting positions based on semantic annotations describing the places a trajectory visited palma et al reduced s reliance on prior knowledge about positions that are likely to be interesting by incorporating the speed at which tracks are traveling with their variation finds clusters of common trajectories based on similar direction changes and stopping points zhou et al tackle the problem of finding positions of interest to an individual track based on data about a track s location preferences position over time and tags of locations provided by 
web services such as google maps many related efforts encode or are reliant on varying notions of an interesting place using for example techniques from natural language processing nlp data clustering sequential pattern mining and social network analysis methods zheng et al pioneered the use of stay points corresponding to an aggregation of consecutive gps points that collectively are within a time and distance threshold thereby characterizing a virtual location it is interesting to note that zheng et al used optics to create a hierarchical clustering stay points for an application with microsoft called geolife indeed zheng anticipated a number of developments in the trajectory mining field the theoretical cluster tree may be viewed as a more statistically based conception of the tree based hierarchical graph that is used to represent pois in that application as well it s worth noting that our definition of an intrinsic poi having a more theoretical foundation in clustering is both conceptually very similar to optics and computationally more recently to hierarchical dbscan hdbscan there exist a number of commonalities between both optics and dbscan and the theory of the cluster tree a comprehensive exposition of this relationship is beyond the scope of this paper see campello et al for a thorough review of the subject although it s not mentioned in such efforts the usage of rsl with a relative excess of mass functional to cluster extraction is equivalent to the flat clusters hdbscan extracts with a setting of and k minpts however the asymptotic consistency of the setting of the pair k d log n has not been established when alpha k must be much larger exponential dimensionality of the data set thus we use rsl with concluding remarks this paper proposed a general framework for intrinsic poi discovery without needing to rely on external gazetteers based on recent theoretical advances in hierarchical nearest neighbor density estimation it discussed a conceptually sound basis for automated poi discovery specifically in the context of geospatial data and introduced a framework that provides a rigorous and usable solution to an applied domain primarily dominated by intuitively reasonable but methods with novel extensions to sumo to support pedestrian movement in buildings an evaluation of simulated trajectory data over diverse geographical areas supports the conclusion that the proposed framework is a useful tool for extracting intrinsic pois the framework has both theoretical guarantees and practical benefits requires no ad hoc parameter tuning and exhibits improved fidelity against common approaches over thousands of parameter settings in future work with the help of the asymptotic analysis done by chaudhuri et al we plan to develop techniques for poi extraction this is imperative in exploratory settings such as large urban environments where the number of pois is not known ahead of time there is little useful knowledge to gain from ad hoc or cluster analysis especially when the solution space is large by relating the concept of a poi to the theory of the cluster tree rsl and associated estimators enable future theoretical work may further augment models reliant on poi data such as location recommendation systems collaborative filtering techniques or social networking models built from poi data such the the social networks reviewed in references luis otavio alvares vania bogorny bart kuijpers jose antonio fernandes de macedo bart moelans and alejandro vaisman a model for enriching trajectories 
with semantic geographical information in proc of the annual acm intl symposium on advances in geographic information systems acm mihael ankerst markus m breunig kriegel and sander optics ordering points to identify the clustering structure acm sigmod record olatz arbelaitz ibai gurrutxaga javier muguerza m and perona an extensive comparative study of cluster validity indices pattern recognition shumeet baluja reducing vehicle emissions via machine learning for traffic signal program selection maike buchin anne driemel marc van kreveld and vera adinolfi segmenting trajectories a framework and algorithms using spatiotemporal criteria journal of spatial information science ricardo jgb campello davoud moulavi and sander clustering based on hierarchical density estimates in conference on knowledge discovery and data mining ricardo jgb campello davoud moulavi arthur zimek and sander a framework for and unsupervised optimal extraction of clusters from hierarchies data mining and knowledge discovery ricardo jgb campello davoud moulavi arthur zimek and joerg sander hierarchical density estimates for data clustering visualization and outlier detection acm trans on knowledge discovery from data aileen y chang maria e parrales javier jimenez magdalena e sobieszczyk scott m hammer david j copenhaver and rajan p kulkarni combining google earth and gis mapping technologies in a dengue surveillance system for developing countries intl journal of health geographics kamalika chaudhuri and sanjoy dasgupta rates of convergence for the cluster tree in advances in neural information processing systems chen jisu kim sivaraman balakrishnan alessandro rinaldo and larry wasserman statistical inference for cluster trees arxiv preprint martin ester kriegel sander xiaowei xu et al a densitybased algorithm for discovering clusters in large spatial databases with noise in proc on intl conf on knowledge discovery and data mining flavio figueiredo bruno ribeiro jussara m almeida and christos faloutsos tribeflow mining predicting user trajectories in proc of the intl conference on world wide web chris fraley and adrian e raftery clustering discriminant analysis and density estimation amer statist assoc lorenzo gabrielli salvatore rinzivillo francesco ronzano and daniel villatoro from tweets to semantic trajectories mining anomalous urban mobility patterns in citizen in sensor networks springer tobias gindele sebastian brechtel and dillmann a probabilistic model for estimating driver behaviors and vehicle trajectories in traffic environments in intl ieee conference on intelligent transportation systems ieee marta c gonzalez cesar a hidalgo and barabasi understanding individual human mobility patterns nature isabelle guyon ulrike von luxburg and robert c williamson clustering science or art in nips workshop on clustering theory john a hartigan consistency of single linkage for clusters amer statist assoc congwei hu wu chen yongqi chen and dajie liu adaptive kalman filtering for vehicle navigation journal of global positioning systems lawrence hubert and phipps arabie comparing partitions journal of classification leonard kaufman and peter rousseeuw clustering by means of medoids daniel krajzewicz jakob erdmann michael behrisch and laura bieker recent development and applications of sumo simulation of urban mobility intl journal on advances in systems and measurements december liang xu liu jia tao song bo guan zhao xiao wu and ke jia he tradbscan a algorithm of clustering trajectories in applied mechanics and materials vol 
trans tech publ siyuan liu shuhui wang kasthuri jayarajah archan misra and ramayya krishnan todmis mining communities from trajectories in proceedings of the acm international conference on information knowledge management cikm acm new york ny usa dietrich werner and sawitzki excess mass estimates and tests for multimodality amer statist assoc fionn murtagh and pierre legendre wardfis hierarchical agglomerative clustering method which algorithms implement wardfis criterion journal of classification andrey tietbohl palma vania bogorny bart kuijpers and luis otavio alvares a approach for discovering interesting places in trajectories in proc of the acm symposium on applied computing acm christine parent stefano spaccapietra and esteban conceptual modeling for traditional and applications the mads approach springer science business media park hong and cho recommendation system using bayesian userfis preference model in mobile devices in intl conference on ubiquitous intelligence and computing springer marco pavan stefano mizzaro ivan scagnetto and andrea beggiato finding important locations a approach in ieee intl conference on conf on mobile data management vol jose antonio mr rocha valeria c times gabriel oliveira luis o alvares and vania bogorny a clustering method in intelligent systems ieee intl conference ieee stefano spaccapietra christine parent maria luisa damiani jose antonio de macedo fabio porto and christelle vangenot a conceptual view on trajectories data knowledge engineering goce trajcevski roberto tamassia hui ding peter scheuermann and isabel f cruz continuous probabilistic queries for uncertain trajectories in proc of the intl conference on extending database technology advances in database technology acm md reaz uddin chinya ravishankar and vassilis j tsotras finding regions of interest from trajectory data in ieee intl conference on mobile data management vol ieee kirsi virrantaus jouni markkula artem garmash vagan terziyan jari veijalainen artem katanosov and henry tirri developing services in proc of the second intl conference on web information systems engineering vol ieee ulrike von luxburg robert c williamson and isabelle guyon clustering science or art in icml unsupervised and transfer learning xiangye xiao yu zheng qiong luo and xing xie inferring social ties between users with human location history journal of ambient intelligence and humanized computing josh ying lee weng and vincent s tseng semantic trajectory mining for location prediction in proc of the acm sigspatial intl conference on advances in geographic information systems acm ping zhang qing deng xiaodong liu rui yang and hui zhang spatiotemporal trajectory pattern recognition by intelligent sensor devices ieee access yu zheng social networks users in computing with spatial trajectories springer yu zheng trajectory data mining an overview acm trans on intelligent systems and technology yu zheng xing xie and ma geolife a collaborative social networking service among user location and trajectory ieee data eng bull yu zheng lizhu zhang xing xie and ma mining interesting locations and travel sequences from gps trajectories in proc of the intl conference on world wide web acm changqing zhou dan frankowski pamela ludford shashi shekhar and loren terveen discovering personally meaningful places an interactive clustering approach acm trans on information systems
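the pipeline described above can be illustrated with a short, self-contained python sketch. this is a minimal illustration rather than the authors' implementation: the stay-point speed and duration thresholds, the choice k = 12 and alpha = sqrt(2), the synthetic two-cluster data, and the single flat cut with fcluster are assumptions made for the example; in the framework proper the flat cut would be replaced by the stability-based (relative excess of mass) extraction of campello et al. the sketch aggregates consecutive low-speed samples of a trajectory into mean-coordinate exemplars, builds the rsl hierarchy by running single linkage on the radius at which each rsl edge first appears, and scores a flat labelling against known labels with the adjusted rand index.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import adjusted_rand_score


def stay_point_exemplars(times, xy, speed_thresh=0.5, min_duration=30.0):
    """one possible aggregation function: collapse consecutive low-speed samples
    of a single trajectory into mean-coordinate exemplars
    (the thresholds here are illustrative, not values from the experiments)."""
    times, xy = np.asarray(times, float), np.asarray(xy, float)
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) / np.maximum(np.diff(times), 1e-9)
    slow = speed < speed_thresh
    exemplars, start = [], None
    for i, s in enumerate(slow):
        if s and start is None:
            start = i                                   # a stop segment begins
        if start is not None and (not s or i == len(slow) - 1):
            end = i if not s else i + 1                 # last sample of the segment
            if times[end] - times[start] >= min_duration:
                exemplars.append(xy[start:end + 1].mean(axis=0))
            start = None
    return np.array(exemplars)


def rsl_cluster_tree(X, k, alpha):
    """rsl hierarchy via single linkage: the edge (x_i, x_j) first appears at
    r = max(r_k(x_i), r_k(x_j), ||x_i - x_j|| / alpha)."""
    D = squareform(pdist(X))
    core = np.sort(D, axis=1)[:, k - 1]                 # r_k: smallest ball around x holding k points (x included)
    M = np.maximum(np.maximum(core[:, None], core[None, :]), D / alpha)
    np.fill_diagonal(M, 0.0)
    return linkage(squareform(M, checks=False), method="single")


# toy exemplar set: two dense stop regions plus uniform background noise;
# in a real run X would be the stack of stay_point_exemplars(...) over all trajectories
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.3, size=(150, 2)),
               rng.normal([5, 5], 0.3, size=(150, 2)),
               rng.uniform(-2, 7, size=(30, 2))])
truth = np.r_[np.zeros(150), np.ones(150), np.full(30, 2)]

k = 12                        # on the order of d * log(n), as suggested by the analysis discussed above
alpha = np.sqrt(2.0)          # assumed value of the recommended constant setting
Z = rsl_cluster_tree(X, k, alpha)
labels = fcluster(Z, t=1.0, criterion="distance")       # single flat cut, for illustration only
print("ari against ground truth:", adjusted_rand_score(truth, labels))

the key step is that the rsl edge (x_i, x_j) first becomes available at r = max(r_k(x_i), r_k(x_j), ||x_i - x_j|| / alpha), so single linkage on that quantity reproduces the rsl connected components at every level; with alpha = 1 this is essentially the mutual reachability distance used by hdbscan (min_samples playing the role of k), matching the equivalence noted in the related-research discussion. a second small sketch, reusing the toy data above, shows the quantile-driven parameter sweep used in the experiments to score a competing algorithm by ari; the grids and the 50 percent noise cutoff are illustrative stand-ins, not the exact values used in the experiments.

from sklearn.cluster import DBSCAN

eps_grid = np.quantile(pdist(X), np.linspace(0.01, 0.20, 10))   # distance-quantile thresholds
minpts_grid = [5, 10, 20, 40]                                   # stand-in for quantiles of exemplars per true poi

scores = []
for minpts in minpts_grid:
    for eps in eps_grid:
        labels = DBSCAN(eps=eps, min_samples=minpts).fit_predict(X)
        if np.mean(labels != -1) >= 0.5:                # keep only solutions that label most points (not noise)
            scores.append((adjusted_rand_score(truth, labels), minpts, eps))
print("best dbscan setting by ari:", max(scores) if scores else "none valid")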
2
critical parameters in particle swarm optimisation nov michael adam erskine thomas joyce institute for perception action and behaviour school of informatics the university of edinburgh crichton st edinburgh scotland abstract particle swarm optimisation is a metaheuristic algorithm which finds reasonable solutions in a wide range of applied problems if suitable parameters are used we study the properties of the algorithm in the framework of random dynamical systems which due to the swarm dynamics yields analytical results for the stability properties of the particles such considerations predict a relationship between the parameters of the algorithm that marks the edge between convergent and divergent behaviours comparison with simulations indicates that the algorithm performs best near this margin of instability pso introduction particle swarm optimisation pso is a metaheuristic algorithm which is widely used to solve search and optimisation tasks it employs a number of particles as a swarm of potential solutions each particles shares knowledge about the current overall best solution and also retains a memory of the best solution it has encountered itself previously otherwise the particles after random initialisation obey a linear dynamics of the following form vi xi t pi xi t g xi t xi t vi here xi t and vi t i n t represent respectively the position in the search space and the velocity vector of the particle in the swarm at time the velocity update contains an inertial term parameterised by and includes attractive forces towards the personal best location pi and towards the globally best location g which are parameterised by and and respectively the symbols and denote diagonal matrices whose entries are uniformly distributed in the unit interval the number of particles n is quite low in most applications usually amounting to a few dozens in order to function as an optimiser the algorithm uses a nonnegative cost function f rd r where without loss of generality f is assumed at an optimal solution in many problems where pso is applied there are also states with costs can be considered as good solutions the cost function is evaluated for the state of each particle at each time step if f xi t is better than f pi then the personal best pi is replaced by xi t similarly if one of the particles arrives at a state with a cost less than f g then g is replaced in all particles by the position of the particle that has discovered the new solution if its velocity is a particle will depart from the current best location but it may still have a chance to return guided by the force terms in the dynamics numerous modifications and variants have been proposed since the algorithm s inception and it continues to enjoy widespread usage ref groups around pso papers into discernible application areas google scholar reveals over results for particle swarm optimisation in total and for the year in the next section we will report observations from a simulation of a particle swarm and move on to a standard matrix formulation of the swarm dynamics in order to describe some of the existing corresponding author analytical work on pso in sect we will argue for a formulation of pso as a random dynamical system which will enable us to derive a novel exact characterisation of the dynamics of system which will then be generalised towards the more realistic case of a swarm in sect we will compare the theoretical predictions with simulations on a representative set of benchmark functions finally in sect we will discuss the 
assumption we have made in the theoretical solution in sect and address the applicability of our results to other metaheuristic algorithms and to practical optimisation problems swarm dynamics empirical properties the success of the algorithm in locating good solutions depends on the dynamics of the particles in the state space of the problem in contrast to many evolution strategies it is not straight forward to interpret the particle swarm as following a landscape defined by the cost function unless the current best positions p or g change the particles do not interact with each other and follow an intrinsic dynamics that does not even indirectly obtain any gradient information the particle dynamics depends on the parameterisation of the eq to obtain the best result one needs to select parameter settings that achieve a balance between the particles exploiting the knowledge of good known locations and exploring regions of the problem space that have not been visited before parameter values often need to be experimentally determined and poor selection may result in premature convergence of the swarm to poor local minima or in a divergence of the particles towards regions that are irrelevant for the problem empirically we can execute pso against a variety of problem functions with a range of and values typically the algorithm shows performance of the form depicted in fig the best solutions found show a curved relationship between and with at small and at small large values of both and are found to cause the particles to diverge leading to results far from optimality while at small values for both parameters the particles converge to a nearby solution which sometimes is acceptable for other cost functions similar relationships are observed in numerical tests see sect unless no good solutions found due to problem complexity or run time limits see sect for simple cost functions such as a single well potential there are also parameter combinations with small and small will usually lead to good results the choice of and at constant may have an effect for some cost functions but does not seem to have a big effect in most cases matrix formulation in order to analyse the behaviour of the algorithm it is convenient to use a matrix formulation by inserting the velocity explicitly in the second equation m zt p p g g with z v x and id where id is the unit matrix in d dimensions note that the two occurrence of in eq refer to the same realisation of the random variable similarly the two s are the same realisation but different from since the second and third term on the right in eq are constant most of the time the analysis of the algorithm can focus on the properties of the matrix m in spite of its wide applicability pso has not been subject to deeper theoretical study which may be due to the multiplicative noise in the simple dynamics in previous studies the effect of the noise has largely been ignored analytical results an early exploration of the pso dynamics considered a single particle in a space where the personal and global best locations were taken to be the same the random components were figure typical pso performance as a function of its and parameters here a particle swarm was run for pairs of and values cost function here was the d rotated rastrigin function each parameter pair was repeated times and the minimal costs after iterations were averaged replaced by their averages such that apart from random initialisation the algorithm was deterministic varying the parameters was shown to result 
in a range of periodic motions and divergent behaviour for the case of the addition of the random vectors was seen as beneficial as it adds noise to the deterministic search control of velocity not requiring the enforcement of an arbitrary maximum value as in ref is derived in an analytical manner by here eigenvalues derived from the dynamic matrix of a simplified version of the pso algorithm are used to imply various search behaviours thus again the case is expected to diverge for various cyclic and motions are shown to exist for a version of the algorithm in ref again a single particle was considered in a one dimensional problem space using a deterministic version of pso setting the eigenvalues of the system were determined as functions of and a combined which leads to three conditions the particle is shown to converge when and harmonic oscillations occur for and a zigzag motion is expected if and as with the preceding papers the discussion of the random numbers in the algorithm views them purely as enhancing the search capabilities by adding a drunken walk to the particle motions their replacement by expectation values was thus believed to simplify the analysis with no loss of generality we show in this contribution that the iterated use of these random factors and in fact adds a further level of complexity to the dynamics of the swarm which affects the behaviour of the algorithm in a way in ref these factors were given some consideration regions of convergence and divergence separated by a curved line were predicted this line separating these regions an equation for which is given in ref fails to include some parameter settings that lead to convergent swarms our analytical solution of the stability problem for the swarm dynamics explains why parameter settings derived from the deterministic approaches are not in line with experiences from practical tests for this purpose we will now formulate the pso algorithm as a random dynamical system and present an analytical solution for the swarm dynamics in a simplified but representative case critical swarm conditions for a single particle pso as a random dynamical system as in refs the dynamics of the particle swarm will be studied here as well in the case this can be justified because the particles interact only via the global best position such that while g is unchanged single particles exhibit qualitatively the same dynamics as in the swarm for the case we have necessarily p g such that shift invariance allows us to set both to zero which leads us to the following is given by the formulation of the pso dynamics m zt extending earlier approaches we will explicitly consider the randomness of the dynamics instead of averages over and we consider a random dynamical system with dynamical matrices m chosen from the set rij for i j and rii id with r being in both rows the same realisation of a random diagonal matrix that combines the effects of and the parameter is the sum with and as the diagonal elements of and are uniformly distributed in the distribution of the random variable rii ii ii in eq is given by a convolution of two uniform random variables namely r max max if r min if min r max if max r if the variable r and r otherwise r has a tent shape for and a box shape in the limits of either or the case where the swarm does not obtain information about the fitness function will not be considered here we expect that the pso is well represented by the simplified version for or the latter case being irrelevant in practice for deviations from the 
theory may occur because in the case p and g will be different for most particles we will discuss this as well as the effects of the switching of the dynamics at discovery of better solutions in sect marginal stability while the swarm does not discover any new solutions its dynamical properties are determined by an infinite product of matrices from the set m such products have been studied for several decades and have found applications in physics biology and economics here they provide a convenient way to explicitly model the stochasticity of the swarm dynamics such that we can claim that the performance of pso is determined by the stability properties of the random dynamical system since the equation is linear the analysis can be restricted to vectors on the unit sphere in the v x space to unit vectors a x v k x v k where k k denotes the euclidean norm unless the set of matrices shares the same eigenvectors which is not the case here standard stability analysis in terms of eigenvalues is not applicable instead we will use means from the theory of random matrix products in order to decide whether the set of matrices is stochastically contractive the properties of the asymptotic dynamics can be described based on a double lebesgue integral over the unit sphere s and the set m as in lyapunov exponents the effect of the dynamics is measured in logarithmic units in order to account for multiplicative action z z a m log km ak if a is negative the algorithm will converge to p with probability while for positive a arbitrarily large fluctuations are possible while the measure for the inner integral is given by eq we have to determine the stationary distribution on the unit sphere for the outer integral it is given as the solution of the integral equation z z a b m a m km bk a b s the existence of the invariant measure requires the dynamics to be ergodic which is ensured if at least some of elements of m have complex eigenvalues such as being the case for see above this condition excludes a small region in the parameters space at small values of such that there we have to take all ergodic components into account there are not more than two components which due to symmetry have the same stability properties it depends on the parameters and and differs strongly from a homogenous distribution see fig for a few examples in the case d critical parameters are obtained from eq by the relation figure stationary distribution a on the unit circle a in the x v plane for a system for and the distribution with peak near is for otherwise main peaks are highest for largest solving eq is difficult in higher dimensions so we rely on the linearity of the system when considering the d as representative the curve in fig represents the solution of eq for d and for other settings of and the distribution of the random factors has a smaller variance rendering the dynamics more stable such that the contour moves towards larger parameter values see fig inside the contour is negative meaning that the state will approach the origin with probability along the contour and in the outside region large state fluctuations are possible interesting parameter values are expected near the curve where due to a coexistence of stable and unstable dynamics induced by different sequences of random matrices a theoretically optimal combination of exploration and exploitation is possible for specific problems however deviations from the critical curve can be expected to be beneficial personal best global best due to linearity the particle 
swarm update rule is subject to a scaling invariance which was already used in eq we now consider the consequences of linearity for the case where personal best and global best differ p for an interval where pi and g remain unchanged the particle i with personal best pi will behave like a particle in a swarm where together with x and v pi is also scaled by a factor the approximation of the lyapunov exponent see eq t log hk xt vt ki t figure solution of eq representing a single particle in one dimension with a fixed best value at g p the curve that has higher on the right magenta is for the other curve green is for except for the regions near where numerical instabilities can occur a simulation produces an indistinguishable curve in the simulation we tracked the probability of a particle to either reach a small region near the origin or to escape beyond a radius of after starting from a random location on the unit circle along the curve both probabilities are equal will be changed by an amount of log by the scaling although this has no effect on the asymptotic behaviour we will have to expect an effect on the stability of the swarm for finite times which may be relevant for practical applications for the same parameters the swarm will be more stable if and less stable for provided that the initial conditions are scaled in the same way likewise if kpk is increased then the critical contour will move inwards see fig note that in this figure the low number of iterations lead to a few erroneous trials at parameter pairs outside the outer contour which have been omitted here we also do not consider the behaviour near which is complex but irrelevant for pso the contour can be seen as the limit such that only an increase of kpk is relevant for comparison with the theoretical stability result when comparing the stability results with numerical simulations for real optimisation problems we will need to take into account the effects caused by differences between p and g in a swarm with finite runtimes optimisation of benchmark functions metaheuristic algorithms are often tested in competition against benchmark functions designed to present different problem space characteristics the functions contain a mix of unimodal basic multimodal and composite functions the domain of the functions in this test set are all defined to be d where d is the dimensionality of the problem particles were initialised within the same domain we use problems throughout our implementation of pso performed no spatial or velocity clamping in all trials a swarm of particles was used we repeated the algorithm times on each occasion allowing iterations to pass before recording the best solution found by the swarm for the competition fitness evaluation were allowed which corresponds to iterations with particles other iteration numbers were included for comparison this protocol was carried out for pairs of and this was repeated for all functions the averaged solution costs as a function of the two parameters showed curved valleys similar to that in fig for all problems for each function we obtain different best values along or near the theoretical curve there appears to be no preferable location within the valley some individual functions yield best performance near this is not the case near although the global average performance over all test functions is better in the valley near than near see fig figure best parameter regions for blue green and magenta iterations for more iterations the region shifts towards the critical line 
cost averaged over runs and cec benchmark functions the red outer curve represents the zero lyapunov exponent for n d at medium values of the difference between the analytical solutions for the cases and is strongest see fig in simulations this shows to a lesser extent thus revealing a shortcoming of the approximation because in the case p and g are often different the resulting vector will have a smaller norm than in the case where p the case p g violates a the assumption of the theory the dynamics can be described based unit vectors while a particle far away from both p and g will behave as predicted from the case at length scales smaller than kp gk the retractive forces will tend to be reduced such that the inertia becomes more effective and the particle is locally less stable which shows numerically in optimal parameters that are smaller than predicted discussion relevance of criticality our analytical approach predicts a locus of and pairings that maintain the critical behaviour of the pso swarm outside this line the swarm will diverge unless steps are taken to constrain it inside the swarm will eventually converge to a single solution in order to locate a solution within the search space the swarm needs to converge at some point so the line represents an upper bound on the mix that a swarm manifests for parameters on the critical line fluctuations are still arbitrary large therefore subcritical parameter values can be preferable if the settling time is of the same order as the scheduled runtime of the algorithm if in addition a typical length scale of the problem is known then the finite standard deviation of the particles in the stable parameter region can be used to decide about the distance of the parameter values from the critical curve these dynamical quantities can be approximately set based on the theory presented here such that a precise control of the behaviour of the algorithm is in principle possible the observation of the distribution of empirically optimal parameter values along the critical curve confirms the expectation that critical or behaviour is the main reason for success of the algorithm critical fluctuations are a plausible tool in search problem if apart from certain smoothness assumption nothing is known about the cost landscape the majority of excursions will exploit the smoothness of the cost function by local search whereas the fat tails of the distribution allow the particles to escape from local minima figure for p g we define neutral stability as the equilibrium between divergence and convergence convergence means here that the particle approaches the line connecting p and curves are for a problem with p and g scaled see sect by outer curve and inner curve results are for iterations and averaged over repetitions switching dynamics at discovery of better solutions eq shows that the discovery of a better solution affects only the constant terms of the linear dynamics of a particle whereas its dynamical properties are governed by the linear coefficient matrices however in the time step after a particle has found a new solution the corresponding force term in the dynamics is zero see eq such that the particle dynamics slows down compared to the theoretical solution which assumes a finite distance from the best position at all finite times as this affects usually only one particle at a time and because new discoveries tend to become rarer over time this effect will be small in the asymptotic dynamics although it could justify the empirical optimality of 
parameters in the unstable region for some test cases the question is nevertheless how often these changes occur a weakly converging swarm can still produce good results if it often discovers better solutions by means of the fluctuations it performs before settling into the current best position for cost functions that are not deceptive where local optima tend to be near better optima parameter values far inside the critical contour see fig may give good results while in other cases more exploration is needed the role of personal best and global best a numerical scan of the plane shows a valley of good fitness values which at small fixed positive is roughly linear and described by the relation const only the joint parameter matters for large and accordingly small predicted optimal values the valley is less straight this may be because the effect of the known solutions is relatively weak so the interaction of the two components becomes more important in other words if the movement of the particles is mainly due to inertia then the relation between the global and local best is while at low inertia the particles can adjust their p vectors quickly towards the g vector such that both terms become interchangeable finally we should mention that more particles longer runtime as well as lower search space dimension increase the potential for exploration they all lead to the empirically determined optimal parameters being closer to the critical curve conclusion pso is a widely used optimisation scheme which is theoretically not well understood existing theory concentrates on a deterministic version of the algorithm which does not possess useful exploration capabilities we have studied the algorithm by means of a product of random matrices which allows us to predict useful parameter ranges and may allow for more precise settings if a typical length scale of the problem is known a weakness of the current approach is that it focuses on the standard pso which is known to include biases that are not necessarily justifiable and to be outperformed on benchmark set and in practical applications by many of the existing pso variants similar analyses are certainly possible and are expected to be carried out for some of the variants even though the field of metaheuristic search is often portrayed as largely inert to theoretical advances if the dynamics of particle swarms is better understood the algorithms may become useful as efficient particle filters which have many applications beyond heuristic optimisation acknowledgments this work was supported by the engineering and physical sciences research council epsrc grant number references kennedy and eberhart particle swarm optimization in proceedings ieee international conference on neural networks volume pages ieee poli analysis of the publications on the applications of particle swarm optimisation journal of artificial evolution and applications http kennedy the behavior of particles in porto saravanan and eiben editors evolutionary programming vii pages springer clerc and kennedy the particle stability and convergence in a multidimensional complex space ieee transactions on evolutionary computation trelea the particle swarm optimization algorithm convergence analysis and parameter selection information processing letters jiang luo and yang stagnation analysis in particle swarm optimization in swarm intelligence symposium sis ieee pages ieee cleghorn and p engelbrecht a generalized theoretical deterministic particle swarm model swarm intelligence furstenberg 
and kesten products of random matrices annals of mathematical statistics tutubalin on limit theorems for the product of random matrices theory of probability its applications khas minskii necessary and sufficient conditions for the asymptotic stability of linear stochastic systems theory of probability its applications clerc confinements and biases in particle swarm optimisation technical report open archive hal spears green and spears biases in particle swarm optimization international journal of swarm intelligence research
9
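The particle-swarm excerpt ending above (label 9) analyses stability through the top Lyapunov exponent of products of random 2x2 matrices acting on z = (v, x) for a single particle with p = g = 0. The snippet below is a minimal numerical sketch of that criterion, not the authors' code: it assumes the common PSO update v_{t+1} = omega*v_t + a1*R1*(p - x_t) + a2*R2*(g - x_t), x_{t+1} = x_t + v_{t+1}, so that with p = g = 0 the dynamics is z_{t+1} = M_t z_t with M_t = [[omega, -r_t], [omega, 1 - r_t]] and r_t = a1*U1 + a2*U2 a fresh sum of two uniforms each step (the tent-shaped variable described in the text). The Lyapunov exponent is estimated by the standard renormalisation method, which relies on the ergodicity assumption made in the excerpt; the function name, the a1 = a2 = alpha/2 split, and the grid of (omega, alpha) values are illustrative choices.

```python
import numpy as np

def lyapunov_exponent(omega, a1, a2, n_steps=200_000, seed=0):
    """Monte Carlo estimate of the top Lyapunov exponent of the simplified
    1-D PSO dynamics z_{t+1} = M_t z_t with p = g = 0, where z = (v, x),
    M_t = [[omega, -r_t], [omega, 1 - r_t]] and r_t = a1*U1 + a2*U2 with
    U1, U2 ~ Uniform(0, 1) drawn independently at every step.
    lambda < 0 -> the particle contracts towards the best position;
    lambda > 0 -> arbitrarily large fluctuations are possible."""
    rng = np.random.default_rng(seed)
    z = np.array([0.0, 1.0])           # start on the unit sphere in (v, x)
    log_growth = 0.0
    for _ in range(n_steps):
        r = a1 * rng.random() + a2 * rng.random()
        M = np.array([[omega, -r],
                      [omega, 1.0 - r]])
        z = M @ z
        norm = np.linalg.norm(z)
        log_growth += np.log(norm)
        z /= norm                      # renormalise to avoid overflow/underflow
    return log_growth / n_steps

if __name__ == "__main__":
    # illustrative scan of (omega, alpha) pairs with a1 = a2 = alpha / 2
    for omega in (0.4, 0.7, 1.0):
        for alpha in (1.0, 2.0, 4.0):
            lam = lyapunov_exponent(omega, alpha / 2, alpha / 2, n_steps=50_000)
            print(f"omega={omega:.1f} alpha={alpha:.1f} lambda={lam:+.3f}")
```

Parameter pairs for which the estimate is negative lie inside the critical contour discussed in the excerpt, and the zero level set traces the marginal-stability curve along which the empirically best parameter settings were observed to cluster.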
on the detection of low rank matrices in the regime antoine chevreuil and philippe loubaton gaspard monge computer science laboratory ligm umr cnrs de bd descartes france apr abstract we address the detection of a low rank n ndeterministic matrix from the noisy observation z when n where z is a complex gaussian random matrix with independent identically distributed nc entries thanks to large random matrix theory results it is now that if the largest singular value of verifies then it is possible to exhibit consistent tests in this contribution we prove a contrario that under the condition there are no consistent tests our proof is inspired by previous works devoted to the case of rank matrices index terms statistical detection tests large random matrices large deviation principle i ntroduction the problem of testing whether an observed matrix y is either a independent identically distributed gaussian random matrix z with variance or z for some low rank deterministic matrix with no known structure called also a spike is a fundamental problem arising in numerous applications such as the detection of multivariate signals or the gaussian hidden clique problem when the two dimensions converge towards in such a way that c the rank of remaining fixed known results on the additive spiked large random matrix models have enabled to this fundamental detection problem see it was established a long time ago see and the references therein that in the above asymptotic regime the largest singular value z of z converges almost surely towards more recently under mild technical extra assumptions proved that z still converges towards c if converges towards a limit strictly less than on if the limit of is strictly greater than then converges towards a limit strictly greater than this result implies that the generalized likelihood ratio test glrt is consistent both the probability of false alarm and the probability of missed detection converge towards in the above asymptotic regime if and only if is above the threshold in order to simplify the exposition we assume from now on that n so that ratio c reduces to while the detection problem was extensively addressed in the zone the case where was much less studied montanari et al consider the zone when is a rank matrix thanks to simple information geometry tools prove that in this region it is impossible to find a consistent test for the detection of the spike irrespective of the standard random matrix tools this approach is extended to the more general case when and z are tensors of order d namely if the frobenius norm of the tensor is stricly less than a threshold depending in d then the probability distributions of the observation under the two hypotheses are asymptotically undistinguishable so that any detection test can not behave better than a random guess this property which is stronger than the of a consistent test does not hold in the matrix case d see for instance where a test is exhibited that has a better performance than a random guess in this paper we extend the above methodolodgy to the general case where has rank our contribution is to prove that under the consistent detection is impossible while this theoretical result is not unexpected we believe that it provides a better understanding of the above fundamental detection problem in large dimensions without resorting to the machinery of large random matrices we mention that the works when the spike is symmetric and case are clearly related to the above problem however two major differences arise 
firstly the detection is not addressed but rather the estimation of second a statistical model of the spike is needed the results are in general not explicit however for a certain prior and for the rank one model it can be deduced that in the zone it is impossible to find estimates of the spike that have better performance than any dummy estimate an estimate that does not rely on the observation the authors rely on the computation of the mutual information between and y this computation involves results extending the approach of tallagrand for studying the model ii m odel notation asumption the set of matrices a complex endowed with the standard scalar product hx yi tr and the frobenius norm kxkf hx xi the spectral norm of a matrix x is denoted by the spike the signal is assumed to be a matrix of fixed rank r and hence admits a svd such as r x uj where are the singular values of sorted in descending order and where is the diagonal matrix gathering the r in the descending order as has to be defined for any n we impose a behavior of namely that all its singular values r do not depend on n for n large enough this hypothesis could be replaced by the condition that r all converge towards a finite limit at an ad hoc rate however this would introduce purely technical difficulties the noise matrix z is assumed to have entries distributed as nc we consider the alternative y z versus y z we denote by n y the probability probability density of y under and n y the density n y of y under l y n y is the likelihood ratio and we denote by the expectation under we now recall the fundamental information geometry results used in in order to address the detection following properties are well known see also section bounded i if l y then no consistent detection test exists ii if moroever l y o the total variation distance between n and n converges towards and no test performs better than a decision at random we however mention that i and ii are only sufficient conditions in particular l y unbounded does not imply the existence of consistent tests iii p rior on the spike e xpression of the second order moment the density of z seen as a collection of random variables is obviously n z exp kzkf where on the one hand we notice that the study of the the likelihood ratio is not suited to the deterministic model of the spike as presented previously indeed in this case e l y has the simple expression exp kf and always diverges on the other hand the noise matrix shows an invariance property if are unitary n n matrices then the density of equals this of z we hence modify the data according to the procedure we pick two independent unitary according to the haar measure which corresponds to the uniform distribution on the set of all unitary n n matrices and change the data tensor y according to as said above this does not affect the distribution of the noise but this amounts to assume a certain prior on the spike indeed this amounts to replace ui by ui and vi by vi in the following the data and the noise tensors after this procedure are still denoted respectively by y and z we are now in position to give a expression of the moment of l y we have n y ex n y x where ex is the mathematical expectation over the prior distribution of the spike or equivalently over the haar matrices it holds that l y e exp hx i where the expectation is over independent copies of the spike r stands for the real part x and x being respectively associated with and l y has the expression h e exp as and are haar and independent then and are also 
independent haar distributed and it holds l y e exp where the expectation is over the independent haar matrices and rtr the ultimate simplification comes from the decomposition which implies that rtr where u and it is clear that and are independent matrices that are both distributed as the upper r r diagonal block of a haar unitary matrix iv r esult the main result of our contribution is the following theorem if then lim sup l y and it is not possible to find a consistent test we remind that we are looking for a condition on due to this is a condition on under which e exp is bounded evidently the divergence may occur only when we hence consider e exp and e exp and prove that for a certain small enough to be specified later o and that is bounded t he term computation of the grf of it is clear that the boundedness of the integral is achieved when rarely deviates from as remarked in the natural machinery to consider is this of the large deviation principle ldp in essence if follows the ldp with rate n there can be found a certain function called good rate function grf such that for any borel set a of r n log p a converges towards x the existence of a grf allows one to analyze the asymptotic behaviour of the integral in the next section we thus justify that follows a large deviation principle with rate n and we compute the associated grf computation of the grf of pr eq and the inequality imply that the random variable is bounded with we first recall that for i the random matrix follows a ldp with rate n and that its grf at the parameter is log det ir see theorem in besides is a function of the matrices and therefore the contraction principle applies to see theorem in it ensures that follows a ldp with rate n and its grf is such that for each real x is the solution of the following optimization problem problem maximize in log det i log det i under the constraints rtr x i i we provide a solution of problem in this respect we define for each k r the interval ik defined by r ik k x x pr and ir it is easy to check that ik r are disjoint and that ik the following result holds theorem the maximum of problem is given by x r x p log k k k iik it is easy to check that the function x x is continuous on the proof of theorem is provided in the appendix we illustrate theorem through the following experiment the rank of the spike is fixed to r and the singular values have been set to we have computed millions of random samples of the matrices each pair is associated with a point x y defined as x rtr and y log det i i we obtain a cloud of points the upper envelope of which is expected to be x we have also plotted the graph of the function y x in addition we mention that in the more general context of tensors of order d the moment of l y is still given by but the random variable call it has a more complicated form than see the asymptotics of the term can still be studied by evaluating the grf of this grf is the solution of an optimization problem that apparently can not be solved in closed form for d in bound of the opposite of the true grf is computed this upper an upper bound valid for any d is given for d by log we thus also represent in figure this upper bound clearly it is not tight k log det tr fig graph of x seen as an upper envelope upper curve the upper bound computed in computation of the varadhan lemma see theorem in states that log e exp supx x and hence the term converges towards when supx x consider any of the intervals ik defined in the derivative of x for any x ik is x it is decreasing on ik 
and the limit in the left extremity of ik k is simply if then for all the indices k this shows that k k x is strictly decreasing on every ik hence for every x we have x we have proved that o vi t he term concentration of notice that the upper block r r of a unitary haar matrix has the same distribution as g where the matrix has entries distributed as nc and g is the top r block of obviously e ni it is a standard result that a random variable distributed as a n is concentrated around its mean this can be easily extended to the matrix lemma for any there exists a constant c such that p i c exp n we take and independent distributed as consider the upper r r blocks g and of and it and follows that has the same distribution as take now any we may split the integral in two parts e exp i e exp i z z where we have defined the events bi n n o thanks to the above concentration result we have exp p p exp exp as it is always possible to choose and such that and it follows that o let us now inspect the term since we have for i i then there exist for i such that i with we hence have e exp i i we expand i i as the sum of four terms take for instance thanks to von neumann s lemma we have r x k r x as pr ppr r it yields q k r tr invoking the von neumann s lemma three times it holds that q k r tr tr r k tr tr similar manipulations can be done on the other terms of the expansion so that is less than e exp tr with the above expectation is to be understood as the expectation over as and are independent we consider first the expectation over this gives up to the factor exp z exp e tr with e it is always possible to choose such that with such a the above integral is tr exp as tr tr we finally obtain after multiplying by exp and taking the expectation over z exp tr if it is always possible to adjust such that the above integral converges in this condition we have this must be true for all arbitrarily small hence the result a ppendix we prove theorem when x as the function to be maximized converges towards if k or k any argument of the maximization problem satisfies i k i therefore the kkt conditions imply the existence of a scalar lagrange multiplier such that is a stationary point of the lagrangian defined by log det ir i rtr as is a real valued function a stationary point is computedwhen setting the differential the entries of and to zero it can be checked that is a stationary point of when i i in a first step these equations can be shown to be satisfied only if and are diagonal up to permutations of the columns then is can be deduced that there exists a diagonal matrix p i and a matrix of permutation such that log det ir log det ir log det i p and rtr tr p this invites us to consider the following problem maximize log det i p jointly over all the r permutations and over diagonal matrices p verifying p i and the constraint tr p x in a first step we set i in the above problem and consider the problem maximize r x log pi under the constraints that pi for each i r and r x pi x the maximum is denoted by x this is a variant of the celebrated problem see and chap of that was solved to evaluate the capacity of a frequency selective gaussian channel the difference being that in the latter problem log pi is replaced by log pi order to solve problem we assume that the non zero singular values r are distinct if this is not the case a standard perturbation argument can be used in order to address the general case as the function to be maximized is strictly concave on the set defined by the constraints the maximum is 
reached verifying pi pr at a unique point p pr for r each i we consider the lagrangian corresponding to problem given by log pi p i i pi where and for i the partial derivatives parameters pi r are zero at this leads to for i r pi the first remark is that necessarily these equations imply that the numbers pi are sorted in decreasing order to verify this claim we assume that i j and that pi and pj then it holds that and that j because pj implies therefore a contradiction because we denote by s x the number of entries of hence the first s x entries of are non zero morever the equations i for i s x imply that ps x ps x pr we now analytically characterize s x on the one hand computed at for i s x and for i s x both imply x x on the other hand the constraint imposes that verifies ps x i s x therefore it holds that s x x s x s x x x x s x x such that s x coincides with the integer k for which x ik see for the definition of these intervals the maximum ps x log pi is direcly computed as s x ps x x i x log s x s x i in order to show that the grf of is x x it remains to show that the solution of problem is reached when the permutation matrix is the identity in this respect we introduce a nested problem motivated by the following observation we denote by and the vectors whose components are respectively the diagonal entries of and of arranged in the decreasing order evidently majorizes in the sense that for k r k x k x we thus consider the relaxed problem problem maximize log det i p over the diagonal matrices p i and over vectors satisfying the majorization constraint and the equality constraint r x pi x the maximum of problem is above the maximum of problem which is itself above the maximum x of problem we actually show that the maximum of problem is less than x and that it is reached for a vector that coincides with this will imply that the optimal permutation in problem is i and x x we give some elements for solving problem we consider a stationary point of the associated lagrangian and compute the kkt conditions we suppose that this stationary point attains the maximum if s denotes the number of components in we prove that necessarily ps and we let be the this index exists otherwise and the problem is solved this implies that first index such that for all indices i notice this fact if we suppose that the condition is true whatever k then it is possible to add a small and update as in such a way that the majorization constraints still hold the constraint holds and the updated increases the function to maximize this ispin contradiction with the definition of this means that there exists an index we choose the smallest such that it can be shown that it is necessary that all the are equal for i after some algebraic gymnastics it can be shown that it in this case all the inequalities at are saturated hence implying that the value of p i log pi equals x r eferences bai and silverstein spectral analysis of large dimensional random matrices series in statistics j banks moore vershynin verzelen and xu bounds and phase transitions in clustering sparse pca and submatrix localization in ieee international symposium on information theory isit pages june florent and raj rao nadakuditi the singular values and vectors of low rank perturbations of large rectangular random matrices journal of multivariate analysis bianchi debbah and najim performance of statistical tests for single source detection using random matrix theory ieee transactions on information theory chevreuil and ph loubaton on the of spiked large 
random tensors arxiv cover and thomas elements of information theory edition wiley interscience dembo and zeitouni large deviations techniques and applications berlin heidelberg fabrice gamboa and alain rouault spectral measures and large deviations of stat planning and inference marc lelarge and miolane fundamental limits of symmetric matrix estimation miolane fundamental limits of matrix estimation the case mirsky a trace inequality of john von neumann monatshefte mathematik dec andrea montanari daniel reichman and ofer zeitouni on the limitation of spectral methods from the gaussian hidden clique problem to rank one perturbations of gaussian tensors ieee trans inf march nadakuditi and edelman sample eigenvalue based detection of signals in white noise using relatively few samples ieee transactions on signal processing onatski moreira and hallin asymptotic power of sphericity tests for data ann statistics michel talagrand mean field models for spin glasses book subtitle volume i basic examples berlin heidelberg witsenhausen a determinant maximization problem occuring in the theory of data communications siam appl math
10
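As a companion to the low-rank detection paper ending above (label 10), the sketch below simulates the GLRT-style test that the cited random-matrix results suggest. It is an illustration of the regime discussed in that paper's introduction, not code from it, and it fixes one common normalisation as an assumption: the n x n complex Gaussian noise has per-entry variance sigma^2 / n, so with c = 1 the top singular value of the noise alone concentrates near sigma*(1 + sqrt(c)) = 2*sigma, and the detector flags the spiked hypothesis when the top singular value of Y exceeds that bulk edge. The rank-one spike, the sample size, and the small decision margin are choices made for the demo only.

```python
import numpy as np

def top_singular_value(Y):
    # largest singular value of Y, used as the GLRT-style test statistic
    return np.linalg.svd(Y, compute_uv=False)[0]

def spiked_sample(n, sigma, theta, rng):
    """One draw of Y = P + Z with a rank-one spike P = theta * u v^H and
    complex Gaussian noise whose entries have variance sigma^2 / n, so the
    top singular value of Z alone sits near 2*sigma in this square setting."""
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Z *= sigma / np.sqrt(2.0 * n)          # per-entry variance sigma^2 / n
    u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    u /= np.linalg.norm(u)
    v /= np.linalg.norm(v)
    P = theta * np.outer(u, v.conj())
    return P + Z

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, sigma = 500, 1.0
    bulk_edge = sigma * 2.0                # sigma * (1 + sqrt(c)) with c = 1
    for theta in (0.0, 0.5, 1.5):          # theta = 0.0 is pure noise (H0)
        s1 = top_singular_value(spiked_sample(n, sigma, theta, rng))
        verdict = "H1" if s1 > bulk_edge + 0.05 else "H0"
        print(f"theta={theta:.1f}  s_max={s1:.3f}  decision={verdict}")
```

For a rank-one spike the BBP-type threshold is sigma * c^{1/4} (equal to sigma here): a spike strength theta below it leaves the top singular value at the bulk edge, consistent with the claim that consistent detection is impossible below the threshold, while theta above it separates cleanly from the noise.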
recurrent neural network language models for open vocabulary cyber anomaly detection aaron ryan nicolas brian nicole and robert pacific northwest national laboratory richland washington western washington university bellingham washington abstract automated analysis methods are crucial aids for monitoring and defending a network to protect the sensitive or confidential data it hosts this work introduces a flexible powerful and unsupervised approach to detecting anomalous behavior in computer and network logs one that largely eliminates feature engineering employed by existing methods by treating system logs as threads of interleaved sentences event log lines to train online unsupervised neural network language models our approach provides an adaptive model of normal network behavior we compare the effectiveness of both standard and bidirectional recurrent neural network language models at detecting malicious activity within network log data extending these models we introduce a tiered recurrent architecture which provides context by modeling sequences of users actions over time compared to isolation forest and principal components analysis two popular anomaly detection algorithms we observe superior performance on the los alamos national laboratory cyber security dataset for red team detection our best performing model provides test set area under the receiver operator characteristic curve of demonstrating the strong anomaly detection performance of this approach on open vocabulary logging sources introduction to minimize cyber security risks it is essential that organizations be able to rapidly detect and mitigate malicious activity on their computer networks these threats can originate from a variety of sources including malware phishing port scanning etc attacks can lead to unauthorized network access to perpetrate further damage such as theft of credentials intellectual property and other business sensitive information in a typical scenario cyber defenders and network administrators are tasked with sifting through vast amounts of data from various logging sources to assess potential security risks unfortunately the amount of data for even a network can quickly grow beyond the ability of a single person or team to assess leading to delayed response the desire for automated assistance has and continues to encourage research in cyber security and machine learning approaches for automated detection can be highly effective for characterizing individual threats spite their high precision they suffer from low recall and may fail to detect subtle mutations or novel attacks alternatively given an unlabeled training set of typically benign activity logs one can build a model of normal behavior during online joint training and evaluation of this model patterns of normal usage will be reinforced and atypical malicious activity will stand out as anomalous the features used to identify unusual behavior are typically statistical feature vectors associated with time slices vectors of counts for types of activities taking place in a window such systems developed in research have been criticized as brittle to differences in properties of operational networks such as security constraints and variable usage patterns sommer and paxson the approach we introduce aims to minimize assumptions implicit in feature engineering and effectively model variability in network usage by direct online learning of language models over log lines language models assign probabilities to sequences of tokens and are a core 
component of speech recognition machine translation and other language processing systems specifically we explore the effectiveness of several recurrent neural network rnn language models for use in a network anomaly detection system our system dynamically updates the network language model each day based on the previous day s events when the language model assigns a low probability to a it is flagged as anomalous there are several advantages to this approach reduced feature engineering our model acts directly on raw string tokens rather than domainspecific statistics this dramatically reduces the time to deployment and makes it agnostic to the specific network or logging source configuration it also removes the blind spots introduced when tens of thousands of are distilled down to a single aggregated feature vector allowing our model to capture patterns that would have otherwise been lost fine grained assessment the response time for analysts can be improved by providing more specific and relevant events of interest baseline systems that alert to a user s day aggregate require sifting through tens of thousands of actions our approach can provide or even scores to the analyst helping them quickly cate the suspicious activity real time processing with the ability to process events in real time and fixed bounds on memory usage which do not grow over time our approach is suitable for the common scenario in which events are appearing in a log stream we assess our models using the publicly available los alamos national laboratory lanl cyber security dataset which contains real data with ground truth red team attacks and demonstrate language models definitively outperforming standard unsupervised anomaly detection approaches prior work machine learning has been widely explored for network anomaly detection with techniques such as isolation forest gavai et al liu ting and zhou and principal component analysis novakov et al ringberg et al attracting significant interest machine learning classifiers ranging from decision trees to bayes have been used for cyber security tasks such as malware detection network intrusion and insider threat detection extensive discussion of machine learning applications in cyber security is presented in bhattacharyya and kalita buczak and guven dua and du kumar kumar and sachdeva zuech khoshgoftaar and wald lawson and heard deep learning approaches are also gaining adoption for specialized cyber defense tasks in an early use of recurrent neural networks debar becker and siboni model sequences of unix shell commands for network intrusion detection anomaly detection has been demonstrated using deep belief networks on the kdd cup dataset alrawashdeh and purdy and bivens et al use perceptrons for the darpa dataset both approaches use aggregated features and synthetic network data tuor et al and veeramachaneni et al both employ deep neural network autoencoders for unsupervised network anomaly detection using time aggregated statistics as features some works of note have been previously published on the lanl data turcotte heard and kent develop an online statistical model for anomaly detection in network activity using models similarly turcotte et al use poission factorization gopalan hofman and blei on the lanl authentication logs a authentication count matrix is constructed by assuming each count comes from a poisson distribution parameterized by latent factors for users and computers the learned distributions are then used to predict unlikely authentication behavior 
several variants of tiered recurrent networks have been explored in the machine learning and natural language processing communities koutnik et al ling et al ling et al chung et al they are often realized by a lower tier network whose output is fed to an upper tier network and the separate tiers are jointly trained ling et al use a convolutional neural network to feed a word level long memory lstm rnn for machine translation with predictions made at the both hwang and sung and ling et al use a lstm to feed a second word or lstm for language modeling pascanu et al create activity models from real world data on a command basis and sequences of system calls are then modeled using rnn and echo state networks the learned features are used to independently train neural network and logistic regression classifiers max pooling is applied to hidden layers of the unsupervised rnn for each time step in a session and the result is concatenated to the final hidden state to produce feature vectors for the classifier this is similar to our tiered approach in which we use the average of all hidden states concatenated with the final hidden state as input to the rnn in contrast our model is completely unsupervised and all components are jointly trained approach our approach learns normal behavior for users processing a stream of computer and network as follows initialize model weights randomly for each day k in chronological order a given model produce anomaly scores for all events in day k b optionally produce an aggregated anomaly score each user for day k from the scores c send or anomaly scores in rank order to analysts for inspection d update model weights to minimize loss on all in day k yielding model mk this methodology interleaves detection and training in an online fashion in this section we detail the components of our approach tokenization to work directly from arbitrary log formats we treat loglines as sequences of tokens for this work we consider two tokenization granularities and for word tokenization we assume that tokens in the logline are delimited by a known character space or comma after splitting the on this delimiter we define a shared vocabulary of words over all log fields consisting of the tokens appearing in the training set to allow our model to handle previously unseen tokens we add an out of vocbulary token to our vocabulary oov for instance not every ip address will be represented in a training set likewise new pcs and users are continually being added to large networks to ensure that oov has probability we replace sufficiently infrequent tokens in the training data with oov during evaluation tokens not seen before are labeled oov in order to accommodate shifting word distributions in an online environment a fixed size vocabulary could be periodically updated using a sliding window of word frequency statistics for simplicity we assume we have a fixed training set from which we produce a fixed vocabulary to avoid the challenges of managing a vocabulary we also develop language models using a characterlevel tokenization in this case our primitive vocabulary the alphabet of printable ascii characters circumvents the open vocabulary issue by its ability to represent any log entry irrespective of the network logging source or log field with tokenization we keep the delimiter token in the sequence to provide our models with cues to transitions between fields recurrent neural network language models to produce anomaly scores we use recurrent neural networks in two ways as a language 
model over individual and to model the state of a user over time we first present two recurrent models that focus only on and then a tiered model that accomplishes both and both were for our experiments using tensorflow abadi et al event model em first we consider a simple rnn model that operates on the token word sequences of individual events specifically we consider a long memory lstm hochreiter and schmidhuber network whose inputs are token embeddings and from whose output we predict distributions over the next token for a with k tokens each drawn from a shared vocabulary of size c let x k x x x k denote a sequence of representations of the tokens each x t rc in this model the hidden representation at token t h t from which we make our predictions is a function of x x x t according to the usual lstm equations h t c t o t tanh c t f t c i t g t g t f t i t o t tanh x t w g x h w g h b g x t w f x h w f h b f x t w i x h w i h b i x t w o x h w o h b o where the initial hidden and cell states c and h are set to zero vectors and and denote multiplication and logistic sigmoid respectively vector g t is a hidden representation based on the current input and previous hidden state while vectors f t i t and o t are the standard lstm gates the matrices w and bias vectors b are the model parameters we use each h to produce a probability distribution p t over the token at time t as follows p t softmax h w p b p code will soon be available at https p p p p k softmax softmax softmax softmax lstm lstm lstm lstm sos x x x lstm lstm lstm lstm x x x k eos figure event models set of black bordered nodes and connections illustrate the em model while set of all nodes and connections illustrate the bem model we use loss k x h x t p t k for two important purposes first as anomaly score and second as the training objective to update model weights we train this model using stochastic through time bidirectional event model bem following the language model formulation suggested in schuster and paliwal we alternatively model the structure of log lines with a bidirectional lstm we define a new set of hidden vectors hb hb k hb by running the lstm equations backwards in time starting with initial zero cell and hidden states at time k set to zero the weights w and biases b for the backward lstm are denoted with superscript b the probability distribution p t over the token at time t is then b p t softmax h w p hb w p b p tiered event models to incorporate context we propose a recurrent neural network the can be either event model em or bem but with the additional input of a context vector generated by the concatenated to the token embedding at each time step the input to the model is the hidden states of the model this upper tier models the dynamics of user behavior over time producing the context vectors provided to the rnn this model is illustrated in fig in this model x u j denotes user u s jth log line which consists of a sequence of tokens as described in the previous subsections the models a sequence of user log lines x x x u tu using an lstm for each user u and each log line j in the user s log line sequence a lstm is applied to the tokens of x u j the input to the model at j is the concatenation of the final hidden state s and the average of the hidden states in the case of a em context lstm mean lstm lstm final lstm sos context lstm eos mean lstm sos lstm final lstm eos figure tiered event model refers to the hidden state at time k for the bem is the concatenation of the forward hidden state at time k and 
the backward hidden state at time for we average over hidden states primarily to provide many connections in the lstm which aids trainability the output of the lstm at j is a hidden state u j this hidden vector serves to provide context for the model at the next time step specifically u is concatenated to each of the inputs of the model operating on the jth note that the model serves only to propagate context information across individual no loss is computed directly on the values produced by the model the and models are trained jointly to minimize the loss of the model we unroll the model for a fixed number of fully unrolling each of the models within that window the model s loss is also used to detect anomalous behavior as is described further in section minibatching becomes more challenging for the tiered model as the number of per day can vary dramatically between users this poses two problems first it introduces the possibility that the most active users may have a disproportionate impact on model weights second it means that toward the end of the day there may not be enough users to fill the minibatch to counteract the first problem we fix the number of per user per day that the model will train on the remaining are not used in any gradient updates we leave compensating for the inefficiency that results from the second to future work baselines anomaly detection in streaming network logs often relies upon computing statistics over windows of time and applying anomaly detection techniques to those vectors below we describe the aggregate features and two anomaly detection techniques that are typical of prior work aggregate features we first define the set of features which summarize users activities in the day to aggregate the features that have a small number of distinct values logon orientation we count the number of occurrences for each distinct value for the for fields that have a larger number of distinct values pcs users domains we count the number of common and uncommon events that occurred rather than the number of occurrences of each distinct value this approach avoids high dimensional sparse features furthermore we define two categories of to the individual and relative to all users a value is defined as uncommon for the user if it accounts for fewer than of the values observed in that field up to that point in time and common otherwise a value is defined as uncommon for all users if it occurs fewer times than the average value for the field and common otherwise for the lanl dataset the prior featurization strategy yields a aggregate feature vector per userday these feature vectors then serve as the input to the baseline models described next models we consider two baseline models the first uses principal components analysis pca to learn a low dimensional representation of the aggregate features the anomaly score is proportional to the reconstruction error after mapping the compressed representation back into the original dimension shyu et al the second is an isolation forest iso based approach liu ting and zhou as implemented in s outlier detection tools pedregosa et al this was noted as the best performing anomaly detection algorithm in the recent darpa insider threat detection program gavai et al experiments in this section we describe experiments to evaluate the effectiveness of the proposed event modeling algorithms data the los alamos national laboratory lanl cyber security dataset kent consists of event logs from lanl s internal computer network collected over a period 
of consecutive days the data set contains over one billion loglines from authentication process network flow and dns logging sources identifying fields users computers and processes have been anonymized the recorded network activities included both normal operational network activity as well as a series of red team activities that compromised account credentials over the first days of data information about known red team attack events is used only for evaluation our approach is strictly unsupervised for the experiments presented in this paper we rely only on the authentication event logs whose fields and statistics are summarized in figure we filter these events to only those linked to an actual user removing computercomputer interaction events events on weekends and holidays contain drastically different frequencies and distributions of activities in a real deployment a separate model would be trained for use on those days but because no malicious events were in that data it was also withheld table has statistics of our data split the first days serve as the development set while the remaining days are the independent test set assessment granularity our model learns normal behavior and assigns relatively high loss to events that are unexpected a principal advantage of our approach is this ability to score the anomaly of field time source user dest user source pc dest pc auth type logon type auth orient success example negotiate batch logon success unique labels model pca iso em bem em bem a days events attacks dev test w w w w c c c c figure dataset statistics a authentication log fields and statistics and b dataset splits metrics we consider two performance metrics first we assess results using the standard area under the receiver operator characteristic curve auc which characterizes the in model detection performance between true positives and false positives effectively sweeping through all possible analyst budgets false positives are detections that are not truly red team events while true positives are detections that are to quantify the proportion of the data the analyst must sift through to diagnose malicious behavior on the network we use the average percentile ap metric specifically for each red team event or we note the percentile of its anomaly amongst all anomaly scores for the day we then average these percentiles for all of the malicious events or auc ap table granularity test set auc and ap language model anomaly scores calculated with average userday normalization diff b individual events allowing us to flag at the or aggregate anomalies over any larger timescale for this work we consider two timescales first we assess based on individual events a list of events would be presented to the analyst sorted descending by anomaly score second to facilitate comparison with traditional aggregation methods we aggregate anomaly scores over all of a user s events for the day specifically taking the max producing a single anomaly score in this scenario a list of would be provided to the analyst sorted descending by anomaly score we refer to this approach as max because the anomaly scores provided to the analyst are produced by taking the maximum score over the event scores in the window for that user where scoring is just taking the max over a singleton set of one event in order to counter systematic offsets in users anomaly scores for a day we also consider a simple normalization strategy which we refer to as diff by which every raw score is first normalized by subtracting the user 
s average anomaly score for the day tokenization word word word word char char char char em bem em bem log diff max day diff max table comparison of auc for and analysis with and without normalization figures and provide a visualization of these results note that if all true malicious events or are flagged as the most anomalous on the respective days then ap while if all malicious events or are ranked as the least anomalous on their respective days ap for both auc and ap a higher score is better our model hyperparameters were manually tuned to maximize ap for diff scores on the development set no separate training set is needed as our approach is unsupervised and trained online results and analysis we begin by exploring the granularity performance table summarizes model detection performance at this granularity on the test set for the auc and ap metrics using the diff method to produce day level scores from the language models a few trends are evident from these results first the aggregate feature baselines have nearequivalent performance by both metrics with the isolation forest approach having a slight edge we hypothesize the feature representation which is common to these methods could be a bottleneck in performance this highlights the blind spot issue feature engineering introduces second despite having only the context of a single at a time as opposed to features aggregated over an entire day the event model em performs comparably to the baseline models when a forward pass lstm network is used with auc auc diff max log day w em log day w bem log day w log day w diff max log day w em log day w bem log day w log day w figure word model comparison of auc at and granularities figure character model comparison of auc at and granularities a character tokenization and outperforms the baselines with word tokenization the most pronounced performance gain results from using bidirectional models finally tokenization performs better than however even the bidirectional character models perform appreciably better than the baselines it is clear from these results that the tiered models perform comparably to but not better than the models this suggests that the event level model is able to characterize normal user behavior from information stored in the model weights of the network which are trained each day to model user activity given the context of the past day s activity stored in the model weights the categorical variables represented by the fields in an individual log line may eliminate the need for explicit event context modeling we leave tracking the state of individual computers rather than users to future work but hypothesize that it may make the tiered approach more effective next we broaden our analysis of language modeling approaches comparing performance across all language models tokenization strategies anomaly granularity and normalization techniques figure plots auc for all language model types using word tokenization contrasting max and diff normalization modes figure compares the same variations for character tokenization table presents these results in tabular form with few exceptions granularity vastly outperforms this is true for both the and tokenization strategies with an average gain of auc the most interesting outcome of these comparisons is that word tokenization performance gains are heavily reliant on the diff normalization whereas for character tokenization the diff normalization has a minor detrimental effect for some models this suggests that the model could be used to 
provide a more immediate response time not having to wait until the day is done to obtain the day statistics used in diff mode the two tokenization strategies may in fact be complementary as the versatility and response time gains of a character tokenization come at the expense of easy interpretibility of a word tokenization the word tokenization allows anomaly scores to be decomposed into individual fields enabling analysts to pinpoint features of the event that most contributed to it being flagged since we tuned hyperparameters using diff mode the model has potential to do better with additional tuning next figures and visualize the average percentiles of red team detections for the subset of the test set with the most activity anomaly scores for both word and character tokenizations are computed without average userday offset normalization red team scores are plotted as purple x s with the x coordinate being the second in time at which the event occurred and y coordinate the anomaly score for that event percentile ranges are colored to provide context for the anomaly scores against the backdrop of other network activity the spread of anomaly scores is much greater for the tokenizations fig than fig which could explain the different sensitivity of word level tokenization to normalization also notice that there is an expected bump in percentiles for windows of frequent redteam activity curiously at the end of day there are massive bumps for the percentile which suggest unplanned and anomalous events on the lanl network for those hours notice that for the character tokenization almost all red team anomaly scores are above the percentile with a large proportion above the percentile finally figure plots the roc curves for the best aggregate baseline iso the best granularity language model word bem and the best granularity model character bem it illustrates the qualitatively different curves obtained with the baselines the granularity and the granularity since the proportion of to normal events is vanishingly low in the the rate is effectively the proportion of data flagged to achieve a particular recall from this observation figure shows the character event model can achieve recall from only of the data whereas the other models considered only achieve recall when nearly all of the data has been percentile percentile anomaly score pm am pm ay d pm pm ay d am d ay pm pm d ay am figure anomaly scores in relation to percentiles over time true positive iso agg auc w bem day auc c bem auc false positive figure roc curves for best performing baseline word language model evaluated at and character language model evaluated at handed to the analyst further the character event model can achieve recall by flagging only of the data whereas the word day language model needs of the data and the aggregate isolation forest model needs of the data to achieve the same result conclusion this work builds upon advances in language modeling to address computer security log analysis proposing an unsupervised online anomaly detection approach we eliminate the usual feature engineering stage making our approach fast to deploy and agnostic to the system configuration and monitoring tools it further confers the key advantage of detection which allows for a near immediate alert response following anomalous activity in experiments using the los alamos national laboratory cyber security dataset bidirectional language models significantly outperformed standard methods at figure anomaly scores in relation to percentiles 
over time detection the best detection performance was achieved with a bidirectional language model obtaining a area under the roc curve showing that for the constrained language domain of network logs character based language modeling can achieve comparable accuracy to word based modeling for event level detection we have therefore demonstrated a simple and effective approach to modeling dynamic networks with open vocabulary logs with new users pcs or ip addresses we propose to extend this work in several ways first potential modeling advantages of tiered architectures merit further investigation the use of tiered architectures to track pcs instead of network users or from a richer set of logging sources other than simply authentication logs may take better advantage of their modeling power next we anticipate interpretability can become lost with such detailed granularity provided by detection from a characterbased model therefore future work will explore alternate methods of providing context to an analyst finally we are interested in exploring the robustness of this approach to adversarial tampering similarly performing models could have different levels of resilience that would lead to selection of one over another acknowledgments the research described in this paper is part of the analysis in motion initiative at pacific northwest national laboratory it was conducted under the laboratory directed research and development program at pnnl a national laboratory operated by battelle for the department of energy the authors would also like to thank the nvidia corporation for their donations of titan x and titan xp gpus used in this research references abadi et al abadi agarwal barham brevdo chen citro corrado davis dean devin ghemawat goodfellow harp irving isard jia jozefowicz kaiser kudlur levenberg monga moore murray olah schuster shlens steiner sutskever talwar tucker vanhoucke vasudevan vinyals warden wattenberg wicke yu and zheng x tensorflow largescale machine learning on heterogeneous systems software available from alrawashdeh and purdy alrawashdeh and purdy toward an online anomaly intrusion detection system based on deep learning in machine learning and applications icmla ieee international conference on ieee bhattacharyya and kalita bhattacharyya and kalita network anomaly detection a machine learning perspective crc press bivens et al bivens palagiri smith szymanski embrechts et al networkbased intrusion detection using neural networks intelligent engineering systems through artificial neural networks buczak and guven buczak and guven a survey of data mining and machine learning methods for cyber security intrusion detection ieee communications surveys tutorials chung et al chung gulcehre cho and bengio y gated feedback recurrent neural networks in international conference on machine learning debar becker and siboni debar becker and siboni a neural network component for an intrusion detection system in proc ieee symposium on research in security and privacy dua and du dua and du x data mining and machine learning in cybersecurity crc press gavai et al gavai sricharan gunning hanley singhal and rolleston supervised and unsupervised methods to detect insider threat from enterprise social and online activity data journal of wireless mobile networks ubiquitous computing and dependable applications gopalan hofman and blei gopalan hofman and blei scalable recommendation with poisson factorization arxiv preprint hochreiter and schmidhuber hochreiter and schmidhuber j long memory 
neural computation hwang and sung hwang and sung language modeling with hierarchical recurrent neural networks arxiv preprint kent kent cyber security data sources for dynamic network research dynamic networks and koutnik et al koutnik greff gomez and schmidhuber j a clockwork rnn arxiv preprint kumar kumar and sachdeva kumar kumar and sachdeva the use of artificial intelligence based techniques for intrusion detection a review artificial intelligence review ling et al ling marujo astudillo amir dyer black and trancoso i finding function in form compositional character models for open vocabulary word representation arxiv preprint ling et al ling trancoso dyer and black neural machine translation arxiv preprint liu ting and zhou liu ting and zhou isolation forest in proc icdm novakov et al novakov lung lambadaris and seddigh studies in applying pca and wavelet algorithms for network traffic anomaly detection in high performance switching and routing hpsr ieee international conference on ieee pascanu et al pascanu stokes sanossian marinescu and thomas a malware classification with recurrent networks in acoustics speech and signal processing icassp ieee international conference on ieee pedregosa et al pedregosa varoquaux gramfort michel thirion grisel blondel prettenhofer weiss dubourg vanderplas passos cournapeau brucher perrot and duchesnay machine learning in python journal of machine learning research ringberg et al ringberg soule rexford and diot sensitivity of pca for traffic anomaly detection in sigmetrics lawson and heard rubindelanchy lawson and heard a anomaly detection for cyber security applications dynamic networks and schuster and paliwal schuster and paliwal bidirectional recurrent neural networks ieee transactions on signal processing shyu et al shyu chen sarinnapakorn and chang a novel anomaly detection scheme based on principal component classifier in proc icdm sommer and paxson sommer and paxson outside the closed world on using machine learning for network intrusion detection in proc symposium on security and privacy tuor et al tuor kaplan hutchinson nichols and robinson deep learning for unsupervised insider threat detection in structured cybersecurity data streams in artificial intelligence for cybersecurity workshop at aaai turcotte et al turcotte moore heard and mcphall a poisson factorization for anomaly detection in intelligence and security informatics isi ieee conference on ieee turcotte heard and kent turcotte heard and kent modelling user behavior in a network using computer event logs dynamic networks and veeramachaneni et al veeramachaneni arnaldo korrapati bassias and li ai training a big data machine to defend in proc hpsc and ids zuech khoshgoftaar and wald zuech khoshgoftaar and wald intrusion detection and big heterogeneous data a survey journal of big data
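The two score post-processing modes described in the log-analysis text above — "max" aggregation of a user's event scores over a day, and the "diff" normalization that subtracts the user's average anomaly score for the day — can be sketched as follows. This is a minimal illustration, not the authors' code; the (user, day, score) triple layout is a hypothetical stand-in for the anonymized log fields.

```python
from collections import defaultdict

def group_by_user_day(events):
    """Group raw per-event anomaly scores by (user, day).

    `events` is an iterable of (user, day, score) triples; these field
    names are hypothetical stand-ins for the anonymized log fields.
    """
    grouped = defaultdict(list)
    for user, day, score in events:
        grouped[(user, day)].append(score)
    return grouped

def max_aggregate(grouped):
    """'max' mode: one score per user-day, the maximum over that user's
    event scores for the day."""
    return {key: max(scores) for key, scores in grouped.items()}

def diff_normalize(grouped):
    """'diff' mode: subtract the user's average score for the day from
    every raw event score, countering systematic per-user offsets."""
    normalized = {}
    for key, scores in grouped.items():
        mean = sum(scores) / len(scores)
        normalized[key] = [s - mean for s in scores]
    return normalized
```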
9
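In the same hedged spirit, the two evaluation quantities used in the log-analysis text above admit short sketches: the average-percentile (AP) metric, which records the percentile of each red-team event's anomaly score among all scores for its day and then averages over the malicious events, and the fraction of data an analyst must inspect to reach a given recall. Function and argument names here are illustrative, not taken from the paper.

```python
import numpy as np

def average_percentile(day_scores, red_team_events):
    """Average-percentile (AP) metric.

    `day_scores` maps a day to the array of all anomaly scores seen that
    day; `red_team_events` is a list of (day, score) pairs for the known
    malicious events.  For each malicious event we record the percentile
    of its score among that day's scores, then average; 100 means every
    malicious event was ranked most anomalous on its day.
    """
    percentiles = [
        (np.asarray(day_scores[day]) <= score).mean() * 100.0
        for day, score in red_team_events
    ]
    return float(np.mean(percentiles))

def fraction_flagged_for_recall(scores, labels, target_recall=0.9):
    """Fraction of all events an analyst must inspect, working down a
    list sorted by anomaly score, before `target_recall` of the
    malicious events (labels == 1) have been seen."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]
    hits = np.cumsum(labels)
    needed = int(np.searchsorted(hits, target_recall * labels.sum())) + 1
    return needed / labels.size
```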
ieee signal processing letters look wider to match image patches with convolutional neural networks sep haesol park and kyoung mu lee a human matches two images the viewer has a natural tendency to view the wide area around the target pixel to obtain clues of right correspondence however designing a matching cost function that works on a large window in the same way is difficult the cost function is typically not intelligent enough to discard the information irrelevant to the target pixel resulting in undesirable artifacts in this paper we propose a novel convolutional neural network cnn module to learn a stereo matching cost with a window unlike conventional pooling layers with strides the proposed layer can cover a large area without a loss of resolution and detail therefore the learned matching cost function can successfully utilize the information from a large area without introducing the fattening effect the proposed method is robust despite the presence of weak textures depth discontinuity illumination and exposure difference the proposed method achieves performance on the middlebury benchmark index matching pooling cnn i ntroduction most stereo matching methods first compute the matching cost of each pixel with a certain disparity before optimizing the whole cost volume either globally or locally by using specific prior knowledge for decades many researchers have focused on the second step designing a good prior function and optimizing it few studies have been conducted on designing or selecting a better matching cost function one of the most widely used matching cost functions is a matching cost function such as the one used in along with sophisticated prior models it sometimes produces good results especially in preserving the detailed structures near the disparity discontinuities however the function fails when the image contains areas or repetitive textures in such cases a matching cost such as census or sad produces a more reliable and distinctive measurement the critical shortcoming of matching cost functions is their unreliability around disparity discontinuities figure visually illustrates the characteristics of different matching cost measures one method to handle this is to make the windowbased versatile to its input patterns the key idea is making the shape of the matching template adaptive so that it can discard the information from the pixels that are irrelevant to the target pixel however knowing the background pixels before the actual matching is difficult making it a park and lee are with automation and systems research institute seoul national university seoul korea matching cost for pixelwise matching cost for sad sad census census proposed fig the top image shows the reference image with two interested points and the pixel positions are marked as blue dots whereas the red and green boxes represent and windows centered on them respectively at the bottom the matching costs for each pixel are visualized as a normalized function of disparity for different matching cost functions the positions of true disparities are marked as red vertical lines the pixelwise cost shows the lowest values at the true disparity but it also gives zero costs for other disparities the sad and census matching cost functions become less ambiguous as the matching window becomes larger however these functions are affected by pixels irrelevant to the target pixel even the matching cost learned by using the baseline convolutional neural network cnn architecture fails when the surface has a nearly 
flat texture on the other hand the proposed cnn architecture works well both on weakly textured regions and disparity discontinuities problem therefore the use of a cnn is appropriate as it automatically learns the proper shape of the templates for each input pattern the existing methods however are based on conventional cnn architectures resembling the alexnet or vgg network which are optimized for image classification task and not for image matching the architectures comprise several convolution layers each followed by a rectified linear unit relu and pooling layers with strides one of the limitations of using these architectures for matching is the difficulty of enlarging the size of the patches that are to be compared the effective size of the patch is directly related to ieee signal processing letters the spatial extent of the receptive field of cnn which can be increased by including a few strided layers using larger convolution kernels at each layer or increasing the number of layers however use of strided layers makes the results downsampled losing fine details although the resolution can be recovered by applying convolution reconstructing small or thin structures is still difficult if once they are lost after downsampling increasing the size of the kernels is also problematic as the number of feature maps required to represent the larger patterns increases significantly furthermore a previous study reported that the repetitive usage of small convolutions does not always result in a large receptive field this paper contributes to the literature by proposing a novel cnn module to learn a better matching cost function the module is an innovative pooling scheme that enables a cnn to view a larger area without losing the fine details and without increasing the computational complexity during test times the experiments show that the use of the proposed module improves the performance of the baseline network showing competitive results on the middlebury benchmark ii r elated w orks given the introduction of stereo datasets with the disparity maps many attempts have been made to learn a matching cost function using machine learning algorithms the most impressive results are obtained by using cnn the architecture proposed in takes a small window and processes it without the use of pooling the computed cost volume is noisy due to the limited size of the window thus it is by using the crossbased cost aggregation cbca matching sgm and additional refinement procedures on the other hand the method in uses multiple pooling layers and spp to process larger patches however the results show a fattening effect owing to the loss of information introduced by pooling the main contribution of this paper is in proposing a novel pooling scheme that can handle information from a large receptive field without losing the fine details recently several attempts have been made to accomplish the same goal in the context of semantic segmentation these methods combine the feature maps from the highlevel layers with those from the lower layers with the aim of correctly aligning the information along the details while this approach can successfully align the boundaries of the big objects its inherent limitation is its inability to recover small objects in the final output once they are lost during the abstraction due to multiple uses of pooling in the same context the flownet architecture can upsample the flow to the original scale by using feature maps however it fails to recover the extreme flow elements that are 
hidden due to the low resolution of feature maps the architecture most closely related to the current work has been proposed in unlike the other approaches the p fig the module with pooling size vector s is visualized this figure shows its action for one channel of the feature maps for brevity it does the same job for all channels spp network excludes pooling layers between convolutional layers instead it first computes feature maps by cascading convolutional layers several times and then generates and information by pooling them at different scales by keeping the original feature maps along with feature maps pooled at multiple scales the spp network can combine the features from multiple levels without losing fine details although the previously mentioned stereo method in uses spp it also employs conventional pooling layers between convolutional layers thus losing the detailed information iii a rchitecture of the n eural n etwork the proposed architecture takes two input patches and produces the corresponding matching cost in the following subsections the newly proposed module is first introduced then the detailed architecture of the entire network is presented pyramid pooling the use of pooling layers in cnn has been considered desirable because of its accuracy and efficiency in image classification tasks while the use of layers has been reported to provide an additional invariance in spatial transformation the most important gain comes from the downsampling of feature maps by performing pooling with a stride that is larger than one the output feature maps after the pooling are scaled down the final scale of the cnn output is decreased exponentially in terms of the number of pooling layers given that no parameters related to a pooling operation exist this method is an effective way to widen the receptive field area of a cnn without increasing the number of parameters the drawback of strided pooling is that the network loses fine details in the original feature maps as the pooling is applied thus a exists in seeing a larger area and preserving the small details inspired by the idea discussed in we propose a novel pooling scheme to overcome this instead of using a small pooling window with a stride a large pooling window is used to achieve the desired size of the receptive field the use of one large pooling window can lead to the loss of finer details thus multiple poolings with varying window sizes are performed and the outputs are concatenated to ieee signal processing letters matching score matching score conv sigmoid conv sigmoid table i t he quantitative results on the training dense set of the m iddlebury benchmark are shown t he error represents the percentage of bad pixels with a disparity threshold and the same weighting scheme is applied as in when computing the average conv relu conv conv relu conv relu conv relu conv relu conv relu conv relu methods wta after conv relu conv relu conv relu conv relu conv relu conv relu conv relu conv relu conv relu conv relu l r baseline l r proposed fig the network structures are visualized for the baseline network and the proposed network the parenthesized numbers at each layer represent the number of feature maps after the corresponding operations note that this figure is drawn in terms of the fully convolutional network create new feature maps the resulting feature maps contain the information from scales the pooling operation is performed for every pixel without strides we call this whole procedure pyramid pooling which is formally defined 
as follows p f s p f p f sm where s is a vector having m number of elements and p f si is the pooling operation with size si and stride one the structure of this module is illustrated in figure b proposed model to validate the effect of the proposed module we trained and tested cnns with and without the module the baseline architecture is selected as the the module in the proposed architecture is constructed by using the size vector s the structures of two cnns are visualized in figure iv i mplementation d etails for a fair comparison we followed the details in to train the proposed architecture with a few exceptions mentioned below first the size of the training patch became furthermore we only the parameters of the last three convolution layers of the proposed architecture in figure the parameters of the earlier layers are borrowed from the network in our experiments this resulted in a better performance than the training of the network with random initializations moreover training a few convolution layers with features is easier making it converge faster we have run a avg error proposed proposed parameters in proposed parameter tuning total of four epochs of training where the last two epochs were run with a decreased learning rate from to we also used the same pipeline as in during the test phase the pipeline includes the use of the cbca and sgm and the disparity maps are refined to have continuous values and undergo median filtering and bilateral filtering e xperiments to verify the effect of the proposed module we have compared the results of the baseline and proposed network the performance is measured using the training dense set of the middlebury benchmark the quantitative results are briefly summarized in table i using the average errors all experiments are performed by using the intel core cpu and a single nvidia geforce gtx titan x gpu the proposed method outperforms the baseline architecture regardless of the use of the benefit of using the module is clear when the disparity maps are obtained by using the wta rule without any given that the images in the dataset contain many areas the window can not distinguish the true matches from false ones without the aid of on the other hand the proposed architecture effectively sees the larger window by inserting the module before the final decision layer it is less straightforward to understand why the proposed architecture still outperforms the baseline even after postprocessing in that sense it is worth to mention that the best parameter setting for of the proposed method largely differ from that of the the most notable changes from the original parameter setting is that we use much less number of cbca and it means that multiple uses of cbca become redundant in the proposed architecture from this fact we can interpret the role of module as adaptive local feature aggregation compared to the algorithm such as cbca the influence of neighboring pixels to a certain pixel is automatically learned following the conventions in the best parameter setting is as follows and ieee signal processing letters true disparity and left image proposed fig the results for playtablep and vintage are visualized for each datum the upper row shows the disparity map and the bottom row shows the corresponding error maps while the shows errors around the areas such as the surfaces of the chair and the table in playtablep or the white wall in vintage the proposed method shows more reliable results and it can be jointly trained with the cost function itself 
furthermore the information exchange among pixels is done in feature space which contains richer contextual information than the final cost volume space note that the improvement over the baseline clearly results neither from the use of extra layers nor from the use of more parameters as the authors of already have shown that the additional use of fc layers is less significant using two additional fc layers leads to an improvement of approximately whereas using the module results in a improvement in terms of average error the main contribution of the proposed method lies in learning a less ambiguous matching cost function by inspecting a larger area figure shows that the proposed network actually works better around the area than the the quantitative and qualitative results of each dataset including the ones in the test dense set are available at the middlebury benchmark website vi c onclusions viewing a large area to estimate the dense pixel correspondence is necessary to fully utilize the texture information to achieve less ambiguous and more accurate matching a conventional matching cost function fails because neighboring pixels on the same surface as the target pixel are unknown in this paper a novel cnn module is proposed to make the cnn structure handle a large image patch without losing the small details which enables it to learn an intelligent matching cost function for windows the learned cost function can discriminate the false matches for areas or repeating textures and can also conserve the disparity discontinuities the learned cost function achieves competitive performance on the middlebury benchmark ieee signal processing letters r eferences scharstein and szeliski a taxonomy and evaluation of dense stereo correspondence algorithms ijcv vol no pp kolmogorov and zabih computing visual correspondence with occlusions using graph cuts in iccv vol ieee pp stereo processing by semiglobal matching and mutual information pami vol no pp woodford torr reid and fitzgibbon global stereo reconstruction under smoothness priors pami vol no pp rhemann hosni bleyer rother and gelautz fast filtering for visual correspondence and beyond in cvpr ieee pp yang a cost aggregation method for stereo matching in cvpr ieee pp birchfield and tomasi depth discontinuities by stereo international journal of computer vision vol no pp hirschmuller and scharstein evaluation of stereo matching costs on images with radiometric differences ieee transactions on pattern analysis and machine intelligence vol no pp and scharstein evaluation of cost functions for stereo matching in cvpr ieee pp wang adaptive stereo matching algorithm based on edge detection in icip vol ieee pp yoon and kweon adaptive approach for correspondence search pami vol no pp tombari mattoccia stefano and addimanda classification and evaluation of cost aggregation methods for stereo correspondence in cvpr ieee pp and lecun stereo matching by training a convolutional neural network to compare image patches the journal of machine learning research vol no pp zagoruyko and komodakis learning to compare image patches via convolutional neural networks in cvpr june pp krizhevsky sutskever and hinton imagenet classification with deep convolutional neural networks in advances in neural information processing systems pp simonyan and zisserman very deep convolutional networks for image recognition arxiv preprint radford metz and chintala unsupervised representation learning with deep convolutional generative adversarial networks arxiv preprint zhou 
khosla lapedriza oliva and torralba object detectors emerge in deep scene cnns arxiv preprint scharstein kitajima krathwohl wang and westling stereo datasets with ground truth in pattern recognition springer pp geiger lenz and urtasun are we ready for autonomous driving the kitti vision benchmark suite in cvpr menze and geiger object scene flow for autonomous vehicles in cvpr and pollefeys learning the matching function arxiv preprint zhang lu and lafruit local stereo matching using orthogonal integral images circuits and systems for video technology vol no pp he zhang ren and j sun spatial pyramid pooling in deep convolutional networks for visual recognition in eccv springer pp j long shelhamer and darrell fully convolutional networks for semantic segmentation in cvpr pp hariharan arbelaez girshick and malik hypercolumns for object segmentation and localization in cvpr june pp noh hong and han learning deconvolution network for semantic segmentation arxiv preprint fischer dosovitskiy ilg golkov van der smagt cremers and brox flownet learning optical flow with convolutional networks arxiv preprint
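The pyramid (stride-one, multi-window) pooling module defined in the stereo-matching letter above can be sketched as below. The exact window-size vector was lost in extraction, so the sizes here are assumptions chosen odd so that same-padding preserves resolution, and keeping the unpooled input alongside the pooled maps is likewise an assumption; treat this as an illustration of the idea (pool at several scales with stride one, concatenate along channels), not the authors' implementation.

```python
import torch
import torch.nn as nn

class PyramidPooling(nn.Module):
    """Stride-one pooling at several window sizes, outputs concatenated
    along the channel axis so that no spatial resolution is lost.

    The window sizes are assumptions (the original size vector was lost
    in extraction); odd sizes are used so `padding = k // 2` keeps the
    feature-map size.  `keep_input` controls whether the unpooled input
    is concatenated as well, which is also an assumption.
    """

    def __init__(self, sizes=(3, 5, 9, 17), keep_input=True):
        super().__init__()
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in sizes]
        )
        self.keep_input = keep_input

    def forward(self, feats):                      # feats: (N, C, H, W)
        outs = [feats] if self.keep_input else []
        outs.extend(pool(feats) for pool in self.pools)
        return torch.cat(outs, dim=1)              # (N, C * n_maps, H, W)
```

With the defaults and a 64-channel input, the output has 5 x 64 = 320 channels at the original H x W, so each pixel sees several larger contexts while the fine detail of the original maps is preserved.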
1
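For reference, the two classical window-based matching costs the letter compares against (SAD and a census-transform cost) admit an equally short sketch. Window radii are illustrative, images are assumed grayscale, and no bounds checking is done; this is background illustration rather than the letter's code.

```python
import numpy as np

def sad_cost(left, right, y, x, d, w=2):
    """Sum of absolute differences between the (2w+1)^2 window around
    (y, x) in `left` and around (y, x - d) in `right`."""
    lp = left[y - w:y + w + 1, x - w:x + w + 1].astype(np.float64)
    rp = right[y - w:y + w + 1, x - d - w:x - d + w + 1].astype(np.float64)
    return float(np.abs(lp - rp).sum())

def census_signature(img, y, x, w=3):
    """Census transform: a binary vector recording which neighbours in
    the (2w+1)^2 window are darker than the centre pixel."""
    patch = img[y - w:y + w + 1, x - w:x + w + 1]
    return (patch < img[y, x]).ravel()

def census_cost(left, right, y, x, d, w=3):
    """Hamming distance between the census signatures of the two windows."""
    a = census_signature(left, y, x, w)
    b = census_signature(right, y, x - d, w)
    return int(np.count_nonzero(a != b))
```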
on the asymptotic structure of brownian motions with a small effect apr yuta april abstract this paper considers two brownian motions in a situation where one is correlated to the other with a slight delay we study the problem of estimating the time lag parameter between these brownian motions from their observations which are possibly subject to measurement errors the measurement errors are assumed to be centered gaussian and independent of the latent processes we investigate the asymptotic structure of the likelihood ratio process for this model when the lag parameter is asymptotically infinitesimal we show that the structure of the limit experiment depends on the level of the measurement errors if the measurement errors locally dominate the latent brownian motions the model enjoys the lan property otherwise the limit experiment does not result in typical ones appearing in the literature we also discuss the efficient estimation of the lag parameter to highlight the statistical implications keywords and phrases asymptotic efficiency endogenous noise effect local asymptotic normality microstructure noise introduction let bt t r be a bivariate brownian motion such that e e and e for some also let i be a sequence of bivariate standard normal variables independent of b for each n n we denote by pn the law of the vector zn xn yn generated by the following model xi yi vn if n i v if xi vn yi n i for i n where vn is a number r denotes the unknown parameter which we are interested in especially the sign of is unknown the aim of this paper is to study the asymptotic structure of the sequence of experiments b pn n as n when the time lag parameter is asymptotically infinitesimal tends to as n here and below b m denotes the borel of rm for m n more precisely we study the limit experiment of pn rnu as n for the proper convergence rate rn department of business administration graduate school of social sciences tokyo metropolitan university marunouchi eiraku bldg marunouchi tokyo japan the institute of statistical mathematics tachikawa tokyo japan crest japan science and technology agency if vn model is a special case of the hry model introduced in hoffmann et al to describe effects in financial data a similar model has also been studied in robert and rosenbaum with an asymptotic regime different from the current setting here the effect refers to a situation where one time series is correlated to another time series at a later time which has especially drawn attention in analysis of economic time series data for a long time and associated econometric methods have been developed by many authors see section of hoffmann et al section of robert and rosenbaum and references therein the practicality of the hry model in empirical work has recently been established by several authors such as alsayed and mcgroarty huth and abergel and bollen et al for financial data and iacus et al for social media data these empirical studies show that time lag parameters are typically comparable to the observation frequencies in their scales this motivates us to study the hry model when is small in such a situation one would especially be interested in how small lag parameters can be identified in principle to the author s knowledge however there is few theoretical study for the hry model and in particular nothing has been known about the optimality of statistical inferences for the hry model the purpose of this paper is trying to fill in this gap in this paper as well as a special case of the hry model we also consider a 
situation where the model contains measurement errors this is motivated by the recent studies for the volatility estimation from ultra high frequency financial data which is typically modeled as a discretely observed semimartingale with market microstructure noise we refer to chapter of and jacod for a brief description of this subject in particular the asymptotic structure and the asymptotic efficiency bound have been established in the work of gloter and jacod b see also cai et al for a statistical model of estimating the scale parameter from the discrete observations vn i n where w wt is a standard wiener process and is a sequence of centered standard normal variables independent of w they proved the lan property for the above model and constructed asymptotically efficient estimators for they indeed considered a more general setting extensions of their lan result to a multivariate setting have also been studied by several authors the correlation estimation in a bivariate setting is studied in bibinger while a more general setting containing sampling case is studied in ogihara on the other hand has studied the asymptotic structure of model when is a function of time rather than a constant and established the asymptotic equivalence between such a model and a gaussian white noise model the result has been extended to the bivariate case by bibinger and and a multivariate setting containing case by bibinger et al another type of extension replacing the wiener process w by a different process has also been studied for example sabel and consider the efficient estimation of in a situation where w is a more general gaussian process especially a fractional brownian motion the main contribution of this paper is i to determine the proper convergence rate rn and ii to derive a stochastic expansion for the likelihood ratio process for pn rnu analogously to gloter and jacod the proper convergence rate rn depends on the behavior of the sequence nvn this is intuitively natural because var and thus the behavior of nvn determines how strongly the measurement errors locally dominate the nature of the observed returns in particular we find that rn if vn or more generally if nvn is the rate is much faster than the usual parametric rate and even faster than the rate since the time resolution of our model is our result suggests that we could estimate lag parameters smaller than the time resolution of observation data this implication is at least true for our restrictive situation as shown in section since the convergence rate of the estimator for the lag parameter proposed in hoffmann et al can not be faster than see proposition of hoffmann et al and the discussion after this proposition our result shows that their estimator is suboptimal in the setting considered in this paper although their estimator works in a more general setting given the proper convergence rate we have the following stochastic expansion for the likelihood ratio process there are random variables tn and sn defined on b and numbers and such that dpn rnun p log under as n un tn for any bounded sequence un of real numbers and d tn sn n n under as n therefore by a contiguity argument we deduce that the experiments b pn rn u converge weakly to the experiment b qu in the le cam sense where qu n n see lary the numbers and are determined by the asymptotic behavior of nvn and precisely defined by in particular is always positive while is positive if nvn is bounded and otherwise the case corresponds to the situation where the measurement errors 
locally dominate the signal and in this case our model enjoys the lan property which commonly appears in regular experiments this result is of interest because model exhibits irregularity in the sense that its likelihood function is not smooth in and the limit experiment of such a model typically deviates from the lan structure as illustrated in chapters of ibragimov and has minskii our result means that the measurement errors have a kind of regularizing effect on the asymptotic structure of model on the other hand if which corresponds to the cases where the signal dominates or is balanced with the measurement errors in addition to an observation from a usual gaussian shift experiment n u the limit experiment contains an extra observation from the experiment n although this experiment looks simple to the author s knowledge it does not result in cases such as in ibragimov and has minskii so the definition of asymptotically efficient estimators in this case is not obvious to obtain the asymptotic efficiency bound for estimating the lag parameter in this case in section we apply ibragimov and has minskii s theory to our problem which is a common approach to establish asymptotic efficiency bounds for experiments generated by diffusion type processes see kutoyants for details as a result we find that bayesian estimators are asymptotically efficient while the maximum likelihood estimator is not always asymptotically efficient this is a common phenomenon in irregular models see chapters of ibragimov and has minskii and kutoyants chapter of kutoyants rubin and song and chapter of van der vaart for example this paper is organized as follows section presents the main result of the paper in section we discuss the efficient estimation of the lag parameter in our setting section is devoted to the proof of an auxiliary technical result indeed an intuition for this fact has already been appeared in hoffmann et al see remark general notation em denotes the m matrix for a matrix a we denote by kaksp and kakf its spectral norm and the frobenius norm respectively that is kaksp sup kaxk kxk and tr a also we denote by aij the i j entry of a main result we start with completing the definitions of the quantities rn and appearing in the introduction first following gloter and jacod we assume that the sequence nvn converges in and set lim nvn we also assume lim supn vn as in gloter and jacod then we set p if nn n otherwise nn can be considered as an effective sample size in the sense that the proper convergence rate for estimating from model is given by nn which is seen as the regular parametric rate if we regard nn as the sample size using this effective sample size nn we define our proper convergence rate as rn nn the constants and appearing in are defined by if if if and n o if if if remark is always positive for any this is evident when or when this is proven as follows first suppose that then we have p p p p hence we have on the other hand if applying the above inequality with replacing by p p we obtain hence we have the following statement is our main result theorem there are two sequences tn and sn of random variables satisfying for any bounded sequence un of real numbers we can explicitly give the variables tn and sn in theorem by below theorem has some immediate consequences the first one is the direct consequence of the definition of the lan property corollary if pn has the lan property at with rate rn and asymptotic fisher information the second one follows from le cam s first lemma see lemma of van der 
vaart corollary pn and are mutually contiguous if the sequence of real numbers satisfies o rn the third one is derived from corollary and theorem of strasser we refer to drost et al le cam chapter of strasser and chapters of van der vaart for the definition and applications of the weak convergence of experiments corollary the sequence b pn rn h of experiments converges weakly to the experiment b qu as n where qu n n now we turn to the proof of theorem although pn consists of gaussian distributions the problem is not simple because the covariance matrix cn of pn is a complicated function of the lag parameter in particular cn and cn are not simultaneously diagonalizable in general even asymptotically if this could be troublesome because in analysis of gaussian experiments the asymptotically neous diagonalizability of the covariance matrices of the statistical model for different parameters typically plays an important role cf section of davies lemma of gloter and jacod and lemma of sabel and for this reason we first transfer from the model pn to a more tractable model defined as follows for each n n set r vn r e n the law of the vector z en then for each we denote by p x en yen defined by x where e n if vn p b vn e n if e n e n yei ei b e x en en the covariance matrix of p for i we denote by c e n to be precise the hellinger distance in the following we will show that pn is by p en tends to as n provided that tends to sufficiently fast here the hellinger between pn and p distance h p q between two probability measures p and q on a measurable space x a is defined by s s z dp dq h p q x where is a measure dominating both p and q p q for example it can easily be checked that h p q does not depend on the choice of see appendix of section of strasser and section of tsybakov for more information about the hellinger distance e n expectation with respect to pn resp p en throughout the paper we denote by en resp e e n proposition a if then pn p en for any n n and any b if vn h pn p n e n c if a sequence of positive numbers satisfies o as n then h pn p proof claim c immediately follows from a and b so we focus on a and b by symmetry we may assume e n xi xj for let xn yn be the canonical variables on then we have en xi xj e all i j moreover a simple computation shows that if i j n vn en yi yj n vn otherwise and and e n yi yj e en xi yj vn vn n n if i j e n xi yj e otherwise if i j otherwise en if this yields claim a because both pn and p en are centered therefore we have cn c gaussian on the other hand from the above identities we also have en kcn c n x n n i n x n x i x n j i n x x i n j n n i n j x x n j i j n n j n x j j n j n now if vn cn is positive semidefinite and satisfies kcn ksp vn by the monotonicity theorem for eigenvalues corollary of horn and johnson because cn vn is positive semidefinite therefore by eqs and from we obtain en cn en cn c h pn p f n hence claim b holds true in the following we will frequently use the fact that the hellinger distance dominates the total variation distance v p q h p q where v p q a q a see lemma of tsybakov for the proof the following properties of the total variation distance are immediate consequences of the definition and important for our purpose for each n n let pn and qn be two sequences of probability measures on a measurable space xn an and let be a random variable on xn an taking its value in a metric space then for any a d and any probability measure on d the following statements hold true p p v pn qn a under pn a under qn d d v pn qn under pn 
under qn e n as a tractable form for this purpose we introduce some en of p next we express the covariance matrix c notation the n n matrix denotes the backward difference operator b n z e n x x x x en x yen and denote by we set z b n vn c en vn can explicitly be expressed vn the covariance matrix of z sign as vn gn n en where e n n gn n with gn en vn fn fn n and rn n rn n rn n sign it is more convenient to rewrite the expression as follows let sn and tn be the symmetric and skewn symmetric parts of n respectively that is sn and tn then we set sn tn rn rn sn rn rn so we obtain vn this is a simple function of we can easily check n so vn is more tractable than cn although vn s are not simultaneously diagonalizable for different s it is sufficient to consider a relationship between the matrices and in fact it turns out that the following result is sufficient for our purpose proposition for any r we have ksp o nn as n and lim the proof of proposition consists of elementary but complicated calculations so we postpone it to the appendix section we remark that the proof requires a calculation essentially different from that of the fisher information for the scale parameter estimation from observations of the form such as in gloter and jacod and sabel and see remark also note that proposition yields the invertibility of vn for sufficiently large n if o because vn proof of theorem define the function zbn by setting zbn for then we set o n tn rn z b tr n n n n n n o n sn rn z b tr n n n n n n by virtue of proposition and it suffices to prove the following statements dpn rnun p log under p un tn d e tn sn n n under p follows from proposition and proposition of dalalyan and yoshida on the other hand setting an un we have kan by proposition therefore by proposition and proposition from chapter of le cam we obtain once we show that en rnun dp kan p e log under p un tn dp the strategy of the proof of is the same as that of theorem from davies first by eq of davies for sufficiently large n we have e log det vn rn un log det vn tr vn vn rn un vn kan kf e log det an tr an tr an tr an an an p p note that it holds that an an for sufficiently large n because kan ksp as n by proposition combining this fact with inequality v from appendix ii of davies we obtain e kan ksp kan kf kan ksp kan ksp e for sufficiently large hence proposition yields e next noting that can be rewritten as b bn zbn e bn zbn e where bn vn rn un vn an we obtain from eq of davies vn vn rn un vn vn an f an an f therefore using the identity an p an again we obtain kan kan kan ksp for sufficiently large hence proposition again yields and we obtain we finish this section with some remarks remark it is worth mentioning that we can infer from hoffmann et al why the rate is the proper convergence rate of our model in the case of vn as follows let us set n u x xi yj i j n i n n n for the principle used in hoffmann et al is that u n is close to the true correlation if and only if is close to the true parameter since the accuracy of estimating the correlation parameter is of order n we naturally consider the quantity n u n to measure the distance between u n and n u n would take a large value if is not sufficiently close to the true parameter in fact proposition from hoffmann et al implies that n u n does not diverge if and only if the distance between and the true parameter is at most of order this information allows us to estimate the true parameter with the accuracy of order remark from an econometric point of view proposition is of independent 
interest because the model given by has an economic interpretation different from model this model contains measurement errors correlated to the latent returns b the integrated volatility estimation in the presence of this type of measurement error has been studied by kalnina and linton for example in the market ture theory such a correlation is often explained as an effect of asymmetric information glosten interestingly some economic arguments suggest that such an information asymmetry would cause a effect see chan and chordia et al for instance it would also be worth emphasizing that de jong and schotman connect this type of model with the investigation of price discovery for price discovery processes are closely related to effects as seen in de jong et al and hasbrouck remark our proof of the main result heavily depends on the gaussianity of the model and especially we require the gaussianity of the measurement errors it is obvious that we need some restriction on the distribution of the measurement errors to derive a specific limit experiment in fact if vn and s take their values only in integers then we can completely recover the signal for sufficiently large apart from such a trivial example the recent study of bibinger et al has shown that another specification for the distribution of the measurement errors s in can improve the convergence rate for estimating the scale parameter in the light of the connection between the convergence rates for models we naturally conjecture that a similar specification for the measurement errors would affect the convergence rate for our model this issue is beyond the scope of this paper and left to future research efficient estimation of the lag parameter as an application of the results from the previous section we construct efficient estimators for the lag parameter in the models pn at here we consider a slightly extended setup as follows letting be a sequence of positive numbers tending to and c be a bounded open interval in r we construct efficient estimators for the parameter c in the models pn at every c to make use of the results from the previous section we impose the following condition on o nn and as n under there is a positive integer such that and vn is invertible for any c c and n due to proposition throughout this section we always assume that n is larger than such an remark in a practical point of view the dependence of the parameter on the sampling frequency n is just a theoretical device to control the relative size of compared with n which corresponds to in the asymptotic theory and it is only important whether the asymptotic order condition corresponding to in our case is acceptable as an approximation namely our asymptotic theory concerns whether the parameter is comparable to for some given a fixed sampling frequency n the possible values of change in accordance with the noise level vn and it does not require that the parameter varies in proportion to the sampling frequency this type of asymptotic theory is standard in econometrics for example when one considers the volatility estimation of a financial asset with taking account of rounding one usually lets the rounding level shrink as the sampling frequency increases see rosenbaum li and mykland li et al and sato and kunitomo for example we start with generalizing proposition by a matrix perturbation argument lemma for any r we have sup kvn vn ksp o nn sup kvn vn as n proof setting hn c vn we have vn vn hn c hn c for any c therefore ostrowski s theorem theorem of horn and johnson 
implies that kvn vn ksp khn c hn c ksp ksp and vn k hn c hn c ksp khn c hn c ksp khn c hn c ksp hence proposition implies that the proof is completed once we show that khn c hn c ksp o and khn c hn c ksp o as n since hn c hn c and hn c hn c share the same eigenvalues theorem of horn and johnson and hn c hn c gn gn the desired results follow from proposition and the neumann series representation of hn c hn c using the above result we can prove a uniform version of theorem proposition let tn and sn be defined by then dpn un p un tn log dpn d under pn as n uniformly in c c for any bounded sequence un of real numbers moreover tn sn n n under pn as n uniformly in c proof we can prove the first claim in a similar manner to the proof of theorem using lemma instead d of proposition to prove the second claim it suffices to show that tn sn n n under pn as n for any sequence cn of numbers in c which follows from lemma and inequality from dalalyan and yoshida proposition implies that the experiments pn enjoy the lan property if and do not erwise when the lan property holds true there is a theory to define the asymptotic efficiency of estimators cf section of ibragimov and has minskii a sequence cn of estimators in the experiments pn is said to be asymptotically efficient at c c if the variables cn c converge in law to n under pn as n see definition of ibragimov and has minskii under the lan property this definition of the asymptotic efficiency is supported by several theorems such as the convolution theorem theorem of ibragimov and has minskii and the local asymptotic minimax theorem theorem of ibragimov and has minskii moreover it is that both maximum likelihood and bayesian estimators are asymptotically efficient under very general settings cf chapter iii of ibragimov and has minskii on the other hand if the lan property fails it is generally not obvious how to define the asymptotic efficiency of estimators here we adopt the approach from and kutoyants to define the asymptotic efficiency which is based on theorem of ibragimov and has minskii that derives an asymptotically minimax lower bound from the asymptotic properties of the bayesian estimators as a consequence the bayesian estimators are turned out to be asymptotically efficient now we explain the strategy to obtain asymptotically efficient estimators in our setting as in the previous en rather than the original model pn for this reason section we would like to work with the tractable model p we consider the function based on the former as follows en dp p exp zbn vn zbn ln c dx n det vn c then we consider the quasi maximum likelihood and bayesian estimators based on ln c as our estimators and give en using the general scheme of ibragimov and has minskii their asymptotic behavior in the experiments p see proposition next we consider the case where the lan property holds true and thus en can be transferred to that in pn by proposition c finally we convergence in law in p en pn for sufficiently large n due to and consider the case where we have p n n proposition a hence we can apply the minskii method to define and obtain asymptotically efficient estimators the quasi maximum likelihood estimator qmle is defined as a solution of the equation ln sup ln c note that the above equation always has at least one solution belonging to the closure of c because c ln c is continuous moreover we can choose so that it is measurable by the measurable selection theorem see theorem of pfanzagl also the quasi bayesian estimator qbe for a prior density q c with 
respect to the quadratic loss is defined by z z ln c q c dc cln c q c dc c c where the prior density q is assumed to be continuous and satisfy inf q c q c the corresponding qmle and qbe in the experiments pn are given by and respectively remark since the quantity seems the exact order of the true parameter one may consider that in a practical setting it is difficult to know beforehand and thus it is difficult to use the estimators and however when we construct the estimator can be considered as the maximum order of the true en and cn c c then the estimator can be parameter as follows let us set dp considered as a solution of the equation sup ln therefore in a practical situation c resp inf c can be interpreted as an upper bound resp a lower bound of possible parameters it is often not so difficult to find such bounds in a practical setting and they are typically small as pointed out in the introduction for example we can find them by computing the via hoffmann et al s method as in huth and abergel the same remark can be applied to the estimator because it can be rewritten as z z qn cn cn qn where qn q for cn so qn is a prior density on cn to describe the limit distribution of these estimators we introduce the likelihood ratio process for the limit experiment z u exp u r where and are two mutually independent variables such that n and n then we set if z u if otherwise and uz u du z u du en using the we first give the asymptotic behavior of the estimators and in the experiments p general scheme of ibragimov and has minskii note that in this situation and are true maximum likelihood and bayesian estimators respectively proposition for any compact subset k of c uniformly in c k it holds that c converges in law en and e e n c e for any p as n also uniformly in c k to under p n n n e n and e e n c e for any it holds that r c converges in law to under p n n p as n en u en proof for every c c we set un c u r c rn u c and define zn c u dp n n n for each u un c according to theorems and from ibragimov and has minskii it suffices to prove the following statements p p e n zn c u zn c v a lim supu c e n p e n zn c u b there is a constant such that lim sup sup sup e c n c the marginal distributions of zn c converge in law to the marginal distributions of z uniformly in c k as n c is an immediate consequence of proposition on the other hand by eq from we obtain e n e n q zn c u q zn c v en u p en v h p n n n n u v kvn rn u vn rn u kvn rn u vn rn u hence lemma yields claim a now we consider b by corollary from mathai and provost we have e n an c u log e zn c u log det an c u log det n where an c u vn vn rn u vn then we consider the following decomposition e n z u log e n c n log det an c u tr an c u kan c u kf log det bn c u tr bn c u kbn c u tr an c u kan c u kf tr bn c u kan c u kf kbn c u kf in c u iin c u iiin c u ivn c u where bn c u an c u let us set c vn c vn then we have an c u c c rn u c rn c hence it holds that sup sup kan c u ksp sup sup c ksp c ksp c here we use the fact that u c because of u un c in particular we have c kan c u ksp p k for sufficiently large n by and lemma for such an n we have bn c u an c u and thus kbn c u ksp kan c u ksp kan c u ksp kbn c u kf kan c u kf kan c u ksp for k to obtain the latter estimate where we use the inequality kan c u k kf kan c u kf kan c u ksp therefore for sufficiently large n we have for any c c and any u un c c u kan c u ksp kan c u kan c u c u kbn c u ksp kbn c u kbn c u by appendix v from davies and c u x tr an c u k kan c u kan c u ksp kan c u 
ksp for k as well as by the inequality tr an c u k kan c u kan c u ksp kan c u ivn c u kan c u ksp consequently there is a constant such that for sufficiently large n it holds that e zn c u kan c u log en for any c c and any u un c now we consider giving an upper bound for c u we have c u c u rn tr c c rn c c tr c c therefore noting by lemma for sufficiently large n we have c u for any c c and any u un c consequently we obtain b by setting if is equivalent to the condition that o as n therefore proposition a yields the following result e n s are replaced by pn s corollary if the statement of proposition still holds true while p n n now we return to the efficient estimation of the parameter c in the model pn first we consider the case in this case we know that pn enjoy the lan property at every c c by proposition so the definition of the asymptotic efficiency of an estimator sequence is as explained in the above theorem if both and are asymptotically efficient at every c c in the experiments pn that is both and converge in law to n under pn for any c c as n in particular both and are asymptotically efficient at in the experiments pn next we turn to the case in this case the experiments pn no longer enjoy the lan property so the definition of the asymptotic efficiency is not obvious as explained in the above here we follow the approach of and kutoyants to define the asymptotic efficiency for our experiments we obtain the following result by virtue of corollary and theorem of ibragimov and has minskii theorem if we have lim lim inf sup en c e for any c and any estimator sequence in the experiments pn in particular we also have lim lim inf sup en e for any estimator sequence in the experiments pn thanks to theorem an estimator sequence is said to be asymptotically efficient at c in the ments pn if it holds that lim lim inf sup en c e similarly an estimator sequence is said to be asymptotically efficient at in the experiments pn if it holds that lim lim inf sup en e for any sequence of positive numbers satisfying the following result is an immediate consequence of this definition theorem if the sequence is asymptotically efficient at every c c in the experiments pn in particular the sequence is asymptotically efficient at in the experiments pn in contrast there is no guarantee of the asymptotic efficiency of the q mle if in fact may perform much better than if as shown in the following proposition proposition it holds that s p e arctan z x y e x y dxdy x y where x du and x y denotes the bivariate normal density with standard normal marginals and correlation r in particular e if proof let us denote by the normal density with mean and variance a then a simple calculation yields z z x z x dx z dz e by formulae and from gradshteyn and ryzhik we have z z dz z arctan x z x dx hence we obtain s p next by a change of variable we obtain z z u du p moreover formulae and from gradshteyn and ryzhik imply that z p p p uz u du p p since the distribution of the vector has the density x y we obtain finally we prove the latter statement define the functions f and g on by r z x y f r arctan x y dxdy g r x y then we have e f r and e g r since r as if and f r it suffices to prove g r because we have g r z z x rx r y rx r y x y dxdy x rx r y the dominated convergence theorem yields g r which completes the proof appendix proof of proposition before starting the proof we introduce some notation we set n i then we define the n n matrix un uij by uij cos cos which is often referred to as the discrete cosine transform 
dct of see sabel and and references therein note that un and un is real orthogonal it is known that un diagonalizes fn as follows where cos un fn un diag see lemma of kunitomo and sato or lemma of sabel and for the proof for each a we define the functions fa and ga on r by fa x a cos x n ga x sin x fa x x r we also set gn a na en vn fn from we have un gn a un a diag fa fa remark it turns out that the components of un tn un play a dominant role to calculate the limit of kf this is essentially different from the case of calculating the fisher information for the scale rameter estimation from observations of the form where the similarity transformations of toeplitz matrices by un are sufficiently approximated by diagonal matrices as manifested by lemma of sabel and for this reason we need rather specific calculations as seen in lemmas and for a square matrix a spr a denotes the spectral radius of a we will frequently use the identity kaksp spr a holding if a is a normal matrix now we start the main body of the proof we will frequently use the following inequality for the sine function x sin x lemma for any a we have sup ga x p na na vn sup x proof the claim immediately follows from the identity x a a cos x n fa x a x n fa x fa x lemma let r be continuous also let mn be a sequence of positive integers such that mn n and mn c as n then for any p we have lim z mn x p x dx fa p a provided that proof by the fundamental theorem of calculus we have p fa p z vn vn dx x hence the desired result follows from the standard riemann sum approximation lemma let mn be a sequence of positive integers such that mn n and mn c as n then mn x lim nnn f a a arctan a a tan if c b b tan mn arctan x q lim r ga gb a n arctan a tan b for any a b such that a b if if c if if proof first using the lower and upper darboux sums of the integral z m r fa x dx n x dx fa x fa fa fa we obtain z dx fa x now formula in gradshteyn and ryzhik yields z y r dx p arctan fa x a a tan a hence we obtain next a simple calculation yields n ga x gb x x x fa x fb x therefore if lemma implies that lim r n mn x ga gb z sin xdx sin on the other hand if for sufficiently large n we have cos x fa x hence x fa x nvn fa x fa x therefore we obtain x x fa x fb x b nvn fb x a nvn fa x hence the desired result follows from lemma let mn be a sequence of positive integers such that mn n and mn as n then lim x mn n i i proof since we have o l cos l sin n l cos l l if l is even l if l is odd therefore using the formula sin i sin i cos i we can decompose the target quantity as x m m mn n n x n i cot i n n tan i i i even i odd first we prove limn n using the monotonicity of the tangent function and assumption mn n we obtain n z x dx since formula in gradshteyn and ryzhik yields limn that limn n r x dx we conclude next we prove limn n our proof relies on the following inequality for the tangent function x x tan x x the lower estimate of is and the upper estimate is known as the inequality eq of becker and stark now using we obtain x x n therefore using formula we conclude that limn n lemma for any a b it holds that kgn a rn gn b ksp o nn and kgn a rn gn b o as n proof first by definition we have kgn a rn gn b ksp spr gn b gn a rn hence and theorem of horn and johnson yield kgn a rn gn b ksp n x n x p max fa fb uik fa fb therefore lemma implies that kgn a rn gn b ksp o nn moreover since it holds that kgn a rn gn b tr gn a rn gn b rn n x fa fb n x fa k n x fb lemma again yields the desired result lemma for any a we have kgn a sn gn a o nn kgn a sn gn a a as n 
where for any a we set if a a if a if proof since sn rn fn by lemma it suffices to prove in the case where sn is replaced by fn imply that min fa kgn a fn gn a ksp max and kgn a fn gn a n x fa the first equation in immediately follows from in order to prove the second equation in we will prove the right side of converges to a as n first if the desired result follows from lemma next if noting that fa vn we have a a fa vn n fa n fa hence by lemma we obtain the desired equation once we prove n x a lim p f a a the monotonicity of the cosine function yields z n x dx fa x fa fa z dx fa x since formula in gradshteyn and ryzhik implies that z p dx fa x we obtain the desired result finally if using the inequality fa vn we obtain n x n fa n n vn nvn hence we deduce the desired result lemma if a and b are positive numbers such that a b we have ab b a lim rn kgn a tn gn b kf b if if if proof since we have un tn un ij un rn un ij un tn rn un ij n x ui ui ukj using the trigonometric identities cos x cos y sin sin and sin x cos y sin x y sin x y we obtain n un tn un ij un rn un ij x sin sin sin then using summation formula of gradshteyn and ryzhik we have sin n n ij ij un tn un un rn un sin sin sin now since un tn un tn un and un rn un un rn un by and the unitary invariance of the frobenius norm we obtain kgn a tn gn b kgn a rn gn b x n sin n n n ga gb i j i j i n n n first we consider n using inequalities and x y x y r we have n n max ga max gb i j i x n max ga max gb and thus lemma yields n o nn log n o next we consider n first we prove n n o where x n n ga gb i j n i lemma and yield n n n n x ga i j b n b b n x i n x ga i ga n x i j j if by the property of ga we have n x ga max ga hence lemma yields pn lemma because ga x ga fa x ga x dx fa max ga log fa o log n this also holds true in the case that due to consequently n n o nn log n o now since i j for any c we have z x n n x x n n g g ga gb b a i i b n x therefore lemma implies that lim inf x ga gb lim inf n lim sup n lim sup n x ga gb then letting c by lemma we obtain n n pn ga gb by symmetry we have n n hence we complete the proof due to and lemma proof of proposition set un un un then we have ln un sn rn un ln un sn rn un ln ln un tn rn un ln and un tn r n un ln where ln and ln hence we obtain kgn tn rn gn kgn sn rn gn kgn sn rn gn therefore lemmas yield on the other hand since ksp kgn sn rn gn ksp sn rn gn ksp lemmas also yield ksp o nn hence the proof is completed once we prove ksp o nn note that vn is positive semidefinite and vn for any note also that both and are symmetric therefore if is an eigenvalue of by the monotonicity theorem for eigenvalues corollary of horn and johnson we have ksp for any since we can take this inequality implies that ksp ksp ksp which yields the desired result acknowledgements the author is grateful to two anonymous referees for their careful reading and insightful comments which have significantly improved a former version of this paper the author also thanks the participants at asymptotic statistics and computations statistics for stochastic processes and analysis of high frequency data statistique asymptotique des processus stochastiques x and statistics for stochastic processes and analysis of high frequency data iv for valuable comments this work was supported by crest jst references and jacod j financial econometrics princeton university press alsayed and mcgroarty algorithmic arbitrage across international index futures j forecast becker and stark on a hierarchy of quolynomial inequalities for tan univerzitet 
u Beogradu, Publikacije Fakulteta, Serija Matematika i Fizika.
Bibinger. Efficient covariance estimation for asynchronous noisy data. Scand. Stat.
Bibinger, Hautsch and Malec. Estimating the quadratic covariation matrix from noisy observations: local method of moments and efficiency. Ann. Statist.
Bibinger and Jirak. Volatility estimation under errors with applications to limit order books. Ann. Appl. Probab.
Bibinger. Spectral estimation of covolatility from noisy observations using local weights. Scand. Stat.
Bollen, O'Neill and Whaley. Tail wags dog: intraday price discovery in VIX markets. Journal of Futures Markets.
Cai, Munk and J. Sharp minimax estimation of the variance of Brownian motion corrupted with Gaussian noise. Statist. Sinica.
Chan. Imperfect information and among stock prices. Journal of Finance.
Chordia, Sarkar and Subrahmanyam, A. Liquidity dynamics and. Journal of Financial and Quantitative Analysis.
Dalalyan and Yoshida. Asymptotic expansion for a covariation estimator. Ann. Inst. Henri Poincaré Probab. Stat.
Davies, B. Asymptotic inference in stationary Gaussian. Adv. in Appl. Probab.
de Jong, Mahieu and Schotman. Price discovery in the foreign exchange market: an empirical analysis of the rate. Journal of International Money and Finance.
de Jong and Schotman. Price discovery in fragmented markets. Journal of Financial Econometrics.
Drost, van den Akker and Werker, J. The asymptotic structure of nearly unstable integer-valued AR models. Bernoulli.
Glosten. Components of the spread and the statistical properties of transaction prices. Journal of Finance.
Gloter and Jacod, J. Diffusions with measurement errors I: local asymptotic normality. ESAIM Probab. Stat.
Gloter and Jacod, J. Diffusions with measurement errors II: optimal estimators. ESAIM Probab. Stat.
Gradshteyn and Ryzhik, I. Table of Integrals, Series, and Products. Elsevier, seventh edn.
Hasbrouck, J. One security, many markets: determining the contributions to price discovery. Journal of Finance.
Hoffmann, Rosenbaum and Yoshida. Estimation of the parameter from data. Bernoulli.
Horn and Johnson. Matrix Analysis. Cambridge University Press.
Huth and Abergel. High frequency relationships: empirical facts. Journal of Empirical Finance.
Iacus, Porro, Salini and Siletti. Social networks, happiness and health: from sentiment analysis to a multidimensional indicator of subjective. Working paper, available at arXiv.
Ibragimov and Has'minskii. Statistical Estimation: Asymptotic Theory. Springer.
Kalnina and Linton, O. Estimating quadratic variation consistently in the presence of endogenous and diurnal measurement error. Econometrics.
Kutoyants, Y.A. Delay estimation for some stationary processes. Scand. Stat.
Kunitomo and Sato. Separating information maximum likelihood estimation of the integrated volatility and covariance with noise. The North American Journal of Economics and Finance.
Kutoyants, Y.A. Statistical Inference for Ergodic Diffusion Processes. Springer.
Le Cam. Asymptotic Methods in Statistical Decision Theory. Springer.
Li and Mykland, A. Rounding errors and volatility estimation. Journal of Financial Econometrics.
Li, Zhang and Li, Y. A unified approach to volatility estimation in the presence of both rounding and random market microstructure noise. Working paper, available at SSRN.
Mathai and Provost, B. Quadratic Forms in Random Variables: Theory and Applications. Marcel Dekker.
Ogihara. Parametric inference for nonsynchronously observed diffusion processes in the presence of market microstructure noise. Working paper, available at arXiv.
Pfanzagl, J. Parametric Statistical Theory. Walter de Gruyter.
Asymptotic equivalence for inference on the volatility from noisy observations. Ann. Statist.
Robert and Rosenbaum. On the limiting spectral distribution of the covariance matrices of processes. Multivariate Anal.
Rosenbaum. Integrated volatility and error. Bernoulli.
Rubin and Song. Exact computation of the asymptotic efficiency of maximum likelihood estimators of a discontinuous signal in a Gaussian white noise. Ann. Statist.
Sabel and J. Asymptotically efficient estimation of a scale parameter in Gaussian time series and expressions for the Fisher information. Bernoulli.
Sato and Kunitomo. A robust estimation of integrated volatility under errors, price adjustments and noises. CIRJE Discussion Papers, The University of Tokyo.
Strasser. Mathematical Theory of Statistics. Walter de Gruyter.
Tsybakov, A.B. Introduction to Nonparametric Estimation. Springer.
van der Vaart. Asymptotic Statistics. Cambridge University Press.
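To make the construction of the lag-parameter estimators concrete, the following short Python sketch shows the grid-based computation of the quasi-maximum-likelihood and quasi-Bayesian estimators built from a Gaussian quasi-log-likelihood of the form ln(c) = -(1/2)(Z' Vn(c)^{-1} Z + log det Vn(c)), as defined in the efficient-estimation section above. The covariance builder build_V, the parameter grid, and the uniform prior are illustrative assumptions of mine and are not the covariance structure Vn(c) analyzed in the paper; the sketch only demonstrates the mechanics of maximizing ln(c) over C and of forming the prior-weighted average that defines the quasi-Bayesian estimator.

```python
# A minimal sketch of the QMLE and QBE computed from a Gaussian quasi-log-likelihood.
# build_V is a hypothetical placeholder covariance, not the paper's V_n(c).
import numpy as np

def build_V(c, n, noise_var=0.01):
    """Hypothetical covariance: exponential-kernel signal with parameter c plus white noise."""
    idx = np.arange(n)
    signal = np.exp(-np.abs(idx[:, None] - idx[None, :]) / max(c, 1e-6))
    return signal + noise_var * np.eye(n)

def quasi_loglik(z, c):
    """ln(c) = -0.5 * (z' V(c)^{-1} z + log det V(c)), up to an additive constant."""
    V = build_V(c, z.size)
    _, logdet = np.linalg.slogdet(V)
    quad = z @ np.linalg.solve(V, z)
    return -0.5 * (quad + logdet)

def qmle_and_qbe(z, c_grid):
    """Grid-based QMLE (argmax of ln) and QBE (posterior mean under a uniform prior)."""
    ll = np.array([quasi_loglik(z, c) for c in c_grid])
    w = np.exp(ll - ll.max())            # stabilised quasi-likelihood weights
    qmle = c_grid[np.argmax(ll)]
    qbe = np.sum(c_grid * w) / np.sum(w)
    return qmle, qbe

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, c_true = 200, 2.0
    z = rng.multivariate_normal(np.zeros(n), build_V(c_true, n))
    c_grid = np.linspace(0.5, 5.0, 200)
    print(qmle_and_qbe(z, c_grid))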
robust estimation via robust gradient estimation feb adarsh arun sai sivaraman pradeep machine learning department of carnegie mellon university pittsburgh pa abstract we provide a new class of estimators for risk minimization we show that these estimators are robust for general statistical models in the classical huber model and in settings our workhorse is a novel robust variant of gradient descent and we provide conditions under which our gradient descent variant provides accurate estimators in a general convex risk minimization problem we provide specific consequences of our theory for linear regression logistic regression and for estimation of the canonical parameters in an exponential family these results provide some of the first computationally tractable and provably robust estimators for these canonical statistical models finally we study the empirical performance of our proposed methods on synthetic and real datasets and find that our methods convincingly outperform a variety of baselines introduction robust estimation has a rich history in statistics with seminal contributions due to box tukey huber hampel and several others in the classical analysis of statistical estimators statistical guarantees are derived under strong model assumptions and in most cases these guarantees hold only in the absence of arbitrary outliers and other deviations from the model assumptions strong model assumptions are rarely met in practice and this has led to the development of robust inferential procedures and various associated statistical concepts such as the influence function the breakdown point and the huber model to assess the robustness of estimators despite this progress however the statistical methods with the strongest robustness guarantees for instance those based on m tournaments and notions of depth are computationally intractable in this paper we present a class of estimators that are computationally tractable and have strong robustness guarantees the estimators we propose are obtained by robustifying classical algorithms for risk minimization and are applicable to a of parametric statistical models for which parameter estimation can be cast within this framework in contrast to classical work for instance on we do not attempt to replace the risk minimization objective with a robust counterpart but instead focus on making canonical optimization for the usual risk minimization objective robust we find that this shift in perspective enables a unified treatment of different statistical models leads to computationally tractable estimators and leads to estimators with strong robustness guarantees in the risk minimization framework the target parameter is defined as the solution to an optimization problem argmin r argmin l z where l is an appropriate r is the population risk and is the set of feasible parameters the goal of empirical risk minimization procedures is then to compute an approximate minimizer to the above program when given access to samples dn zn in this classical setting a standard assumption that is imposed on dn is that the data has no outliers and has no arbitrary deviations from model assumptions it is typically assumed that each of the zi s are independent and identically distributed according to the distribution p many analyses of risk minimization further assume that p follows a distribution or has otherwise tails in order to appropriately control the deviation between the population risk and its empirical counterpart while our general results can be specialized to 
obtain results for a variety of models and notions of robustness we focus on developing estimators which are robust to two canonical classes of deviations from the model assumptions robustness to arbitrary outliers in this setting we focus on huber s model where rather than observe samples directly from p in we instead observe samples drawn from which for an arbitrary distribution q is defined as p the distribution q allows for arbitrary outliers which may correspond to gross corruptions or more subtle deviations from the assumed model this model can be equivalently viewed as model in the total variation tv metric robustness to in this setting we are interested in developing estimators under weak moment assumptions we assume that the distribution p from which we obtain samples only has finite moments see section for a precise characterization such heavy tailed distributions arise frequently in the analysis of financial data and biological datasets see for instance examples in in contrast to classical analyses of empirical risk minimization in this setting the empirical risk is not uniformly close to the population risk and methods that directly minimize the empirical risk perform poorly see section the goal of our work is to develop estimators which are computationally tractable and robust in these models below we provide an outline of our results and contributions our first contribution is to introduce a new class of robust estimators for risk minimization these estimators are based on robustly estimating gradients of the population risk and are computationally tractable by design building on prior work for robust mean estimation in the huber model and in the model we design robust gradient estimators for the population risk in our main insight is that in this general risk minimization setting the gradient of the population risk is simply a multivariate mean vector and we can leverage prior work on mean estimation to design robust gradient estimators through this we are able to significantly generalize the applicability of mean estimation methods to general parametric models our estimators are practical and our second contribution is to conduct extensive numerical experiments on real and simulated data with our proposed estimators we provide guidelines for tuning parameter selection and we compare the proposed estimators with several competitive baselines across different settings and according to various metrics we find that our estimators consistently perform well finally we provide rigorous robustness guarantees for the estimators we propose for a variety of canonical statistical models including for linear regression for logistic regression and for estimation of the canonical parameters in an exponential family our contributions in this direction are building on prior work we provide a general result on the stability of gradient descent for risk minimization showing that in favorable cases gradient descent can be quite tolerant to inaccurate gradient estimates subsequently in concrete settings we provide a careful analysis of the quality of gradient estimation afforded by our proposed gradient estimators and combine these results to obtain guarantees on our final estimates broadly as we discuss in the sequel our work suggests that estimators which are based on robust gradient estimation offer a variety of practical conceptual statistical and computational advantages for robust estimation related work there is extensive work broadly in the area of robust statistics see for instance and 
references therein and we focus this section on some lines of work that are most related to this paper classical work has already developed several estimators which are known to be optimally robust for a variety of inferential tasks including hypothesis testing mean estimation general parametric estimation and estimation however a major drawback of this classical line of work has been that most of the estimators with strong robustness guarantees are computationally intractable while the remaining ones are heuristics which are not optimal recently there has been a flurry of research in theoretical computer science designing provably robust estimators which are computationally tractable while achieving contamination dependence for special classes of problems some of the proposed algorithms are not practical as they rely on the ellipsoid algorithm or require solving semidefinite programs which can be slow for modern problem sizes we build on the work of lai et al who study practical robust mean and covariance estimators for distributions with appropriately controlled moments a complementary line of recent research has focused on providing minimax upper and lower bounds on the performance of estimators under model without the constraint of computational tractability in the model q can be arbitrary but there has been a lot of work in settings where the contamination distribution is restricted in various ways for example recent work in statistics for instance have studied problems like principal component analysis and linear regression under the assumption that the corruptions are evenly spread throughout the dataset another line of research has focused on designing robust estimators under the heavy tailed distribution setting these approaches relax the or distributional assumptions that are typically imposed on the target distribution p and allow it to be a heavy tailed distribution most of the approaches in this category use robust mean estimators that exhibit type concentration around the true mean for distributions satisfying mild moment assumptions the estimator and catoni s mean estimator are two popular examples of such robust mean estimators hsu and sabato use the estimator to develop an alternative to erm under heavy tails although this estimator has strong theoretical guarantees and is computationally tractable as noted by the authors in it performs poorly in practice in recent work brownlees et al replace empirical mean in the empirical risk minimization framework erm with catoni s mean estimator and perform risk minimization the authors provide risk bounds similar to the bounds one can achieve under distributional assumptions however their estimator is not easily computable and the authors do not provide a practical algorithm to compute the estimator other recent works by lerasle and oliveira lugosi and mendelson use similar ideas to derive estimators that perform well theoretically in situations however these approaches involve optimization of complex objectives for which no computationally tractable algorithms exist we emphasize that in contrast to our work these works focus on robustly estimating the population risk which does not directly lead to a computable estimator we instead consider robustly estimating the gradient of the population risk when complemented with the gradient descent algorithm this leads naturally to a computable estimator outline we conclude this section with a brief outline of the remainder of the paper in section we provide some background on risk 
minimization and the huber and noise models in section we introduce our class of estimators and provide concrete algorithms for the and setting in section we study the empirical performance of our estimator on a variety of tasks and datasets we complement our empirical results with theoretical guarantees in sections and we defer technical details to the appendix finally we conclude in section with a discussion of some open problems background and problem setup in this section we provide the necessary background on risk minimization gradient descent and introduce two notions of robustness that we consider in this work risk minimization and parametric estimation in the setting of risk minimization we assume that we have access to a differentiable loss function l z r where is a convex subset of rp let r l z be the population loss risk and let be the minimizer of the population risk r over the set argmin r the goal of risk minimization is to minimize the population risk r given only n samples dn zi whereas in parameter estimation we are interested in estimating the unknown parameter from samples dn in this work we assume that the population risk is convex to ensure tractable minimization moreover in order to ensure identifiability of the parameter we impose two standard regularity conditions on the population risk these properties are defined in terms of the error of the taylor approximation of the population risk defining r r i we assume that where the parameters denote the and smoothness parameters respectively gradient descent and empirical risk minimization a starting point for the techniques we develop in this paper is the classical projected gradient descent method for empirical risk minimization given data dn zi empirical risk minimization erm estimates the unknown parameter as the minimizer of the empirical risk n l zi argmin rn n a popular method for solving this optimization problem is projected gradient descent projected gradient descent generates a sequence of iterates t by refining an initial parameter via the update t t where is the step size and is the projection operator onto despite its simplicity the gradient descent method is not robust for general convex losses furthermore the empirical risk minimizer is a poor estimator of in the presence of outliers in the data since erm depends on the sample mean outliers in the data can effect the sample mean and lead erm to estimates this observation has led to a large body of research that focuses on developing robust which have favorable statistical properties but are often computationally intractable in this work we take a different approach relies on an important observation that the gradient of the population risk z is simply a mean vector one that can be estimated robustly by leveraging recent advances in robust mean estimation this leads to a general method for risk minimization based on robust gradient estimation see algorithm robust estimation one of the goals of this work is to develop general statistical estimation methods that are robust in one of the following two models huber s model or the model we now briefly review these two notions of robustness huber s model huber proposed the model where we observe samples that are obtained from a mixture of the form p where p is the true distribution is the expected fraction of outliers and q is an arbitrary outlier distribution given observations drawn from our is to estimate the minimizer of the population risk r l z robust to the contamination from q model in the model it is 
assumed that the data follows a distribution p is while distributions have various possible characterizations in this paper we consider a characterization via gradients for a fixed we let denote the multivariate distribution of the gradient of population loss z we refer to a distribution as one for which has finite second moments for any as we illustrate in section in various concrete examples this translates to relatively weak moment assumptions on the data distribution p given n observations from p our objective is to estimate the minimizer of the population risk from a conceptual standpoint the classical analysis of which relies on uniform concentration of the empirical risk around the true risk fails in the setting necessitating new estimators and analyses gradient estimation gradient descent and its variants are at the heart of modern optimization and are in the literature suppose we have access to the true distribution then to minimize the population risk r we can use projected gradient descent where starting at some initial and for an appropriately chosen we update our estimate according to t t however we only have access to n samples dn zi the key technical challenges are then to estimate the gradient of r from samples dn and to ensure that an appropriate modification of gradient descent is stable to the resulting estimation error to address the first challenge we observe that the gradient of population risk at any point is the mean of a multivariate distribution z accordingly the problem of gradient estimation can be reduced to a multivariate mean estimation problem where our goal is to robustly estimate the true mean from n samples zi for a given n and confidence parameter we define a gradient estimator definition a function g dn is a gradient estimator if for functions and with probability at least at any fixed the estimator satisfies the following inequality kg dn n n in subsequent sections we will develop conditions under which we can obtain gradient estimators with strong control on the functions n and n in the huber and models furthermore by investigating the stability of gradient descent we will develop sufficient conditions on these functions such that gradient descent with an inaccurate gradient estimator still returns an accurate estimate to minimize r we replace in equation with the gradient estimator g dn and perform projected gradient descent in order to avoid complex statistical dependency issues that can arise in the analysis of gradient descent for our theoretical results we consider a variant of the algorithm where each iteration is performed on a fresh batch of samples see algorithm we further assume that the number of gradient iterations t is specified and accordingly we define jnk and n t t we discuss methods for selecting t and the impact of in later sections as confirmed in our experiments see section should be viewed as a device introduced for theoretical convenience which can likely be eliminated via more complex uniform arguments see for instance the work algorithm projected gradient descent function pgd zn step size number of iterations t split samples into t subsets zt of size n for t to t do e t t zt end for end function next we consider the two notions of robustness described in section and derive specific gradient estimators for each of the models using the framework described above although the major focus of this work is on huber contamination and models our class of estimators are more general and are not restricted to these two notions of 
robustness gradient estimation in huber s model there has been a flurry of recent interest in designing mean estimators which under the huber contamination model can robustly estimate the mean of a random vector while some of these results are focused on the case where the uncorrupted distribution is gaussian or isotropic we are more interested in robust mean oracles for more general distributions lai et al proposed a robust mean estimator for general distributions satisfying weak moment assumptions and we leverage the existence of such an estimator to design a huber gradient estimator g dn which works in the huber contamination model see algorithm now we briefly describe the main idea behind algorithm and the mean estimator of lai et al the algorithm builds upon the fact that in it is relatively easy to estimate the gradient robustly in the crucial insight of lai et al is that the effect of the contamination q on the mean of uncontaminated distribution p is effectively provided we can accurately estimate the direction along which the mean is shifted in our context if we can compute the gradient shift direction the direction of the difference between the sample corrupted mean gradient and the true population gradient then the true gradient can be estimated by using a robust algorithm along the direction and a in the orthogonal direction since the contamination has no effect on the gradient in this orthogonal direction in order to identify the gradient shift direction we use a recursive singular value decomposition svd based algorithm in each stage of the recursion we first remove via a truncation algorithm described in more detail in the appendix and subsequently identify two subspaces using an svd a clean subspace where the contamination has a small effect on the mean and another subspace where the contamination has a potentially larger effect we use a simple estimator in the clean subspace and recurse our computation on the other subspace building on the work of lai et al in lemma and appendix j we provide a careful analysis of this gradient estimator algorithm huber gradient estimator function hubergradientestimator sample gradients s zi corruption level dimension p se huberoutliergradienttruncation s p if then e return mean s else e let be the covariance matrix of let v be the span of the top principal components of and w be its complement e where pv is the projection operation on to v set pv s let bv hubergradientestimator e let bw mean pw s p let b r be such that pv b bv and pw b bw return end if end function gradient estimation in the model to design gradient estimators for the model we leverage recent work on designing robust mean estimators in this setting these robust mean estimators build on the classical work of alon et al nemirovski and yudin and jerrum et al on the estimator for the problem of mean estimation catoni lerasle and oliveira propose robust mean estimators that achieve exponential concentration around the true mean for any distribution with bounded second moment in this work we require mean estimators for multivariate distributions several recent works extend the estimator of to general metric spaces in this paper we use the geometric estimator gmom which was originally proposed and analyzed by minsker to design the gradient estimator g dn the basic idea behind the gmom estimator is to first split the samples into subsamples and estimate the sample mean of each of the subsamples then the gmom estimator is given by the of the subsamples formally let xi xn r be n 
random variables sampled from a distribution p then the gmom estimator for estimating the mean of p can be described as follows partition the n samples into b blocks p bb each of size let b bb be the sample means in each block where bi xj xj then the gmom estimator is given by median b bb in high dimensions where different notions of the median have been considered minsker uses geometric median b argmin b x bi algorithm presents the gradient estimator g dn obtained using gmom as the mean estimator algorithm heavy tailed gradient estimator function heavytailedgradientestimator sample gradients s zi define number of buckets b log partition s into b blocks bb each of size for i nx do bi end for let b argmin return end function b x bi experiments in this section we demonstrate our proposed methods for huber contamination and heavytailed models on a variety of simulated and real data examples huber contamination we first consider the huber contamination model and demonstrate the practical utility of based robust estimator described in algorithms and synthetic experiments linear regression in linear regression we observe paired samples xn yn where each xi yi rp we assume that the x y pairs sampled from the true distribution p are linked via a linear model y hx i w where w is drawn from a normal distribution with variance w n we use the squared loss as our loss function l x y y hx note that the true parameter is the minimizer of the resulting population risk r we now describe the experiment setup the data model and present the results setup we fix the contamination level and next we generate n clean covariates from x n ip the corresponding clean responses using y hx i w where t and w n we simulate an outlier distribution by drawing the covariates from n ip and setting the responses to the total number of samples are set to be the sample size increases with the dimension this scaling is used to ensure that the statistical minimax error in the absence of any contamination is roughly an optimally robust method should have error close to roughly equal to corruption level which ours does see figure ols robustgd torrent huber plugin ransac parameter error p a parameter error vs p for b parameter error vs log parameter error iterations c log vs t for different figure robust linear regression metric we measure the parameter error in we also study the convergence properties of our proposed method for different contamination levels we use code provided by lai et al to implement our gradient estimator baselines we use ols torrent ransac and plugin estimator as our baselines torrent is an iterative based alternating minimization algorithm where in one step it calculates an active set of examples by keeping only n samples which have the smallest absolute values of residual r y x t and in the other step it updates the current estimates by solving ols on the active set bhatia et al had shown the superiority of torrent over other based outlier techniques hence we do not compare against those the plugin estimator is implemented using p algorithm to estimate both the mean vector yi xi and the covariance matrix xi xti results we summarize our main findings here all estimators except our proposed algorithm perform poorly figure a note that the torrent algorithm has strong guarantees when only the response y is corrupted but performs poorly in the huber contamination model where both x and y may be contaminated the error for the robust plugin estimator increases with dimension we investigate this theoretically in 
section where we find that the error of the plugin estimator grows with the norm of in our experiment we choose p and thus figure a corroborates corollary in section in figure b we find that the parameter error increases linearly with the contamination rate and we study this further in section finally figure c shows that the convergence rate decreases with increasing contamination and after is high enough the algorithm remains stuck at corroborating lemma in the appendix next we study the performance of our proposed method in the context of classification synthetic experiments logistic regression in logistic regression we observe paired samples xn yn where each xi yi rp we assume that the x y pairs sampled from the true distribution p are linked via a linear model with probability i otherwise in this case we use the negative conditional as our loss function l x y log exp hx setup we simulate a linearly separable classification problem where the clean covariates are sampled from n ip the corresponding clean responses are computed as y sign hx i where p p t we simulate the outlier distribution by adding asymmetric noise we flip the labels of one class and increase the variance of the corresponding covariates by multiplying them by the total number of samples are set to be metric we measure the classification error on a separate clean test set we study how the error changes with p and and the convergence properties parameter error of our proposed method for different contamination levels baselines we use the logistic regression mle and the linear support vector machine svm as our baselines results figures b and c show qualitatively similar results to the linear regression setting that the error of our proposed estimator degrades gracefully and grows linearly with the contamination level and that the gradient descent iterates converge linearly in figure a we observe that both the svm and logistic regression mle perform poorly the logistic regression mle completely flips the labels and has a error close to whereas the linear svm outputs a random hyperplane classifier that flips the label for roughly half of the dataset robustgd logisticregression svm error error robustgd p epsilon a error vs p at b error vs log parameter error iterations c log vs t for different figure robust logistic regression robust face reconstruction setup in this experiment we show the efficacy of our algorithm by attempting to reconstruct face images that have been corrupted with heavy occlusion where the occluding pixels play the role the outliers we use the data from the cropped yale dataset the dataset contains subjects and each image has pixels following the methodology of wang et al we choose face images per subject taken under mild illumination conditions and computed an eigenface set with eigenfaces then given a new corrupted face image of a subject the goal is to get the best of the true face to remove the scaling effects we normalized all images to range one image per person was used to test reconstruction occlusions were simulated by randomly placing blocks of size we repeated this times for each test image note that in this example we use a linear regression model as the uncontaminated statistical model which is almost certainly not an table fitting to original image error mean rmse best possible proposed torrent ols scrrr exact match for the unknown ground truth distribution despite this model misspecification as our results show that robust mean based gradient algorithms do well metric we use root mean 
square error rmse between the original and reconstructed image to evaluate the performance of the algorithms we also compute the best possible reconstruction of the original face image by using the eigenfaces methods we use torrent ols as baselines wang et al implemented popular robust estimators such as ransac huber loss etc and showed their poor performance wang et al then proposed an alternate robust regression algorithm called self scaled regularized robust regression scrrr and showed its equivalence to and method we also compare against the best possible rmse obtained by reconstructing the image using the eigenfaces results table shows that the mean rmse is best for our proposed gradient descent based method and that the recovered images are in most cases closer to the original image figure figure c shows a case when none of the methods succeed in reconstruction a successful reconstruction b successful reconstruction c failed reconstruction figure robust face recovery results top in order from l to r original image occluded image best possible recovery with given basis bottom in order from l to r reconstructions using our proposed algorithm torrent and ordinary least squares ols estimation we now consider the model and present experimental results on synthetic and real world datasets comparing the gradient descent based robust estimator described in rithms and which we call gmom with erm and several other recent proposals in these experiments we focus on the problem of linear regression which is described in section and work with noise distributions synthetic experiments simple linear regression setup the covariate x rp is sampled from a isotropic gaussian distribution we set each entry of to the noise w is sampled from a pareto distribution with mean zero variance and tail parameter the tail parameter determines the moments of the pareto random variable more specifically the moment of order k exists only if k hence smaller the the more the distribution in this setup we keep the dimension p fixed to and vary n and we always maintain the n to be at least methods we use erm as our baseline and compare it with gmom since we are always in the n p setting the solution to erm has a closed form expression and is simply the ols solution we also study which performs a gradient descent on erm and is equivalent to using empirical mean as the gradient oracle in our framework we also compare against the robust estimation techniques of hsu and sabato and duchi and namkoong in our experiments all the iterative techniques are run until convergence hyper parameter selection the gmom estimator depends on the confidence parameter which needs to be tuned in our experiments we noticed that the performance of gmom varies very little when is selected from a reasonable range that is not too close to see figure and the discussion below so we set in all our simulations metrics in our experiments we vary and the parameters of pareto distribution which can change the minimal risk r so to compare various approaches across parameter values where b b is we use a scaled version of the excess risk which we define as r the estimator to compare the performance of two estimators we define the notion of relative efficiency r r r r releff r r roughly this corresponds to the percentage improvement in the excess risk obtained using over whenever releff has a lower risk and higher the value the more the fractional improvement results to reduce the variance in the plots presented here and in the next section we averaged 
results over repetitions figure shows the benefits of using gmom over erm in figure a we plot the excess risk of erm and gmom against the number of iterations we see that upon convergence gmom has a much lower population risk than erm as expected converges to erm however the population risk of in the first few iterations is much lower than the risk of erm suggesting early stopping next in figure b we plot the scaled excess risk for erm and gmom as increases we see that gmom is always better than erm even when the number of samples is times the dimension in figure c we plot the relative efficiency of gmom and erm against this shows that the percentage improvement in the excess risk by gmom decreases as the noise level decreases this behavior is expected because in the noiseless setting both methods would have a similar behavior we do a similar study to see the relative efficiency against the of the noise distribution as noted before as is increased more moments exist for the underlying distribution figure d shows that as the noise distribution becomes more there is more benefit in using gmom over erm erm erm gd gmom gd excess risk true risk population risk erm gmom gd a population risk vs iterations b vs releff gmom erm iterations releff gmom erm relative efficiency relative efficiency c relative efficiency vs d relative efficiency vs figure linear regression performance comparison of gmom and erm dependence on confidence level figure a shows the performance of gmom estimator for various values of it can be seen that the choice of have very little effect on the performance of the estimator however we notice that for small values of the performance of the gmom degrades in practice one can use either cross validation or a validation set for choosing theoretical preliminaries in this section we develop some theoretical preliminaries we begin with a description of some canonical examples of risk minimization in section next we develop a general erm population risk iterations a population risk vs figure linear regression dependence on confidence level theory on convergence of projected gradient descent in section we analyze the gradient estimators defined in algorithms and in sections and respectively finally in sections we present consequences of our general theory for the canonical examples under huber contamination and models for some of our examples we will assume certain mild moment conditions concretely for a random vector x rp let e x and be the covariance matrix then x has bounded moments if there exists a constant such that for every unit vector v we have that i h e hx e hx illustrative examples the framework of risk minimization is a central paradigm of statistical estimation and is widely applicable in this section we provide illustrative examples that fall under this framework linear regression here we observe paired samples xn yn where each xi yi rp we assume that the x y pairs sampled from the true distribution p are linked via a linear model y hx i w where w is drawn from a distribution such as normal distribution with variance n or a more distribution such as or pareto distribution we suppose that under p the covariates x rp have mean and covariance for this setting we use the squared loss as our loss function which induces the following population risk y hx and r t note that the true parameter is the minimizer of the population risk r the strongconvexity and smoothness assumptions from in this setting require that l x y generalized linear models here we observe paired samples xn 
yn where each xi yi rp y we suppose that the x y pairs sampled from the true distribution p are linked via a linear model such that when conditioned on the covariates x the response variable has the distribution yhx i hx i p exp c here c is a fixed and known scale parameter and r r is the link function we focus on the random design setting where the covariates x rp have mean and covariance we use the negative conditional as our loss function l x y hx once again the true parameter is the minimizer of the resulting population risk r it is easy to see that linear regression with gaussian noise lies in the family of generalized linear models we now instantiate glms for logistic regression logistic regression in this case the x y pairs are linked as with probability i otherwise this corresponds to setting t log exp t and c in the hessian of the population risk is given by exp hx t xx r e exp hx note that as diverges the minimum eigenvalue of the hessian approaches and the loss is no longer strongly convex to prevent this in this case we take the parameter space to be bounded exponential families and canonical parameters finally we consider the case where the true distribution p is in exponential family with canonical parameters rp and a vector of sufficient statistics obtained from the map z rp note that while the linear and logistic regression models are indeed in an exponential family our interest in those cases was not in the canonical parameters in more details we can write the true distribution p in this case as p z h z exp z i a where h z is an arbitrary nuisance function the negative gives us the following loss function l z z a the and smoothness assumptions require that there are constants such that a for stability of gradient descent in this section we develop a general theory for the convergence of the projected gradient descent described in algorithm note that our gradient estimators could be biased and are not guaranteed to be consistent estimators of the true gradient this is especially true in the huber contamination model where it is impossible to obtain consistent estimators of the gradient of the risk because of the bias caused by the contaminated samples hence we turn our attention to understanding the behavior of projected gradient descent with a biased inexact gradient estimator of the form in before we present our main result we define the notion of stability of a gradient estimator which plays a key role in the convergence of gradient descent definition stability a gradient estimator is stable for a given risk function r r if for some e n we denote by the following contraction parameter r e e n and note that with these definitions in place we state our main result on the stability of gradient descent theorem suppose that the gradient estimator satisfies the condition in and is stable for the risk function r then algorithm initialized at with returns iterates such that with probability at least for the contraction parameter above we have that e e n we defer a proof of this result to the appendix theorem provides a general result for risk minimization and parameter estimation in any concrete instantiation for a given gradient estimator risk pair we first study the distribution of the gradient of the risk to estimate the e e e and then apply theorem error suffered by the gradient estimator e n n for the bound the first term is decreasing in t while the second term is increasing in t this suggests that for a given n and we need to run just enough iterations for the first term 
to be bounded by the second hence we can fix the number of iterations t as the smallest positive integer such that t e e n since we obtain linear convergence typically a logarithmic number of iterations suffice to obtain an accurate estimate general analysis of algorithm we now analyze the gradient estimator described in algorithm for huber contamination model and study the error suffered by it as stated before algorithm uses the robust mean estimator of lai et al hence while our proof strategy mimics that of lai et al we present a different result which is obtained by a more careful analysis of the algorithm we define p log p log log p log n p n n p log p and with this definition in place we have the following result lemma let p be the true probability distribution of z and let be the true distribution of the gradients z on rp with mean covariance and bounded fourth moments there exists a positive constant such that given n samples from the distribution in the huber gradient estimator described in algorithm when instantiated with the contamination level with probability at least returns an estimate b of such that kb n p log we note in particular if n with other parameters held fixed then n p and the error of our gradient estimator satisfies kb c p log p and has only a weak dependence on the dimension general analysis of algorithm in this section we analyze the gradient estimator for setting described in algorithm the following result shows that the gradient estimate has exponential concentration around the true gradient under the mild assumption that the gradient distribution has bounded second moment its proof follows from the analysis of geometric estimator of minsker we use tr to denote the trace of the matrix lemma let p be the probability distribution of z and be the distribution of the gradients z on rp with mean covariance then the heavy tailed gradient estib that satisfies the following exponential mator described in algorithm returns an estimate concentration inequality with probability at least r kb tr log n consequences for estimation under model we now turn our attention to the examples introduced earlier and present specific applications of theorem for parametric estimation under huber contamination model as shown in lemma we need the added assumption that the true gradient distribution has bounded fourth moments which suggests the need for additional assumptions we make our assumptions explicit and defer the technical details to the appendix linear regression we assume that the covariates x rp have bounded and the noise w has bounded moments theorem robust linear regression consider the statistical model in equation and e suppose that the number of samples n is large enough such that e n p and the log p e contamination level is such that e n p for some constants and p then there are universal constants such that if algorithm is initialized at with stepsize and algorithm as gradient estimator then it returns iterates such that with probability at least p c log p t t e n p e for some contraction parameter in the asymptotic setting when the number of samples n and other parameters are held fixed we see that for the huber gradient estimator the corresponding maximum allowed c contamination level is p this says that the more the covariance u matrix the higher the contamination level we can tolerate plugin estimation for linear regression the true parameter can be written in closed form as e xxt e xy a way to estimate is to separately estimate e xxt and e xy using robust 
covariance and mean oracles respectively under the assumption that x n ip one can reduce the problem to robustly estimating e xy under this setting we now present a result using lai et al as the mean estimator for estimation of e xy recall the definition of in we have the following result corollary consider the model in equation with the covariates drawn from n ip and w n then there are universal constants such that if then returns an estimate of e xy such that with probability at least q log p n p comparing bounds and we see that the error of the plugin estimator depends on which would make the estimator vacuous if scales with the dimension on the other hand the asymptotic rate of our robust gradient estimator is independent of the this disadvantage of plugin estimation is inescapable due to known minimax results for robust mean estimation that show that the dependence on is unavoidable for any oracle which estimates the mean of xy in the setting next we apply our estimator to generalized linear models generalized linear models here we assume that the covariates have bounded moments additionally we assume smoothness of around to be more precise we assume that there exist universal constants such that h i ex hx hx i for k k we also assume that ex t hx i t k where t is the tth of theorem robust generalized linear models consider the statistical model in equation and suppose that the number of samples n is large enough such that e e n p log and the contamination level is such that c e n p log for some constants and then there are universal constants such that if algorithm is initialized at with stepsize and algorithm as gradient estimator then it returns iterates such that with probability at least c c log e e n p for some contraction parameter note that for the case of linear regression with gaussian noise it is relatively straightforward to see that t k t k n and t k t k n under the assumption of bounded moments of the covariates which essentially leads to an equivalence between theorem and theorem for this setting in the following section we instantiate the above theorem for logistic regression and compare and contrast our results to other existing methods logistic regression by observing that t is bounded for logistic regression for all t we can see that and that there exists a universal constant c such that c and t k c t k n corollary robust logistic regression consider the model in equation then there are universal constants such that if then algorithm initialized at with stepsize and algorithm as gradient estimator returns iterates such that with probability at least p log p t t e b n p e for some contraction parameter under the restrictive assumption that x n ip du et al exploited stein s trick to derive a plugin estimator for logistic regression however similar to the linear regression the error of the plugin estimator scales with which is avoided in our robust gradient descent algorithm we also note that our algorithm extends to general covariate distributions exponential family here we assume that the random vector z z p has bounded moments theorem robust exponential family consider the model in equation then there are universal constants such that if then algorithm initialized at with stepsize and algorithm as gradient oracle returns iterates such that with probability at least log p t t e b e n p for some contraction parameter plugin estimation since the true parameter is the minimizer of the negative loglikelihood we know that e which implies that z this shows that the true 
parameter can be obtained by inverting the operator whenever possible in the robust estimation framework we can use a robust mean of the sufficient statistics to estimate z we instantiate this estimator using the mean estimator of to estimate z corollary consider the model in equation then there are universal constants b of e z such that with probability at such that if then returns an estimate least log p n p b where ky is the projection operator onto the feasible set discussion and limitations in the asymptotic setting of n algorithm with algorithm as gradient estimator converges to a point such that o log p hence our error scales only logarithmically with the dimension this dependency on the dimension p is a facet of using the estimator from lai et al for gradient estimation using better oracles will only improve our performance next we would like to point to the difference in the maximum allowed contamination between the three models for logistic regression and exponential c family while for linear regression p these differences are in large part u due to differing variances of the gradients which naturally depend on the underlying risk function this scaling of the variance of gradients for linear regression also provides insights into the limitations of algorithm for gradient estimators in the appendix we provide an upper bound for the contamination level based on the initialization point above which algorithm would not work for any gradient estimator consequences for estimation in this section we present specific applications of theorem for parametric estimation under heavy tailed setting the proofs of the results can be found in the appendix linear regression we first consider the linear regression model described in equation we assume that the covariates x rp have bounded and the noise w has bounded moments this assumption is needed to bound the error in the gradient estimator see lemma theorem heavy tailed linear regression consider the statistical model in equation there are universal constants such that if n e e p log and if algorithm is initialized at with stepsize and algorithm as gradient estimator then it returns iterates such that with probability at least p e p log n e for some contraction parameter generalized linear models in this section we consider generalized linear models described in equation where the covariate x is allowed to have a heavy tailed distribution here we assume that the covariates have bounded moment additionally we assume smoothness of around specifically we assume that there exist universal constants such that h i ex hx hx i for k k we also assume that ex t hx i t k for t where t is the tth derivative of theorem heavy tailed generalized linear models consider the statistical model in equation there are universal constants such that if p e n e p log and if algorithm is initialized at with stepsize and algorithm as gradient estimator it returns iterates such that with probability at least c c e p log n e for some contraction parameter we now instantiate the above theorem for logistic regression model corollary heavy tailed logistic regression consider the model in equation there are universal constants such that if n e e p log and if algorithm initialized at with stepsize and algorithm as gradient estimator it returns iterates such that with probability at least p e c p log n e for some contraction parameter exponential family we now instantiate theorem for parameter estimation in exponential family distributions here we assume that the random vector z 
z p has bounded moments and we obtain the following result theorem heavy tailed exponential family consider the model in equation if algorithm is initialized at with stepsize and algorithm as gradient estimator it returns iterates such that with probability at least c s a p log n e for some contraction parameter and universal constant discussion in this paper we introduced a broad class of estimators and showed that these estimators can have strong robustness guarantees in huber s model and for distributions these estimators leverage the robustness of gradient descent together with the observation that for risk minimization in most statistical models the gradient of the risk takes the form of a simple multivariate mean which can be robustly estimated by using recent work of robust mean estimation these estimators based on robust gradient descent work well in practice and in many cases outperform other robust and estimators there are several avenues for future work including developing a better understanding of robust mean estimation any improvement for robust mean estimation would immediately translate to improved guarantees for the estimators we propose for general parametric models finally it would also be of interest to understand the extent to which we could replace gradient descent with other optimization methods such as accelerated gradient descent or newton s method we note however that although these methods may have faster rates of convergence in the classical risk minimization setting in our setup their stability to using inexact gradients is far more crucial and warrants further investigation acknowledgements the research of sb was supported in part by the grant we thank larry wasserman for helpful comments on the paper references noga alon yossi matias and mario szegedy the space complexity of approximating the frequency moments in proceedings of the annual acm symposium on theory of computing stoc pages new york ny usa acm sivaraman balakrishnan martin j wainwright and bin yu statistical guarantees for the em algorithm from population to analysis the annals of statistics kush bhatia prateek jain and purushottam kar robust regression via hard thresholding in advances in neural information processing systems pages box and tests on variances biometrika christian brownlees emilien joly and lugosi empirical risk minimization for heavytailed losses the annals of statistics bubeck convex optimization algorithms and complexity foundations and trends r in machine learning emmanuel j xiaodong li yi ma and john wright robust principal component analysis journal of the acm jacm olivier catoni challenging the empirical mean and empirical variance a deviation study in annales de l institut henri et statistiques volume pages institut henri moses charikar jacob steinhardt and gregory valiant learning from untrusted data in stoc mengjie chen chao gao and zhao ren robust covariance matrix estimation via matrix depth arxiv preprint mengjie chen chao gao zhao ren et al a general decision theory for huber s epsiloncontamination model electronic journal of statistics yudong chen constantine caramanis and shie mannor robust sparse regression under adversarial corruption in proceedings of the international conference on machine learning icml atlanta ga usa june pages devroye and nonparametric density estimation the view wiley series in probability and mathematical statistics wiley ilias diakonikolas gautam kamath daniel m kane jerry li ankur moitra and alistair stewart robust estimators in high 
dimensions without the computational intractability in foundations of computer science focs ieee annual symposium on pages ieee david l donoho and richard c liu the automatic robustness of minimum distance functionals the annals of statistics pages simon s du sivaraman balakrishnan and aarti singh computationally efficient robust estimation of sparse functionals conference on learning theory john duchi and hongseok namkoong regularization with convex objectives arxiv preprint jianqing fan weichen wang and ziwei zhu a shrinkage principle for data highdimensional robust matrix recovery martin fischler and robert bolles random sample consensus a paradigm for model fitting with applications to image analysis and automated cartography commun acm chao gao robust regression via mutivariate regression depth frank r hampel elvezio m ronchetti peter j rousseeuw and werner a stahel robust statistics the approach based on influence functions volume john wiley sons cecil hastings jr frederick mosteller john w tukey and charles p winsor low moments for small samples a comparative study of order statistics the annals of mathematical statistics pages daniel hsu and sivan sabato loss minimization and parameter estimation with heavy tails journal of machine learning research huber robust statistics john wiley sons peter j huber robust estimation of a location parameter the annals of mathematical statistics peter j huber a robust version of the probability ratio test the annals of mathematical statistics mark jerrum leslie valiant and vijay vazirani random generation of combinatorial structures from a uniform distribution theoretical computer science s kakade shai and ambuj tewari applications of strong smoothness duality to learning with matrices corr kevin a lai anup b rao and santosh vempala agnostic estimation of mean and covariance in foundations of computer science focs ieee annual symposium on pages ieee lee jeffrey ho and david j kriegman acquiring linear subspaces for face recognition under variable lighting ieee transactions on pattern analysis and machine intelligence matthieu lerasle and roberto i oliveira robust empirical mean estimators arxiv preprint jerry li robust sparse estimation tasks in high dimensions conference on learning theory loh statistical consistency and asymptotic normality for robust mestimators ann loh and martin j wainwright regression with noisy and missing data provable guarantees with in advances in neural information processing systems pages gabor lugosi and shahar mendelson risk minimization by tournaments arxiv preprint lugosi and shahar mendelson estimators of the mean of a random vector the annals of statistics stanislav minsker geometric median and robust estimation in banach spaces bernoulli ivan mizera on depth and deep points a calculus annals of statistics pages nemirovski and yudin problem complexity and method efficiency in optimization a publication wiley yurii nesterov introductory lectures on convex optimization a basic course volume springer science business media joel a tropp tail bounds for sums of random matrices foundations of computational mathematics john w tukey mathematics and the picturing of data in proceedings of the international congress of mathematicians volume pages van de geer empirical processes in cambridge university press yin wang caglayan dicle mario sznaier and octavia camps self scaled regularized robust regression in proceedings of the ieee conference on computer vision and pattern recognition pages yannis yatracos rates of 
convergence of minimum distance estimators and kolmogorov s entropy ann xinyang yi dohyung park yudong chen and constantine caramanis fast algorithms for robust pca via gradient descent in advances in neural information processing systems annual conference on neural information processing systems december barcelona spain pages zhou koushiki bose jianqing fan and han liu a new perspective on robust mestimation finite sample theory and applications to multiple testing a proof of theorem in this section we present the proof of our main result on projected gradient descent with an inexact gradient estimator to ease the notation we will often omit dn from g dn proof at any iteration step t t by assumption we have that with probability at least kg t dn t taking union bound holds over all iteration steps t t with probability at least for the remainder of the analysis we assume this event to be true notation let g k k ek be the noisy gradient let and for brevity we have the following lemma from bubeck lemma lemma let f be m and convex then for all x y rp we have x y x yi mm kx y x by assumptions we have that k g k kek k our update rule is k k then we have that k k k k k k k k k k kek k k where equation follows from contraction property of projections now we can write k k as k k k k where the second step follows from lemma and the last step follows from the step size now combining equations and and using our assumption that kek k we get p k p hp i k by assumption we choose such that p p p p since we get that we have p q let therefore we have that k for some solving the induction we get k b proof of theorem to prove our result on robust generalized linear models we first study the distribution of gradients of the corresponding risk function lemma consider the model in equation then there exist universal constants such that p kcov q p p c c bounded fourth moments e e t v i var t v proof the gradient and it s expectation can be written as u hx e e x u xt u xt where u t t ke sup y t e sup e y t x u xt u xt sup q e y t x q q e u xt u xt where the last line follows from our assumption of smoothness now to bound the maximum eigenvalue of the cov kcov sup z t e t e e t z sup z t e t z sup z t e e t z h z ke sup z t e xxt u xt y h i z ke sup e z t xxt u xt y sup q r h i e u xt y ke e z t x to bound e h u xt y i we make use of the cr inequality cr inequality if x and y are random variables such that and where r then y using the cr inequality we have that h h h i i e u xt y e u xt u xt e u xt y c c where the last line follows from our assumption that is in the exponential family hence the cumulants are higher order derivatives of the function q p p p p c c ke kcov c q p p p p c c c q p p p c c bounded fourth moment to show that the fourth moment of the gradient distribution is bounded we have i i e e t v e e t v t e t z z a control of a b e t e xt v u xt y q q t e x v e u xt y q p e u xt u xt e u xt y v u x p u gt k t k t v u ux p p t gt k t k t where the last step follows from the fact that the central moment can be written as a polynomial involving the lower cumulants which in turn are the derivatives of the lognormalization function control of b e t ke by assumption k k t k are all bounded for k t which implies that there exist constants such that i e e t v previously we say that for some universal constants hence the gradient has bounded fourth moments having studied the distribution of the gradients we use lemma to characterize the stability of huber gradient estimator using lemma we know that at any 
point the huber gradient estimator g satisfies that with probability e kcov k log n p kg e substituting the upper bound on kcov from lemma we get that there are universal constants such that with probability at least e log kg e n p z e e n e e n p c c log z e e n e using equation we to ensure stability of gradient descent we need that e n get that gradient descent is stable as long as the number of samples n is large enough such e that e n p and the contamination level is such that log log e e n p for some constants and plugging the e into theorem we get back the result of theorem sponding and e n c proof of corollary we begin by studying the distribution of the random variable xy xxt lemma consider the model in equation with x n ip and w n then there exist universal constants such that e xy kcov xy i var xy t v bounded fourth moments e xy e xy t v proof mean xy xxt t e xy e xx e xy covariance cov xy e xxt i xxt i t cov xy e xxt i xxt i ip now z xxt i can be written as xp xp xp xp t xx xp xp xp xp xp xp then t e zz hence the covariance matrix can be written as cov xy ip t therefore kcov xy bounded fourth moment e xy e xy t v i we start from the lhs xy e xy t v i e xxt i wx t v h e x xt v v wv t x t x x v e v z z a b e w xt v z c the last line follows from two applications of the following inequality cr inequality if x and y are random variables such that and where r then y now to control each term on control of a using cauchy schwartz and normality of projections of normal distribution q q t a e e control of b b control of c c o using independence of w and normality of projections of normal distribution i therefore the e xy e xy t v c for the rhs var xy t v v t cov xy v kcov xy we saw that the kcov xy c so both the lhs and rhs scale with hence xy has bounded fourth moments now that we ve established that xy has bounded fourth moments implies that we can use as a mean estimation oracle using theorem we know that the oracle of outputs an estimate of e xy such that with probability at least we have p kcov xy log p n p using lemma to subsitute kcov xy we recover the statement of corollary d proof of theorem to prove our result on robust exponential family we first study the distribution of gradients of the corresponding risk function lemma consider the model in equation then there exists a universal constant such that e kcov a i var t v bounded fourth moments e e t v proof by fisher consistency of the negative we know that z z for the mean z e z e now for the covariance i l i h t z z a h bounded moments follows from our assumption that the sufficient statistics have bounded moments having studied the distribution of the gradients we use lemma to characterize the stability of huber gradient estimator using lemma we know that at any point the huber gradient estimator g satisfies that with probability e kcov k log n p kg e substituting the upper bound on kcov from lemma we get that there are universal constants such that e kg e n p log p z e e n e by assumption therefore we just have that in this case we have that e n e into theorem for some universal constant plugging the corresponding and e n we get back the result of corollary e proof of corollary using the contraction property of projections we know that b b k b by fisher consistency of the negative we know that z the true parameter can be obtained by inverting the operator whenever possible k b k b z b z where is the convex conjugate of a we can use the following result to control the lipschitz smoothness theorem duality assume f is closed 
and convex then f is smooth with parameter m if and only if its convex conjugate f is strongly convex with parameter m a proof of the above theorem can be found in hence we have that z b kb by assumption we have that the fourth moments of the sufficient statistics are bounded we also know that cov z a which implies that we can use as our oracle using lemma we get that there exists universal constants such that with probability at least p kb z log p n p combining the above with equation recovers the result of corollary f proof of theorem before we present the proof of theorem we first study the distribution of gradients of the loss function this will help us bound the error in the gradient estimator lemma consider the model in equation suppose the covariates x rp have bounded and the noise w has bounded moments then there exist universal constants such that e kcov where and e xxt proof we start by deriving the results for e l y xt xt w xxt e next we bound the operator norm of the covariance of the gradients at any point covariance cov e xxt xxt t t t t cov e xx xx now we want to bound kcov cov cov e xxt xxt sup y t e xx xx y t t t sup y t e xxt xxt y sup y sup e y t xxt z y sup y sup y sup e y t x xt z y t e y t x xt z e y t x xt z e y t x y q e z t x where the second last step follows from and the last step follows from our assumption of bounded moments see equation we now proceed to the proof of theorem from lemma we know that at any point e satisfies the following with the gradient estimator described in algorithm g dne probability at least e c kg dne q tr cov log n e we substitute the upper bound for kcov from lemma in the above equation e c kg dne c q q tr cov log n e p log n e s p log n e z e e n s p log n e z e e n to complete the proof of this theorem we use the results from theorem note that the e this holds when gradient estimator satisfies the stability condition if e n n e e p log e into theorem gives us now suppose n e satisfies the above condition then plugging e n the required result g proof of theorem to prove the theorem we use the result from lemma where we derived the following expression for covariance of p kcov q p p c c from lemma we know that at any point the gradient estimator described in algorithm e satisfies the following with probability at least g dne e c kg dne q tr cov log n e substitute the upper bound for kcov in the above equation we get q log e kg dne c tr cov n e s p p log n e z e e n v u p u b pb c c p log t z n e e e n we now use the results from theorem the gradient estimator satisfies the stability condition e this holds when if e n p e n e p log e into theorem gives us now suppose n e satisfies the above condition then plugging e n the required result h proof of theorem the proof proceeds along similar lines as the proof of theorem to prove the theorem we utilize the result of lemma where we showed that kcov a combining this result with lemma we get that with probability at least q log e kg dne c tr cov n e s a p log c n e z e e n e the stability condition is always satisfied as long as substituting since e n e into theorem gives us the required result e n i upper bound on contamination level we provide a complementary result which gives an upper bound for the contamination level based on the initialization point above which algorithm would not work the key idea is that the error incurred by any mean estimation oracle is lower bounded by the variance of the distribution and that if the zero vector lies within that error ball then any mean oracle can 
be forced to output as the mean for algorithm this implies that in estimating the mean of the gradient if the error is high then one can force the mean to be which forces the algorithm to converge for the remainder of the section we consider the case of linear regression with x n ip in the asymptotic regime of n lemma consider the model in equation with x n ip and w n then there exists a universal constant such that if then for every gradient oracle there exists a contamination distribution q such that algorithm will converge to even when the number of samples n proof using lemma we know that for any point xxt kcov where let represent the distribution similarly let q represent the corresponding distribution then using theorem we know that the minimax rate for estimating the mean of the distribution of gradients is given by inf sup q kb b q the above p statement says that at any point any mean oracle will always incur an error of in estimating the gradient q forp any oracle there exists some adversarial contamination q such that whenever then suppose that the contamination level is such that p c then for every oracle there exists a corresponding q such that algorithm will remain stuck at plugging we recover the statement of the lemma chen et al provide a general minimax lower bound of for in this setting in contrast using algorithm with as oracle we can only o log p close to the true parameter even when the contamination is small which implies that our procedure is not minimax optimal our approach is nonetheless the only practical algorithm for robust estimation of general statistical models j details and analysis of algorithm in this section we present a refined analysis of the algorithm from we begin by introducing some preliminaries we subsequently analyze the algorithm in and finally turn our attention to the general algorithm preliminaries unless otherwise stated we assume throughout that the random variable x has bounded fourth moments for every unit vector v e hx e hx we summarize some useful results from which bound the deviation of the conditional from the true lemma lemma let x be a univariate random variable with bounded fourth moments and let a be any with event with probability p a then p e x lemma lemma let x be a univariate random variable with e x e x and let e x let a be any with event with probability p a then p e x corollary corollary let a be any event with probability p a and let x be a random variable with bounded fourth moments we denote e xx t e e t to be the conditional covariance matrix we have that p p for random variables with bounded fourth moments we can use chebyshev s inequality to obtain tail bounds lemma lemma let x have bounded fourth moments then for every unit vector v we have that p vi e hx vi t p e hx our proofs also use the matrix bernstein inequality for rectangular matrices as a preliminary we consider a finite sequence zk of independent random matrices of size we assume that each random matrix satisfies e zk and kzk kop r almost surely we define x x max k e zk zkt kop k e zk zkt kop k k with these preliminaries in place we use the following result from lemma for all t p x k zk op t exp equivalently with probability at least s x log log zk op k we let i denote the set of all intervals in the following is a standard uniform convergence result lemma suppose xn p then with probability at least r n log en log i xi i sup p i n n algorithm huber outlier gradients truncation function huberoutliergradienttruncation sample gradients s corruption level 
dimension p if then let a b be smallest interval containing log tion of points se s a b return se else let s i be the samples with the ith only s i hx ei i s for i to p do a i hubergradientestimator s i end for let b r a be the ball of smallest radius centered at a containing cp p log fraction of points in se s b r a return se end if end function we now turn our attention to an analysis of algorithm for the case the case when p firstly we analyze algorithm when p lemma suppose that is a distribution on with mean variance and bounded fourth moments there exist positive universal constants such that given n samples from the distribution in the algorithm with probability at least returns an estimate b such that r r r log log log kb n where t q n n log which can be further simplified to kb r r r n log log log n n n proof by an application of hoeffding s inequality we obtain that with probability at least q the fraction of corrupted samples samples from the distribution q is less than log we condition on this event through the remainder of this proof we let denote the fraction of corrupted samples further we let sp be the samples from the true distribution let np be the cardinality of this set np let be the interval around containing mass of then using lemma we have that length using lemma we obtain that with probability at least the number of samples from the distribution p that fall in the interval is at least t where t is upper bounded as r log en log n now we let se be the set of points in the smallest interval containing t fraction of all the points using vc theory we know that for every interval i r there exists some universal constant such that p p x d p x sd exp this can be as that with probability at least there exists a universal constant such that r r d log log sup p x d p x sd nd n i z t using equation we know that t fraction of sd lie in let se be the set of points in the smallest interval containing t fraction of the points we know that the length of minimum interval containing t fraction of the points of s is less than length of smallest interval containing t fraction of points of sd which in turn is less than length of now and minimum interval containing t fraction of points of sd need to overlap this is because n is large enough such that t hence the extreme points for such an interval can be atmost away hence the distance of all chosen from will be within the length moreover the interval of minimum length with fraction of s will contain at least t fraction of sd e by controlling the sources of error hence we can bound the error of mean s all chosen noise points are within length and there are atmost of them hence the maximum error can be next the mean of chosen good points will converge to the mean of the conditional distribution points sampled from d but conditioned to lie in the minimum length interval the variance of these random variables is upper bounded using lemma to control the distance between the mean e x and the conditional mean e where a is the event that a sample x is in the chosen interval we know that p a t hence using lemma we get that there exists a constant such that x e t hence with probability at least the mean of se will be within r log length t n q taking over all conditioning statements and upper bounding with log we recover the statement of the lemma the case when p to prove the case for p we use a series of lemmas lemma proves that the outlier filtering constrains the points in a ball around the true mean lemma controls the error in e lemma controls 
the mean and covariance the true distribution after outlier filtering d e the error for the mean of s when projected onto the bottom span of the covariance matrix lemma suppose that is a distribution on rp with mean covariance and bounded fourth moments there exist positive universal constants such that given n samples from the distribution in equation we can find a vector a rp such that with probability at least ka np tr log n r r np p log tr log n n r p proof pick n orthogonal directions vn and use method for and using union bound we can recover the result next we prove the case when p firstly we prove that after the outlier step lemma after the outlier removal step there exists universal constants such that with probability at least every remaining point x satisfies kx q p and t t log and n q q log is the fraction of samples corrupted t log np here where proof let se be the set of points chosen after the outlier filtering let sed be set of good points chosen after the outlier filtering let sen be the set of bad points chosen after the outlier filtering using vc theory we know that for every closed ball b r r there exists a constant such that with probability at least s n p log sup x d p x sd n b z let b b r for p then we claim that p x b d to see this suppose we have some x letpz x let zi z t vi for some orthogonal directions vp let z c p p e z now e z maxi e plugging this in the above we have that p x b d hence we have that p x b sd using lemma we have that at least fraction of good points are away from a hence we have that the minimum radius of the ball containing all the has a radius of atmost which when combined with the triangle inequality recovers the statement of lemma e as before let se be the set of points after outlier filtering let mean s sd mean sed e mean sen sn lemma let sed be the set of clean points remaining after the outlier filtering then with probability at least we have that r p p log log n n r log n and n where n r p p log log p t n n proof we first prove the bounds on the mean shift z z b a control of b we use lemma on x xt for x d and a be the event e d that x is not removed by the outlier filtering p control of a using lemma we have that now we use bernstein s inequality lemma with r c b we get that with probability at least r p p r b log log n n next we prove the bound for covariance matrix z by corollary x x t e d from to control we use bernstein s inequality with zk k de kn de lemma we know that the points are constrained in a ball plugging this into lemma r log log c r n n where c b plugging in the values we get that r p p log log p t n n finally we have that r log log p p p t n n z n lemma let w be the bottom principal components of the covariance matrix after filtering then there exists a universal constant c such that with probability at least we have that n n where pw is the projection matrix on the bottom of n is as defined in lemma and n t t log n proof we have z z e f by weyl s inequality we have that e f control of f tr f r b log t t n z f n where t q n log control of e np e hence we have that p using that w is the space spanned by the bottom eigenvectors of and pw is corresponding projection operator we have that h p i t pw ip following some algebraic manipulation in we get that n having established all required results we are now ready to prove lemma we restate the result for the sake of completeness theorem suppose that is a distribution on rp with mean covariance and bounded fourth moments there exist positive universal constant c such that given 
n samples from the distribution in equation the algorithm with probability at least returns an estimate b such that r p log p log p log kb log p n where q log p log log and q p log p log n proof we divide n samples into p different sets we choose the first set and keep that as our active set of samples we run our outlier filtering on this set and let the remaining samples after the outlier filtering be sed by orthogonality of subspaces spanned by eigenvectors coupled with triangle inequality and contraction of projection operators we have that pv kb b kb pv kb b kb bv is the mean vector where v is the span of the top principal components of and where of returned by the running the algorithm on the reduced dimensions dim v from lemma both n and n are monotonically increasing in the dimension moreover the upper bound in lemma is also monotonically increasing in the dimension p hence the error at each step of the algorithm can be upper bounded by error incurred when running on dimension p with log p samples and probability of log hence the overall error for the recursive algorithm can be upper bounded as kb b log p combining lemma and lemma which are instantiated for log p samples and probability log p we get r log p log p log kb log p n
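To make the preceding analysis concrete, the following is a minimal sketch of the overall procedure: projected gradient descent driven by a robust estimate of the mean of the per-sample gradients. It is illustrative only. The gradient oracle below is a simple coordinate-wise trimmed mean standing in for the agnostic mean estimator of Lai et al. analyzed above; the squared-loss gradient and the ball projection correspond to the linear-regression and bounded-parameter-set examples discussed earlier; and the function names, step size, and projection radius are assumptions of this sketch rather than the authors' implementation.

```python
import numpy as np

def linear_regression_gradients(theta, X, y):
    """Per-sample gradients of the squared loss 0.5 * (y_i - <x_i, theta>)^2:
    row i is (x_i^T theta - y_i) * x_i."""
    residuals = X @ theta - y                 # shape (n,)
    return residuals[:, None] * X             # shape (n, p)

def trimmed_mean_oracle(grads, eps=0.1):
    """Coordinate-wise trimmed mean of the per-sample gradients: a crude
    stand-in for a robust multivariate mean estimator under eps-contamination."""
    n = grads.shape[0]
    k = min(int(np.ceil(eps * n)), (n - 1) // 2)
    sorted_grads = np.sort(grads, axis=0)     # sort each coordinate separately
    return sorted_grads[k:n - k].mean(axis=0)

def project_ball(theta, radius):
    """Euclidean projection onto the ball of the given radius (the bounded
    parameter space used, e.g., for logistic regression)."""
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

def robust_gradient_descent(X, y, grad_fn, gradient_oracle,
                            step=0.1, radius=10.0, n_iters=50):
    """Projected gradient descent driven by an inexact, possibly biased
    gradient estimate supplied by `gradient_oracle` at every iterate."""
    theta = np.zeros(X.shape[1])              # theta^0 inside the feasible set
    for _ in range(n_iters):
        grads = grad_fn(theta, X, y)          # (n, p) per-sample gradients at theta^t
        g_hat = gradient_oracle(grads)        # robust estimate of the risk gradient
        theta = project_ball(theta - step * g_hat, radius)
    return theta

# usage on hypothetical data X, y:
# theta_hat = robust_gradient_descent(X, y, linear_regression_gradients,
#                                     lambda g: trimmed_mean_oracle(g, eps=0.05))
```

Because the oracle returns a biased, inexact gradient, the iterates contract only to a neighborhood of the true parameter whose radius is set by the oracle error, exactly as in the stability theorem above; since convergence is linear, a logarithmic number of iterations suffices.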
2
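For the heavy-tailed setting, the gradient oracle analyzed above is Minsker's geometric-median-of-means estimator. A rough stand-in, compatible with the `robust_gradient_descent` loop sketched above, is given below; the Weiszfeld iteration and the rule mapping the confidence level to a number of blocks are heuristic choices of this sketch, not the authors' code.

```python
import numpy as np

def geometric_median(points, n_iters=100, tol=1e-8):
    """Weiszfeld iteration for the geometric median of the rows of `points`."""
    z = points.mean(axis=0)
    for _ in range(n_iters):
        d = np.maximum(np.linalg.norm(points - z, axis=1), tol)
        w = 1.0 / d
        z_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z

def median_of_means_oracle(grads, delta=0.05):
    """Split the per-sample gradients into blocks, average within each block,
    and return the geometric median of the block means (Minsker-style).
    Letting the number of blocks grow like log(1/delta) is a common heuristic."""
    n = grads.shape[0]
    n_blocks = max(1, min(n, int(np.ceil(3.0 * np.log(1.0 / delta)))))
    blocks = np.array_split(np.random.permutation(n), n_blocks)
    block_means = np.stack([grads[idx].mean(axis=0) for idx in blocks])
    return geometric_median(block_means)

# drop-in replacement for the trimmed-mean oracle in the earlier sketch:
# theta_hat = robust_gradient_descent(X, y, linear_regression_gradients,
#                                     lambda g: median_of_means_oracle(g, delta=0.05))
```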
quasi periodicity quantification in video data using topology jan christopher and jose perea january abstract this work introduces a novel framework for quantifying the presence and strength of recurrent dynamics in video data specifically we provide continuous measures of periodicity perfect repetition and quasiperiodicity superposition of periodic modes with periods in a way which does not require segmentation training object tracking or surrogate signals our methodology operates directly on video data the approach combines ideas from nonlinear time series analysis delay embeddings and computational topology persistent homology by translating the problem of finding recurrent dynamics in video data into the problem of determining the circularity or toroidality of an associated geometric space through extensive testing we show the robustness of our scores with respect to several noise we show that our periodicity score is superior to other methods when compared to periodicity rankings and furthermore we show that our quasiperiodicity score clearly indicates the presence of biphonation in videos of vibrating vocal folds which has never before been accomplished end to end quantitatively introduction periodicity characterizes many natural motions including animal locomotion spinning wheels oscillating pendulums etc quasiperiodicity thought of as the superposition of frequencies occurs naturally during transitions from ordinary to chaotic dynamics the goal of this work is to automate the analysis of videos capturing periodic and quasiperiodic motion in order to identify both classes of motion in a unified framework we generalize sliding window embeddings to reconstruct periodic and quasiperiodic attractors from we analyze the resulting attractors using persistent homology a technique which combines geometry and topology section and we return scores in the range that indicate the degree of periodicity or quasiperiodicity in the corresponding video we show that our periodicity measure compares favorable to others in the literature when ranking videos section furthermore to our knowledge there is no other method able to quantify the existence of quasiperiodicity directly from video data our approach is fundamentally different from most others which quantify periodicity in video for instance it is common to derive signals from the video and apply fourier or autocorrelation to measure periodicity by contrast our technique operates on raw pixels avoiding common video department of electrical and computer engineering duke university durham nc usa ctralie department of mathematics and department of computational mathematics science engineering michigan state university east lansing mi usa joperea some of the analysis and results appeared as part of the thesis of the first author code to replicate results https supplementary material and videos https preprocessing and tracking entirely using geometry over also has advantages for our applications in fact as a simple synthetic example shows figure the fourier transform of quasiperiodic signals is often very close to the fourier transform of periodic signals by contrast the sliding window embeddings we design yield starkly different geometric structures in the periodic and quasiperiodic cases we exploit this to devise a quasiperiodicity measurement which we use to indicate the degree of biphonation in videos of vibrating vocal folds section which is useful in automatically diagnosing speech pathologies in the context of applied topology our 
quasiperiodicity score is one of the first applications of persistent to high dimensional data which is largely possible due to recent advancements in the computational feasibility of persistent homology prior work on recurrence in videos surrogate signals one common strategy for detecting periodicity in video is to derive a function to act as a surrogate for its dynamics and then to use either frequency domain fourier transform or time domain autocorrelation peak finding techniques one of the earliest works in this genre finds level set surfaces in a spatiotemporal xyt volume of video all frames stacked on top of each other and then uses curvature scale space on curves that live on these spatiotemporal surfaces as the function use fourier transforms on pixels which exhibit motion and define a measure of periodicity based on the energy around the fourier peak and its harmonics extract contours and find eigenshapes from the contours to classify and parameterize motion within a period frequency estimation is done by using fourier analysis and peak detection on top of other statistics derived from the contours such as area and center of mass finally derive a surrogate function based on mutual information between the first and subsequent frames and then look for peaks in the similarity function with the help of a watershed method matrices another class of techniques relies on matrices ssms between frames where similarity can be defined in a variety of ways track a set of points on a foreground object and compare them with an affine invariant similarity another widely recognized technique for periodicity quantification derives periodicity measures based on matrices of pixel differences this technique has inspired a diverse array of applications including analyzing the cycles of jellyfish analyzing bat wings and analyzing videos of autistic spectrum children performing characteristic repetitive motions such as hand flapping we compare to this technique in section miscellaneous techniques for periodic video quantification there are also a number of works that don t fall into the two categories above some works focus solely on walking humans since that is one of the most common types of periodic motion in videos of interest to people look at the braiding patterns that occur in xyt slices of videos of walking people perform blob tracking on the foreground of a walking person and use the ratio of the second and first eigenvalues of pca on that blob for more general periodic videos make a codebook of visual words and look for repetitions within the resulting string take a deep learning approach to counting the number of periods that occur in a video segment they use a convolutional neural network on spatially downsampled regions of interest which are uniformly spaced in time to estimate the length of the cycle finally perhaps the most philosophically similar work to ours is the work of who use cohomology to find maps of mocap data to the circle for parameterizing periodic motions though this work does not provide a way to quantify periodicity our work we show that geometry provides a natural way to quantify recurrence periodicity and quasiperiodicity in video by measuring the shape of delay embeddings in particular we propose several optimizations section which make this approach feasible the resulting measure of quasiperiodicity for which quantitative approaches are lacking is used in section to detect anomalies in videos of vibrating vocal folds finally in contrast to both frequency and time domain 
techniques our method does not rely on the period length being an integer multiple of the sampling rate background delay embeddings and their geometry recurrence in video data can be captured via the geometry of delay embeddings we describe this next video delay embeddings we will regard a video as a sequence of image frames indexed by the positive real numbers that is given positive integers w width and h height a video with w h pixels is a function x rw in particular a sequence of images rw sampled at discrete times yields one such function via interpolation for an integer d known as the dimension a real number known as the delay and a video x rw we define the sliding window also referred to as time delay embedding of x with parameters d and at time t as the vector x t x t swd x t rw x t the subset of rw resulting from varying t will be referred to as the sliding window embedding of x we remark that since the pixel measurement locations are fixed the sliding window embedding is an eulerian view into the dynamics of the video note that delay embeddings are generally applied to time series which can be viewed as videos w h in our framework hence equation is essentially the concatenation of the delay embeddings of each individual pixel in the video into one large vector one of the main points we leverage in this paper is the fact that the geometry of the sliding window embedding carries fundamental information about the original video we explore this next geometry of video delay embeddings as a motivating example consider the harmonic periodic signal t cos t fh t cos and the quasiperiodic signal fq t cos t cos t for color videos we can treat each channel independently yielding a vector in rw in practice there isn t much of a difference between color and grayscale embeddings in our framework for the videos we consider we refer to fh as harmonic because its constitutive frequencies and are commensurate that is they are linearly dependent over the rational numbers q by way of contrast the underlying quencies of the signal fq and are linearly independent over q and hence we use the term quasiperiodicity as in the dynamics literature to denote the superposition of periodic processes whose frequencies are this differs from other definitions in the literature which regard quasiperiodic as any deviation from perfect repetition a geometric argument from see equation below and the discussion that follows shows that given a periodic function f r with exactly n harmonics if d and d then the sliding window embedding swd f is a topological circle a closed curve without selfintersections which wraps around an n torus s z c tn s s z n as an illustration we show in figure a plot of fh and of its sliding window embedding swd fh via a pca principal component analysis projection figure sliding window embedding of the harmonic signal fh colors in the signal correspond to colors of the points in the pca plot the sliding window embedding traces a topological circle wrapped around a torus however if g r r is quasiperiodic with n distinct frequencies then for appropriate d and swd g is dense in fills out tn figure shows a plot of the quasiperiodic signal fq t and a projection via pca of its sliding window embedding swd fq figure sliding window embedding of the quasiperiodic signal fq colors in the signal correspond to colors of the points in the pca plot the sliding window embedding is dense in a torus the difference in geometry of the delay embeddings is stark compared to the difference between their power 
spectral densities as shown in figure figure the power spectral densities samples of commensurate and signals with relative harmonics at ratios and respectively the difference is not nearly as evident as in the geometry of their sliding window embeddings additionally unless sampling is commensurate with a frequency a fixed fourier basis causes that frequency component to bleed into many frequency bins in a pattern making precise peak finding difficult moreover as we will see next the interpretation of periodicity and quasiperiodicity as circularity and toroidality of sliding window embeddings remains true for videos with higher resolution max w h the rest of the paper will show how one can use persistent homology a tool from the field of computational topology to quantify the presence of quasi periodicity in a video by measuring the geometry of its associated sliding window embedding in short we propose a periodicity score for a video x which measures the degree to which the sliding window embedding swd x spans a topological circle and a quasiperiodicity score which quantifies the degree to which swd x covers a torus this approach will be validated extensively we show that our quasi periodicity detection method is robust under several noise models motion blur additive gaussian white noise and mpeg bit corruption we compare several periodicity quantification algorithms and show that our approach is the most closely aligned with human subjects finally we provide an application to the automatic classification of dynamic regimes in laryngeal geometry of video delay embeddings though it may seem daunting compared to the case the geometry of the delay embedding shares many similarities for periodic videos as shown in let us argue why sliding window embeddings from quasi periodic videos have the geometry we have described so far to this end consider an example video x that contains a set of n frequencies let the amplitude of the nth frequency and ith pixel be ain for simplicity but without loss of generality assume that each is a cosine with zero phase offset then the time series at pixel i can be written as xi t n x ain cos t grouping all of the coefficients together into a w h n matrix a we can write x t n x an cos t where an stands for the nth column of constructing a delay embedding as in equation an cos t n swd x t n a cos t and applying the cosine sum identity we get swd x t n x cos t sin t where rw are constant vectors in other words the sliding window embedding of this video is the sum of linearly independent ellipses which lie in the space of d frame videos at resolution w as shown in for the case of commensurate frequencies when the window length is just under the length of the period all of the and vectors become orthogonal and so they can be recovered by doing pca on swd x t figure shows the components of the first pca vectors for a horizontal line of pixels in a video of an oscillating pendulum note how the oscillations are present both temporally and spatially x t figure showing an xt slice of the principal components on for the synthetic video of an oscillating pendulum d is chosen just under the period length frames the high dimensional geometry of repeated pulses using eulerian coordinates has an important impact on the geometry of delay embeddings of natural videos as figure shows pixels often jump from foreground to background in a pattern similar to square waves these types of abrupt transitions require higher dimensional embeddings to reconstruct the geometry to see why first 
extract one period of a signal with period at a pixel xi t xi t t fi t otherwise then xi t can be rewritten in terms of the pulse as xi t x fi t m since xi t repeats itself regardless of what fi t looks like periodic summation discretizes the frequency domain x f xi t k f fi t figure an example of an eulerian pixel witnessing a transition in a video of a woman doing jumping jacks red green and blue channels are plotted over time these transitions induce a per pixel periodic signal with sharp transitions which leads to high dimensionality in an appropriate sliding window embedding switching back to the time domain we can write xi t as xi t x f fi t ei t in other words each pixel is the sum of some constant offset plus a possibly infinite set of harmonics at integer multiples of for instance applying equation to a square wave of period centered at the origin is a roundabout way of deriving the fourier series sin t sin t sin t by sampling the sinc function sin f at intervals of every odd m coincides with proportional to and every even harmonic is zero conciding with in general the sharper the transitions are in xi t the longer the tail of f fi t will be and the more high frequency harmonics will exist in the embedding calling for a higher delay dimension to fully capture the geometry since every harmonic lives on a linearly independent ellipse similar observations about harmonics have been made in images for collections of patches around sharp edges figure persistent homology informally topology is the study of properties of spaces which do not change after stretching without gluing or tearing for instance the number of connected components and the number of essentially different loops which do not bound a disk are both topological properties of a space it follows that a circle and a square are topologically equivalent since one can deform one onto the other but a circle and a line segment are not because that would require either gluing the endpoints of the line segment or tearing the circle homology is a tool from algebraic topology designed to measure these types of properties and persistent homology is an adaptation of these ideas to discrete collections of points sliding window embeddings we briefly introduce these concepts next simplicial complexes a simplicial complex is a combinatorial object used to represent and discretize a continuous space with a discretization available one can then compute topological properties by algorithmic means formally a simplicial complex with vertices in a nonempty set v is a collection k of nonempty finite subsets v so that k always implies an element k is called a simplex and if has n elements then it is called an the cases n are special are called vertices are called edges and are called faces here is an example to keep in mind the circle s z c is a continuous space but its topology can be captured by a simplicial complex k with three vertices a b c and three edges a b b c a c that is in terms of topological properties the simplicial complex k a b c a b b c a c can be regarded as a combinatorial surrogate for s they both have connected component one loop which does not bound a region and no other features in higher dimensions persistent homology of point clouds the sliding window embedding of a video x is in practice a finite set swd x swd x t t t determined by a choice of t r finite moreover since swd x rw then the restriction of the ambient euclidean distance endows swd x with the structure of a finite metric space discrete metric spaces also 
referred to as point clouds are trivial from a topological point of view a point cloud with n points simply has n connected components and no other features holes in higher dimensions however when a point cloud has been sampled a continuous space with topology a circle or a torus one would expect that appropriate simplicial complexes with vertices on the point cloud should reflect the topology of the underlying continuous space this is what we will exploit next given a point cloud x dx where x is a finite set and dx is a distance function the complex or rips complex for short at scale is the collection of subsets of x with diameter less than or equal to x x dx xj that is x is the simplicial complex with vertex set equal to x constructed by adding an edge between any two vertices which are at most apart adding all triangular faces whose bounding edges are present and more generally adding all the whose k bounding facets have been included we show in figure the evolution of the rips complex on a set of points sampled around the unit circle epsilon epsilon epsilon figure the rips complex at three different scales on a point cloud with points sampled around s the idea behind persistent homology is to track the evolution of topological features of complexes such as x as the scale parameter ranges from to some maximum value for instance in figure one can see that x x has distinct connect components one for each point x has three connected components and x has only one connected component this will continue to be the case for every similarly there are no closed loops in x or x bounding empty regions but this changes when increases to indeed x has three holes the central prominent hole and the two small ones to the left side notice however that as increases beyond these holes will be filled by the addition of new simplices in particular for one has that x will have only one connected component and no other topological features in higher dimensions the family r x x is known as the rips filtration of x and the of topological features in each dimension connected components holes voids etc as changes can be codified in what are referred to as the persistence diagrams of r x specifically for each dimension n connected components holes voids etc one can record the value of for which a particular topological feature of the rips filtration appears its birth time and when it disappears its death time the times b d of features for r x form a multiset dgmn r x a set whose elements can come with repetition known as the persistence diagram of the rips filtration on x since dgmn r x is just a collection of points in the region x y x y we will visualize it as a scatter plot the persistence of a topological feature with times b d is the quantity d b its lifetime we will also include the diagonal y x in the scatter plot in order to visually convey the persistence of each pair in this setting points far from the diagonal with large persistence represent topological features which are stable across scales and hence deemed significant while points near the diagonal with small persistence are often associated with unstable features we illustrate in figure the process of going from a point cloud to the persistence diagram of its rips filtration original point cloud time of death persistence diagram time of birth class death d class birth d class death d class birth d figure from a point cloud to the persistence diagram of its rips filtration connected edges in the rips filtration are drawn in blue the of a class is 
indicated in red and filled in triangles are shaded green we remark that the computational task of determining all persistent homology classes of a filtered simplicial complex can surprisingly be reduced to computing the homology of a single simplicial complex this is in fact a problem in linear algebra that can be solved via elementary row and column operations on boundary matrices the persistent homology of r swd x and in particular its persistence diagrams for n are the objects we will use to quantify periodicity and quasiperiodicity in a video x figures and show the persistence diagrams of the rips filtrations on the sliding window embeddings for the commensurate and signals from figures and respectively we use fast new code from the ripser software package to make persistent computation feasible figure sliding window embedding of the harmonic signal fh left and the persistence diagrams n right of the associated rips filtration the sliding window embedding swd fh traces a topological circle wrapped around a torus the persistence diagram in dimension one shows only one pair with prominent persistence this is consistent with a point cloud sampled around a space with the topology of a circle figure sliding window embedding of the quasiperiodic signal fq left and the persistence diagrams n right of the associated rips filtration the sliding window embedding swd fq is dense on a torus the persistence diagram in dimension one shows two pairs with prominent persistence while the persistence diagram in dimension two shows one prominent pair this is consistent with a point cloud sampled around a space with the topology of a torus implementation details reducing memory requirements with svd suppose we have a video which has been discretely sampled at n different frames at a resolution of w h and we do a delay embedding with dimension d for some arbitrary assuming bit floats per grayscale value storing the sliding window embedding requires hn d bytes for a low resolution video only seconds long at using d already exceeds of memory in what follows we will address the memory requirements and ensuing computational burden to construct and access the sliding window embedding indeed constructing the rips filtration only requires pairwise distances between different delay vectors this enables a few optimizations first of all for n points in rw h where n w h there exists an n linear subspace which contains them in particular let a be the w h n matrix with each video frame along a column performing a singular value decomposition a u sv t yields a matrix u whose columns form an orthonormal basis for the aforementioned n linear subspace hence by finding the coordinates of the original frame vectors with respect to this orthogonal basis u t a u t u sv sv and using the coordinates of the columns of sv instead of the original pixels we get a sliding window embedding of lower dimension u t x t swd t u t x t for which kswd t swd k kswd x t swd x k note that sv can be computed by finding the eigenvectors of at a this has a cost of o w h n which is dominated by w h if w h n in our example above this alone reduces the memory requirements from to of course this procedure is the most effective for short videos where there are actually many fewer frames than pixels but this encompasses most of the examples in this work in fact the point for a video is minutes a similar approach was used in the classical work on eigenfaces when computing the principal components over a set of face images distance computation via 
diagonal convolutions a different optimization is possible if that is if delays are taken exactly on frames and no interpolation is needed in this case the squared euclidean distance between x i and x j is d x x i x j i m x j m dx let be the n n matrix of all pairwise squared euclidean distances between frames possibly computed with the memory optimization in section and let be the n d n d matrix of all pairwise distances between delay frames then equation implies that can be obtained from dx via convolution with a rect function or a vector of of length d over all diagonals in dx a moving average this can be implemented in time o n with cumulative sums hence regardless of how d is chosen the computation and memory requirements for computing depend only on the number of frames in the video also dy can simply be computed by taking the entry wise square root of another o n computation a similar scheme was used in when comparing distances of shape descriptors in videos of meshes figure shows matrices on embeddings of the pendulum video with no delay and with a delay approximately matching the period the effect of a moving average along diagonals with delay eliminates the caused by the video s mirror symmetry even for videos without mirror symmetries such as a video of a running dog figure introducing a delay brings the geometry into focus as shown in figure pairwise distances tau d pairwise distances tau d figure matrices and for a video of the oscillating pendulum bright colors indicate far distances and dark colors indicate near distances this example clearly shows how adding a delay embedding is like performing block averaging along all diagonals of the pairwise distance matrices and it gets rid of the mirror symmetry time figure an animation of a periodic video of a running dog which unlike an oscillating pendulum does not have mirror symmetry in the second half of its period pairwise distances tau d pairwise distances tau d figure matrices and for a video of a running dog even without the delay embedding d the video frames still form a topological loop however a delay embedding with d cleans up the geometry and leads to a rounder loop as seen in the resulting ssm normalization a few normalization steps are needed in order to enable fair comparisons between videos with different resolutions or which have a different range in periodic motion either spatially or in intensity first we perform a and sphere normalize vector normalization which was shown in to have nice theoretical properties that is g d t sw swd t swd t t t swd t t where is a w h d vector of all ones in other words one subtracts the mean of each component of each vector and each vector is scaled so that it has unit norm lives on the unit sphere in rw h subtracting the mean from each component will eliminate additive linear drift on top of the periodic motion while scaling addresses resolution magnitude differences note that we can still use the memory optimization in section but we can no longer use the optimizations in section since each window is normalized independently moreover in order to mitigate nonlinear drift we implement a simple convolution by the derivative of a gaussian for each pixel in the original video before applying the delay embedding t xi t this is a bandpass filter which could be replaced with any other bandpass filter leveraging application specific knowledge of expected frequency bounds this has the added advantage of reducing the number of harmonics enabling a smaller embedding dimesion scoring once the 
videos are normalized to the same scale we can score periodicity and quasiperiodicity based on the geometry of sliding window embeddings let dgmn be the persistence diagram for the rips filtration on the sliding window embedding of a video and define mpi dgmn as the largest difference d b for b d dgmn in particular dgmn max d b b d dgmn and mpi dgmn dgmn we propose the following scores periodicity score ps p s like we exploit the fact that for the rips filtration on s the persistence diagram has only one prominent pair with coordinates since this is the limit shape of a normalized perfectly periodic sliding window video the periodicity score is between not periodic and perfectly periodic quasiperiodicity score qps r qp s this score is designed with the torus in mind we score based on the second largest persistence times the largest persistence since we want a shape that has two core circles and encloses a void to get a large score based on the theorem of homology the void should die the moment the smallest dies modified periodicity score mps m p s we design a modified periodicity score which should be lower for quasiperiodic videos than what the original periodicity score would yield note that we use field coefficients for all persistent homology computations since as shown by this works better for periodic signals with strong harmonics before we embark on experiments let us explore the choice of two crucial parameters for the sliding window embedding the delay and the dimension d in practice we determine an equivalent pair of parameters the dimension d and the window size dimension and window size takens embedding theorem is one of the most fundamental results in the theory of dynamical systems in short it contends that under appropriate hypotheses there exists an integer d so that for all d d and generic the sliding window embedding swd x reconstructs the state space of the underlying dynamics witnessed by the signal x one common strategy for determining a minimal such d is the false scheme the idea is to keep track of the nearest neighbors of each point in the delay embedding and if they change as d is increased then the prior estimates for d were too low this algorithm was used in recent work on video dynamics for instance even if we can estimate d however how does one choose the delay as shown in the sliding window embedding of a periodic signals is roundest so that the periodicity score p s is maximized when the window size satisfies the following relation d l here l is number of periods that the signal has in and k to verify this experimentally we show in figure how the periodicity score p s changes as a function of window size for the pendulum video and how the choice of window size from equation maximizes p to generate this figure we fixed a sufficiently large d and varied let us now describe the general approach given a video we perform a estimation step see section next which results in a positive real number d for a given d n large enough we let be so that figure varying the window size in a delay embedding of the synthetic pendulum video which has a period length around frames red dashed lines are drawn at the window lengths that would be expected to maximize roundness of the embedding for that period length based on theory in fundamental frequency estimation though figure suggests robustness to window size as long as the window is more than half of the period we may not know what that is in practice to automate window size choices we do a coarse estimate using fundamental 
frequency estimation techniques on a surrogate signal to get a signal we extract the first coordinate of diffusion maps using nearest neighbors on the raw video frames no delay after taking a smoothed time derivative note that a similar diffusionbased method was also used in recent work by to analyze the frequency spectrum of a video of an oscillating pendulum spring system in a quasiperiodic state once we have the diffusion time series we then apply the normalized autocorrelation method of to estimate the fundamental frequency in particular given a discrete signal x of length n define the autocorrelation as rt x xj however as observed by a more robust function for detecting periodicities is the squared difference function x dt xj which can be rewritten as dt mt where mt x finally suggest normalizing this function to the range to control for window size and to have an interpretation akin to a pearson correlation coefficient nt mt mt mt the fundamental frequency is then the inverse period of the largest peak in nt which is to the right of a zero crossing the zero crossing condition helps prevent an offset of from being the largest peak defining the normalized autocorrelation as in equation has the added advantage that the value of nt at the peak can be used to score periodicity which the authors call clarity values closer to indicate more perfect periodicities this technique will sometimes pick integer multiples of the period so we multiply nt by a slowly decaying envelope which is for lag and for the maximum lag to emphasize smaller periods figure shows the result of this algorithm on a periodic video and figure shows the algorithm on an irregular video figure diffusion maps normalized autocorrelation fundamental frequency estimation on a periodic vocal folds video section the chosen period length is as indicated by the red dot over the peak this matches with the visually inspected period length figure diffusion maps normalized autocorrelation fundamental frequency estimation on a video of vocal folds with irregular oscillations section experimental evaluation next we evaluate the effectiveness of the proposed modified periodicity and quasiperiodicity scores on three different tasks first we provide estimates of accuracy for the binary classifications or in the presence of several noise models and noise levels the results illustrate the robustness of our method second we quantify the quality of periodicity rankings from machine scores as compared to those generated by human subjects in a nutshell and after comparing with several periodicity quantification algorithms our approach is shown to be the most closely aligned with the perception of human subjects third we demonstrate that our methodology can be used to automatically detect the physiological manifestations of certain speech pathologies normal biphonation directly from videos of vibrating vocal folds classification under varying noise as shown empirically in a common source of noise in videos comes from camera shake blur this is captured by point spread functions resembling directed random walks figure and the amount of blur noise level is controlled by the extent in pixels of the walk other sources are additive white gaussian noise awgn controlled by the standard deviation of the gaussian kernel and mpeg bit errors quantified by the percentage of corrupted information figure shows examples of these noise types for classification purposes we use three main recurrence classes three types of periodic videos true periodic tp an 
oscillating pendulum a bird flapping its wings and an animation of a beating heart two types of quasiperiodic videos true quasiperiodic tq one showing two solid disks which oscillate sideways at rates and the second showing two stationary gaussian pulses with amplitudes modulated by cosine functions two videos without significant recurrence true tn a video of a car driving past a landscape and a video of an explosion each one of these seven videos is then corrupted by the three noise models at three different noise levels blur awgn bit error as follows given a particular video a noise model and noise level instances are generated by sampling noise independently at random results we report in table the area under the receiver operating characteristic roc curve or auroc for short for the classification task tp tn resp tq tn and binary classifier furnished by periodicity resp quasiperiodicity score for instance for the blur noise model with noise level of pixels the auroc from using the periodicity score to classify the instances of the heartbeat video as periodic and the b blur a original c awgn d bit err figure the results of applying motion blur additive white gaussian noise and mpeg bit corruption to a video frame table auroc values for different levels of noise from the binary classification task periodic bird flapping heart beating pendulum driving left subcell explosions right subcell based on the periodicity score equation also two synthetic quasiperiodic videos sideways disks modulated pulses are compared to the same two videos based on the quasiperiodicity score equation awgn awgn awgn bit err bit err blur bird flapping heart beat pendulum blur blur quasiperiodic disks quasiperiodic pulses bit err instances of the driving video as not periodic is similarly for the mpeg bit corruption model with of bit error the auroc from using the quasiperiodicity score to classify the instances of the quasiperiodic sideways disks as quasiperiodic and the instances of the explosions video as not quasiperiodic is to put these numbers in perspective auroc is associated with a perfect classifier and auroc corresponds to classification by a random coin flip overall the type of noise that degrades performance the most across videos is the bit error which makes sense since this has the effect of randomly freezing corrupting or even deleting frames which all interrupt periodicity the blur noise also affects videos where the range of motion is small the pendulum video for instance only moves over a range of pixels at the most extreme end so an pixel blur almost completely obscures the motion comparing human and machine periodicity rankings next we quantify the extent to which rankings obtained from our periodicity score equation as well as three other methods agree with how humans rank videos by periodicity the starting point is a dataset of different creative commons videos each seconds long at frames per second some videos appear periodic such as a person waving hands a beating heart and spinning carnival rides some of them appear nonperiodic such as explosions a traffic cam and drone view of a boat sailing and some of them are in between such as the pendulum video with simulated camera shake it is known that humans are notoriously bad at generating globally consistent rankings of sets with more than or elements however when it comes to binary comparisons of the type should a be ranked higher than b few systems are as effective as human perception specially for the identification of recurrent patterns in 
visual stimuli we will leverage this to generate a globally consistent ranking of the videos in our initial data set use amazon s mechanical turk amt to present each pair of videos in the set of each to three different users for a total of pairwise rankings unique amt workers contributed to our experiment using an interface as the one shown in figure figure the interface that humans are given on amt for pairwise ranking videos by periodicity in order to aggregate this information into a global ranking which is as consistent as possible with the pairwise comparisons we implement a technique known as hodge rank aggregation hodge rank aggregation finds the closest consistent ranking to a set of preferences in a least squares sense more precisely given a set of objects x and given a set of comparisons p x x we seek a scalar function s on all of the objects that minimizes the following sum x sb sa a b where vab is a real number which is positive if b is ranked higher than a and negative otherwise thus s is a function whose discrete gradient best matches the set of preferences with respect to an norm note that the preferences that we feed the algorithm are based on the pairwise rankings returned from amt if video b is greater than video a then we assign vab or otherwise since we have rankings for each video we actually assign weights of or the are if all rankings agree in one direction and the are if one of the rankings disagrees with the other two figure shows a histogram of all of the weighted scores from users on amt they are mostly in agreement though there are a few scores as comparison to the human scores we use three different classes of techniques for machine ranking of periodicity sliding windows sw we sort the videos in decreasing order of periodicity score equation we fix the window size at frames and the embedding dimension at frames which is enough to capture strong harmonics we also apply a time derivative of width to every frame histogram of weighted pairwise turk scores counts score figure the histogram of scores that the workers on amt gave to all pairwise videos the authors of this work present two different techniques to quantify periodicity from a matrix ssm of video frames the first is a frequency domain technique based on the peak of the average power spectral density over all columns rows of the ssm after linearly and applying a hann window to turn this into a continuous score we report the ratio of the peak minus the mean over the standard deviation this method will be referred to as frequency score as the authors warn the frequency peak method has a high susceptibility to false positives this motivated the design of a more robust technique in which works by finding peaks in the normalized autocorrelation of the gaussian smoothed ssms for videos with mirror symmetry the peaks will lie on a diamond lattice while for videos without mirror symmetry they will lie on a square lattice after peak finding within neighborhoods one simply searches over all possible lattices at all possible widths to find the best match with the peaks since each lattice is centered at the autocorrelation point no translational checks are necessary to turn this into a continuous score let e be the sum of euclidean distances of the matched peaks in the autocorrelation image to the best fit lattice let be the proportion of lattice points that have been matched and let be the proportion of peaks which have been matched to a lattice point then we give the final periodicity score as cdscore a lattice 
which fits the peaks perfectly with no error e and no false positive peaks will have a score of and any video which fails to have a perfectly matched lattice will have a score greater than hence we sort in increasing order of the score to get a ranking as we will show this technique agrees the second best with humans after our periodicity score ranking one of the main drawbacks is numerical stability of finding maxes in critical points around nearly diagonal regions in square lattices which will erroneously inflate the score also the lattice searching only occurs over an integer grid but there may be periods that aren t integer number of frames so there will always be a nonzero e for such videos by contrast our sliding window scheme can work for any real valued period length diffusion maps normalized autocorrelation clarity finally we apply the technique from section to get an autocorrelation function and we report the value of the maximum peak of the normalized autocorrelation to the right of a zero crossing referred to as clarity by values closer to indicate more perfect repetitions so we sort in descending order of clarity to get a ranking figure shows an example of these three different techniques on a periodic video there is a dot which rises above the diagonal in the persistence diagram a lattice is found which nearly matches the critical points in the autocorrelation image and autocorrelation function on diffusion maps has a nice peak first coordinate figure an example of the sw score top the clarity score bottom left and the cdscore bottom right matched peaks in green and lattice in blue on a periodic video of a man waving his arms from the kth dataset by contrast for a nonperiodic video figure there is hardly any persistent homology there is no well matching lattice and the first diffusion coordinate has no apparent periodicities first coordinate figure an example of the sw score top the clarity score bottom left and the cdscore bottom left matched peaks in green and lattice in blue on a video of an explosion which is nonperiodic results once we have the global human rankings and the global machine rankings we can compare them using the kendall score given a set of objects n objects x and two total orders and where xa xb if xa xb and xa xb if xa xb the kendall score is defined as x xi xj xi xj n n i j for two rankings which agree exactly the kendall score will be for two rankings which are exactly the reverse of each other the kendall score will be in this way it analogous to a pearson correlation between rankings table the kendall scores between all of the machine rankings and the hodge aggregated human rankings human freq cdscore clarity human freq cdscore clarity table average runtimes in milliseconds per video for all of the algorithms freq cdscore clarity table shows the kendall scores between all of the different machine rankings and the human rankings our sliding window video methodology agrees with the human ranking more than any other pair of ranking types the second most similar are the sw and the diffusion clarity which is noteworthy as they are both geometric techniques table also shows the average run times in milliseconds of the different algorithms on each video on our machine this does highlight one potential drawback of our technique since tda algorithms tend to be computationally intensive however at this scale videos with at most several hundred frames performance is reasonable periodicity and biphonation in high speed videos of vocal folds in this final task we 
apply our methodology to a real world problem of interest in medicine we show that our method can automatically detect certain types of voice pathologies from glottography or high speed videos fps of the left and right vocal folds in the human vocal tract in particular we detect and differentiate quasiperiodicity from periodicity by using our geometric sliding window pipeline quasiperiodicity is a special case of what is referred to as biphonation in the biological context where nonlinear phenomena cause a physical process to bifurcate into two different periodic modes often during a transition to chaotic behavior the torus structure we sketched in figure has long been recognized in this context but we provide a novel way of quantifying it similar phenomena exist in audio but the main reason for studying laryngeal high speed video is understanding the biomechanical underpinnings of what is perceived in the voice in particular this understanding can potentially lead to practical corrective therapies and surgical interventions on the other hand the presence of biphonation in sound is not necessarily the result of a physiological phenomenon it has been argued that it may come about as the result of changes in states of arousal in contrast with our work the existing literature on techniques usually employs an inherently lagrangian approach where different points on the left and right vocal folds are tracked and coordinates of these points are analyzed as time series this is a natural approach since those are the pixels where all of the important signal resides and wellunderstood signal processing technique can be used however edge detectors often require tuning and they can suddenly fail when the vocal folds close in our technique we give up the ability to localize the anomalies since we are not tracking them but in return we do virtually no preprocessing and our technique is domain independent results we use a collection of videos for this analysis drawn from a variety of different sources there are two videos which correspond to normal periodic vocal folds three which correspond to biphonation and two which correspond to irregular we manually extracted frames per video and autotuned the window size based on autocorrelation of diffusion maps section we then chose an appropriate and chose a time spacing so that each point cloud would have points as shown in table our technique is able to differentiate between the four classes we also show pca and persistence diagrams for one example for each class in figure we see what appears to be a loop in pca and one strong persistent dot confirms this in figure we see a prominent torus in the persistence diagram in figure we don t see any prominent structures in the persistence diagram even though pca looks like it could be a loop or a torus note however that pca only preserves of the variance in the signal which is why high dimensional techniques are important to draw quantitative conclusions table results of our sliding window pipeline on videos of periodic vocal folds biphonation and irregularities we give the max persistence periodicity score ps the modified periodicity score mps the harmonic score hs and quasiperiodic score qps presented in section we also show the window size win that the autocorrelation technique in section gives we have bolded the top three mps and qps scores across all videos the max modified periodic scores include the two periodic videos and one of the biphonation videos the max quasiperiodic scores are all of the biphonation 
videos which means the one with a high periodicity score could be ruled out of the periodicity category video name periodic periodic figure biphonation biphonation biphonation figure mucus perturbed periodic irregular figure win ps mps qps discussion we have shown in this work how applying sliding window embeddings to videos can be used to translate properties of the underlying dynamics into geometric features of the resulting point cloud representation moreover we also showed how tools such as persistence homology can be leveraged to quantify the geometry of these embeddings the pipeline was evaluated extensively showing robustness to several noise models high quality in the produced periodicity rankings and applicability to the study of speech conditions form video data moving forward an interesting avenue related to medical applications is the difference between biphonation which occurs from quasiperiodic modes and biphonation which occurs from harmonic modes shows that field coefficients can be used to indicate the presence of a strong harmonic so we believe a geometric approach is possible this could be used for example to differentiate between subharmonic anomalies and quasiperiodic transitions please refer to supplementary material for an example video from each of these three classes figure video frames and sliding window statistics on a video of vocal folds undergoing normal periodic vibrations one strong loop is visible in pca and in the persistence diagrams figure video frames and sliding window statistics on a video of vocal folds undergoing biphonation courtesy of juergen neubauer pca suggests a possible torus and the persistence diagram indeed has the signature of a torus two strong independent and one acknowledgments the authors would like to thank juergen neubauer dimitar deliyski robert hillman alessandro de alarcon dariush mehta and stephanie zacharias for providing videos of vocal folds we also thank matt berger at arfl for discussions about sliding window video efficiency and we thank the anonymous workers on the amazon mechanical turk who ranked periodic videos figure video frames and sliding window statistics of irregular vocal fold vibrations though pca looks similar to figure no apparent or topological features are apparent in the high dimensional state space references mark allmen and charles r dyer cyclic motion detection using spatiotemporal surfaces and curves in pattern recognition international conference on volume pages ieee john atanbori peter cowling john murray belinda colston paul eady dave hughes ian nixon and patrick dickinson analysis of bat wing beat frequency using fourier transform in international conference on computer analysis of images and patterns pages springer ulrich bauer ripser a lean code for the computation of vietorisrips persistence barcodes http ronald r coifman and lafon diffusion maps applied and computational harmonic analysis matthew jc crump john v mcdonnell and todd m gureckis evaluating amazon s mechanical turk as a tool for experimental behavioral research plos one ross cutler and larry davis robust periodic motion detection analysis and applications ieee transactions on pattern analysis and machine intelligence alain de and hideki kawahara yin a fundamental frequency estimator for speech and music the journal of the acoustical society of america mauricio delbracio and guillermo sapiro removing camera shake via weighted fourier burst accumulation ieee transactions on image processing dimitar d deliyski pencho p petrushev 
heather shaw bonilha terri treman gerlach bonnie and robert e hillman clinical implementation of laryngeal videoendoscopy challenges and evolution folia phoniatrica et logopaedica roman goldenberg ron kimmel ehud rivlin and michael rudzsky behavior classification by eigendecomposition of periodic motions pattern recognition jerry p gollub and harry l swinney onset of turbulence in a rotating fluid physical review letters allen hatcher algebraic topology university press christian t herbst jakob unger hanspeter herzel jan g and lohscheller phasegram analysis of vocal fold vibration documented with laryngeal video endoscopy journal of voice hanspeter herzel david berry ingo r titze and marwa saleh analysis of vocal disorders with methods from nonlinear dynamics journal of speech language and hearing research hanspeter herzel robert reuter and richard a katz biphonation in voice signals in aip conference proceedings volume pages aip peng huang adrian hilton and jonathan starck shape similarity for video sequences of people international journal of computer vision shiyao huang xianghua ying jiangpeng rong zeyu shang and hongbin zha camera calibration from periodic motion of a pedestrian in proceedings of the ieee conference on computer vision and pattern recognition pages xiaoye jiang lim yuan yao and yinyu ye statistical ranking and combinatorial hodge theory mathematical programming holger kantz and thomas schreiber nonlinear time series analysis volume cambridge university press maurice g kendall a new measure of rank correlation biometrika matthew b kennel reggie brown and henry di abarbanel determining embedding dimension for reconstruction using a geometrical construction physical review a orrawan kumdee and panrasee ritthipravat repetitive motion detection for human behavior understanding from video images in signal processing and information technology isspit ieee international symposium on pages ieee ofir levy and lior wolf live repetition counting in proceedings of the ieee international conference on computer vision pages lohscheller hikmet toy frank rosanowski ulrich eysholdt and michael clinically evaluated procedure for the reconstruction of vocal fold vibrations from endoscopic digital videos medical image analysis philip mcleod and geoff wyvill a smarter way to find pitch in in proceedings of the international computer music conference pages daryush d mehta dimitar d deliyski thomas f quatieri and robert e hillman automated measurement of vocal fold vibratory asymmetry from videoendoscopy recordings journal of speech language and hearing research george a miller the magical number seven plus or minus two some limits on our capacity for processing information psychological review neubauer patrick mergell ulrich eysholdt and hanspeter herzel analysis of irregular vocal fold oscillations biphonation due to desynchronization of spatial modes the journal of the acoustical society of america sourabh a niyogi edward h adelson et al analyzing and recognizing walking figures in xyt in cvpr volume pages jose a perea persistent homology of toroidal sliding window embeddings in acoustics speech and signal processing icassp ieee international conference on pages ieee jose a perea and john harer sliding windows and persistence an application of topological methods to signal analysis foundations of computational mathematics mark a pinsky introduction to fourier analysis and wavelets volume american mathematical aaron m plotnik and stephen m rock quantification of cyclic motion of marine 
animals from computer vision in oceans volume pages ieee ramprasad polana and randal c nelson detection and recognition of periodic nonrigid motion international journal of computer vision qingjun qiu hk schutte lide gu and qilian yu an automatic method to quantify the vibration properties of human vocal folds via videokymography folia phoniatrica et logopaedica christian schuldt ivan laptev and barbara caputo recognizing human actions a local svm approach in pattern recognition icpr proceedings of the international conference on volume pages ieee steven m seitz and charles r dyer analysis of cyclic motion international journal of computer vision floris takens detecting strange attractors in turbulence in dynamical systems and turbulence warwick pages springer christopher tralie geometry of sliding window embeddings of periodic videos in international proceedings in informatics volume schloss fuer informatik matthew turk and alex pentland eigenfaces for recognition journal of cognitive neuroscience mikael florian t pokorny primoz skraba and danica kragic cohomological learning of periodic motion applicable algebra in engineering communication and computing v venkataraman and p turaga shape descriptions of nonlinear dynamical systems for videobased inference ieee transactions on pattern analysis and machine intelligence ping wang gregory d abowd and james m rehg event analysis for social game retrieval in computer vision ieee international conference on pages ieee inka wilden hanspeter herzel gustav peters and tembrock subharmonics biphonation and deterministic chaos in mammal vocalization bioacoustics thomas wittenberg manfred moser monika tigges and ulrich eysholdt recording processing and analysis of digital sequences in glottography machine vision and applications or yair ronen talmon ronald r coifman and ioannis g kevrekidis no equations no parameters no variables data and the reconstruction of normal forms by learning informed observation geometries arxiv preprint jing yang hong zhang and guohua peng period detection in videos signal image and video processing guoshen yu guillermo sapiro and mallat solving inverse problems with piecewise linear estimators from gaussian mixture models to structured sparsity ieee transactions on image processing stephanie rc zacharias charles m myer jareen lisa kelchner dimitar d deliyski and alessandro de comparison of videostroboscopy and videoendoscopy in evaluation of supraglottic phonation annals of otology rhinology laryngology page zomorodian and carlsson computing persistent homology discrete computational geometry
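As a concrete companion to the implementation-details portion of the video-periodicity text above (SVD-style reduction of the frames, delay distances computed as moving sums along the diagonals of the frame self-similarity matrix, and persistence-based scoring), here is a minimal sketch in Python. It is not the authors' code: it assumes grayscale frames and an integer delay tau = 1, uses the ripser package for persistent homology, and the coefficient field and the sqrt(3)/3 normalizing constants in the scores are guesses, since the text's numerical values did not survive extraction.

```python
# Minimal sketch of the sliding-window video pipeline described above.
# Assumptions (not from the text): tau = 1, ripser.py for persistence,
# coefficient field Z/3, and the sqrt(3)/3 normalizations in the scores.
import numpy as np
from ripser import ripser

def frame_coordinates(frames):
    """frames: (N, W*H) float array, one grayscale frame per row. Returns
    N-dimensional coordinates with the same pairwise distances, computed from
    the N x N Gram matrix (cheap when W*H >> N, mirroring the SVD reduction)."""
    G = frames @ frames.T
    lam, V = np.linalg.eigh(G)
    return V * np.sqrt(np.maximum(lam, 0.0))

def frame_sq_distances(Y):
    """All pairwise squared Euclidean distances between (reduced) frames."""
    sq = np.sum(Y**2, axis=1)
    return np.maximum(sq[:, None] + sq[None, :] - 2.0 * (Y @ Y.T), 0.0)

def sliding_window_sq_distances(D2, d):
    """Squared distances between delay vectors of d+1 consecutive frames:
    with tau = 1 this is a length-(d+1) moving sum along every diagonal of D2."""
    N = D2.shape[0]
    M = N - d                          # number of windows
    D2sw = np.zeros((M, M))
    for k in range(M):                 # offset between the two windows
        diag = np.diag(D2, k)
        csum = np.concatenate(([0.0], np.cumsum(diag)))
        vals = csum[d + 1:] - csum[:-(d + 1)]      # moving sums, length M - k
        idx = np.arange(len(vals))
        D2sw[idx, idx + k] = vals
        D2sw[idx + k, idx] = vals
    return D2sw

def persistence_scores(Dsw):
    """Periodicity / quasiperiodicity / modified periodicity scores from the
    H1 and H2 persistence diagrams of the Rips filtration on the windows."""
    dgms = ripser(Dsw, maxdim=2, distance_matrix=True, coeff=3)['dgms']
    def lifetimes(dgm, pad):
        lt = np.sort(dgm[:, 1] - dgm[:, 0])[::-1] if len(dgm) else np.array([])
        return np.concatenate([lt, np.zeros(pad)])
    h1, h2 = lifetimes(dgms[1], 2), lifetimes(dgms[2], 1)
    ps  = h1[0] / np.sqrt(3.0)             # one prominent loop -> circle
    qps = h1[1] * h2[0] / 3.0              # two loops and a void -> torus
    mps = (h1[0] - h1[1]) / np.sqrt(3.0)   # penalize a strong second loop
    return ps, qps, mps

# usage (frames: an (N, W*H) float array, d: window size in frames):
# Y = frame_coordinates(frames)
# Dsw = np.sqrt(sliding_window_sq_distances(frame_sq_distances(Y), d=30))
# print(persistence_scores(Dsw))
```

The diagonal moving-sum trick is what keeps memory and time linear in the number of frames regardless of the embedding dimension d, as argued in the text; the Gram-matrix step plays the role of the SVD reduction when there are far fewer frames than pixels.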
1
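Also from the preceding video-periodicity text (the window-size autotuning step): a small sketch of the normalized-autocorrelation period estimator, applied to a surrogate 1-D signal such as the first diffusion-maps coordinate of the raw frames. The decaying-envelope endpoint below is an assumed value, since the text's numerical constant did not survive extraction.

```python
# Sketch of the normalized autocorrelation ("clarity") period estimator used to
# autotune the window size. The envelope endpoint (0.5) is an assumed value.
import numpy as np

def normalized_autocorrelation(x):
    """n_t = 2 * sum_j x_j x_{j+t} / sum_j (x_j^2 + x_{j+t}^2), in [-1, 1]."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    n = np.zeros(N)
    for t in range(N):
        a, b = x[:N - t], x[t:]
        m = np.sum(a * a + b * b)
        n[t] = 2.0 * np.sum(a * b) / m if m > 0 else 0.0
    return n

def estimate_period(x, envelope_end=0.5):
    """Largest peak of the envelope-weighted normalized autocorrelation to the
    right of its first zero crossing; returns (period_in_samples, clarity)."""
    n = normalized_autocorrelation(x)
    env = np.linspace(1.0, envelope_end, len(n))   # de-emphasize period multiples
    weighted = n * env
    sign_change = np.where(np.diff(np.sign(n)) < 0)[0]
    start = sign_change[0] + 1 if len(sign_change) else 1
    t = start + int(np.argmax(weighted[start:]))
    return t, n[t]

# usage: period, clarity = estimate_period(first_diffusion_coordinate)
# the estimated period is then used to set the sliding-window size as in the
# window-size relation discussed above.
```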
feb challenging images for minds and machines amir rosenfeld john tsotsos department of electrical engineering and computer science york university toronto on canada amir tsotsos february abstract there is no denying the tremendous leap in the performance of machine learning methods in the past some might even say that specific in pattern recognition such as are as good as solved reaching human and levels arguably lack of training data and computation power are all that stand between us and solving the remaining ones in this position paper we underline cases in vision which are challenging to machines and even to human observers this is to show limitations of contemporary models that are hard to ameliorate by following the current trend to increase training data network capacity or computational power moreover we claim that attempting to do so is in principle a suboptimal approach we provide a taster of such examples in hope to encourage and challenge the machine learning community to develop new directions to solve the said difficulties introduction once only known to a few outside of academia has become ubiquitous in both popular media and in the industry superhuman capabilities are now being gradually recorded in various fields in the game of go in face verification image categorization and even in logical reasoning in simple scenes most current leading methods involve some variant of deep learning consequentially they require large amounts of data with the exception of which used to gain experience this has elicited a era with increasingly datasets painstakingly labeled for object image annotation visual and pose estimation to name a few this is accompanied by a growing demand for computational power we bring forward challenges in vision which do not seem to be solved by current methods and more importantly by current popular methodologies meaning that neither additional data nor added computational power will be the drivers of the solution figure a children s puzzle where the goal is to find six hidden words book words story pages read novel for a machine this is far from child s play could this be solved by providing a million similar examples to a system does a human need such training related work imbalanced or small data datasets tend to be naturally imbalanced and there is a long history of suggested remedies handling lack of training data has also been treated by attempting to use data of lesser quality than handannotated dataset simulating data cite data for cars text recognition in the wild captcha transfer learning reusing features of networks trained on large is a useful starting point cf attempting to reduce the number of required training example in extreme cases to one or even zero examples deeplearning failures recently some simple cases where deep learning fails to work as one would possibly expect were introduced along with theoretical justifications challenging cases we present two examples and then discuss them they have a few common characteristics humans are able to solve them on the first encounter despite not having seen any such images before incidentally but not critically the two examples are from the domain of visual text recognition moreover though humans know how to recognize text as seen in regular textbooks etc the text in these images is either hidden rendered or distorted in an uncharacteristic manner children s games the first case is well exemplified by a child s game hidden word puzzles the goal is to find hidden words in an image fig shows an 
arbitrarily selected example for a human observer this is a solvable puzzle though it may take a few minutes to complete we applied two methods for text recognition sub image sned vvoz novees teg score table text detected by two recognition methods applied to of a children s puzzle means no text was detected by the method images scaled to fit figure figure variants of textual captcha captchas are becoming increasingly difficult reproduced from in the wild with available code or an on on the image in fig as this did not work immediately we focused on the word novel the n is below the forearm of the left person ending with an l below his foot by cropping it an rotating so the text is level cropping more tightly and even cropping only the letter l see table for the corresponding including the entire image at the top row and the results output by the two methods this is by no means a systematic test and some may even claim that it isn t fair and they would be right these systems were not trained on such images was only trained on a dataset of million synthetic training images and was only trained on tens of thousands of images from or used powerful networks where training data was less available captcha a mechanism to thwart automated misuse of websites by distinguishing between humans and machines textual captchas involve presenting an image of text which has to be read and written by the user we focus on this type of captcha though others exist the introduction of captchas immediately triggered the invention of new automatic ways to break them which eventually sparked an arms race between increasingly complex captchas and correspondingly powerful automated methods this caused a state where on the best leading textual methods involve training dnn s over data with similar distortion characteristics as the desired types of captcha though still these systems have limited success rates at times less than and on the other hand the level of distortion has become such that humans have a solving some of them machines vs humans as supervised learners one can rule out the suggested examples by saying that they are simply datapoints on behalf of a statistical learner s perspective yet it seems that with http ever supervision receive they are usually able to solve them despite not being especially exposed to this kind of stimulus moreover precisely these kinds of images are used routinely in human iq testing so they are a universally accepted indicator for human performance if these examples may seem esoteric we can revert to more common cases as a child how often is one exposed to bounding boxes of objects how often to delineations of objects with precise segmentation masks how often to facial and bodily and of objects overlayed on their field of view more critically for how many different object types does this happen if any for how many different instances with what level of precision of annotation and in how many modalities the granularity of visual supervision given to machines seems to be much finer than that given to humans as for the amount of directly supervised data it does not seem to really be the main limiting factor as already noted several times performance either saturates with training data or at best grows logarithmically increasing map from to when growing from to examples making the solution of more data for better performance simply impractical even for those with the most resources and this is for common problems such as object detection humans who only ever read and textbooks are 
able to solve captchas of various kinds without any special training on their first encounter with them the same is true for the picture puzzles mentioned above as it is for other cases not mentioned here we do not claim that humans are not subject to supervised learning in their early life and in later stages on the contrary supervisory signals arise from multiple sources caretakers who provide supervisory signals by teaching internal supervision provided by innate biases and finally rewards stemming from results of behaviour such as suffering pain from hitting an object but any such supervision is interspersed within a vast continuous stream of unsupervised data most of which does not have an easily measurable supervisory affect on the observer there is something fundamentally different about the way humans construct or use internal representations enabling them to reason about and solve new patternrecognition tasks we hypothesize that these are approached by generating procedures of a compositional nature when presented with a novel or known task as suggested by the visual routines of or the cognitive programs of we intend to maintain a collection of examples beyond the ones suggested above to encourage the community to attempt to solve them not by learning from vast amounts of similar examples but by learning from related simpler subtasks and learning to reason and solve them by composing the appropriate solutions references silver huang maddison guez sifre van den driessche schrittwieser antonoglou panneershelvam lanctot et mastering the game of go with deep neural networks and tree search nature vol no pp silver schrittwieser simonyan antonoglou huang guez hubert baker lai bolton et mastering the game of go without human knowledge nature vol no lu and tang surpassing face verification performance on lfw with qi and zhang face recognition via centralized coordinate learning arxiv preprint he zhang ren and j sun delving deep into rectifiers surpassing performance on imagenet classification in proceedings of the ieee international conference on computer vision pp santoro raposo barrett malinowski pascanu battaglia and lillicrap a simple neural network module for relational reasoning in advances in neural information processing systems pp perez de vries strub dumoulin and courville learning visual reasoning without strong priors arxiv preprint perez strub de vries dumoulin and courville film visual reasoning with a general conditioning layer arxiv preprint russakovsky deng su krause satheesh ma huang karpathy khosla bernstein et imagenet large scale visual recognition challenge international journal of computer vision vol no pp lin maire belongie hays perona ramanan and zitnick microsoft coco common objects in context in european conference on computer vision springer pp krishna zhu groth johnson hata kravitz chen kalantidis li shamma et visual genome connecting language and vision using crowdsourced dense image annotations international journal of computer vision vol no pp antol agrawal lu mitchell batra lawrence zitnick and parikh vqa visual question answering in proceedings of the ieee international conference on computer vision pp neverova and kokkinos densepose dense human pose estimation in the wild arxiv preprint lim salakhutdinov and torralba transfer learning by borrowing examples for multiclass object detection in advances in neural information processing systems pp zhu anguelov and ramanan capturing distributions of object subcategories in computer vision and pattern 
recognition cvpr ieee conference on ieee pp wang ramanan and hebert learning to model the tail in advances in neural information processing systems pp sun shrivastava singh and gupta revisiting unreasonable effectiveness of data in deep learning era in ieee international conference on computer vision iccv ieee pp sharif razavian azizpour sullivan and carlsson cnn features an astounding baseline for recognition in proceedings of the ieee conference on computer vision and pattern recognition workshops pp snell swersky and zemel prototypical networks for learning in advances in neural information processing systems pp shamir and shammah failures of deep learning arxiv preprint shi bai and yao an trainable neural network for imagebased sequence recognition and its application to scene text recognition ieee transactions on pattern analysis and machine intelligence vol no pp zhou yao wen wang zhou he and liang east an efficient and accurate scene text detector arxiv preprint veit matera neumann matas and belongie dataset and benchmark for text detection and recognition in natural images arxiv preprint a le baydin zinkov and wood using synthetic data to train neural networks is reasoning in neural networks ijcnn international joint conference on ieee pp von ahn blum hopper and langford captcha using hard ai problems for security in international conference on the theory and applications of cryptographic techniques springer pp singh and pal survey of different types of captcha international journal of computer science and information technologies vol no pp mori and malik recognizing objects in adversarial clutter breaking a visual captcha in computer vision and pattern recognition proceedings ieee computer society conference on vol ieee pp chen luo guo zhang and gong a survey on breaking technique of captcha security and communication networks vol zhu vondrick ramanan and fowlkes do we need more training data or better models for object detection in bmvc vol citeseer zhu vondrick fowlkes and ramanan do we need more training data international journal of computer vision vol no pp hestness narang ardalani diamos jun kianinejad patwary ali yang and zhou deep learning scaling is predictable empirically arxiv preprint ullman harari and dorfman from simple innate biases to complex visual concepts proceedings of the national academy of sciences vol no pp ullman visual routines cognition vol no pp tsotsos and kruijne cognitive programs software for attention s executive frontiers in psychology vol
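To make the probing procedure in the "challenging cases" section above concrete (crop the region containing a hidden word, rotate it so the text is level, crop tighter, run a recognizer), here is a hedged illustration. The authors used dedicated scene-text detection and recognition models; the generic OCR engine, file name, crop box, and angle below are placeholders, not their setup.

```python
# Illustration of the crop/rotate/recognize probe described above. This is NOT
# the authors' setup; the OCR engine, file name, box and angle are placeholders.
import cv2
import pytesseract

def probe(image_path, box, angle_degrees):
    """Crop a region (x, y, w, h), rotate it so the hidden text is level,
    and run an off-the-shelf recognizer on the result."""
    img = cv2.imread(image_path)
    x, y, w, h = box
    crop = img[y:y + h, x:x + w]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_degrees, 1.0)
    level = cv2.warpAffine(crop, M, (w, h))
    gray = cv2.cvtColor(level, cv2.COLOR_BGR2GRAY)
    return pytesseract.image_to_string(gray).strip()

# e.g. probe("hidden_words_puzzle.png", box=(120, 300, 200, 80), angle_degrees=-25)
```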
1
some theory for ordinal embedding may ery abstract motivated by recent work on ordinal embedding kleindessner and von luxburg we derive large sample consistency results and rates of convergence for the problem of embedding points based on triple or quadruple distance comparisons we also consider a variant of this problem where only local comparisons are provided finally inspired by jamieson and nowak we bound the number of such comparisons needed to achieve consistency keywords ordinal embedding multidimensional scaling mds dissimilarity comparisons landmark multidimensional scaling introduction the problem of ordinal embedding also called multidimensional scaling borg and groenen consists of finding an embedding of a set of items based on pairwise distance comparisons specifically suppose that is some dissimilarity measure between items i j n n we assume that and for all i j n these dissimilarities are either directly available but assumed to lack meaning except for their relative magnitudes or only available via comparisons with some other dissimilarities meaning that we are only provided with a subset c n such that i j k note that the latter setting encompasses the former given c and a dimension d the goal is to embed the items as points pn rd in a way that is compatible with the available information specifically pj i j k c where denotes the euclidean norm the two most common situations are when all the quadruple comparisons are available meaning c n or all triple comparisons are available meaning c i j i k i j k n which can be identified with n this problem has a long history surveyed in young and hamer with pioneering contributions from shepard b and kruskal the main question we tackle here is that of consistency suppose that the items are in fact points xn rd and when the s are available suppose that g where g is an unknown increasing function provided with a subset c cn of dissimilarity comparisons as in is it possible to reconstruct the original points in the limit n clearly the reconstruction can only be up to a similarity transformation that is a transformation f rd rd such that for some x f y for all x y rd or equivalently of the form f x x b where r is an orthogonal transformation and b is a constant vector since such department of mathematics university of california san diego usa a transformation leaves the distance comparisons unchanged this question is at the foundation of multidimensional scaling early work only addressed the continuous case where the x s span a whole convex subset u rd in that setting the goal becomes to characterize isotonic functions on u that is functions f u rd satisfying y x f y f y y y u shepard argues that such functions must be similarities and cites earlier work aumann and kruskal suppes and winet dealing with the case d only recently has the finite sample case been formally considered indeed kleindessner and von luxburg prove a consistency result showing that if xn u rd where u is a bounded connected and open subset of rd satisfying some additional conditions for example a finite union of open balls and c n then in the large sample limit with xn becoming dense in u it is possible to recover the x s up to a similarity transformation note that u is then uniquely defined as the interior of xi i we note that kleindessner and von luxburg focus on the strictly isotonic case where the second inequality in is strict our first contribution is an extension of this consistency result for quadruple learning to triple learning where c n in the process we 
greatly simplify the arguments of kleindessner and von luxburg and weaken the conditions on the sampling domain u we note that terada and von luxburg have partially solved this problem by a reduction to the problem of embedding a graph however their arguments are based on an apparently incomplete proof in von luxburg and alamgir which is itself based on a rather sophisticated approach our proofs are comparatively much simpler and direct our second contribution is to provide rates of convergence a problem left open by kleindessner and von luxburg in the context of quadruple learning we obtain a rate in o where is the hausdorff distance between the underlying sample xn and u meaning n xi this is the first convergence rate for exact ordinal embedding that we know of we are not able to obtain the same rate in the context of triple learning compared to establishing consistency the proof is much more involved the last decade has seen a surge of interest in ordinal embedding motivated by applications to recommender systems and psychometric studies made available via the internet for example databases for music artists similarity ellis et mcfee and lanckriet sensor localization nhat et is another possible application modern datasets being large all quadruple or triple comparisons are rarely available motivating the proposal of embedding methods based on a sparse set of comparisons agarwal et borg and groenen jamieson and nowak terada and von luxburg terada and von luxburg study what they call local ordinal embedding which they define as the problem of embedding an unweighted neighbor graph with our notation this is the situation where c i j k k k being the dissimilarity between item i and its kth nearestneighbor terada and von luxburg argue that when the items are points xn sampled from a smooth density on a bounded connected convex and open subset u rd with smooth boundary then k kn log n is enough for consistency our third contribution is to consider the related situation where c i j k and max k which provides us with the graph and also all the quadruple comparisons between the nearest neighbors in this setting we are only able to show that kn n log n is enough beyond local designs which may not be feasible in some settings jamieson and nowak consider the problem of adaptively sequentially selecting triple comparisons in order to minimize the number of such comparisons and yet deduce all the other triple comparisons they consider a few methods among which a version of the landmark mds method of de silva and tenenbaum less ambitious is the problem of selecting few comparisons in order to consistently embed the items when these are points in a euclidean space our fourth contribution is to show that one can obtain a consistent embedding with a landmark design based on an n queries where an is any diverging sequence moreover the embedding can be computed in expected time an n for some function the rest of the paper is organized as follows in section we state our theoretical results and prove the simpler ones we then gather the remaining proofs in section section concludes the paper with a short discussion theory in this section we present our theoretical findings most proofs are gathered in section we already defined isotonic functions in following kleindessner and von luxburg we say that a function f u rd rd is weakly isotonic if x f y x f z y z u obviously if a function is isotonic then it is weakly isotonic weak isotonicity is in fact not much weaker than isotonicity indeed let p be a property 
isotonic and say that a function f u rd rd has the property p locally if for each x u there is r such that f has property p on b x r u where b x r denotes the open ball with center x and radius lemma any locally weakly isotonic function on an open u is also locally isotonic on u proof this is an immediate consequence of kleindessner and von luxburg lem which implies that a weakly isotonic function on b x r is isotonic on b x suppose we have data points xn rd define xn xn n let xj and suppose that we are only provided with a subset cn n of distance comparisons as in to an exact ordinal embedding p n rd which by definition satisfies we associate the map rd defined by xi pi for all i n we crucially observe that in the case of all quadruple comparisons cn n the resulting map is isotonic on in the case of all triple comparisons cn n is only weakly isotonic on instead in light of this and the fact that the location orientation and scale are all lost when only ordinal information is available the problem of proving consistency of exact ordinal embedding reduces to showing that any such embedding is close to a similarity transformation as the sample size increases n this is exactly what kleindessner and von luxburg do under some assumptions ordinal embedding based on all triple comparisons our first contribution is to extend the consistency results of kleindessner and von luxburg on quadruple learning to triple learning following their presentation we start with a result where the sample is infinite which is only a mild generalization of kleindessner and von luxburg th theorem let u rd be bounded connected and open suppose is dense in u and consider a locally weakly isotonic function rd then there is a similarity transformation s that coincides with on the proof is largely based on that of kleindessner and von luxburg th but a bit simpler see section we remark that there can only be one similarity with the above property since similarities are affine transformations and two affine transformations of rd that coincide on affine independent points are necessarily identical in this theorem the set is dense in an open subset of rd and therefore infinite in fact kleindessner and von luxburg use this theorem as an intermediary result for proving consistency as the sample size increases most of their paper is dedicated to establishing this as their arguments are quite elaborate we found a more direct route by tending to the limit as soon as possible based on lemma below which is at the core of the theorem for the remaining of this section we consider the finite sample setting u rd is bounded connected and open xn u is such that xn n is dense in u and q rd is a function with values in a bounded set q in the context of we implicitly extend to for example by setting x q for all x where q is a given point in q although the following holds for any extension lemma consider rd finite and q rd where q is bounded then there is n n infinite such that x x exists for all x this is called the diagonal process in kelley problem d ch although the result is classical we provide a proof for completeness proof without loss of generality suppose xn let since n q and q is bounded there is infinite such that exists in turn since n is bounded there is infinite such that exists continuing this process which formally corresponds to a recursion we obtain nk n such that for all k nk is infinite and xk exists let nk denote the kth element in increasing order of nk and note that nk k is strictly increasing define n nk k since np p k 
nk we have xk xk and this is valid for all k corollary consider the setting and assume that is weakly isotonic then is sequentially for the pointwise convergence topology for functions on and all the functions where it accumulates are similarity transformations restricted to the corresponding result kleindessner and von luxburg th was obtained for isotonic instead of weakly isotonic functions and for domains u that are finite unions of balls and the convergence was uniform instead of pointwise for now we provide a proof of corollary which we derive as a simple consequence of theorem and lemma proof lemma implies that is sequentially for the pointwise convergence topology let be an accumulation point of meaning that there is n n infinite such that x x for all x take x y z such that by definition there is m such that x y z and therefore x y x z for all n passing to the limit along n n we obtain x y x z hence is weakly isotonic on and by theorem it is therefore the restriction of a similarity transformation to it is true that kleindessner and von luxburg th establishes a uniform convergence result we do the same in theorem below but with much simpler arguments the key are the following two results bounding the modulus of continuity of a resp weakly isotonic function we note that the second result for weakly isotonic functions is very weak but sufficient for our purposes here for v rd define v inf which is their hausdorff distance we say that yi i i rd is an if yj for all i j we recall that the size of the largest of a euclidean ball of radius r is of exact order for a set v rd let diam v supx be its diameter and let v arg sup v b v v r which is the diameter of a largest ball inscribed in v everywhere in the paper d is fixed and in fact implicitly small as we assume repeatedly that the sample of size n is dense in a domain of rd in particular all the implicit constants of proportionality that follow depend solely on lemma let v rd be open consider v and set v let q be isotonic where q rd is bounded there is c diam q v such that x c proof the proof is based on the fact that an isotonic function transforms a packing into a packing take x such that x and let since v is open it contains an open ball of diameter v let ym be an of such a ball with m v d for some constant depending only on then let xm such that maxi xi by the triangle inequality for all i j we have xj yj because is isotonic we have xi xj so that xm is a therefore there is a constant depending only on d such that m diam q d we conclude that diam q v for v rd and h let v h x v v x b y h v we note that v h is the complement of the hull of v c rd v see cuevas et and references therein lemma in the context of lemma if is only weakly isotonic then there is c diam q such that for all h x c v h proof assume that v h for otherwise there is nothing to prove take x v h and such that x and let because is bounded it is enough to prove the result when let y v be such that x b y h v there is y b y h such that y xy and y define u y x let x and for j define zj u let k be maximum such that since k satisfies we have k min by construction for all j k zj xy and b zj b y h let x and take xk such that maxj zj by the triangle inequality for j k which implies by induction that by weak isotonicity this implies that xj x we also have for any i j k such that i j xi zi by weak isotonicity this implies that xj xi xj for all i j consequently xj j k forms a of q hence k c diam q d for some constant c we conclude with the lower bound on from this control on the 
modulus of continuity we obtain a stronger version of corollary theorem under the same conditions as corollary we have the stronger conclusion that there is a sequence sn of similarities such that for all h h x sn x as n if in fact each is isotonic then this remains true when h we remark that when u is a connected union of a possibly uncountable number of open balls of radius at least h then u u h this covers the case of a finite union of open balls considered in kleindessner and von luxburg we also note that if u is bounded and open and has bounded curvature then there is h such that u u h this follows from the fact that in this case u c has positive reach federer and is therefore when h is below the reach cuevas et prop moreover our arguments can be modified to accommodate sets u with boundaries that are only lipschitz by reasoning with wedges in lemma theorem now contains kleindessner and von luxburg th and extends it to weakly isotonic functions and to more general domains u overall our proof technique is much simpler shorter and elementary define u which quantifies the density of in u because and is dense in u we have as n proof let be an accumulation point of for the pointwise convergence topology meaning there is n n infinite such that x x for all x we show that in fact the convergence is uniform first suppose that each is isotonic in that case lemma implies the existence of a constant c such that x c for all x and for all passing to the limit along n n we get x for all x in fact we already knew this from corollary since we learned there that coincides with a similarity and is therefore lipschitz fix there is m such that then there is m such that m xi xi for all n n with n for such an n and x let i m be such that xi by the triangle inequality x x x xi xi xi xi x c xi xi xi c since x is arbitrary and can be taken as small as desired this shows that the sequence n n convergences uniformly to over n n this proposition is stated for compact sets which is not the case of u c but easily extends to the case where set is closed with compact boundary when the are only weakly isotonic we use lemma to get a constant c depending on diam q and h such that x c for all x u h and all and for all passing to the limit along n n we get x for all x in fact x for all x from corollary as explained above the rest of the arguments are completely parallel we conclude that n n convergences uniformly to over u h n n let s denote the similarities of rd for any functions rd define h x x and also s inf s our end goal is to show that s as n suppose this is not the case so that there is and n n infinite such that s for all n n by corollary there is n and s s such that s x x for all x as we showed above the convergence is in fact uniform over u h n meaning s at the same time we have s s we therefore have a contradiction rates of convergence beyond consistency we are able to derive convergence rates we do so for the isotonic case the quadruple comparison setting recall that u theorem consider the setting with isotonic there is c depending only on d u and a sequence of similarities sn such that x sn x c diam q if u u h for some h then c c diam u where c is a function of d diam u u diam u the proof of theorem is substantially more technical than the previous results and thus postponed to section although kleindessner and von luxburg are not able to obtain rates of convergence the proof of theorem bares resemblance to their proof technique and in particular is also based on a result of alestalo et al on the 
approximation of see lemma we will also make use of a related result of vestfrid on the approximation of approximately midlinear functions see lemma we mention that we know of a more elementary proof that only makes use of alestalo et but yields a slightly slower rate of convergence we note that there is a constant c depending only on d such that this is because u being open it contains an open ball and this lower bound trivially holds for an open ball and such a lower bound is achieved when the xi s are roughly regularly spread out over u if instead the xi s are iid uniform in u and u is sufficiently regular for example u u h for some h then o log n as is this would give the rate and we do not know whether it is optimal even in dimension d remark we are only able to get a rate in for the weakly isotonic case we can do so by adapting of the arguments underlying theorem but only after assuming that u u h for some h and resolving a few additional complications ordinal embedding with local comparisons terada and von luxburg consider the problem of embedding an unweighted graph which as we saw in the introduction is a special case of ordinal embedding their arguments which as we explained earlier seem incomplete at the time of writing indicate that k kn log n is enough for consistently embedding a graph we consider here a situation where we have more information specifically all the distance comparisons between formally this is the situation where cn i j k and j k nkn i where nk i denotes the set of the k items nearest item i if the items are points xn rd an exact ordinal embedding is only constrained to be locally weakly isotonic as we explain now we start by stating a standard result which relates a graph to an graph lemma let u rd be bounded connected and open and such that u u h for some h sample xn iid from a density f supported on u with essential range in strictly there is a constant c such that if k nr d c log n then with probability tending to xi xj xi r xi n where neighk xi denotes the set of the k points in xj j n nearest xi the proof is postponed to section and only provided for completeness therefore assuming that k c log n where c is the constant of lemma we may equivalently consider the case where cn i j k and max rn for some given rn an exact embedding rd in that case is isotonic on b x rn for any x we require in addition that rn x this is a reasonable requirement since it is possible to infer it from cn indeed for k n we have rn if and only if k k k cn or k cn here we assume that for all i and if i j as is the case for euclidean distances we can still infer this even if the quadruples in cn must include at least three distinct items indeed suppose k n are such that there is no i such that i k i cn or i i k cn then a for all i such that max rn or b rn assume that rn with c sufficiently large so that situation a does not happen conversely if k is such that rn then when a does not happen there is i such that i k i cn or i i k cn theorem consider the setting and assume in addition that u u h for some h and that is isotonic over balls of radius rn and satisfies there is a constant c depending only on d h u diam u diam q and similarities sn such that x sn x assume the data points are generated as in lemma in that case we have o log n and theorem implies consistency when rn log n by lemma this corresponds to the situation where we are provided with comparisons among kn neighbors with kn n log if the result of terada and von luxburg holds in all rigor then this is a rather weak result 
landmark ordinal embedding inspired by jamieson and nowak we consider the situation where there are landmark items indexed by ln n and we are given all distance comparisons from any point to the landmarks formally with triple comparisons this corresponds to the situation where cn i j k n if the items are points xn rd an exact ordinal embedding is only constrained to be weakly isotonic on the set of landmarks and in addition is required to respect the ordering of the distances from any point to the landmarks the following is an easy consequence of theorem corollary theorem remains valid in the landmark triple comparisons setting meaning with as just described as long as the landmarks become dense in u jamieson and nowak study the number of triple comparisons that are needed for exact ordinal embedding with a counting argument they show that at least cn log n comparisons are needed where c is a constant depending only on if we only insist that the embedding respects the comparisons that are provided then corollary implies that a landmark design is able to be consistent as long as the landmarks become dense in u this consistency implies that as the sample size increases an embedding that respects the landmark comparisons also respects all other comparisons approximately this is achieved with o triple comparisons where is the number of landmarks and the conditions of corollary can be fulfilled with at any speed so that the number of comparisons is nearly linear in proof we focus on the weakly isotonic case where we assume that u u h for some h let xl l ln denote the set of landmarks since becomes dense in u meaning u by theorem there is a sequence of similarities sn such that x sn x now for x let such that we have x sn x x sn sn x by lemma for some constant the middle term is the first term is bounded by bounded by for the third term express sn in the form sn x rn x bn where r rn is an orthogonal transformation and bn rd take two distinct landmarks such that diam u which exist when n is sufficiently large since and at the same time sn diam u sn sn diam q diam q eventually we have diam q diam u hence the third term on the rhs of is bounded by thus the rhs of is bounded by which tends to as n this being valid for any x we conclude we remark that at the very end of the proof we obtained a rate of convergence as a function of the density of the landmarks and the convergence rate implicit in theorem this leads to the following rate for the quadruple comparisons setting which corresponds to the situation where cn i j k n i j k here is constrained to be isotonic on the set of landmarks and as before is required to respect the ordering of the distances from any data point to the landmarks corollary consider the setting in the landmark quadruple comparisons setting meaning with as just described let denote the set of landmarks and set u there is a constant c and a sequence of similarities sn such that x sn x proof the proof is parallel to that of corollary here we apply theorem to get this bounds the second term on the rhs of the first term is bounded by by lemma while the third term is bounded by as before are constants computational complexity we now discuss the computational complexity of ordinal embedding with a landmark design the obvious approach has two stages in the first stage the landmarks are embedded this is the goal of agarwal et for example here we use brute force proposition suppose that m items are in fact points in euclidean space and their dissimilarities are their pairwise euclidean 
distances then whether in the triple or quadruple comparisons setting an exact ordinal embedding of these m items can be obtained in finite expected time proof the algorithm we discuss is very naive we sample m points iid from the uniform distribution on the unit ball and repeat until the ordinal constraints are satisfied since checking the latter can be done in finite time it suffices to show that there is a strictly positive probability that one such sample satisfies the ordinal constraints let xm denote the set of xm b that satisfy the ordinal constraints meaning that xj when i j k seeing xn as a subset of b m rdm it is clearly open and sampling xm iid from the uniform distribution on b results in sampling xm from the uniform distribution on b m which assigns a positive mass to any open set in the second stage each point that is not a landmark is embedded based on the order of its distances to the landmarks we quickly mention the work davenport who develops a convex method for performing this task here we are contented with knowing that this can be done for each point in finite time function of the number of landmarks for example a brute force approach starts by computing the voronoi diagram of the landmarks and iteratively repeats within each cell creating a tree structure each point that is not a landmark is placed by going from the root to a leaf and choosing any point in that leaf cell say its barycenter thus if there are landmarks the first stage is performed in expected time f and the second stage is performed in time n g the overall procedure is thus computed in expected time f n g remark the procedure described above is not suggested as a practical means to perform ordinal embedding with a landmark design the first stage described in proposition has finite expected time but likely not polynomial in the number of landmarks for a practical method we can suggest the following embed the landmarks using the method of agarwal et al which solves a semidefinite program or the method of terada and von luxburg which uses an iterative strategy embed the remaining points using the method of davenport which solves a quadratic program although practical and reasonable we can not provide any theoretical guarantees for this method more proofs in this section we gather the remaining proofs and some auxiliary results we introduce some additional notation and basic concepts for zm rd let aff zm denote their affine hull meaning the affine subspace they generate in rd for a vector x in a euclidean space let denote its euclidean norm for a matrix m let denote its usual operator norm meaning max and tr m m its frobenius norm regular simplexes these will play a central role in our proofs we say that zm rd with m form a regular simplex if their pairwise distances are all equal we note that necessarily m d and that regular simplexes in the same euclidean space and with same number of distinct nodes m are similarity transformations of each other for example segments m equilateral triangles m tetrahedron m by recursion on the number of vertices m it is easy to prove the following lemma let zm form regular simplex with edge length and let denote the barycenter of zm then zi m and if z zm form a regular simplex then m in dimension m there are exactly two such points z proof of theorem we assume d see kleindessner and von luxburg for the case d we divide the proof into several parts continuous extension lemma implies that is locally uniformly continuous indeed take and let r such that b r u and is weakly 
isotonic on b r applying lemma with v b r and b r so that v because is dense in v and noting that v r v yields a constant cr such that x cr for all x b r being locally uniformly continuous we can uniquely extend to a continuous function on u also denoted by by continuity this extension is locally weakly isotonic on u isosceles preservation sikorska and szostok say that a function f v rd rd preserves isosceles triangles if x f y x f z y z in our case by continuity we also have that preserves isosceles triangles locally indeed for the sake of pedagogy let u u and r such that b u r u and is weakly isotonic on b u r take x y z b u be such that for t r define zt t x tz let t such that zt b u r because zt we have x y x zt letting t we get x y x z by continuity of since y and z play the same role the converse inequality is also true and combined yield an equality midpoint preservation let v rd be convex we say that a function f v rd preserves midpoints if f x f y f y we now show that preserves midpoints locally kleindessner and von luxburg also do that however our arguments are closer to those of sikorska and szostok who make use of regular simplexes the important fact is that a function that preserves isosceles preserves regular simplexes let u u and r such that b u r u and preserves isosceles on b u r take x y b u and let x y let zd form a regular simplex with barycenter and side length s and such that s for all i in other words x zd forms a regular simplex placed so that is the barycenter of zd by symmetry y zd forms a regular simplex also by lemma we have d d so that zd b b u r by the triangle inequality and the fact that hence x zd and y zd are regular simplexes if one of them is singular so is the other one in which case x y otherwise necessarily x is the symmetric of y with respect to aff zd the only other possibility would be that x y but in that case we would still have that zi x for all i d since zi d by lemma implying that zi and is weakly isotonic in that neighborhood so assume that x is the symmetric of y with respect to aff zd for a x y zi is constant in i and therefore so is a zi so that a belongs to the line of points equidistant to zd this implies that x y are collinear and because we also have x y so that is necessarily the midpoint of x and y conclusion we arrived at the conclusion that can be extended to a continuous function on u that preserves midpoints locally we then use the following simple results in sequence with lemma we conclude that is locally affine with lemma we conclude that is in fact affine on u and with lemma we conclude that is in fact a similarity on u lemma let v be a convex set of a euclidean space and let f be a continuous function on v with values in a euclidean space that preserves midpoints then f is an affine transformation proof this result is in fact and we only provide a proof for completeness it suffices to prove that f is such that f t x ty t f x tf y for all x y v and all t starting with the fact that this is true when t by recursion we have that this is true when t is dyadic meaning of the form t where j and k are both integers since dyadic numbers are dense in by continuity of f we deduce the desired property lemma a locally affine function over an open and connected subset of a euclidean space is the restriction of an affine function over the whole space proof let u be the domain and f the function cover u with a countable number of open balls bi i i such that f coincides with an affine function fi on bi take i j i distinct since u is 
connected there must be a sequence i km j all in i such that bks for s m since bks is an open set we must have fks and this being true for all s it implies that fi fj lemma an affine function that preserves isosceles locally is a similarity transformation proof let f be an affine function that preserves isosceles in an open ball without loss of generality we may assume that the ball is b and that f so that f is linear fix and let a take x rd different from and let u we have x u u f f a hence x valid for all x rd and f being linear this implies that f is a similarity auxiliary results we list here a number of auxiliary results that will be used in the proof of theorem the following result is a perturbation bound for trilateration which is the process of locating a point based on its distance to landmark points for a real matrix z let z denote its largest singular value lemma let rd such that aff rd and let z denote the matrix with columns consider p q rd and define ai zi and bi zi for i d then d z max d z max i i proof assume without loss of generality that in that case note that and also redefine z as the matrix with columns zd and note that the first d singular values remain unchanged since aff rd there is rd and rd such that p d zi and q d zi for p we have zi for all i d or in matrix form z u where u ud and ui similarly we find z v where v vd and vi hence we have p q z d d max d max i i simultaneously p q z combining both inequalities we conclude for we say that zm rd form an regular simplex if min zj max zj lemma let zm form an regular simplex with maximum edge length achieved by there is a constant cm and zm aff zm with and and forming a regular simplex with edge length such that maxi zi proof by scale equivariance we may assume that we use an induction on in what follows cm cm cm etc are constants that depend only on for m the statement is trivially true suppose that it is true for m and consider an regular simplex rd with maximum edge length by changing d to m if needed without loss of generality assume that aff rd in that case zm is an regular simplex with maximum edge length achieved by and by the inductive hypothesis this a aff zm with and and forming a implies the existence of zm regular simplex of edge length such that m zi cm for some constant cm let p be the orthogonal projection of onto a before continuing let p be the set of such p obtained when fixing zm and then varying zi b cm for i m and among the points that make an regular simplex with zm let be the barycenter of zm and note that now set by the pythagoras theorem we have zi zi with zi so that zi by the triangle inequality zi cm so that zi zi zi cm cm cm using the fact that zi zi hence p q cm m by lemma this implies that since p we must therefore have m cm let be on the same side of a as and such form a regular simplex note that is the orthogonal projection of onto that zm a by the pythagoras theorem applied multiple times we obtain the following first we have p p p because and p are orthogonal to a and therefore parallel to each other and both orthogonal to for the second term we already know that cm while the first term is bounded by since on the one hand while on the other hand and we know that hence we find that for some constant function of m only this shows that the induction hypothesis holds for m lemma there are constants cm cm such that if zm form an regular simplex with maximum edge length then cm and proof by scale equivariance we may assume that by lemma there is a constant cm zm aff zm forming a regular 
simplex with edge length such that maxi zi cm by weyl s inequality horn and johnson cor z z z on the one hand z is a positive constant depending only on m while on the other hand z z mcm lemma let zm form an regular simplex with maximum edge length and barycenter let p aff zm and define maxi zi mini zi there is a constant cm depending only on m such that cm when proof by scale equivariance we may assume that by lemma we have m max zm zi i by lemma there is a constant cm and we when such that cm also have maxi zm zi from this we conclude lemma let q be isotonic where q rd let v rd and r and set b v r there is c diam q such that for all x with x b v and for all x c proof let and suppose that which implies that in that case lemma where the constant there is denoted here by diam q yields x and similarly this proves henceforth we assume that first assume that in that case we immediately have x for the reverse let yt t x and note that take t and note that t so that yt b v r and therefore there is such that yt we have yt so that x applying the triangle inequality and lemma we then have x x x with yt when we choose t because x b v we still have yt b v r because of the constraint on the remaining arguments are analogous when repeating what we just did both ways and with yields the result lemma consider rd isotonic where rd let v denote the convex hull of set v and c diam diam then x for all x such that proof we first prove that if c and are such that x for all x with then diam c diam indeed take x let u x and l and define yj x sj u where sj j for j j and then let by construction yj v with x and let xj be such that yj with x and by the triangle inequality xj yj sj hence j x xj j c c diam since and l diam now assume that is isotonic and suppose that x for some x such that then we have when satisfy we just showed that this implies that diam c diam v and we conclude using the fact that diam the following result is on neighbor interpolation lemma let be a subset of isolated points in v rd and set v for any function rd define its neighbor interpolation as v rd as y x y y y arg min consider the modulus of continuity of which for is defined as sup x x then the modulus of continuity of denoted satisfies moreover for any y y v and any x such that and y y y x proof fix and take y y v such that y we have for all x y and y for all y so that y for all such x and by the triangle inequality therefore y y sup x x y y sup x x since this is true for all y y v such that y we conclude that for the second part of the lemma we have y y x y x y where the second term is bounded by y x sup x y sup x using the fact that and similarly for the third term let v rd be convex in our context we say that f v rd is midlinear if f x f y y lemma let v rd be with respect to some point in its interior there is a constant c depending only on v such that for any midlinear function f v rd there is a affine function t rd rd such that x t x note that if v is a ball then by invariance considerations c only depends on proof this is a direct consequence of vestfrid th we say that f v rd rd is an if x f y y for a set v rd define its thickness as v inf diam v u rd recalling the definition of in we note that v v but that the two are distinct in general lemma let v rd be compact and such that v diam v for some there is a constant c depending only on d such that if f v rd is an then there is an isometry r rd rd such that x r x proof this is a direct consequence of alestalo et th lemma let t rd rd be an affine function that transforms a regular 
simplex of edge length into an regular simplex of maximum edge length there is a constant c depending only on d and an isometry r such that x x for all x b proof by invariance we may assume t is linear and that the regular simplex is formed by zd and has edge length letting wi t zi we have that wd form an regular simplex of maximum edge length maxi lemma gives forming a regular simplex of edge length such that maxi for some constant let r be the orthogonal transformation such that r zi for all i d we have zi zi for all i in matrix notation letting z zd we have z d max z d c z z f i i d i i at the same time z with z being a positive constant depending only on hence z lemma suppose that rd rd are two affinities such that y r x x for some y rd and r then x x for all x rd proof by translation and scale invariance assume that y and r let li si si for x b we x x x x hence for x rd x x which in turn implies that x x x x proof of theorem without loss of generality we may assume that dn diam indeed suppose that dn but different from for otherwise is a degenerate similarity and the result follows let which is isotonic on and satisfies diam if the result is true for there is a similarity such that x x for some constant we implicitly assume that the set contains the origin so that remains bounded we then have x sn x cdn where sn dn is also a similarity let r u so that there is some such that b r u let b and diam let w be any vector and define let be such that necessarily because the distance from to exceeds note that r by isotonicity x whenever let yk be a of u so that k c diam u d for some constant c let xik be such that yk so that u k b yk k b xik where let zk xik for clarity take x because u is open it is so there is a continuous curve u such that x and let k be such that x b and then for j let inf s sj s j b zkl and let k be such that let j min j which is indeed finite by construction when by we have zkj thus by the triangle inequality x this being true for all x this prove that dn dn diam u interpolation let denote the interpolation of as in we claim that there is a dn and diam u dn such that satisfies the following properties for all y y y y u y y y y y y and also and y y y y y y if y y b satisfy y y y y y y y if y y b and indeed let x such that y y y for we start by applying lemma to get y y x y where is the modulus of continuity of we then use lemma which gives that for all and some c dn to get y c y for we first note that by the triangle inequality which in turn implies that x since is isotonic we then apply lemma to get that y y y y and conclude with lemma as for for we may apply lemma with let v be the convex hull of so that v b let z be a point in that ball if z let w z and if z let w be any vector define z z w and notice that the distance from z to exceeds therefore if x is such that then necessarily x we then note that we conclude that v since y we get that x with c diam diam we then apply lemma to obtain y y y c c using lemma as for for note that x b b and by the triangle inequality by lemma where the constant there is denoted here by c dn this implies that x c when which is true when and we then apply lemma together with lemma as for case d this case is particularly simple note that u is a bounded open interval of we show that the function is approximately midlinear on u take x y u and define x y by the fact that takes its values in r and we have x y x y when is small enough hence is midlinear on u by the result of vestfrid namely lemma there is c since u is a ball and an affine 
function tn such that y tn y since all affine transformations from r to r are possibly degenerate similarities we conclude case d for the remaining of this subsection we assume that d approximate midlinearity we show that there is a constant c such that is locally approximately midlinear take x y b and let x y let t be a constant to be set large enough later if then by x y b so that x y therefore assume that let zd be constructed as in the proof of theorem by construction both x zd and y zd form regular simplexes and is the barycenter of zd by lemma for any i j d zj d d which coupled with the fact that x y b yields that zi b for all i now let x by we have zi zj maxi j zi zj let cd by and lemma zi zj zj cd cd t hence assuming t cd we have zi zj maxi j zi zj where cd t in that case x zd form a regular simplex by symmetry the same is true of y zd define x y by lemma zj cd when t cd since and cd by this implies that zi zj by t so that since we already assumed that t cd for a x y zi is constant in i d therefore by mini a zi maxi a zi define as the orthogonal projection of a onto the affine space a aff zd and let a by the pythagoras theorem we have zi a zi in particular max zi min zi max a zi min a zi i i i i min a zi i where dn r once let denotes the barycenter of zd assume that t is sufficiently large that where is the constant of lemma by that lemma and the fact that zd form a regular simplex of maximum edge length bounded by we have let l be the line passing through and perpendicular to a we just proved that x y are within distance from l where let denote the orthogonal projection of onto x y since we can apply to get x y x y x y x y using the fact that max x y due to and when t is large enough by lemma we then obtain x y for some constant in particular recalling that x y this implies that x y when it remains to argue that is close to we already know that x y are within distance from l and by convexity the same must be true of let m x y and l m let pm denote the orthogonal projection onto m when m is a linear subspace by pythagoras theorem x y x y x y cos implying that sin since pm sin and is parallel to m we also have sin so that cos for some constant once is small enough we conclude that x y by the triangle inequality approximate affinity we now know that is midlinear on b for some constant c dn r diam q r this implies by the result of vestfrid that is lemma that there is an affine function tn such x tn x for all x w for some constant rc diam q r approximate similarity reinitialize the constants ck k we saw above that transforms the regular simplex x zd with height denoted h satisfying h into a one where cd t in what follows choose these points so that they are all in b and the simplex has height h from here on reinitialize the variables x y etc we can then take t yielding for a constant r by the triangle inequality we have min zi tn zj min zi zj max zi zj max zi tn zj by the triangle inequality and max zi tn zj max zi zj i j i j max i j zj hence we find that tn tn zd form a regular simplex where note that its maximum edge length is bounded as follows max zi zj i j when by lemma there is a constant and an isometry such that we have x x where because h r and the bounds on above there is a constant such that this implies that x x x tn x x x covering and conclusion reinitialize the constants ck k let and let uk u be such that uk form a maximal of u the number is not essential here but will play a role in the proof of theorem note that u uk where uk u b uk and note that u for u uk there are w 
such that define by and then we have u w let uk diam uk which is strictly positive the result of alestalo et al namely lemma gives a constant and an isometry rk such that u rk u let min uk uk take k k k such that uk so that there is u u such that b u uk since min k max x x max x x x x u max x x max x x we have x x for all x rd by lemma hence x x diam u for all x u if instead uk we do as follows since u is connected there is a sequence k km in k such that uki we thus have x x by the triangle inequality we conclude that x x for any k k noting that since for any k k and x uk x x x x we conclude that for any x u x x this concludes the proof when d a refinement of the constant assume now that u u h for some h tracking the constants above we see that they all depend only on d u diam u diam q as well as and defined in and respectively we note that diam uk r and uk min h by lemma so that min h to bound we can do as we did at the beginning of this section so that at the end of that section we can restrict our attention to chains km where to be sure fix k k and let u be a curve such that uk and define and then inf s sj s ukj and let k be such that which is since uk k k is a of u we then have sj we can therefore redefine in as min uk because u u h for each k k there is vk such that uk b vk min h u by the triangle inequality b vk min h when so that min h so we see that everything depends on d h u diam u diam q the second part of the theorem now follows by invariance considerations proof of lemma let c ess inf u f and c ess supu f which by assumption belong to fix i n and let ni j i xi r for j i pi j p xi r xi r f u du for an upper bound we have pi j c vol b xi r u c vol b xi r r d q where vol denotes the lebesgue measure in rd and is the volume of the unit ball in rd hence p ni n q p bin n q n q by bennett s inequality for the binomial distribution by the union bound we conclude that maxi ni n q with probability at least which tends to if nr d log n and is sufficiently large for a lower bound we use the following lemma lemma suppose u rd is open and such that u u h for some h then for any x u and any r b x r u contains a ball of radius min r h moreover the closure of that ball contains x proof by definition there is y u such that x b y h u we then have b x r u b x r b y h so it suffices to show that the latter contains a ball of radius min r h by symmetry we may assume that r if then b x b y h and we are done otherwise let z with t and note that b z b x r y h and x z now that lemma is established we apply it to get pi j c vol b xi r u min r h d q hence p ni n p bin n q n q by the union bound we conclude that mini ni n with probability at least q which tends to if nr d log n and is sufficiently large recall that h is fixed more auxiliary results we list here a few additional of auxiliary results that will be used in the proof of theorem for v rd and x v define the intrinsic metric x sup l l v with x l where is if s t for all s t l if no such curve exists set x the intrinsic diameter of v is defined as sup x x v we note that if l x then there is a curve with length l joining x and recall that a curve with finite length is said to be rectifiable see burago et for a detailed account of intrinsic metrics for u rd and h let u x u b x h u this is referred to as an erosion of the set u in mathematical morphology lemma if u rd is open and connected then for each pair of points x u there is h and a rectifiable curve within u joining x and proof take x u by taking an intersection with an open ball that contains 
x if needed we may assume without loss of generality that u is bounded since every connected open set in a euclidean space is also waldmann example there is a continuous curve u such that x and a priori could have infinite length however is compact for each t let r t be such that bt b t r t u since bt there is tm such that m btj since is connected necessarily for all j there is sj tj such that sj btj let and sm then sj u for all j m and therefore the polygonal line defined by x sm is inside m btj u where r m r tj by construction this polygonal line joins x and and is also rectifiable since it has a finite number of vertices lemma suppose u rd is bounded connected and such that u u h for some h then there is such that for all the intrinsic diameter of u is finite proof let v u by assumption for all x u there is y v such that x b y h u in particular u v let be a connected component of v pick and note that b h u by definition and also because is connected let be the volume of the unit ball in rd since the connected components are disjoint and each has volume at least hd while u has volume at most diam u d v can have at most diam u d connected components which we now denote by vk pick yk vk for each k k applying lemma for each pair of distinct k k k there is a rectifiable path u joining yk and by lemma the length of denoted dk is finite and there is hk such that u let maxk k dk and mink k hk we now show that each connected component vk has finite diameter in the intrinsic metric of v u since vk is bounded there is xm vk such that vk mk qj where qj b xj v take any x vk let j j mk be such that x qj and qj since vk is connected there is a sequence j jsk j mk such that qjs for all s sk choose zs qjs and let x and zsk then zs for all k qjs v it joins x let l be the polygonal line formed by zsk by construction l and x and has length at most sk hence x x sk mk this being valid for all x vk we proved that vk has diameter at most dk mk h in the intrinsic metric of v let k dk now take and any x u let y y v be such that x b y h and b y h let k k k be such that y vk and y there are curves v of length at most such that joins y and yk while joins y and we then join yk and with all together we have the curve xy y which joins x and lies entirely in u and has length bounded by h h and this is true for any pair of such points lemma suppose that rd rd are two affinities such that maxj zj zj where zd form in a regular simplex with minimum edge length at least there is c depending only on d such that if then x x for all x rd proof note that this is closely related to lemma by translation and scale invariance assume that and let li si si we have zj zj zj zj let z denote the matrix with columns zd in matrix notation we have zj j we also have z and by lemma z z when where depends only on in that case for another constant equivalently for x rd x x which in turn implies that x x x x proof of theorem because is bounded independently of n we may assume without loss of generality that rn and rn h for all n where will be chosen large enough later on take y u and let b y rn and qy we first show that there is diam q u such that for any y u diam qy rn for this we mimic the proof of lemma take x such that x diam qy let u be such that b u u u let ym be an rn of b u u with m u d for some then let xis s m be such that m xis by the triangle inequality for all s t we have xit rn by we have xis xit so that xim form a therefore m diam q d for some we conclude that diam q u rn rn we apply theorem to uy b y rn and with the fact that uy 
as we saw in the proof of and invariance considerations we obtain a constant c and a similarity sy such that x sy x c diam qy note that all the quantities with subscript y depend also on n but this will be left implicit fix u for x there is y u such that x uy assume is parameterized by arc length and let be given by lemma and let d denote the intrinsic diameter of u then assuming rn there is a curve u of length l d joining and y let yj jrn for j j and then y we have z z by the triangle inequality we also have uyj rn because rn let vj be such that b vj rn uyj fix j and let vj d denote a regular simplex inscribed in the ball b vj rn let rn denote its edge length then let xs d be such that maxk k vj k when is large enough xj d b vj rn by the triangle inequality moreover maxk l k l as well as k l when is large enough fj xj d is therefore an regular simplex with and minimum edge length rn now since maxk xj k xj k by lemma for all z rd z z for some c assuming in particular by the fact that diam u this gives x x for some diam u hence x sy x j since j this being true for any arbitrary x we conclude that max x x discussion this paper builds on kleindessner and von luxburg to provide some theory for ordinal embedding an important problem in multivariate statistics aka unsupervised learning we leave open two main problems what are the optimal rates of convergence for ordinal embedding with all triple and quadruple comparisons what is the minimum size of k kn for consistency of ordinal embedding based on the neighbor distance comparisons we note that we only studied the large sample behavior of exact embedding methods in particular we did not discuss or proposed any methodology for producing such an embedding for this we refer the reader to agarwal et borg and groenen terada and von luxburg and references therein in fact the practice of ordinal embedding raises a number of other questions in terms of theory for instance how many flawed comparisons can be tolerated acknowledgements we are grateful to vicente malave for introducing us to the topic and for reading a draft of this paper we also want to thank an associate editor and two anonymous referees for pertinent comments and for pointing out some typos and errors we learned of the work of ulrike von luxburg and her collaborators at the mathematical foundations of learning theory workshop held in barcelona in june we are grateful to the organizers in particular lugosi for the invitation to participate this work was partially supported by the us office of naval research references agarwal wills cayton lanckriet kriegman and belongie generalized multidimensional scaling in international conference on artificial intelligence and statistics pp alestalo trotsenko and isometric approximation israel journal of mathematics aumann and kruskal the coefficients in an allocation problem naval research logistics quarterly borg and groenen modern multidimensional scaling theory and applications springer burago burago and ivanov a course in metric geometry volume american mathematical society providence cuevas fraiman and on statistical properties of sets fulfilling conditions adv in appl probab davenport a lost without a compass nonmetric triangulation and landmark multidimensional scaling in computational advances in adaptive processing camsap ieee international workshop on pp ieee de silva and j tenenbaum sparse multidimensional scaling using landmark points technical report technical report stanford university ellis whitman berenzweig and lawrence the quest for 
ground truth in musical artist similarity in proceedings of the international symposium on music information retrieval ismir pp federer curvature measures trans amer math soc horn and johnson matrix analysis cambridge university press cambridge corrected reprint of the original jamieson and nowak embedding using adaptively selected ordinal data in communication control and computing allerton annual allerton conference on pp ieee kelley general topology volume of graduate texts in mathematics springerverlag kleindessner and von luxburg uniqueness of ordinal embedding in proceedings of the conference on learning theory pp kruskal j b multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis psychometrika mcfee and lanckriet learning similarity the journal of machine learning research nhat vo challa and lee nonmetric mds for sensor localization in international symposium on wireless pervasive computing iswpc pp shepard the analysis of proximities multidimensional scaling with an unknown distance function psychometrika shepard the analysis of proximities multidimensional scaling with an unknown distance function ii psychometrika shepard metric structures in ordinal data journal of mathematical psychology sikorska and szostok on mappings preserving equilateral triangles journal of geometry suppes and winet an axiomatization of utility based on the notion of utility differences management science terada and u von luxburg local ordinal embedding in proceedings of the international conference on machine learning pp vestfrid i a linear approximation of approximately linear functions aequationes mathematicae von luxburg and alamgir density estimation from unweighted neighbor graphs a roadmap in advances in neural information processing systems pp waldmann topology an introduction springer international publishing young and hamer multidimensional scaling history theory and applications lawrence erlbaum associates inc
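To make the triple-comparison constraints used throughout the paper above concrete, here is a minimal Haskell sketch, not taken from the paper; the names Point, dist and respectsTriples are invented for illustration, and a triple (i, j, k) is read as the statement that item i is closer to item j than to item k.

type Point = [Double]

-- Euclidean distance between two points represented as coordinate lists.
dist :: Point -> Point -> Double
dist x y = sqrt (sum (zipWith (\a b -> (a - b) * (a - b)) x y))

-- An embedding ps is an exact ordinal embedding for the triples cs
-- if every provided comparison is reproduced by the embedded points.
respectsTriples :: [(Int, Int, Int)] -> [Point] -> Bool
respectsTriples cs ps = all ok cs
  where ok (i, j, k) = dist (ps !! i) (ps !! j) < dist (ps !! i) (ps !! k)

Checking all triples among n items in this way takes on the order of n cubed distance evaluations, which is exactly the cost that the local and landmark designs discussed above aim to avoid.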
10
a framework for datatype transformation jan kort and ralf universiteit van amsterdam voor wiskunde en informatica vrije universiteit van amsterdam arxiv feb centrum abstract we study one dimension in program evolution namely the evolution of the datatype declarations in a program to this end a suite of basic transformation operators is designed we cover refactorings but also and adaptations both the object programs that are subject to datatype transformations and the meta programs that encode datatype transformations are functional programs introduction we study operators for the transformation of the datatype declarations in a program the presentation will be biased towards the algebraic datatypes in haskell but the concepts are of relevance for many typed declarative languages mercury and sml as well as frameworks for algebraic specification or rewriting like casl elan and maude our transformations are rather syntactical in nature as opposed to more semantical concepts such as data refinement our transformations contribute to the more general notion of functional program refactoring the following introductory example is about extracting a new datatype from constructor components of an existing datatype this is illustrated with datatypes that represent the syntax of an imperative language the following extraction identifies a piece of syntax to enable its reuse in later syntax extensions datatypes with focus on two constructor components data prog prog progname dec stat data dec vdec id type data stat assign id expr if expr stat stat after extraction of dec stat to constitute a new datatype block data prog prog progname block data block block dec stat in the present paper we describe the design of a framework for datatype transformations including the operators for the above extraction in sec we identify all the concerns addressed by the framework in sec we describe all the basic operators for datatype transformations in sec these operators are lifted from datatypes to complete programs related work is discussed in sec the paper is concluded in sec kort concerns in datatype transformation the central contribution of the present paper is a simple and editingcomplete suite of operators for datatype transformations before we embark on this suite we identify the concerns addressed by our approach datatype transformations via scripting or interactive tool support primitives for datatype transformations generic for conciseness of datatype transformations flexible means of referring to fragments of interest in datatype transformations we will now discuss these concerns in some depth scripting interactive tool support from the point of view of a programmer datatype transformations should be founded on intuitive scenarios for adaptation to actually perform datatype transformations there are two modes of operation the first mode is scripting the programmer encodes the desired transformation as an expression over basic or operators the second mode is interactive transformation based on a corresponding gui the benefits of an interactive tool are rather obvious such a tool is useful to issue a transformation on the basis of an dialogue and to provide a tailored list of options for transformations that make sense in a given context a crucial benefit of interactive transformation is that the gui can be used to provide feedback to the programmer which locations were changed where is the programmer s attention needed to complete the issued transformation scenario the apparent benefits of scripting such as 
the opportunities to revise transformations and to replay them can be also integrated into an interactive setting in fig we illustrate the interactive treatment of the introductory example using our prototypical tool th transform haskell as the snapshot indicates we use a designated fold dialogue to perform the extraction of the piece of syntax folding is the basic transformation underlying extraction this dialogue combines several transformation steps and side conditions in a convenient way the figure shows the following situation the user has selected two consecutive types dec stat and initiated the fold dialogue the user has also typed in block in the type name field the introduction is marked automatically since the given type name does not yet exist the user has also selected the kind radiobutton to be data and filled in block in the cons name field after this the user would press replace to make the change if there had been more than one occurrence the user could replace them all with replace all or step through all occurrences with next and replace only specific ones with replace as with ordinary find and replace in text editors kort fig a snapshot related to the interactive treatment of the introductory example here is an list of further common transformation scenarios renaming type and constructor names permuting type arguments and constructor components the dual of extracting datatypes inlining datatypes including a constructor declaration together with associated functionality excluding a constructor declaration together with associated functionality inserting a constructor component together with associated functionality deleting a constructor component together with associated functionality transformation primitives the core asset of our framework is a suite of basic operators which can be either used as is or they can be completed into more complex compound transformations in the design of this suite we reuse design experience from a related effort on grammar adaptation indeed there is an obvious affinity of grammar transformations and datatype transformations a challenging problem that we did not need to address in this previous work is the completion of datatype transformations to apply to entire functional programs in which evolving datatypes reside we list the required properties of our basic transformation operators correctness mostly we insist on structure preservation that is the resulting datatype is of the same shape as the original datatype this is enforced by the and postconditions of the operators kort completeness the operators are that is they capture all scenarios of datatype evolution that are otherwise performed by plain text editors adaptations are defined in terms of disciplined primitives orthogonality the operators inhabit roles higherlevel scenarios for interactive transformation are derivable operators for datatype transformations are complementary to transformations locality the basic operators operate on small code locations as opposed to global or exhaustive operators which iterate over the entire program note that some operators are necessarily exhaustive an operator to rename a type name implementability the operators are implemented as syntactical transformations that are constrained by simple analyses to check for and postconditions but which otherwise do not necessitate any offline reasoning universality while the present paper focuses on datatype transformations the principles that are embodied by our operators are universal in the sense that 
they also apply to other abstractions than datatypes functions or modules we do not list these properties to announce a formal treatment this would be very challenging as we opt for the complex language setup of haskell the above properties provide merely a design rationale a formal approach is an important subject for future work but it does not contribute anything to the narrow goal of the present paper to compile an inventory of the basic roles in datatype transformation generic we implement transformation operators and compound in haskell we reuse a publicly available abstract syntax for haskell we rely on generic programming techniques to perform on the haskell syntax in haskell we use the of generic programming that allows us to complete functions on specific syntactical sorts into generic traversals that process subterms of the specific sorts accordingly this style of metaprogramming is known to be very concise because one only provides functionality for the types and constructors that are immediately relevant for the given problem all our datatype transformations are of type trafo which is defined as follows type trafo hsmodule maybe hsmodule that is a datatype transformation is a partial function on hsmodule the abstract syntactical domain for haskell modules partiality is expressed by means of the maybe type constructor that wraps the result type partially is needed to model side conditions in fig we illustrate generic by giving the definition of a simple operator for replacing type names the specification formalises the fact that the used abstract syntax is part of the haskell core libraries in the package http kort replace a type name replacetypeid typeid typeid trafo replacetypeid n n full tdtp adhoctp adhoctp idtp declsite refsite where transform declaring occurrences of type names declsite hsdecl maybe hsdecl declsite hstypedecl l ps t n return hstypedecl l n ps t declsite hsdatadecl l c ps cds d n return hsdatadecl l c n ps cds d declsite hsnewtypedecl l c ps cd d n return hsnewtypedecl l c n ps cd d declsite decl return decl transform using occurrences of type names refsite hstype maybe hstype refsite hstycon unqual n return hstycon unqual n refsite tpe return tpe fig specification of the replacement operation underlying renaming of type names type names can occur in two kinds of locations either on a declaration site when we declare the type or on a using site when we refer to the type in a type expression so we need to synthesise a transformation which pays special attention to the syntactical domains for declaring and using sites indeed in the figure there are two cases which customise the identity function idtp in the given context we choose the traversal scheme full tdtp for full traversal in manner this way we will reach each node in the input tree to transform type names on declaring and using sites the operator replacetypeid by itself is a total function so the maybe in its type is not really needed here partiality would be an issue if we derived an operator for renaming type names this necessitates adding a side condition to insist on a fresh new name means of referring to fragments of interest both the basic operators for datatype transformation but also actual transformation scenarios in scripts or in interactive sessions need to refer to program fragments of interest recall our introductory example extracting a type necessitates referring to the constructor components that are meant to constitute the new type in our framework we use three ways to refer to 
fragments of interest focus markers on subterms this approach is particularly suited for interactive transformations here relevant fragments can be directly marked in fig we extend haskell s abstract syntax to include term constructors for focusing on relevant fragments in datatype transformations that is we are prepared to focus on names of types on type expressions and on lists of constructor components selectors of subterms this approach is particularly suited for scripting transformations selectors for haskell s type expressions are defined in fig the three forms of typesel represent the three kinds of declarations that involve types the helper typesel allows to select any part of a given type expression kort focus on names data hsname hsnamefocus hsname focus on type expressions data hstype hstypefocus hstype focus on lists of constructor components data hscondecl hscondecl srcloc hsname hsfocusedbangtype hsrecdecl srcloc hsname hsname hsbangtype data hsfocusedbangtype hsunfocusedbangtype hsbangtype hsfocusedbangtype hsbangtype fig kinds of focus for datatype transformation data typesel aliasref typeid typesel conref conpos typesel sigref funid typesel data typesel selstop seldom typesel selcod typesel selith parapos typesel selfun typesel selarg typesel type typeid type conid type funid type conpos type parapos data hsname hsname hsname hsname conid parapos int refer to a type alias refer to a constructor component refer to a function signature reference stops here refer to domain of function type refer to of function type refer to products component refer to type constructor refer to type argument refer to a type refer to a constructor refer to a function name refer to a component of a constructor refer to a parameter position syntactical sort for all kinds of names fig selectors that refer to type expressions and others predicates on subterms such predicates typically constrain the type of a term or the pattern this approach is particularly suited for the repeated application of a transformation to different focuses that match a given predicate there are ways to mediate between these different ways of referring to subterms for example given a term with a focus marker on a type expression one can compute the selector that refers to the focused subterm given a predicate on type expressions one can compute the list of all selectors so that an operator that is defined on selectors can be used with predicates as well finally given a selector one can also add the corresponding focus marker in the input at hand basic operators for datatype transformation we will now describe the themes that constitute our operator suite renaming type and constructor names permutation of type parameters and constructor components swapping types on use sites introduction elimination of type declarations folding unfolding of type declarations kort sample input datatype data conslist a nil cons a conslist a renamed and permuted datatype data snoclist a lin snoc snoclist a a fig illustration of renaming and permutation renametypeid renameconid permutetypeid permuteconid typeid trafo conid trafo parapos trafo parapos trafo rename a type declaration rename a constructor permute type parameters permute constructor components fig operators for renaming and parameter permutation renametypeid hsident conslist hsident snoclist renameconid hsident nil hsident lin renameconid hsident cons hsident snoc permuteconid hsident snoc seqtrafo seqtrafo seqtrafo fig script for the scenario in fig wrapping unwrapping of 
constructor components inclusion exclusion of entire constructor declarations insertion deletion of constructor components as this list makes clear we group an operator with its inverse such as in folding unfolding unless the operator can be used to inverse itself this is the case for renaming permutation and swapping the operators from the first six groups are almost the last two groups deal with and transformations we will now explain the operators in detail including illustrative examples we will only explain the effect of the operators on datatype declarations while we postpone lifting the operators to the level of complete programs until sec renaming and permutation let us start with the simplest datatype refactorings one can think of these are transformations to consistently rename type or constructor names and to permute parameters of type and constructor declarations in fig a simple example is illustrated we rename the type name conslist the constructor names nil and cons and we permute the two parameter positions of cons the resulting datatype specifies a snoclist as opposed to the conslist before in fig we declare the operators for renaming names and permuting parameter lists in fig we include the script that encodes the sample as a sequence of basic renaming and permuting transformations to this end we assume a sequential composition operator seqtrafo for datatype transformations in the script seqtrafo is used as an infix operator seqtrafo kort data hsdecl introtypes elimtypes syntactical sort for type declarations hsdecl trafo typeid trafo introduction of type declarations elimination of type declarations fig operators for introduction and elimination of datatypes type typehdr typeid typevar type typevar hsname header lhs of type declaration type variables foldalias typesel typehdr trafo unfoldalias typesel trafo folding the referred type unfolding the referred type fig operators for folding and unfolding introduction elimination the next group of operators deals with the introduction and elimination of type declarations see fig introduction means that the supplied types are added while their names must not be in use in the given program elimination means that the referenced types are removed while their names must not be referred to anymore in the resulting program the two operators take lists of types as opposed to single ones because types can often only be introduced and eliminated in groups say mutually recursive systems of datatypes all kinds of type declarations make sense in this context aliases newtypes and proper datatypes the operators for introduction and elimination are often essential in compound transformations this will be illustrated below when we reconstruct the introductory example in full detail see sec folding unfolding instantiating the folklore notions of unfolding and folding for datatypes basically means to replace a type name by its definition and vice versa extra provisions are needed for parameterised datatypes the prime usage scenarios for the two operators are the following extraction introduction of a type followed by its folding inlining unfolding a type followed by its elimination to give an example the introductory example basically extracts the structure of imperative program blocks to actually reconstruct this example we need a few more operators so we postpone scripting the example see sec the operators for folding and unfolding are declared in fig the operators make a strict assumption the type which is subject to folding or unfolding is 
necessarily a type alias as opposed to a proper datatype this assumption simplifies the treatment of the operators considerably since type aliases and their definitions are equivalent by definition extra operators for wrapping and unwrapping allow us to use proper datatypes during folding and unfolding as well this will be addressed below in the type of the foldalias operator we do not just provide a type kort type conrange conpos int refer to consecutive components groupconrange ungroupconpos conrange trafo conpos trafo typeid conid trafo typeid trafo typeid trafo typeid trafo group constructor components inline product turn type alias into newtype turn newtype into datatype turn datatype into newtype turn newtype into type alias fig operators for wrapping and unwrapping original syntax data prog data dec data stat data expr prog progname dec stat vdec id type assign id expr if expr stat stat var id const int after grouping dec and stat data prog prog progname dec stat after introduction of block to prepare folding data prog prog progname dec stat type block dec stat after folding away the type expression dec stat data prog prog progname block type block dec stat after turning block into a proper datatype with the constructor block data prog prog progname block data block block dec stat after ungrouping the product dec stat data prog prog progname block data block block dec stat fig illustration of wrapping unwrapping and extraction name but also a list of type variables cf helper type typehdr this is needed for parameterised datatypes where we want to specify how the free type variables in the selected type expression map to the argument positions of the type alias the preconditions for the operators are as follows in the case of foldalias we need to check if the referenced type expression and the side of the given alias declaration coincide in the case of unfolding we need to check that the referenced type expression corresponds to an application of a type alias wrapping unwrapping we will now consider operators that facilitate certain forms of wrapping and unwrapping of datatype constructors see fig there are operators for grouping and ungrouping that is to turn consecutive constructor components into a single component that is of a product type and vice versa there are also operators to mediate between the different kinds of type declarations namely type aliases newtypes and datatypes this will allow us to toggle the representation of datatypes in basic ways as a result the normal forms assumed by other operators can be established recall for example the use of type aliases in folding and unfolding this separation of concerns serves orthogonality kort groupconrange hsident prog introtypes hstypedecl noloc block hstytuple hstyapp hstycon unqual hsident list hstycon unqual hsident dec hstyapp hstycon unqual hsident list hstycon unqual hsident stat foldalias conref hsident prog selstop hsident block hsident block hsident block hsident block ungroupconpos hsident block seqtrafo seqtrafo seqtrafo seqtrafo seqtrafo fig script for the scenario in fig data maybe k data maybe k data maybe k a nothing just a k k k k a nothing just a k k k k a nothing just a k k k data conslist a nil maybe a k k cons a conslist a fig illustration of the generalisation of maybe to conslist in fig we show the steps that implement the introductory example as one can see we basically implement extraction but extra steps deal with grouping and ungrouping the two components subject to extraction also the extracted 
type should be a proper datatype as opposed to a type alias see transition from to for completeness sake the transformation script is shown in fig the script precisely captures the steps that underly the interactive transformation in fig some of the operators are not completely that is strictly speaking the structures of the datatypes before and after transformation are not fully equivalent for example a newtype and a datatype are semantically distinguished even if the defining constructor declaration is the very same this is because a constructor of a datatype involves an extra lifting step in the semantical domain there is an extra bottom element the operators for grouping and ungrouping also deviate from full structure preservation swapping types on use sites we will now deal with transformations that eliminate or establish type distinctions by what we call swapping types on use sites in fig we illustrate a typical application of swapping in the example we want to generalise the standard datatype maybe to allow for lists instead in fact we do not want to change the general definition of the library datatype maybe but we only want to change it on one use site not shown in the figure this is where swapping helps as an intermediate step we can replace maybe on the use site by a newly introduced datatype maybe with equivalent structure the figure illustrates how subsequent adaptations derive kort type datanames type dataunifier typeid conid datanames datanames swapalias swapdata typesel typeid typeid trafo typesel dataunifier trafo fig operators for swapping types on use sites type condecl data hstype conid hstype includecondecl excludecondecl constructor declaration syntactical sort for type expressions typeid condecl trafo conid trafo fig operators for inclusion and exclusion of constructor declarations syntax as of fig data prog data block data dec data stat data expr prog progname block block dec stat vdec id type assign id expr if expr stat stat var id const int after syntax extension by statement blocks data stat assign id expr if expr stat stat sblock block fig illustration of constructor inclusion the conslist datatype from the clone of the maybe datatype in particular we add the boxed constructor component the swapping operators are declared in fig there is one operator for type aliases and another for datatype declarations in the case of proper datatypes one needs to match the constructors in addition to just the names of the types this is modelled by the helper datatype dataunifier the type of the operator swapdata clarifies that we are prepared to process a list of dataunifiers this is necessary if we want to swap mutually recursive systems of datatypes inclusion exclusion we now leave the ground of transformations that is we will consider transformations where input and output datatypes are not structurally equivalent in fact we consider certain ways to extend or reduce the structure of the datatype the first couple of and transformations is about inclusion and exclusion of constructor declarations see fig these operators are only feasible for proper datatypes and not for type aliases or newtypes this is because a type alias involves no constructor at all and a newtype is defined in terms of precisely one constructor declaration in fig we show an example for constructor inclusion in fact we just continue the introductory example to make use of the extracted block structure in a language extension for statement blocks that is we include a constructor application for stat to 
capture block as another statement form this continuation of the kort insertconcomp deleteconcomp conpos hstype trafo conpos trafo fig operators for insertion and deletion of constructor components a datatype for a transition relation function and helpers type transrel a a maybe a data maybe a nothing just a data conslist a nil cons a conslist a introduction of a substitute for maybe data maybe a nothing just a swapping maybe and maybe in transrel type transrel a a maybe a extension of maybe to fit with shape of conslist data maybe a nothing just a maybe a swapping maybe and conslist in transrel type transrel a a conslist a fig illustration of component insertion and type swapping introductory example amplifies the intended use of our operator suite for program evolution in the sense of datatype refactoring and adaptation insertion deletion inclusion and exclusion of constructor declarations is about the branching structure of datatypes we will now discuss operators that serve for the insertion or deletion of constructor components see fig insertion of a component c into a constructor declaration c cn proceeds as follows given the target position for the new component be it i n the new constructor declaration is simply of the form c c ci cn in general c might need to refer to type parameters of the affected datatype deletion of a constructor declaration relies on the identification of the obsolete component in fig we elaborate on the earlier example for generalising maybies to lists recall fig at the top of fig we see three datatypes transrel maybe and conslist the idea is indeed to replace maybe by conslist in the using occurrence in transrel that is we want to allow for a function from a to a list of as instead of a partial function from a to a we call this adaptation a generalisation because a list is more general than an optional in the initial phase of the generalisation of maybe we disconnect the relevant occurrence of maybe in transrel from other possible occurrences in the program so we introduce a copy maybe of maybe and we perform type swapping so that transrel refers to maybe instead of the maybe now we need to make maybe structurally equivalent to conslist this amounts to adding a recursive component to the second constructor just then we can again swap types to refer to conslist in the of transrel kort datatype transformation meets program transformation we will now over the groups of operators to investigate their impact on functional programs it would be utterly complex to formalise the link between datatype and program transformation the mere specification of the transformations is already intractable for a publication because of its size and the number of details so we will describe the implied program transformations informally while omitting less interesting details renaming type names only occur inside type declarations and type annotations so there is no need to adapt expressions or function declarations except for their signatures or the type annotations of expressions constructor names can very well occur inside patterns and expressions that contribute to function declarations renaming these occurrences is completely straightforward permutation the permutation of type parameters does not necessitate any completion at the level of function declarations the permutation of constructor components however needs to be realized in patterns and expressions as well this is particularly simple for cases because all components are matched by definition hence we can directly 
permute the in an affected constructor pattern witnessing permutations of constructor components in expression forms is slightly complicated by currying and style instead of permuting components in possibly incomplete constructor applications we could first get access to all components by given a constructor c with say n potential components according to its declaration we first replace c by xn c xn as justified by then we witness the permutation by permuting the arguments xn in the expression in the presence of a nonstrict language with an evaluation order on patterns the permutation of constructor components might actually change the behaviour of the program regarding termination we neglect this problem we should also mention that it is debatable if the described kind of is really what the programmer wants because it obscures the code introduction elimination introduction does not place any obligations on the functions defined in the same program in the case of elimination we have to ensure that the relevant types are not used by any function if we assume that all function declarations are annotated by or inferred signatures then the precondition for elimination can be checked by looking at these signatures there is an alternative approach that does not rely on complete type annotations we check that no constructor of the relevant types is used kort folding unfolding the restriction of folding and unfolding to type aliases guarantees that these operators do not necessitate any adaptation of the function declarations this is simply because interchanging a type alias and its definition is completely and by definition this is extremely convenient despite the crucial role of the operators for folding and unfolding they do not raise any issue at the level of function declarations wrapping unwrapping grouping and ungrouping these operators are handled using the same overall approach as advocated for the permutation of constructor components that is in patterns we witness grouping or ungrouping by inserting or removing the enclosing in expressions we perform to access the relevant components and then we group or ungroup them in the constructor application mediation between newtypes and datatypes these datatype transformations do not imply any adaptations of the functions that involve the datatype in question as we indicated earlier the extra bottom value of a datatype when compared to a newtype allows a program to be undefined in one more way newtype to alias migration we simply remove all occurrences of the associated constructor both in pattern and expression forms we require that the relevant newtype is not covered by any instance declaration of some type class or constructor class otherwise we had to inline these members in a way prior to the removal of the constructor if we neglected this issue the resulting program either becomes untypeable or a different instance is applied accidentally which would be hazardous regarding semantics preservation alias to newtype migration this operator requires a treatment for function declarations the crucial issue is how to know the following what expressions have to be wrapped with the newtype constructor in what patterns does the newtype constructor need to be stripped our approach is as simple as possible we observe that the new newtype might be used in the declarations of other datatypes the corresponding patterns and expressions can be easily located and adapted as in the case of permutation grouping and ungrouping recall we also need to adapt 
function declarations if their argument or result types are known to refer to the relevant alias this basically means that we need to access the affected arguments and result expressions in all relevant equations to unwrap the arguments and wrap the result expressions these adaptations are slightly complicated by the fact that the affected type alias can occur in arbitrarily nested locations in fig we illustrate the effect of the operator in the introductory example we show the interpreter function that maps over the statements kort interpreter function before the illustrative extraction run prog state run prog name decs stats mapm interpret stats the same function after extraction run prog state run prog name block decs stats mapm interpret stats fig function adaptation triggered by migration input program type transrel a a maybe a data maybe a nothing just a deadend transrel a a bool deadend r a case r a of nothing true just false output program type transrel a a maybe a data maybe a nothing just a deadend transrel a a bool deadend r a case tomaybe r a of nothing true just false induced helper for type swapping tomaybe maybe a maybe a tomaybe nothing nothing tomaybe just a just a fig function adaptation triggered by type swapping of the program the program name and the declarations do not carry any semantics here the type of the function run exhibits that the meaning of a program is a computation that involves a state for the program variables the adapted version of run refers to the extra constructor block which resulted from extraction swapping types on use sites this operator relies on the same techniques as however instead of wrapping and unwrapping a constructor we invoke conversion functions that mediate between the two structurally equivalent types these mediators merely map old to new constructors and vice versa and hence they are immediately induced by the datatype transformation itself namely by the dataunifiers passed to the swap operator this approach implies that we only perform very local changes the program code will still work on the old datatypes thanks to the mediators the impact of swapping types at the function level is illustrated in fig we deal with the initial steps of the migration in fig where we replace the occurrence of maybe within transrel by a structurally equivalent maybe we show an illustrative function deadend which performs a test if the given transition relation allows for a transition in the presence of a given state a the adapted function deadend refers to the conversion function tomaybe prior to performing pattern matching on the obsolete maybe type kort input program data stat assign id expr if expr stat stat interpret stat state interpret assign i e envlookup i interpret if e reval e output program data stat assign id expr if expr stat stat sblock block interpret stat state interpret assign i e envlookup i interpret if e reval e interpret sblock fig inclusion of a constructor declaration inclusion exclusion intuitively the inclusion of a constructor should be complemented by the extension of all relevant case discriminations this normally means to add a equation or a case to a case expression for the new constructor dually exclusion of a constructor should be complemented by the removal of all equations or cases that refer to this constructor in the case of added equations we view the sides of these equations as a kind of hot spot to be resolved by subsequent transformations to this end we use undefined as a kind of marker dually in the case of 
removed constructors we also need to replace occurrences of the constructor within expressions by when using interactive tool support these markers are useful to control further steps in a transformation scenario in fig we progress with our running example of an interpreter for an imperative language we illustrate the step where blocks are turned into another form of statements hence the shown output program involves a new equation that interprets statement blocks this added equation reflects that the meaning of such blocks is as yet undefined subject to subsequent adaptations insertion deletion inserting a component into a declaration for a constructor c means that all patterns with c as outermost constructor must be adapted to neglect the added component and all applications of c must be completed to include for the added component dually deletion of a component from c means that all applications of c and all patterns with c as outermost constructor need to be cleaned up to project away the obsolete component any reference to a pattern variable for the obsolete component is replaced by as in the case of permutation and others is needed to actually get access to constructor components in expressions in fig the insertion of a constructor component is illustrated by continuing the scenario from fig the adapted equation of tomaybe involves an extended pattern as the don t care pattern indicates the definition of tomaybe does not make use of the added component in fact the definition of the function deadend does not need to be adapted it only tests for the availability of a transition step kort output program type transrel a a maybe a data maybe a nothing just a maybe a deadend transrel a a bool deadend r a case tomaybe r a of nothing true just false induced helper for type swapping tomaybe maybe a maybe a tomaybe nothing nothing tomaybe just a just a fig illustration of the insertion of a constructor component normally other functions will start to rely on the richer pattern related work transformational program development formal program transformation separates two concerns the development of an initial maybe inefficient program the correctness of which can easily be shown and the stepwise derivation of a better implementation in a manner partsch s textbook describes the formal approach to this kind of software development pettorossi and proietti study typical transformation rules for functional and logic programs in formal program transformation in part also addresses datatype transformation say data refinement here one gives different axiomatisations or implementations of an abstract datatype which are then related by transformation steps this typically involves some amount of mathematical program calculation by contrast we deliberately focus on the more syntactical transformations that a programmer uses anyway to adapt evolving programs database schema evolution there is a large body of research addressing the related problem of database schema evolution as relevant for example in database and reverse engineering the schema transformations themselves can be compared with our datatype transformations only at a superficial level because of the different formalisms involved there exist formal frameworks for the definition of schema transformations and various formalisms have been investigated an interesting aspect of database schema evolution is that schema evolution necessitates a database instance mapping compare this with the evolution of the datatypes in a functional program here the 
main concern is to update the function declarations for compliance with the new datatypes it seems that the instance mapping problem is a special case of the program update problem refactoring the transformational approach to program evolution is nowadays called refactoring but the idea is not new refactoring means to improve the structure of code so that it becomes more comprehensible maintainable and adaptable interactive refactoring tools are being studied and used extensively in the programming context typical examples of functional program refactorings are described in the introduction of a monad in a program the precise inhabitation of kort the refactoring notion for functional programming is being addressed in a project at the university of kent by thompson and reinke see there is also related work on in a functional context by erwig previous work did not specifically address datatype transformations the refactorings for class structures are not directly applicable because of the different structure and semantics of classes algebraic datatypes structure editing support for interactive transformations can be seen as a sophistication of structure editing this link between transformation and editing is particularly appealing for our syntactical transformations not surprisingly concepts that were developed for structure editing are related to our work for example in primitives of structure editing are identified based on the notion of focus to select subtrees and on navigation primitives left right up and down trees subtrees and paths are here defined as follows data tree type subtree type path type layer fork label tree path tree layer label tree tree the t in a subtree p t is the currently selected tree and it is between the left and right trees in the top layer the head of the p this approach does not account for the heterogeneous character of language syntaxes but it shows that the fact if a focus resides in a term can be encoded in types concluding remarks contribution we identified the fundamental primitives for datatype transformation these operators are meant to support common scenarios of program adaptation in functional programming or other settings where algebraic datatypes play a role in fact all the identified operators are universal in the sense that they are also meaningful for other program abstractions than just datatypes function declarations we deliberately focused on adaptations of datatypes because a vast body of previous work addressed transformations for recursive functions despite the focus on datatype transformations we had to consider program transformations that are necessitated by the modification of datatypes regarding the executable specification of the operator suite we adhered to the formula metaprograms haskell programs we employed generic functional programming in the interest of conciseness we also employed designated means of referring to fragments of interest a focus concept partial project failure we are confident that the identified operators are sufficient and appropriate for actual datatype transformations we have attempted to complement this framework development by actual interactive tool support we initially thought that using haskell for this interactive tooling as well would be a good idea since the actual transformation operators are implemented in haskell anyway and the interactive dialogues need to cooperate with the operator framework to perform analyses haskell indeed seems to be the obvious choice to make a long story kort short there 
are many gui libraries for haskell but none of them is suitable for developing a sophisticated gui for interactive program transformation at the moment it seems that environments for interactive language tools would provide a better starting point environments based on attribute grammars perspective to cover full haskell a few further operators would have to be added to our suite in particular operators that support type and constructor classes we should also pay full attention to some idiosyncrasies of haskell cf refutable irrefutable patterns then there are also transformation techniques that seem to go beyond our notion of program evolution but it is interesting to cover them anyway we think of techniques like turning a system of datatypes into functorial style or threading a parameter through a system of datatypes the ultimate perspective for the presented work is to integrate the datatype transformations into a complete and refactoring tool for functional programming along the lines of thompson s and reinke s research project another perspective for our research is to further pursue the intertwined character of datatype and program transformations in the context of xml format and api evolution references arango baxter freeman and pidgeon tmm software maintenance by transformation ieee software may batini ceri and navathe conceptual database design redwood city us burstall and john darlington a transformation system for developing recursive programs journal of the acm january banerjee kim kim and korth semantics and implementation of schema evolution in databases sigmod record proc conf on management of data may de roever and kai engelhardt data refinement proof methods and their comparison volume of cambridge tracts in theoretical computer science cambridge university press erwig and ren a language for programming software updates in proceedings of the acm sigplan workshop on programming pages acm press fowler the design of existing code addison wesley griswold and notkin program restructuring as an aid to software maintenance technical report seattle wa usa august hainaut tonneau joris and chandelon schema transformation techniques for database reverse engineering in proc of the int conf on er approach institute kort koorn generating uniform for interactive programming environments phd thesis university of amsterdam kuiper and saraiva lrc a generator for incremental languageoriented tools in koskimies editor compiler construction cc volume of lncs pages april tool demonstration reuse by program transformation in greg michaelson and phil trinder editors functional programming trends pages intellect grammar adaptation in oliveira and zave editors proc formal methods europe fme volume of lncs pages moore automatic inheritance hierarchy restructuring and method refactoring in oopsla conference proceedings programming systems languages and applications pages acm press mcbrien and poulovassilis a formal framework for er schema transformation in embley and goldstein editors conceptual modeling er international conference on conceptual modeling los angeles california usa november volume of lncs pages opdyke refactoring frameworks university of illinois at phd thesis partsch specification and transformation of programs pettorossi and proietti rules and strategies for transforming functional and logic programs acm computing surveys june roberts brant and johnson a refactoring tool for smalltalk theory and practice of object systems tapos reps and teitelbaum the synthesizer generator a system 
for constructing editors. Sufrin and de Moor, modeless structure editing, in Roscoe and Woodcock (editors), proceedings of the symposium in celebration of the work of Tony Hoare, September. Thompson and Reinke, refactoring functional programs, technical report, Computing Laboratory, University of Kent at Canterbury, October; also see http
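To make the operator design described earlier in this paper concrete -- a transformation is a partial function on the program syntax (the Trafo type), transformations are chained with seqTrafo, and renaming must cover both declaring and using sites under a freshness side condition -- the following is a minimal, self-contained Haskell sketch. It is an illustration only, not the authors' implementation: the real framework operates on haskell-src's HsModule with generic traversals, whereas the toy Decl/Module representation and the concrete function bodies below (DataDecl, declaresNew, ren, sub, demo) are assumptions made purely for the example; only the names Trafo, seqTrafo and renameTypeId are taken from the paper.

module DatatypeTrafoSketch where

import Data.Maybe (fromMaybe)

-- A drastically simplified "module": just a list of datatype declarations.
type TypeId = String
type ConId  = String

data Decl = DataDecl TypeId [(ConId, [TypeId])]   -- data T = C1 t ... | C2 ...
  deriving (Show, Eq)

type Module = [Decl]

-- The Trafo idea: a partial function on the syntax; Nothing models a
-- violated side condition (for example a name clash).
type Trafo = Module -> Maybe Module

-- Sequential composition of transformations (the paper's seqTrafo).
seqTrafo :: Trafo -> Trafo -> Trafo
seqTrafo t1 t2 m = t1 m >>= t2

-- Rename a type name on declaring and using sites; fail if the new name
-- is already declared (freshness side condition).
renameTypeId :: TypeId -> TypeId -> Trafo
renameTypeId old new m
  | any declaresNew m = Nothing
  | otherwise         = Just (map ren m)
  where
    declaresNew (DataDecl t _) = t == new
    ren (DataDecl t cs) = DataDecl (sub t) [ (c, map sub ts) | (c, ts) <- cs ]
    sub t = if t == old then new else t

-- Usage: rename ConsList to SnocList in a one-declaration module.
demo :: Module
demo = fromMaybe (error "trafo failed")
         (renameTypeId "ConsList" "SnocList"
            [ DataDecl "ConsList" [ ("Nil", []), ("Cons", ["a", "ConsList"]) ] ])

Expressing partiality with Maybe lets seqTrafo abort an entire compound script as soon as one side condition fails, which matches the way the scripted scenarios above (renaming, permutation, folding, extraction) are composed from checked primitives.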
6
on the capacity of a class of noise channels hamid ghourchian gholamali aminian amin gohari mahtab mirmohseni and masoumeh jun department of electrical engineering sharif university of technology tehran iran h ghourchian aminian aminzadeh mirmohseni mnasiri abstract in some applications the variance of additive measurement noise depends on the signal that we aim to measure for instance additive gaussian noise agsdn channel models are used in molecular and optical communication herein we provide lower and upper bounds on the capacity of additive noise asdn channels the idea of the first lower bound is the extension of the majorization inequality and for the second one it uses some calculations based on the fact that h y h y both of them are valid for all additive noise asdn channels defined in the paper the upper bound is based on a previous idea of the authors symmetric relative entropy and is used for the additive gaussian noise agsdn channels these bounds indicate that in asdn channels unlike the classical awgn channels the capacity does not necessarily become larger by making the variance function of the noise smaller we also provide sufficient conditions under which the capacity becomes infinity this is complemented by a number of conditions that imply capacity is finite and a unique capacity achieving measure exists in the sense of the output measure keywords noise channels molecular communication channels with infinite capacity existence of distribution introduction an additive gaussian noise agsdn channel with input x and output y is defined by e x fy p x where is a given function from r to alternatively we may describe the agsdn channel by y x x z where z n is a standard gaussian random variable and independent of the input x for constant function x c the agsdn channel reduces to a simple additive gaussian channel more generally we may relax the gaussian assumption on z and consider an additive noise asdn channel defined by y x x z where noise z is assumed to be a continuous random variable with a given pdf fz z and be independent of the input for instance one can consider an asdn with z being a truncated this work was supported by insf research grant on communications the first two authors contributed equally to this work see definition for the definition of continuous random variables version of the gaussian distribution as a better model in an application if we know that the output y has minimum and maximum values in that applications below we provide a number of applications in which the asdn channel arises the agsdn channel appears in optical pcommunications when modeling the shot noise or the optical amplification noise for x x in molecular communication the agsdn channel with x c x arises in the ligand receptor model the particle sampling noise the particle counting noise and the poisson model for an absorbing receiver in all cases the reason for appearance of a gaussian signaldependent noise is the approximation of a binomial or poisson distribution with a gaussian distribution observe that the mean and variance of a binomial distribution with parameters n p relate to each other the mean is np and the variance is np p respectively as a result the mean and variance of the approximated gaussian distribution also relate to each other see section for a detailed overview besides the above applications of asdn in molecular communications we shall provide two other cases where this channel model is helpful consider the brownian motion of a particle with no drift over a 
nonhomogeneous medium with x denoting the diffusion coefficient of the medium at location x the diffusion coefficient x describes the movement variance of a particle when in location x more specifically the motion of the particle is described by the stochastic differential equation dxt xt dbt where bt is the standard wiener process standard brownian motion alternatively we can express the above equation using the following integral z xt xu dbu t let us denote the position of the particle at time by x and its position after t seconds by y xt if t is a small and fixed number reduces to y x x z where z n thus the movement of the particle follows an agsdn channel law if t is small as another example consider the molecular timing channel in a medium in a molecular timing channel information is encoded in the release time of molecules a molecule released at time x hits the receiver after a delay z at time y x molecules are absorbed once they hit the receiver as such the distribution of z is that of the first arrival time the existing literature only studies this problem when the medium is see if the medium is uniform and z is distributed according to the inverse gaussian distribution if there is a flow in the medium or the distribution if there is no flow in the medium as a result the channel is called the additive inverse gaussian noise additive channel or the additive noise in the literature however in a medium or when the distance between the transmitter and receiver varies over time the distribution of z depends on the release time x as a result we obtain a noise additive component for instance the additive noise can have a distribution with a scale parameter that depends on input x using the scaling property of the distribution we can express this as x z where z is the standard distribution and x is the scale parameter this would be an asdn channel in the third item we discussed brownian motion after a small time elapse a brownian motion with no drift is an example of a martingale now let us consider a martingale after a large time elapse here the agsdn channel also arises as a conditional distribution in any process that can be modeled by a discrete time martingale with bounded increments assume that is such a martingale then e xn e furthermore by the martingale central limit theorem the conditional distribution of xn given x for large values of n can be approximated by a gaussian distribution with mean x and a variance x that depends on x finally we relate the asdn channel to real fading channels with a direct line of sight consider a scalar gaussian fading channel y x hx n where x is the input h n is the gaussian fading coefficient and n n is the additive environment noise the first x term on the side of corresponds to the direct line of sight while the hx term is the fading term the distribution of y given x x is n x thus can be expressed as y x x z where p x z n a fast fading setting in which h varies independently over each channel use corresponds to a memoryless asdn channel the purpose of this paper is to study the capacity of a memoryless additive noise asdn channel defined via y x x z under input cost constraints the memoryless assumption implies that the noise z is drawn independently from fz z in each channel use related works in vector agsdn channels subject cost constraints are studied it is shown that under some assumptions p the capacity achieving distribution is a discrete distribution the agsdn channel with x x is investigated in wherein capacity upper and lower bounds are 
derived considering peak and average constraints note that the memoryless agsdn includes the additive white gaussian noise awgn channel as its special case the capacity of awgn channel under power constraint is classical and is obtained by an input of gaussian random variable its capacity under both average and peak power constraints is quite different as the capacity achieving input distribution is discrete with a finite number of mass points see for further results on the capacity of the awgn channel with both average and peak power constraints our contributions our contributions in this work can be summarized as follows we provide a new tool for bounding the capacity of continuous channels note that i x y h y h y we provide two sufficient conditions under which h y h x which results in i x y h x h y and leads to lower bounds on the channel capacity of an asdn channel it is known that increasing the noise variance of an awgn channel decreases its capacity however we show that this is no longer the case for noise channels the constraint x x for all x does not necessarily imply that the capacity of an agsdn channel with x is less than or equal to the capacity of an agsdn with x we identify conditions under which the capacity of the asdn channel becomes infinity in particular this implies that the capacity of a agsdn channel with p x tends to infinity as tends to zero thus the capacity of the real gaussian fast fading channel given earlier in this section tends to infinity as tends to zero this parallels a similar result given in for complex gaussian fading channels we provide a new upper bound for the agsdn channel based on the kl symmetrized upper bound of this upper bound is suitable for the low snr regime when x is large this is p in contrast with the upper bound of theorems for agsdn channels with x x which is suitable for large values of peak and average constraints furthermore we give ourp upper bound for a large class of functions x while the technique of is tuned for x x this paper is organized as follows section includes some of primary definitions and notations in section our main results are given this includes two lower bounds and one upper bound on the capacity of the asdn channel there are some useful lemmas in section used in the paper the numerical results and plots are given in section the proofs of our results are given in section definitions and notations in this section we review the definitions of continuous and discrete random variables as well as entropy and differential entropy relative entropy and mutual information throughout this paper all the logarithms are in base random variables are denoted by capital letters and probability measure functions are denoted by letter the collection of borel measurable sets in r is denoted by b r we sometimes use and as a for almost everywhere and everywhere respectively the set a is when z a the set a is if it is when is the lebesgue measure definition relative entropy section for random variables x and y with probability measures and the relative entropy between x and y is defined as follows i h x e log x d d xky x where is the derivative and means is absolutely continuous a for all a b if a where b is the borel of the space over which the measures are defined definition mutual information section for random variables x y with joint probability measure y the mutual information between x and y is defined as follows i x y d y where is the product measure defined as a c a c where a bx the borel of the space over which is defined 
and c by the borel of the space over which is defined similarly for three random variable x y z with joint measure y z conditional mutual information i x y is defined as i x y z i x z definition continuous random variable let x be a and random variable that is measurable with respect to b r we call x a continuous random variable if its probability measure induced on r b is absolutely continuous with respect to the lebesgue measure for b r a for all a b with zero lebesgue measure we denote the set of all absolutely continuous probability measures by ac note that the theorem implies that for each random variable x with measure ac there exists a b r function fx r such that for all a b r we have that z a pr x a fx x dx a the function fx is called the probability density function pdf of x we denote pdf of absolutely continuous probability measures by letter f definition discrete random variable a random variable x is discrete if it takes values in a countable alphabet set x probability mass function pmf for discrete random variable x with probability measure is denoted by px and defined as follows px x x pr x x x definition entropy and differential entropy chapter we define entropy h x for a discrete random variable x with measure and pmf px as h x h h px x px x log x if the summation converges observe that h x e log px x px x for a continuous random variable x with measure and pdf fx we define differential entropy h x as z fx x log h x h h px f x x if the integral converges similarly the differential entropy is the same as h x e log fx x similarly for two random variables x y with measure y if for all x is absolutely discrete with pmf py the conditional entropy h y is defined as h y e log py y likewise for two random variables x y with measure y if for all x is absolutely continuous with pdf fy the conditional differential entropy h y is defined as h y e log fy y we allow for differential entropy to be or if the integral is convergent to or we say that h x if and only if dx and fx x z fx x log dx converges to a finite number fx x z fx x log where x fx x x fx x similarly we define h x when we write that h x we mean that the differential entropy of x exists and is not equal to the following example from demonstrates the differential entropy can be or example differential entropy becomes plus infinity for the following pdf defined over r x e x log x f x x on the other hand as shown in differential entropy is minus infinity for x g x x log x log log x otherwise definition riemann integrable functions given u in this work we utilize riemann integrable functions g u r on open interval u such functions satisfy the property that for any c u the function z x h x g t dt c is by the fundamental theorem of calculus h is continuous on u but not necessarily differentiable unless g is continuous as an example consider the function g x for x and g otherwise this function is riemann integrable on the restricted domain but not integrable on main results we are interested in the capacity of an asdn channel with the input x taking values in a set x and satisfying the cost constraint e gi x k for some functions gi the common power constraint corresponds to gi x p for some p but we allow for more general constraints then given a density function fz z for the noise z and function we consider the following optimization problem c sup i x y where x and y are related via and f supp x e gi x for all i k we sometimes use supp x to denote the support of measure supp when the probability measure on x is clear from the 
context as an example if in an application input x satisfies x u the set x can be taken to be u to reflect this fact similarly the constraint x u reduces to x u and u reduces to x u the rest of this section is organized as follows in section we provide conditions that imply finiteness of the capacity of an asdn channel in section we review the ideas used for obtaining lower bounds in previous works and also in this work then based on the new ideas introduced in this work we provide two different lower bounds in sections and finally in section we provide an upper bound for agsdn channels existence and finiteness of channel capacity theorem assume that an asdn channel satisfies the following properties x is a closed and also bounded subset of r there exists u such that x u real numbers exist such that x for all x x positive real m and exist such that fz z m and e the cost constraint functions gi are bounded over x then the capacity of the asdn channel is finite furthermore there is a capacity achieving probability measure in other words the capacity c can be expressed as a maximum rather than a supremum c max i x y moreover the output distribution is unique if and both achieves the capacity then y y r where and are the pdfs of the output of the channel when the input probability measures are and respectively remark the above theorem is a generalization of that given in theorem for the special case of gaussian noise z the proof can be found in section to give a partial converse of the above theorem consider the case that the second assumption of the above theorem fails when there is a sequence xi of elements in x such that xi converges to zero or infinity the following theorem shows that mutual information can be infinity in such cases theorem consider an asdn channel with x where x is not necessarily a closed set suppose one can find a sequence of elements in x such that converges to or such that as a sequence on real numbers has a limit possibly outside x which we denote by the limit c can be plus or minus infinity one can find another real number c such that the open interval e c or e c depending on whether c or c belongs to x furthermore e and is monotone and continuous over then one can find a measure defined on e such that i x y provided that z is a continuous random variable and has the following regularity conditions z pr z pr z furthermore there is more than one measure that makes i x y in fact input x can be both a continuous or discrete random variable one can find both an absolutely continuous measure with pdf fx and discrete pmf px such that i x y is infinity when the measure on input is either fx or px the proof can be found in section and uses some of the results that we prove later in the paper remark as an example consider an agsdn channel with x u for an arbitrary u and x for for this channel we have c if we have no input cost constraints setting this shows that the capacity of the channel given in is infinity if that is when there is no additive noise this parallels a similar result given in for complex gaussian fading channels we only require monotonicity here and not strictly monotonicity remark it is known that increasing the noise variance of an awgn channel decreases its capacity however we show that this is no longer the case for noise channels consider two agsdn channels with parameters x and x respectively which are defined over x with the following formulas x x x no input cost constraints are imposed it is clear that x x for all x x however by considering the 
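To make the hypotheses of the infinite-capacity theorem concrete, one illustrative instance (constructed here for exposition, not taken from the text) is the AGSDN channel \(Y = X + \sigma(X)Z\) with \(Z\sim\mathcal N(0,1)\), \(\mathcal X=(0,1]\) and \(\sigma(x)=x\): the sequence \(x_i = 1/i \in \mathcal X\) has \(\sigma(x_i)=1/i\to 0\), its limit \(c=0\) lies outside \(\mathcal X\), the open interval \((0,1)\subseteq\mathcal X\) has \(\sigma\) positive, continuous and monotone on it, and the standard Gaussian noise satisfies the stated regularity conditions on \(Z\). Hence, with no input cost constraints, \(C=\infty\), in line with the remark that the capacity of the fading-type channel blows up as the additive-noise level tends to zero.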
constraint x from theorem we obtain that the capacity of the first channel is finite while from theorem we obtain that the capacity of the second channel is therefore the constraint x x for all x x does not necessarily imply that the capacity of an agsdn channel with x is less than or equal to the capacity of an agsdn with x lower bounds on capacity to compute capacity from one has to take maximum over probability measures in a potentially large class practically speaking one can only find a finite number of measures in f and evaluate mutual information for them ideally should form an of the entire f with an appropriate distance metric so that mutual information at every arbitrary measure in f can be approximated with one of the measures this can be computationally cumbersome even for measures defined on a finite interval as a result it is desirable to find explicit lower bounds on the capacity observe that i x y h y h y to compute the term h y observe that given x x we have y x x z and thus h y x log x h z see lemma thus h y e log x h z however the term p h y is more challenging to handle authors in consider an agsdn channel with x x for x as well as show that h y h x and hence i x y h x y this implies that instead of maximizing i x y one can maximize h x y to obtain a lower bound the proof of the relation h y h x in is we review it here to motivate our own techniques in this paper first consider the special case of in this case we get x and the agdsn reduces to awgn channel y x in this special case one obtains the desired equation by writing h y h y h x h h x p however the above argument does not extend for the case of since x x depends on x as argued in without loss of generality one mayp assume that this is because one can express a noise channel with x x as y x where and are independent standard normal variables thus we can write y where x from the argument for awgn channels we have that h y h thus it suffices to show that h h x this is the special case of the problem for and corresponds to x x to show h y h x when y x xz more advanced ideas are utilized in the key observation is the following assume that x x gx x x be exponentially distributed with mean e x then y has density p exp gy y p then for any arbitrary input distribution fx from the data processing property of the relative entropy we have d fy kgy d fx kgx where fy is the output density for input density fx once simplified this equation leads to h fy h fx the above argument crucially depends on the particular form of the output distribution corresponding to the input p exponential distribution it is a specific argument that works for the specific choice of x x and normal distribution for z and can not be readily extended to other choices of and fz z in this paper we propose two approaches to handle more general settings idea we provide the following novel general lemma that establishes h y h x for a large class of asdn channels lemma take an arbitrary channel characterized by the conditional pdf fy satisfying z fy dx y x where x and y are the support of channel input x and channel output y respectively take an arbitrary input pdf fx x on x resulting in an output pdf fy y on y assuming that h x and h y exist we have h y h x the proof is provided in section as an example lemma yields an alternative proof for the result of for an agsdn p channel note that as we mentioned before in order to prove that h y h x for x x we only need to prove it for x c x to this end observe that since x we have z z fy dx x dx x x z e dv y where x v 
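Since evaluating I(X;Y) for even a single candidate input measure is nontrivial, the following Python sketch shows one crude way to do it numerically for an AGSDN channel, using the decomposition I(X;Y) = h(Y) - h(Y|X) and the identity h(Y|X) = E[log sigma(X)] + h(Z) quoted in this section. The choice sigma(x) = sqrt(c1 + c2 x), the uniform input on [0, A] and all constants are assumptions made only for the illustration.

import numpy as np

# Monte Carlo sketch (illustration only, not from the text): estimate I(X;Y) = h(Y) - h(Y|X)
# for Y = X + sigma(X)*Z with Z ~ N(0,1), using h(Y|X) = E[log sigma(X)] + h(Z).
rng = np.random.default_rng(0)
c1, c2, A = 0.1, 1.0, 4.0                     # hypothetical channel and input parameters

def sigma(x):
    return np.sqrt(c1 + c2 * x)

x = rng.uniform(0.0, A, size=50_000)
y = x + sigma(x) * rng.standard_normal(x.size)

x_ref = rng.uniform(0.0, A, size=1_000)       # fresh input sample used to estimate the output pdf
def f_y(yv):                                  # f_Y(y) = E_X[(1/sigma(X)) * phi((y - X)/sigma(X))]
    z = (yv[:, None] - x_ref[None, :]) / sigma(x_ref)[None, :]
    return np.mean(np.exp(-0.5 * z**2) / (np.sqrt(2 * np.pi) * sigma(x_ref)[None, :]), axis=1)

h_y = -np.mean(np.log(f_y(y[:2_000])))                                    # h(Y) ~ -E[log f_Y(Y)]
h_y_given_x = np.mean(np.log(sigma(x))) + 0.5 * np.log(2 * np.pi * np.e)  # E[log sigma(X)] + h(Z)
print("I(X;Y) estimate (nats):", h_y - h_y_given_x)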
and v the proof for equation is given in appendix a idea we provide a variation of the type of argument given in by introducing a number of new steps this would adapt the argument to asdn channels in the following sections we discuss the above two ideas separately first idea for lower bound theorem assume an asdn channel defined in where u with u and noise with pdf fz z such that z u dx fz x x is riemann integrable on u x then if x is continuous random variable with pdf fx x supported over u i x y h x h z provided that the integrals defining h x and h z converge to a real number or the function x is an increasing function of x defined by z x x dt u t c where c u is arbitrary remark note that for any c u x is well defined see definition by selecting a different u we obtain a different function x such that z c x x dt t however h x is invariant with respect to adding constant terms and thus invariant with respect to different choices of c u the above theorem is proved in section corollary let w x since is a function as x we obtain max h x h z max h w h z fw where f is defined in and w fw belongs to g fw ac supp x e gi w for all i k here x x x x hence from theorem we obtain that max i x y max h w h z fw in order to find the maximum of h w over fw g we can use known results on maximum entropy probability distributions see chapter corollary consider an asdn channel satisfying and assume that the only input constraint is x u x u then from corollary we obtain the lower bound u max h w h z log dx h z fw x by taking a uniform distribution for fw w over x if this set is bounded section else if x has an infinite length the capacity is infinity by choosing a pdf for w whose differential entropy is infinity see example the equivalent pdf fx x for x is the pdf of w for more insight we provide the following example example consider an awgn channel namely an agsdn channel with x with x r and z n let us restrict to measures that satisfy the power constraint e x p that is x since z r fz x x z dx r p e dx we can apply corollary here w x thus the lower bound is max fw e w p h w h z p log where it is achieved by gaussian distribution w n p section it is that the capacity of awgn channel is p c log comparing and we see that the lower bound is very close to the capacity in the high snr regime as another example consider the constraints x and e x on admissible input measures here we obtain the lower bound max fw w e w h w h z e log where we used the fact that the maximum is achieved by the exponential distribution fw w exp for w and fw w for w section unlike the first example above an exact capacity formula for this channel is not known second idea for lower bound now we are going to provide another lower bound which is more appropriate in the channels for which z is either or and x is a monotonic function an example of such channels is the molecular timing channel discussed in the introduction theorem assume an asdn channel defined in with u for u if x is a continuous random variable with pdf fx x and x is continuous and monotonic over u is riemann integrable on u x then i x y x provided that are and in order to define the variables and the function x take some arbitrary and proceed as follows if the function x is increasing over u let x z x log x c pr z dt t if the function x is decreasing over u let z x log x c pr z x dt t where c u is arbitrary and p log p p log p remark observe that in both cases x is an strictly increasing function of x defined over u as x and log x is increasing similar to remark the choice 
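As a numerical companion to the peak-constraint corollary above, the lower bound log of the integral of dx/sigma(x) over X, minus h(Z), can be evaluated by quadrature. The function sigma(x) = sqrt(c1 + c2 x) and the constants below are assumptions made for the illustration, not values from the text.

import numpy as np
from scipy.integrate import quad

# corollary-style lower bound: log( integral over X of dx/sigma(x) ) - h(Z), in nats,
# for a hypothetical sigma(x) = sqrt(c1 + c2*x), X = [0, A], and Z ~ N(0,1)
c1, c2, A = 0.1, 1.0, 10.0
length, _ = quad(lambda x: 1.0 / np.sqrt(c1 + c2 * x), 0.0, A)   # length of Lambda(X)
h_z = 0.5 * np.log(2 * np.pi * np.e)                             # differential entropy of N(0,1)
print("capacity lower bound (nats):", np.log(length) - h_z)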
of c u does not affect the value of h x and hence the lower bound however the choice of affects the lower bound the above theorem is proved in section corollary similar to corollary let v x since is a strictly increasing function we obtain max x max v fv where f is defined in and v fv belongs to g fv ac supp x e gi v for all i k hence from theorem we obtain that max i x y max h v fv where and are constants defined in theorem as mentioned earlier to maximize h v over fv g we can use known results on maximum entropy probability distributions see chapter corollary consider an asdn channel satisfying and assume that the only input constraint is x u x u then from corollary we obtain the lower bound z u max h v log log dx fv x where and are defined in theorem and lim x lim x the lower bound is achieved by taking a uniform distribution for fv w over x if this set is bounded section else if x has an infinite length the capacity is infinity by choosing a pdf fv v such that h v see example the equivalent pdf fx x for x is the pdf of v an upper bound we begin by reviewing upper bound given in to motivate our own upper bound the upper bound in works by utilizing topsoe s inequality to bound mutual information i x y from above as follows i x y d f kq y for any arbitrary pdf q y on output y the distribution q y is chosen carefully to allow for calculation of the above kl divergence the particular form of x x makes explicit calculations possible the second difficulty in calculating the above expression is that we need to take expected value over input measure however the capacity achieving input measure is not known this difficulty is addressed by the technique of input distributions that escape to infinity under some assumptions about the peak constraint in this part we give an upper bound based on the kl symmetrized upper bound of the idea is that i x y d y d y d y dsym y our upper bound has the advantage of being applicable to a large class of x to state this upper bound let cov x y e xy e x e y be the covariance function between two random variables x and y theorem for any agsdn channel defined in we have x i x y cov x x cov x x x provided that the covariance terms on the right hand side are finite the proof can be found in section corollary for an agsdn channel with parameters x z n and x u if functions x and x are increasing over x and x is convex over x then f max i x y u uf e x where f u u u the corollary is proved in section remark even though corollary is with the assumption if we formally set we see that f and the upper bound on capacity becomes infinity this is consistent with theorem when p corollary the particular choice of x x that was motivated by applications discussed in the introduction has the property that x x are increasing and theorem can be applied some useful lemmas in this section we provide three lemmas used in the proof of theorems in this paper lemma in an asdn channel defined in with continuous random variable noise z with pdf fz and noise coefficient x the conditional measure has the following pdf fy y fz x x moreover y is a continuous random variable with the pdf fy y e fz x x furthermore if h z exists h y can be defined and is equal to h y e log x h z the lemma is proved in section lemma let x be a continuous random variable with pdf fx x for any function u such that x is riemann integrable over u and x where u we have that h x e log x h x where z x x t dt c where c u is an arbitrary constant note that if the side does not exist or becomes the same occurs for the side and 
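For reference, the displays behind the symmetrized-divergence idea and the two lemmas just stated can be written as follows (a best-effort reconstruction in standard notation; Lambda is the function from the first lower bound):

\[
I(X;Y) = D\big(\mu_{X,Y}\,\|\,\mu_X\times\mu_Y\big)
\;\le\; D\big(\mu_{X,Y}\,\|\,\mu_X\times\mu_Y\big) + D\big(\mu_X\times\mu_Y\,\|\,\mu_{X,Y}\big)
\;=\; D_{\mathrm{sym}}\big(\mu_{X,Y},\,\mu_X\times\mu_Y\big),
\]
\[
f_{Y|X}(y\,|\,x) = \frac{1}{\sigma(x)}\, f_Z\!\Big(\frac{y-x}{\sigma(x)}\Big), \qquad
f_Y(y) = \mathbb{E}_X\!\left[\frac{1}{\sigma(X)}\, f_Z\!\Big(\frac{y-X}{\sigma(X)}\Big)\right], \qquad
h(Y\,|\,X) = \mathbb{E}\big[\log \sigma(X)\big] + h(Z),
\]
\[
h\big(\Lambda(X)\big) = h(X) - \mathbb{E}\big[\log\sigma(X)\big], \qquad \Lambda(x) = \int_c^x \frac{dt}{\sigma(t)} .
\]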
vice versa the lemma is proved in section lemma let x be a random variable with probability measure and the functions w x and v x be increasing over u where u if v x is convex over u then max e x cov w x v x w u w v u v where furthermore for the case u a maximizer of is the pmf px px u for the case u if v x is linear a maximizer of is the pmf px px u the proof is given in section capacity capacity nats per channel use capacity nats per channel use kl upper bound capacity corollary lower bound corollary lower bound a figure capacity and symmetrized divergence upper bound in terms of for agsdn channel with a p and function x x figure capacity and lower bound at corollary and in terms of agsdn channel with function x x numerical results p in this section some numerical results are given for x x and z n the upper bound corollary and the capacity are depicted in the logarithmic scale in fig where we have considered the peak constraint a and average constraint it can be observed that the distance between the upper bound and the capacity is a small constant in the logarithmic scale and low snr regime this is consistent with that argues that the upper bound based on symmetrized kl divergence is mostly suitable for the low snr regime p the lower bounds of corollaries and are plotted in fig for the function x x in terms of peak constraint a here is assumed the lower bound of corollary for x a is computed by the following closed form formula q z a p log h z log a c log x while the lower bound of corollary equals p z a q a a p log log dx log log a c c where and pr z log log we maximized over in order to find the lower bound of corollary the first lower bound is better than the second one mainly because of the multiplicative coefficient of the second lower bound since the second lower bound is for a more general class of channels we should consider the positive or negative part of the support of z causing a multiplicative of coefficient for the gaussian noise however if the support of z is positive or negative reals the two lower bounds do not differ much proofs proof of theorem finiteness of capacity the first step is to show that the capacity is finite sup i x y to prove this it suffices to show that the supremum of both h y and h y over f are finite y y uniformly on utilizing lemma the existence and boundedness of h y is obtained as follows y max log log z uniformly on from lemma we obtain that y is continuous with a pdf fy y to prove that the integral defining h y is convergent to a finite value existence of entropy and furthermore the integral is convergent to a value that is bounded uniformly on f it is sufficient to show that there are some positive real and v such that for any f we have sup fy y e also from lemma we obtain that for any f fy y m thus holds with in order to prove note that e e max e e uniformly on thus h y is and uniformly bounded on hence from the definition of mutual information we obtain that i x y h y h y is bounded uniformly for existence of a maximizer let c sup i x y we would like to prove that the above supremum is a maximum equation implies existence k of a sequence of measures in f such that lim i xk yk c k k where xk and yk is the output of the channel when the input is xk furthermore k without loss of generality we can assume that is convergent in the measure to a measure the reason is that since x is compact the set f is also compact with respect to the measure proposition thus any sequence of measures in f has a convergent k subsequence with no loss of generality we 
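The following Python snippet is a quick numerical sanity check of the two-point-support claim in the covariance lemma just stated, in the special case where v is linear and the mean constraint is not binding. The functions w(x) = sqrt(x), v(x) = 2x + 1 and all constants are assumptions made only for the illustration: random pmfs on a grid of [l, u] never beat the half-half pmf on the endpoints {l, u}, whose covariance equals (w(u) - w(l))(v(u) - v(l))/4.

import numpy as np

# sanity check: for increasing w and linear v on [l,u], cov(w(X), v(X)) over pmfs on a grid
# of [l,u] stays below the value attained by the two-point pmf putting mass 1/2 on l and u
rng = np.random.default_rng(1)
l, u, m = 1.0, 4.0, 41
grid = np.linspace(l, u, m)
w, v = np.sqrt(grid), 2.0 * grid + 1.0

def cov_wv(p):                                   # cov(w(X), v(X)) under the pmf p on the grid
    return np.sum(p * w * v) - np.sum(p * w) * np.sum(p * v)

best_random = max(cov_wv(rng.dirichlet(np.ones(m))) for _ in range(20_000))
two_point = 0.25 * (w[-1] - w[0]) * (v[-1] - v[0])   # value of the half-half pmf on {l, u}
print(best_random, "<=", two_point)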
can take the subsequence as thus from convergence in measure we know that there is f such that lim e g xk e g x for all g r c such that x we would like to prove that i x y c where y is the output measure of the channel when the input measure is this will complete the proof from the argument given in the first part of the proof on finiteness of capacity h y and h y are and finite as a result to show we only need to prove that lim h yk h y lim h yk h y since is obtained from and lemma in order to prove we proceed as follows k step we begin by showing that the sequence is a cauchy sequence with respect to total variation m m n n n kv where for any two arbitrary probability measure and the total variation distance is defined by x k sup ei ei i where em b r is the collection of all the available finite partitions step having established step above we utilize the fact that the space of probability measures is complete with respect to the total variation metric to show this note that by lemma all the yk s have a pdf and hence the total variation can be expressed in terms of the k norm between pdfs lemma from we obtain that this space of pdfs is complete with respect to k norm k as a result converges to some measure yb by with respect to the total variation metric we further claim that this convergence implies that lim h yk h yb k the reason is that from and we see that fy and are uniformly bounded and have finite therefore follows from theorem thus in step we obtain that the sequence h yk has a limit step we show that the limit found in step is equal to h y h yb h y this completes the proof of hence it only remains to prove and proof of since i xk yk is convergent to c for any there exists n such that i xk yk now consider m n n let q be a uniform bernoulli random variable independent of all m previously defined variables when q we sample from measure and when q we n e e defined as follows sample from measure this induces the measure x x m n e we have a markov chain q x e ye let ye be the output of the channel when the input is x note that e ye i xm ym i xn yn i x from concavity of mutual information in input measure we obtain that e ye i xm ym i xn yn c i x e ye c since f is an intersection of half spaces it is convex and as a result thus i x and we obtain that e ye i x e ye q i x e e e e because of the markov chain q x y we obtain i y q x and as a result i ye q d q from the pinsker s inequality we obtain that q kv where q kv is the total variation between the measures q and note that n m q kv kv kv therefore from and we obtain that m n kv kv as a result m n kv k hence by taking we obtain that is a cauchy sequence proof of to this end it suffices to prove that r where e exp is the characteristic function of the random variable x since yk converge to yb in total variation and the fact that convergence in total variation is stronger than weakly convergence from we obtain that their characteristic functions also converge to pointwise hence it suffices to prove that converge to pointwise from we obtain that h i e xk xk z e xk similarly h i e x since xk converges to x in measure and the function g x x is bounded x x x from we obtain that e g xk converges to e g x pointwise uniqueness of the output pdf the proof is the same as the first part of the proof of theorem this completes the proof proof of theorem for a continuous input measure we utilize a later result in the paper namely theorem by choosing c u when c or u c when to use corollary observe that the image of e under has infinite length this is 
because the sequence in e was such that the monotone function converged to zero or infinity on that sequence then it is obtained that any pdf fx such that h x makes i x y infinity if which leads to where x is the bijective function of x defined in the statement of theorem in order to prove that let the random variable be z conditioned to z due to the continuity of z and the fact that pr z we obtain that has a valid pdf z defined by fz z z z where pr z since h z exists and z we obtain that e fz z hence e log e fz z therefore h y exists and y a similar treatment can be used to prove y it remains to construct a discrete pmf with infinite mutual information the statement of the theorem assumes existence of a sequence in an open interval e c x or e c if c such that c is the limit of the sequence converges to or is monotone and continuous over e we now make the following claim about existence of another sequence xi e with certain nice properties claim suppose that one can not find a interval e such that x for all x then there exists a b and a sequence xi e such that if x is increasing pr a z b xi xi xi xi xj xj xj xj xi j n if x is decreasing pr z xi xi xi xi xj xj xj xj xi j n we continue with the proof assuming that this claim is correct we give the proof of this claim later to show how this claim can be used to construct a discrete pmf with infinite mutual information consider the possibility that the assumption of the claim fails x for all x then y x in that interval when x therefore we can provide any discrete distribution in that interval such that h x as a result i x y i x x h x thus we should only consider the case that the assumption of the claim holds assume that x is increasing the construction when x is decreasing is similar fix a given a b xi satisfying and take an arbitrary pmf pi such that x i pi log pi then we define a discrete random variable x taking values in xi such that pr x xi pi we claim that i x y to this end it suffices to show i x y pr a z b i x y z b pr a z b i x y z b proof of define random variable e as following z a b a b from the definition of mutual information we have that i x y i x y i y i y e h e since i x y pr e i x y pr e i x y we conclude proof of since i x y z b h x h a z b it suffices to show that h x h a z b p the equality h x pi log pi follows to prove the other equality note that y belongs to the interval xi xi xi xi when x xi therefore since the intervals xi xi xi xi are disjoint x can be found from y thus x is a function of y when a z b as a result the second equality of is proved now it only remains to prove our claim on the existence of a b and xi we assume that x is increasing the proof when x is decreasing is similar from the assumptions on z that pr z we obtain there exists b such that pr z b as a result we select a since x is monotone we can not have for two arbitrary distinct and in e since this implies that x for all x in between and as a result we shall not worry about the constraint on xi because xi can occur for at most one index i and we can delete that element from the sequence to ensure to show the existence of xi we provide a method to find with respect to xi the method is described below and illustrated in figure take an arbitrary element of observe that since x is continuous and increasing over e the functions x x and x x are continuous and strictly increasing over e as well as x x x x therefore for the case c happening when xi converge to lim x x lim x x c hence for a given xi e due to the intermediate value theorem there exists 
unique satisfying c xi such that xi xi similarly for the case c happening when converge to if xi e there exists unique satisfying c xi such that xi xi it can be easily obtained that the intervals created this way are disjoint and the process will not stop after finite steps therefore the theorem is proved ascending c c x a x x a x x b x x b x c c descending descending ascending x a x x a x x b x x b x c c c figure possible cases for x when c proof of theorem from lemma we obtain that h y exists hence utilizing lemma we can write h y h x i x y h x h y provided that u z fy dx where it is satisfied due to z u z fy dx u fz x x dx where the last inequality comes from the assumption of the theorem from lemma we have that h y e log x h z therefore can be written as i x y h x e log x h z exploiting lemma we obtain that h x e log x h x where x is defined in hence the proof is complete proof of theorem we only prove the case that x is an increasing function over u the proof of the theorem for decreasing functions is similar to the increasing case and we only need to substitute z with z we claim that i x y x y consider random variable e as following z from the definition of mutual information we have that i x y i x y i y i y e h e therefore since i x y pr z i x y pr z i x y we conclude now we find a lower bound for i x y from lemma we obtain that y is a continuous random variable we claim that i x y y h y z y e log x h y h x e log x x h x e log z h x where is obtained from lemma and the fact that random variable z conditioned to z is also continuous when pr z moreover is obtained by adding and subtracting the term e log x note that we had not assumed that x needs to be differentiable we had only assumed that u is continuous and monotonic over u however every monotonic function is differentiable almost everywhere the set of points in which x is not differentiable has lebesgue measure zero we define x to be equal to zero wherever x is not differentiable and we take x to be the derivative of x wherever it is differentiable with this definition of x and from the continuity of x we have that the integral of x x gives us back the function log x since x is an increasing positive function and z we conclude that x x e log z e log x x from lemma and the fact that the integral of x x gives us back the function log x we obtain that x h x e log h x x where x is defined in theorem as a result from we obtain that i x y y h x e log x h x h using this inequality in conjunction with we obtain a lower bound on i x y the lower bound that we would like to prove in the statement of the theorem is that i x y x as a result it suffices to prove that for all continuous random variables x with pdf fx x we have h y h x e log x to this end observe that h y h y z thus if we show that h y z h x e log x the proof is complete we can write that z h y z fz z h y z dz where z is z conditioned to z and the pdf of z is denoted by fz z by defining the function rz x x x we obtain that yz rz x where yz is y which is conditioned to z z since x is a continuous increasing function rz x is a bijection for all z and so its inverse function y exists moreover since x is continuous and rz is a bijection yz is also continuous random variable with pdf fy y defined as following fyz y fx x x where x y thus we have that h y z log fyz yz log e log x fx x x e log x by taking expected value over z from both sides is achieved therefore the theorem is proved proof of theorem based on we obtain that i x y dsym y utilizing lemma we obtain that the pdfs fy y 
and fy exist and are therefore dsym y y d y fy y fy y y log log fy y fy y y log e log fy y fy y log e log fy y fy y log h y fy y again from lemma since z n we obtain that log y x log x fy x therefore since z y x x we obtain that h h y e log x the measure zero points where x is not differentiable affect fyz y on a measure zero points however note that fyz y fx y is always correct and thus the values of fyz y on a measure zero set of points are not important in addition h y x log e log x fy y x by expanding we obtain that x y x y e y e x x x x by substituting y with x x z and simplifying we can write y x x e e x e x x x x x x e x x x x x e e x e x x x x x e e x x which equals to cov x x x x x x therefore from all above equations the theorem is proved proof of corollary observe that u u f u u u u u u then using theorem it suffices to prove the following two inequalities u u cov x x x u and cov x where x x u u u u u since x is increasing we obtain that x and x are also increasing therefore from lemma equation is proved similarly is also obtained from lemma because x and x are increasing functions proof of lemma from definition we obtain that fy y h x h y e log fx x now utilizing the inequality log x x it suffuces to prove that fy y e fx x to this end we can write z z fy y fy y fx y x y e dy dx fx x fx x x y z z fy fy y dy dx zx y z fy y fy dx dy y x z fy y dy y where the last inequality holds because of the assumption of the lemma therefore the lemma is proved proof of lemma the conditional pdf fy can be easily obtained from the definition of channel in in order to calculate h y using the definition we can write h y e e log x e y fy y f z x exploiting the fact that y x x z h y is obtained it only remains to prove that y is continuous to this end from the definition of the channel in we obtain that fy y pr z fz x x where fy y and fz z are the cdfs of the random variables y and z defined by fy y pr y y and fz z pr z z respectively in order to prove the claim about fy y we must show that z y e fz dy e fz x x x for all y because of the fubini s theorem chapter it is equivalent to x lim e fz x equivalently we need to show that for any there exists m such that x e fz x since fz z there exists r such that fz z therefore since fz z for all z we can write x x pr e fz x x we can write pr x pr x x x now we can take m large enough such that x pr x n as a result is proved proof of lemma since x is riemann integrable x is continuous and since x x is a strictly increasing function over the support of x it yields that x is an injective function and there exists an inverse function for now define random variable y x assume that the pdf of x is fx x since x is a continuous random variable and x is a bijection y is also continuous random variable with the following pdf fy y fx x d dx x where x y hence we have that fy y fx y y now we can calculate the differential entropy of y as following h y log fy y y log fx y x log fx x x e log x therefore the lemma is proved proof of lemma first assume that v x ax b with a we will prove the general case later in this case we claim that the support of the optimal solution only needs to have two members to this end note that the following problem is equivalent to the original problem defined in max max e x cov w x v x since v x ax b for a given we would like to maximize cov w x v x e w x v x b e w x which is a linear function of subject to e x which is also a linear function of by the standard cardinality reduction technique fenchel s extension of the caratheodory theorem we 
can reduce the support of to at most two members see appendix c for a discussion of the technique assume that the support of is where u with pmf px px thus we can simplify cov w x v x as cov w x v x x px xi w xi v xi x x px xi px xj w xi v xj p w w v v where the last equality can be obtained by expanding the sums thus the problem defined in equals the following max p p p w w v v we claim that the optimal choice for is to see this observe that w x and v x are increasing functions and hence p p p and w w v v w w v v hence is optimal substituting v x ax b we obtain that the problem is equivalent with the following a max p p w x w x p p utilizing kkt conditions one obtains that the optimal solution is u u x now we consider the general case of v x being a convex function but not necessarily linear since v x is convex we obtain that v x v x v u v u the right hand side is the line that connects the two points v and u v u this line lies above the curve x v x for any x u therefore e v x v e x v u v thus e x implies that e v x where v v u v now we relax the optimization problem and consider max e v x cov w x v x the solution of the above optimization problem is an upper bound for the original problem because the feasible set of the original problem is a subset of the feasible set of the relaxed optimization problem now using similar ideas as in the linear case we conclude that the support of the optimal has at most two members and the optimal solution is u v u v u u it can be verified that v u v u v note that in the case u we obtain that e x u where x distributed with the optimal probability measure as a result the constraint e x is redundant therefore the support of the optimal has two members which shows that the upper bound is tight in this case conclusion in this paper we studied the capacity of a class of additive noise channels these channels are of importance in molecular and optical communication we also gave a number of new application of such channels in the introduction a set of necessary and a set of sufficient conditions for finiteness of capacity were given we then introduced two new techniques for proving explicit lower bounds on the capacity as a result we obtained two lower bounds on the capacity these lower bounds were helpful in inspecting when channel capacity becomes infinity we also provided an upper bound using the symmetrized kl divergence bound references moser capacity results of an optical intensity channel with gaussian noise ieee transactions on information theory vol no pp pierobon and akyildiz noise analysis for molecular communication in nanonetworks ieee transactions on signal processing vol no pp aminian ghazani mirmohseni and fekri on the capacity of and molecular communications with ieee transactions on molecular biological and communications vol no pp arjmandi gohari and bateni nanonetworking a new modulation technique and performance analysis ieee communications letters vol no pp gohari mirmohseni and information theory of molecular communication directions and challenges to appear in ieee transactions on molecular biological and communications srinivas eckford and adve molecular communication in fluid media the additive inverse gaussian noise channel ieee transactions on information theory vol no pp khormuji on the capacity of molecular communication over the aign channel in information sciences and systems ciss annual conference on pp ieee li moser and guo capacity of the memoryless additive inverse gaussian noise channel ieee journal on selected areas in 
communications vol no pp farsad murin eckford and goldsmith capacity limits of molecular timing channels chan hranilovic and kschischang probability measure for conditionally gaussian channels with bounded inputs ieee transactions on information theory vol no pp smith the information capacity of sclar gaussian channels information and control vol no pp jiang wang wang and dai a tight upper bound on channel capacity for visible light communications ieee communications letters vol no pp lapidoth moser and wigger on the capacity of optical intensity channels ieee transactions on information theory vol no pp chen hajek koetter and madhow on fixed input distributions for noncoherent communication over channels vol no pp aminian arjmandi gohari and mitra capacity of diffusionbased molecular communication networks over channels ieee transactions on molecular biological and communications vol no pp ihara information theory for continuous systems singapore world scientific cover and j thomas elements of information theory new york john wiley sons topsoe an information theoretical identity and a problem involving capacity studia scientiarum math hungarica vol no ghourchian gohari and amini existence and continuity of differential entropy for a class of distributions ieee communications letters stein and shakarchi real analysis measure theory integration and hilbert spaces new jersey princeton university press el gamal and kim network information theory cambrdige university press a proof of equation take some arbitrary x then equation holds because z z v y e v dv e dv v v z z v y v y e v dv e dv c v v z z v v y e dv e e dv v v z z v y v y e dv e dv v v now utilize the change of variables y v v y v v to the above integrals note that y dv v y dv v for y if v then if v then and if v then y y for y if v if v then and if v then y y now for y we have z z e z z z similarly for y we have e z z z e e e e z e e z therefore the proof is complete
7
stochastic bandits robust to adversarial corruptions mar thodoris vahab renato paes abstract we introduce a new model of stochastic bandits with adversarial corruptions which aims to capture settings where most of the input follows a stochastic pattern but some fraction of it can be adversarially changed to trick the algorithm click fraud fake reviews and email spam the goal of this model is to encourage the design of bandit algorithms that i work well in mixed adversarial and stochastic models and ii whose performance deteriorates gracefully as we move from fully stochastic to fully adversarial models in our model the rewards for all arms are initially drawn from a distribution and are then altered by an adaptive adversary we provide a simple algorithm whose performance gracefully degrades with the total corruption the adversary injected in the data measured by the sum across rounds of the biggest alteration the adversary made in the data in that round this total corruption is denoted by our algorithm provides a guarantee that retains the optimal guarantee up to a logarithmic term if the input is stochastic and whose performance degrades linearly to the amount of corruption c while crucially being agnostic to it we also provide a lower bound showing that this linear degradation is necessary if the algorithm achieves optimal performance in the stochastic setting the lower bound works even for a known amount of corruption a special case in which our algorithm achieves optimal performance without the extra logarithm cornell university teddlyk work supported under nsf grant part of the work was done while the author was interning at google google research mirrokni google research renatoppl introduction in online learning with bandit feedback a learner needs to decide at each time between alternative actions or arms of unknown quality facing a between exploiting profitable past actions or exploring new actions about which she has little information bandit problems are typically classified according to how the rewards are generated in stochastic bandits rewards are drawn from fixed but unknown distributions which models settings where the alternatives follow particular patterns and do not react to the learner the other extreme is adversarial bandits which are robust to rewards that are specifically designed to trick the learner as in settings in this paper we focus on settings where the overall behavior is essentially stochastic but a small fraction of the rewards can be adversarially changed classic stochastic bandit algorithms like upper confidence bound ucb or active arm elimination aae base most of their decisions on a few observations made in an initial phase of the algorithm and therefore can be easily tricked into incurring linear regret if very few arms are corrupted adversarial bandit algorithms like are not fooled by such tricks but can not exploit the fact that the input is mostly stochastic our goal is to robustify the stochastic setting by designing algorithms that can tolerate corruptions and still be able to exploit the stochastic nature of the input the algorithms we design are agnostic to the corruption they can tolerate any level of corruption and the guarantee degrades gracefully as more corruption is added moreover we prove lower bounds showing that our results are tight up to a logarithmic factor before we explain our technical contribution in detail we describe examples of settings we have in mind click fraud in online advertising the platform selects for each pageview an 
ad to display and obtains a certain reward if the user clicks on the ad the click probabilities are unknown the tension between repeatedly displaying a particular profitable ad that provides reliable revenue and exploring other potentially more rewarding options is a major application of stochastic bandits in the ads industry if it weren t for a phenomenon known as click fraud this would be a textbook example of stochastic bandits in click fraud botnets maliciously simulate users clicking on an ad to trick learning algorithms one example is a bot consistently making searches to trigger some ad and not clicking on it to make it seem like a certain ad has very low in order to boost its competitor recommendation systems a platform recommending activities or services to a user faces the same suggesting new restaurants leads to faster learning of the best spots but may result to dissatisfaction of the customers who are led to disappointing experiences while most inputs follow a stochastic pattern some inputs are typically corrupted either maliciously fake reviews by competitors or construction makes the restaurant less desirable in certain interval this corruption may again exhibit arbitrary patterns and is not identically distributed over time yet it is dwarfed by the fact that most of the input is stochastic there are several other such examples emails mostly follow a stochastic pattern except a fraction of them which are spam and are designed to trick algorithms internet searches follow a predictable pattern except certain spikes caused by unpredictable events data collection used in the econometric process often suffers from errors that affect a small part of the input in all those cases the vast majority of the input follows a predictable pattern but a fraction of the samples are corrupted our contribution our model in this paper we introduce a new model of stochastic bandits with adversarial corruptions the goal of this model is to encourage the design of bandit algorithms that i work well in mixed adversarial and stochastic models and ii whose performance deteriorates gracefully as we move from fully stochastic to fully adversarial models in this model there are k arms each associated with a fixed reward distribution f a at each round t a random reward rst a f a is drawn and an adversary can change the reward to r t a possibly using information about the realizations of a from both the current and previous rounds t as well as the probability that the learner puts on each arm the learner then draws an arm at and obtains r t at both as p reward and feedback we say that the adversary is if in every sample path we have t maxa t a rst a our results the main result theorem in section is a learning algorithm we term active arm elimination race that with probability has regret x k c log log t log a where a is the gap of arm a the difference in stochastic means of arm a and the optimal arm for arms with very small gap when a t the inverse dependence on the gap can be replaced by t it is possible to improve the bound by log factor for t log kt two maximum expected regret against any fixed arm obtaining o a important features of the algorithm are that the guarantee is agnostic the algorithm does not need to know the corruption level the guarantee is provided with respect to how much corruption was added in retrospect if the corruption level is known we can remove the dependence on k log as shown in theorem high probability our bounds hold with high probability which is important for practical 
applications as the ones described above in contrast the weaker definition of often hides events with large regret that are offset by events with large negative regret the stochastic case corresponds to c in which case we recover a bound that is t than the guarantee provided by ucb our algorithm obtains o log t log a with probability while ucb obtains this bound without the log t term en the result we show an algorithm that for any known c provides p p kt log regret o for stochastic input and o k c log a if it is a in other words if we only need to tolerate either a known level c or zero corruptions we save a logarithmic factor from the bound and match the bound provided by ucb in the stochastic case another question is whether the linear dependence on the corruption level is tight in section we show that it can not be improved upon without decay in the stochastic guarantee while still guaranteeing logarithmic regret when the input is stochastic the lower bound is an adaptation from the adversarial to the corrupted setting of a result from auer and chiang this holds even for the case where the corruptions are either or a known level c where our algorithm provides a matching upper bound we prove in theorem that an algorithm with o log t in the stochastic setting c then for every constant there is a o t instance where the algorithm incurs regret t with constant probability our algorithm can also be viewed through the lens of the best of both worlds literature where the goal is to design algorithms that simultaneously provide logarithmic regret guarantees in the stochastic regime and guarantees in the adversarial in section we sketch how our algorithm can be appropriately modified to obtain for any constant a a e e o c for c o t and o t otherwise we observe that the results in the best of both worlds literature correspond to the case where a we note that such bounds are obtained for and not regret with our techniques the starting point of our design are classical stochastic bandit learning algorithms like ucb and active arm elimination such algorithms are very susceptible to corruptions since they base most of their decisions on a small initial exploration phase therefore with a small number of corruptions it is possible to completely trick the algorithm into eliminating the optimal arm we address this issue by robustifying them using a approach the learning algorithm consists of multiple layers running in parallel the layers have decreasing speed and increasing tolerance to corruption the first layer finishes very fast selecting an arm as optimal but provides no tolerance to corruption subsequent layers are more robust but also slower the resulting algorithm is a race between different layers for picking the optimal arm once the fastest layer finishes it provides a first crude estimate of the optimal arm once slower layers finish we obtain finer and finer estimates of the optimal arm our second main idea is that we can obtain more robust algorithms by subsampling if a layer is only selected with probability p it only receives in expectation a of the corruption injected by the adversary if p is low enough the layer behaves almost as if it was stochastic finally we couple the different layers together by a process of global eliminations this process enables slower layers to eliminate arms in faster layers such a process is necessary for preventing inaccurate layers from pulling suboptimal arms too often related work online learning with stochastic rewards goes back to the seminal work of lai 
and robbins the case of adversarial rewards was introduced by auer et al the reader is referred to the books of and lugosi bubeck and and slivkins for an elaborate overview of the area these two extremes suffer from orthogonal problems the one is overoptimistic expecting that all rewards come from the same distribution while the other one is too pessimistic in order to be protected against malicious adversaries our work addresses the middle ground rewards come from distributions but are often adversarially corrupted this is motivated by the of stochastic learning algorithms to even small corruption levels closely related to our work lie the works on best of both worlds guarantees these works achieve up to logarithmic factors the optimal guarantee for stochastic rewards and the optimal or actual regret guarantee for adversarial rewards bubeck and slivkins and auer and chiang begin from a stochastic algorithm and test whether they encounter behavior in which case they switch to adversarial algorithm in contrast seldin et al begin from an adversarial algorithm with very optimistic learning rate and adapt it if they encounter such behavior recently and independently to this work wei and luo provide a best of both worlds result with a guarantee on the adversarial setting via a novel analysis of the omd algorithm of foster et al although the aforementioned algorithms are very elegant their analysis is not robust to inputs that are slightly away from stochastic our work bridges this gap by designing algorithms with a more smooth behavior for instances there have been other works that attempt to provide improved guarantees than the adversarial setting when instances are well behaved hazan and kale offer regret guarantees that scale with the variance of the losses instead of the time horizon this guarantee is meaningful in settings that have a very predictable nature and have usually the same performance such as routing however they do not address most applications of stochastic bandits in click fraud for example the rewards come from bernoulli distributions and the variance of such a distribution is high even if the input is totally stochastic another approach is the work of shamir and szlak who consider an input that is adversarial but random local permutations are applied to obtain a more benign instance this approach is very relevant in settings like buffering but is again not applicable to our settings on the opposite side attempting to provide improved guarantees for the stochastic setting or enhancing their range is a very active area of research for instance the moss algorithm of audibert and bubeck provides the optimal upper bound for stochastic bandits while retaining the optimal stochastic guarantee the algorithm of garivier and provides improved constants in the upper bound of the stochastic guarantee matching the lower bound of lai and robbins for bernoulli rewards the robust ucb algorithm extends the results to rewards replacing with the weaker assumption of bounded variance however all the above results are not robust to corruptions from an adaptive adversary due to their deterministic nature since the adversary knows the arm the learner will select they can always corrupt the optimal arm whenever it is about to be selected and therefore cause the learner to either play it multiple times even if it is suboptimal or decide against playing it even with a small amount of corruption similarly as in our lower bound there is also prior work on incorporating corruptions in online decision 
making in the online learning front there are two such attempts to the best of our knowledge in their best of both worlds result seldin and slivkins allow for some contamination in the data as long as they are obliviously selected and they do not decrease the gap by more than a factor of the second work is a recent paper by gajane et al who suggest a model of corrupted feedback aiming for differential privacy unlike our model their corruptions are neither adversarial nor adaptive both of these works make benign assumptions about the nature of corruption and do not address the main roadblock in the settings we consider an adversarial saboteur will try to add faulty data in the beginning to change the order between the two arms and with a minimal corruption she will achieve this goal closer to our model are the works on robust allocation such as online matching with corrupted data unlike online matching though in online learning we can not evaluate the optimum at every round since the algorithm s decisions affect the information it observes last learning in the presence of corruptions has recently received great attention in the batch learning setting for instance recent works study inference under the presence of adversarially corrupted data designing estimators that are robust to corrupted data ing in auctions with some faulty data due to econometrics errors our work suggests a similar framework for the study of online learning that is robust to adversarial corruptions in the more challenging problem of sequential decision making where decisions also affect the information observed model corrupted stochastic bandits we study an online bandit learning setting with k arms each arm a k is associated with a distribution f a with mean a the distributions are assumed to have positive measure only on rewards in and are unknown to the learner we refer to the optimal arm as arg maxa a and define a a we consider an adversary who can corrupt some of the stochastic rewards the adversary is adaptive in the sense that the corrupted rewards can be a function of the realization of the stochastic rewards up to that point and of the learner s choices in previous rounds more formally the protocol between learner and adversary at each round t t is as follows the learner picks a distribution wt over the k arms stochastic rewards are drawn for each arm rst a f a the adversary observes the realizations of rst a as well as rewards and choices of the learner in previous steps and returns a corrupted reward r t a the learner draws arm at wt and observes r t at we refer to maxa t a rst a as the amount of corruption injected in round the instance is if the total injected corruption is at most c for all realizations of the random variables x t a rst a c t a note that the adversary is assumed to be adaptive in the sense that she has access to all the realizations of random variables for all rounds t and the realization of rewards at round t but only knows the player s distribution at round t and not the arm at our guarantees gracefully degrades with the total corruption injected by the adversary regret notions regret corresponds to the difference between the reward obtained by the algorithm and the reward of the best arm in hindsight x reg max r t a r t at a t the regret is a random variable that depends on the random rewards the randomness used by the learner and the randomness of the adversary we say that a regret bound r t holds with probability if p reg r t where the probability is taken over all the three sources 
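In symbols, the displays of this section read as follows (a reconstruction of the corruption budget, the regret and the pseudo-regret in standard notation):

\[
\sum_{t} \max_{a} \big| r_t(a) - r^{s}_t(a) \big| \;\le\; C \qquad \text{(the instance is $C$-corrupted)},
\]
\[
\mathrm{Reg} = \max_{a} \sum_{t} r_t(a) \;-\; \sum_{t} r_t(a_t), \qquad
\mathrm{PseudoReg} = \max_{a}\ \mathbb{E}\Big[\sum_{t} \big(r_t(a) - r_t(a_t)\big)\Big] \;\le\; \mathbb{E}\big[\mathrm{Reg}\big].
\]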
of randomness described we note that is one arm with optimal mean and this does not preclude the existence of other arms with the same mean if more than one such arms exist let be an arbitrary arm with optimal mean and the other arms a with optimal mean have gap a finally is a weaker notion that compares the expected performance of the learner with the arm with the highest expected performance in other words x pseudoreg max e r t a r t at a t we note that by jensen s inequality pseudoreg e reg we often obtain improved bounds for since it allows us to offset large positive regret events with large negative regret events the upper bound active arm elimination race active arm elimination the starting point of our design is the active arm elimination algorithm for stochastic bandits which can be viewed as an alternative presentation of the more famous ucb algorithm it is based on the following idea in an initial exploration phase we pull arms in a fashion and compute an estimate e a as the average empirical reward of arm a after n a pulls of arm a usual concentration arguments establish that with probability at p least the difference of the empirical and actual means is at most wd a o log t a we say that e a wd a e a wd a is the confidence interval of arm a this means in particular that given two arms a and if the difference in empirical means becomes larger than the widths of the confidence intervals e a e wd a wd then with high probability arm a is not optimal once this happens the algorithm eliminates arm by removing it from the rotation after both arms a and the optimal arm are pulled o log t a times the confidence intervals will be small enough that arm a will be eliminated eventually all arms but the optimal are eliminated and we enter what is called the exploitation phase in this phase we only pull the arm with optimal mean before we enter exploitation we pulled each suboptimal arm a at most o log t a times each of thosep suboptimal pulls incurs regret a in expectation which leads to the bound of o log t a this bound can also be converted to a high probability bound if we replace log t by log arms with small a we note that for the arms that have a t the inverse dependence on the gap may initially seem vacuous for instance when there are two optimal arms a with the same mean the upper bound becomes infinite as a however inverse dependence on the gap can be replaced by a t in the case of and t in the case of actual regret due to variance reasons for simplicity of exposition we omit this in the current section but we demonstrate how to perform this replacement in section enlarged confidence intervals the active arm elimination algorithm is clearly not robust to corruption since by corrupting the first o log t steps the adversary can cause the algorithm to eliminate the optimal arm as the algorithm never pulls the suboptimal arms after exploration it is not able to ever recover one initial idea to fix this problem is to enlarge the confidence intervals we can decompose the rewards r t a in two terms rst a ct a where the first term comes from the stochastic reward and the second is the corruption introduced by the adversary if the total corruption introduced by the p adversary is at most c then with width wd a o log t a a a similar analysis to above gives us the following regret bound theorem for corruption if c is a valid upper bound then active arm elimination with log log c with probability n a has regret o wd a n a a proof sketch the proof follows the standard analysis of active arm 
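The following Python sketch implements the classic active arm elimination subroutine described above (the starting point of the design, not the corruption-robust race of this paper). The constant 2 in the confidence width and the Bernoulli test arms are assumptions made for the illustration.

import numpy as np

# minimal sketch of active arm elimination for stochastic bandits: round-robin over active
# arms, eliminate arm b once some active arm satisfies mu(a) - wd(a) > mu(b) + wd(b),
# with an assumed width wd(a) = sqrt(2*log(T)/n(a)).
def active_arm_elimination(pull, K, T):
    n = np.zeros(K); s = np.zeros(K)                  # pull counts and reward sums
    mu = np.zeros(K)
    active = set(range(K))
    for t in range(T):
        a = min(active, key=lambda i: n[i])           # least-pulled active arm (round-robin)
        s[a] += pull(a); n[a] += 1
        mu = np.divide(s, n, out=np.zeros(K), where=n > 0)
        wd = np.sqrt(2 * np.log(T) / np.maximum(n, 1.0))
        for b in list(active):
            if len(active) > 1 and any(mu[c] - wd[c] > mu[b] + wd[b] for c in active if c != b):
                active.discard(b)
    return max(active, key=lambda i: mu[i])           # exploit the best surviving arm

# usage sketch: three Bernoulli arms with hypothetical means
rng = np.random.default_rng(0)
means = [0.5, 0.6, 0.7]
best = active_arm_elimination(lambda a: float(rng.random() < means[a]), K=3, T=20_000)
print("selected arm:", best)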
elimination we first establish that with high probability the optimal arm is never inactivated lemma and then upper bound the number of times each suboptimal arm is played lemma the guarantee directly follows by multiplying the number of plays for each arm by its gap a for the guarantee we need to also show that the regret incurred in the meantime is not much more than the above we provide proof details about the theorem and lemmas in appendix lemma with probability at least arm never becomes inactivated lemma with probability at least all arms a become inactivated after n a log plays a stochastic bandits robust to known corruption the drawback of the active arm elimination algorithm with enlarged confidence intervals theorem is that even if there are no corruptions it still incurs a regret proportional to a warm up to p log the main theorem we provide an algorithm that achieves the usual bound of o a p log if the if the input is purely stochastic and at the same time achieves o k c a input is for a known in the next subsection we modify the algorithm to make it agnostic to the corruption level two instances of active arm elimination the first idea is to run two instances of active arm elimination the first is supposed to select the correct arm if there is no corruption and the second is supposed to select the right arm if there is c corruption the first instance is very fast but it is not robust to corruptions the second instance is slower but more precise in the sense that it can tolerate corruptions since the second instance is more trustworthy if the second instance decides to eliminate a certain arm a we eliminate the same arm in the faster instance decrease corruption by to keep the regret low if the input is stochastic the second instance of active arm elimination can not pull a suboptimal arm too many times therefore the technique in theorem alone is not enough the main idea of the algorithm is to make arm a behave as if it was almost stochastic by running the second instance with low probability if the learner selects to run the second instance with probability then when the adversary adds a certain amount of corruption to a certain round the second instance observes that corruption with probability therefore the expected amount of corruption the learner observes in the second instance is constant this makes the arms behave almost like stochastic arms in that instance learning algorithm we obtain our algorithm by combining those ideas we have two instances of active arm elimination which we denote by f fast and s slow each instance keeps an estimate of the mean ef a and es a corresponding to the average empirical reward of that arm and also keeps track of how many times each arm was pulled in that instance nf a and ns a this f allows p us to define a notion of confidence interval in each of the instances we define wd a o log t a as usual and for the slow instance we define slighly larger confidence intervals p wds a o log t a log t a the reason will be clear in a moment also each instance keeps a set of eliminated arms for that instance i f and i s in each round with probability we make a move in the fast instance we choose the next active arm a in the round robin order arm a k i f which was played less often pull this arm and increase nf a and update ef a accordingly as usual if there are two active arms a and such that ef a ef wdf a wdf we eliminate by adding it to i f with the remaining probability we make a move in the slow instance by executing the exact same procedure as 
described for the other instance there is only one difference which causes the two instances to be coupled when we inactivate an arm a in s we also eliminate it in this leaves us with a potential problem it is possible that all arms in the f instance end up being eliminated if we reach that point we play an arbitrary active arm of the slow instance any arm a k i s the resulting algorithm is formally provided in algorithm algorithm active arm elimination race for known corruption c initialize a a i for all a k and f s for rounds t t sample algorithm s with probability else if k i play arm at arg k a update at a e at r t at a and a a while exists arms a a k i with a a eliminate by adding it to i if s then eliminate from the other algorithm by adding it to i f else play an arbitrary arm in the set k i s towards the performance guarantee lemma bounds the amount of corruption that actually enters the slow active arm elimination algorithm which enables the regret guarantee in theorem lemma in algorithm the slow active arm elimination algorithm s observes with probability at least corruption of at most ln during its exploration phase when picked with probability proof sketch if one cared just about the expected corruption that affects s this is at most a constant number since the total corruption is at most c and it affects s with probability to prove a guarantee we require a concentration inequality on martingale differences since the corruptions can be adaptively selected by the adversary we provide the details in appendix q log theorem algorithm run with widths wds a log and wdf a ns a ns a q p p log log log has o for the stochastic case and o k c for a a nf a the case with probability at least proof sketch the result for the stochastic case follows standard arguments for stochastic algorithms since we obtain double the regret of this setting as we run two such algorithms with essentially the same confidence intervals for the case we establish via lemma an upper bound on the corruption that will affect the slow active arm elimination algorithm thanks to the this upper bound is close to a constant instead of depending on c which allows to not incur dependence on c in the stochastic case having this upper bound we can apply it to the algorithm of the previous section to get an upper bound on the number of plays of suboptimal arms in since the algorithms are coupled such a bound implies an upper bound on the regret that it can cause in f as well this is because in expectation the arm is played at most k c times more in f as it may be selected every single time in f prior to getting eliminated by s and f is selected c times more often than to obtain the above guarantee with high probability we lose an extra logarithmic factor the details of the proof are provided in appendix b stochastic bandits robust to agnostic corruption multiple layers of active arm elimination in the previous subsection we designed an algorithm with two layers one is faster but can not tolerate corruptions and the second one is slower but more robust in order to be agnostic to corruption we need to plan for all possible amounts of corruption to achieve this we introduce log t layers each layer is slower but more robust than the previous one we achieve that by selecting the layer with probability proportional to by the argument in the last section if the corruption level is at most c then each layer with log c will observe o corruption in expectation and at most o log t corruption with high probability global eliminations we 
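a sketch of the race between the fast and the slow instance for a known corruption level c, assuming the ActiveArmElimination sketch given earlier is in scope. it exposes the same distribution/update interface as the protocol sketch above; the probability ~1/(2c) of playing the slow instance and the width constants are illustrative, and eliminations made by the slow instance are copied into the fast one.

import numpy as np

class KnownCorruptionRace:
    # two coupled copies of active arm elimination: a fast instance F and a
    # slow instance S that is only played with probability ~ 1/(2C), so the
    # corruption it observes is constant in expectation; eliminations made
    # by S are propagated into F.
    def __init__(self, K, T, C, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.p_slow = min(1.0, 1.0 / (2.0 * max(C, 1.0)))
        self.fast = ActiveArmElimination(K, T, C=0.0)
        self.slow = ActiveArmElimination(K, T, C=np.log(T))  # slightly larger widths
        self.current = None

    def distribution(self):
        # returns a point mass on the arm to be played this round
        self.current = self.slow if self.rng.random() < self.p_slow else self.fast
        if not self.current.active:          # all fast arms eliminated: fall back to S
            self.current = self.slow
        w = np.zeros(self.fast.K)
        w[self.current.next_arm()] = 1.0
        return w

    def update(self, arm, reward):
        self.current.update(arm, reward)
        if self.current is self.slow:
            # coupling: anything S eliminates is eliminated in F as well
            self.fast.active &= self.slow.active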
couple the log t instances through what we call global eliminations if arm a is eliminated by the layer then we eliminate a in all layers this is important to prevent us from pulling arm a too often if arm a is suboptimal and the adversary is e a in then arm a eventually becomes eliminated in the layer after being pulled o e then it takes o a iterations until that layer since layer is played with probability e a from that arm arm is eliminated globally in which case we will have total regret at most o active arm elimination race we now describe our main algorithm in the paper we call it a race since we view it as multiple layers racing to pick the optimal arm the less robust layers are faster so they arrive first and we keep choosing mostly according to them until more robust but slower layers finish and correct or confirm the current selection of the best arm the algorithm keeps log t different instances of active arm elimination the instance has as state the empirical means of each arm a the number a of times each arm a was pulled and the set i of inactive arms the p width of the confidence interval for arm a in the layer is implicitly defined as a o log t a log t a in each round t we sample log t with probability with remaining probability we pick layer when layer is selected we make a move in the active arm elimination instance corresponding to that layer we sample the active arm in that layer with the least number of pulls arm a k i minimizing a in case k i is empty we pull an arbitrary arm from k i for the lowest such that k i is the way we couple different layers is that once arm is eliminated in layer because there is another active arm a in layer such that a a we eliminate arm in all previous layers keeping the invariant that i i i figure provides an example of the state of the algorithm which is formally defined in algorithm arm arm arm d d d lg t elg t nlg t elg t nlg t d d d d elg t d nlg t d figure example of the state of the algorithm for each layer and arm a we keep the estimated mean a and the number of pulls a red cells indicate arms that have been eliminated in that layer if an arm is eliminated in a layer it is eliminated in all previous layers if a layer where all the arms are eliminated like layer in the figure is selected we play an arbitrary active arm with the lowest layer that contains active arms algorithm active arm elimination race initialize a a i for all a k and log t for rounds t t sample layer log t with probability with remaining prob sample if k i play arm at arg k a update at a e at r t at a and a a while exists arms a k i with a a eliminate by adding it to i for all else find minimum such that k i and play an arbitrary arm in that set we now provide the main result of the paper a regret guarantee for algorithm theorem algorithm which is agnostic to the coruption level c when run with widths a q log a log a has regret x k c log log t log a proof sketch similarly to the previous theorem the regret guarantee comes from the summation between layers that are essentially stochastic where the corruption is below corruption level their log r regret since less than c for layer r from each of these layers we incur o a there are at most log t such layers the second term in the theorem is derived the challenge is to bound the regret incurred by layers that are not robust to the corruption however there exists some layer that is above the corruption level by bounding the amount of steps that this level will require in order to inactivate each arm a in the incorrect 
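a sketch of the agnostic multi-layer race, again assuming the ActiveArmElimination sketch from earlier: layer l is sampled with probability proportional to 2^{-l}, its corruption tolerance grows like 2^l, and its eliminations are projected down to all faster layers (global eliminations). the constants are illustrative.

import numpy as np

class AgnosticRace:
    # active arm elimination race: ~log T layers, layer l sampled with
    # probability proportional to 2^{-l}; when a layer eliminates an arm,
    # the arm is eliminated in all lower (faster, less robust) layers too.
    def __init__(self, K, T, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.L = max(int(np.ceil(np.log2(T))), 1)
        self.layers = [ActiveArmElimination(K, T, C=2.0 ** l) for l in range(self.L)]
        self.probs = np.array([2.0 ** -(l + 1) for l in range(self.L)])
        self.probs[0] += 1.0 - self.probs.sum()   # remaining mass goes to the fastest layer
        self.current = None

    def distribution(self):
        l = int(self.rng.choice(self.L, p=self.probs))
        # if every arm in the sampled layer is eliminated, fall back to the
        # lowest layer that still has an active arm
        while not self.layers[l].active:
            l += 1
        self.current = self.layers[l]
        w = np.zeros(self.layers[0].K)
        w[self.current.next_arm()] = 1.0
        return w

    def update(self, arm, reward):
        self.current.update(arm, reward)
        idx = self.layers.index(self.current)
        # global eliminations: project this layer's eliminations downwards
        for lower in self.layers[:idx]:
            lower.active &= self.current.active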
layers via lemma we obtain similarly to theorem a bound on the regret caused by this arm in those layers since we take the minimum such layer and the tolerance of layers is within powers of the fact that its corruption level does not match exactly the corruption that occurred only costs an extra factor of in the regret the details of the proof are provided in appendix the lower bound for the two arms case where the gap between the arms is theorem presents an algorithm which achieves o log if the input is stochastic and o c log with probability if the input is at most we show below that this dependence is tight the lower bound theorem adapts the technique of auer and chiang from the adversarial to the corrupted setting the main idea is that an algorithm with logarithmic regret in the stochastic setting can not query the arm more than log t times this implies a long time period where the learner queries the input only constant number of times by corrupting all rounds before this period an adversary can make the optimal arm look and trick the learner into not pulling the optimal arm for long time causing large regret theorem adapts this argument bounding the expected positive regret e where max x the high probability bounds provided imply bounds on the expected positive regret both proofs are provided in appendix theorem consider a bandits algorithm that has the property that for any stochastic input in the two arm setting it has bounded by c log t where for any there is a corruption level c with t c t and a instance such that with constant probability the regret is c theorem if a bandits algorithm that has the property that for any stochastic input in the two arm setting it has bounded by c t for for any there is a corruption level c with t c t and a instance such that e t for all extensions in this section we discuss some extensions that our algorithm can accommodate definition of corruption we presented all results measuring the corruption as the sum over all rounds of the maximum across arms of the corruption injected by the adversary x max t a rst a a t p t t in fact all our results can be improved via using c a t a rs a and replacing c by max c a c for summand a more formally our main theorem theorem p t becomes t theorem algorithm which is agnostic to the corruptions c a t a rs a when run q with widths a log a log a has regret x k max c c a log log t log a the proof follows the same arguments since it only compares each arm a with this result is nice since the contribution of each arm to the regret is a function only of its own gap and the corruption injected to it and the one injected to arm the latter dependence on the corruption on the optimal arm is essential since the main attack we presented to the classical arguments only corrupts arm the lower bound of the previous section also only adds corruption to dependence on the gap in section all our guarantees have an inverse dependence on the gap a of all arms a note that such a guarantee is completely meaningless for arms with a very small gap for instance if there exist two optimal arms then there is an arm a with a which makes the presented bound infinite and therefore vacuous as we hinted there though this inverse dependence can be improved for arms with small a t our proofs generally relied on setting an upper bound on the number of times that a suboptimal arm is played and thereby providing an upper bound on the regret they cause for arms with a t an alternative analysis is to say that even if they are erroneously selected every 
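the refined per-arm corruption measure mentioned here can be computed directly from the injected corruptions; a small sketch (the array shapes are hypothetical):

import numpy as np

def corruption_levels(stoch, corrupted):
    # stoch, corrupted: (T, K) arrays of stochastic and corrupted rewards.
    # returns the global level C = sum_t max_a |c_t(a)| used in the main
    # theorem and the per-arm levels C(a) = sum_t |c_t(a)| used in the
    # refined bound; max_a C(a) <= C always holds.
    diff = np.abs(np.asarray(corrupted) - np.asarray(stoch))
    C_global = diff.max(axis=1).sum()
    C_per_arm = diff.sum(axis=0)
    return C_global, C_per_arm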
single time we can upper bound the loss in performance for pseudoregret the performance loss if they were selected every single time is a t log a for actual regret one needs to also take into consideration the variance but even if they are selected every single time a hoeffding bound shows that their total reward is with high probability at most t lower than its expectation as a result the inverse dependence on a in our bound can be replaced by min a t a for and min t a for actual regret c can be moreover the careful reader may have noticed that in theorem the dependence a replaced by a sole dependence on c without the gap however this does not extend to the subsequent theorems since the dependence on c there does not come from the upper bound on the corruption experienced this is at most log t due to subsampling instead the dependence on c comes from projecting the correct layer smallest layer robust to corruption to the previous layers via the number of times it will take to eliminate any suboptimal arm uncorrupted objective in applications such as spam the corruptions should not be counted as part of the rewards our algorithm provides the same guarantee in the case of uncorrupted rewards the difference between the performances in the two objectives is at most c one can also observe that the linear dependence on c is still necessary consider arms with and an adversary that corrupts the first c steps making them look identical the learner has no better option than randomly selecting between the two which gives him a regret of under the uncorrupted objective we note that in this setting the linear dependence is necessary unconditionally of the performance of the algorithm in the stochastic setting towards best of all worlds in the previous section we showed that a logarithmic dependence in the stochastic setting comes at the expense of linear dependence on c in the setting if we focus on actual regret a very interesting direction is to achieve such an improvement with either a higher power on the logarithm in the stochastic setting or aiming for instead in fact we can combine our algorithm with the sapo algorithm of auer and chiang and achieve a bicriteria guarantee for for an a specified by the algorithm we achieve our guarantee if the corruption is c t a and at most t otherwise notice that the case a corresponds to the best of both worlds this is done via running the sapo algorithm at the level a log t with probability t instead of having higher layers the sapo algorithm guarantees that the caused by any particular arm is at most logarithmic if the instance is stochastic and at most t if it is adversarial via a beautiful analysis that keeps negative regret of time intervals that have performed well to avoid testing eliminated arms too often in our setting if the corruption level is less than t a the instance behaves as stochastic causing at most logarithmic regret else the instance is corrupted and we can extrapolate the regret in this layer to the whole algorithm as arms that are eliminated in this layer are also eliminated before via global eliminations since the regret there is at most t and this is multiplied by t a this implies a bound of t on acknowledgements the authors would like to thank sid banerjee whose lecture notes on stochastic bandits proved very helpful munoz medina karthik sridharan and tardos for useful discussions manish raghavan for suggestions on the writeup and the anonymous reviewers for the valuable feedback they provided that improved the presentation of the 
paper
references
audibert and bubeck minimax policies for adversarial and stochastic bandits in proceedings of the annual conference on learning theory colt
peter auer and chiang an algorithm with nearly optimal for both stochastic and adversarial bandits in proceedings of the annual conference on learning theory colt
peter auer and paul fischer analysis of the multiarmed bandit problem mach may
peter auer yoav freund and robert schapire the nonstochastic multiarmed bandit problem siam j january
bubeck and regret analysis of stochastic and nonstochastic bandit problems foundations and trends in machine learning
bubeck nicolo and lugosi bandits with heavy tail ieee transactions on information theory
alina beygelzimer john langford lihong li lev reyzin and robert schapire contextual bandit algorithms with supervised learning guarantees in proceedings of the international conference on artificial intelligence and statistics aistats
bubeck and aleksandrs slivkins the best of both worlds stochastic and adversarial bandits in proceedings of the annual conference on learning theory colt
nicolo and gabor lugosi prediction learning and games cambridge university press new york ny usa
yang cai and constantinos daskalakis learning auctions with or without samples proceedings of the ieee annual symposium on foundations of computer science focs
ilias diakonikolas gautam kamath daniel kane jerry li ankur moitra and alistair stewart robust estimators in high dimensions without the computational intractability in proceedings of the ieee annual symposium on foundations of computer science focs
hossein esfandiari nitish korula and vahab mirrokni online allocation with traffic spikes mixing adversarial and stochastic models in proceedings of the acm conference on economics and computation ec
eyal shie mannor and yishay mansour action elimination and stopping conditions for the bandit and reinforcement learning problems journal of machine learning research
dylan j foster zhiyuan li thodoris lykouris karthik sridharan and eva tardos learning in games robustness of fast convergence in advances in neural information processing systems nips
garivier and olivier the algorithm for bounded stochastic bandits and beyond in annual conference on learning theory colt
pratik gajane tanguy urvoy and emilie kaufmann corrupt bandits for privacy preserving input in international conference on algorithmic learning theory alt
elad hazan and satyen kale better algorithms for benign bandits in proceedings of the annual symposium on discrete algorithms soda
lai and herbert robbins asymptotically efficient adaptive allocation rules adv appl march
vahab mirrokni shayan oveis gharan and morteza zadimoghaddam simultaneous approximations for adversarial and stochastic online budgeted allocation in proceedings of the annual symposium on discrete algorithms soda
yishay mansour aviad rubinstein and moshe tennenholtz robust probabilistic inference in proceedings of the annual symposium on discrete algorithms soda
yevgeny seldin and lugosi an improved parametrization and analysis of the algorithm for stochastic and adversarial bandits in proceedings of the conference on learning theory colt
aleksandrs slivkins introduction to bandits
yevgeny seldin and aleksandrs slivkins one practical algorithm for both stochastic and adversarial bandits in proceedings of the international conference on machine learning icml
ohad shamir and liran szlak online learning with local permutations and delayed feedback in proceedings of the international
conference on machine learning icml wei and haipeng luo more adaptive algorithms for adversarial bandits corr a supplementary material on section in this section we provide the proof q of theorem note that in the lemma statements the width is c for any arm a defined as in the theorem wd a log n a n a lemma restated with probability at least arm never becomes eliminated proof the crux of the proof lies in establishing that with high probability the upper bound of the confidence interval of never becomes lower than the lower bound of the confidence interval of any other arm a and therefore does not become eliminated more formally let es a and e a be the empirical mean after n a samples of the stochastic part of the rewards and the empirical mean after n a samples of the corrupted rewards respectively recall that a is the mean of arm a by hoeffding inequality for any arm a with probability at least s log a a n a we set to establish that this holds for all arms and all time steps after q arm a has been played n a times as a result for any arm a and any time es a a log n a and q es log n comparing now the actual corrupted empirical means they can be altered by at most absolute c c and e es n a corruption hence e a es a n a combining the above inequalities with the fact that the actual mean of is higher than the one of a a we establish that e a e wd a wd and therefore arm is not eliminated since this holds for all times and arms the lemma follows lemma restated with probability at least all arms a become eliminated after n a a plays proof the proof stems from the following observations by lemma arm is with high probability never eliminated after n a rounds with high probability the lower confidence interval of arm is above the upper confidence interval of arm a this comes from the fact that after n a plays of arm a and also of arm since it is not eliminated the empirical stochastic mean of is with high probability at most a below its actual mean and similarly the empirical stochastic mean of arm a is at most a above its actual mean since the corruptions are upper bounded by c they can only contribute to a decrease in the average empirical corrupted means by at most a which is not enough to circumvent the gap a more formally let es a and e a denote the empirical means of the stochastic part of the rewards and the corrupted rewards respectively after n a plays of arm a by the same hoeffding inequality as in the proof of the previous lemma with probability at least it holds that a a q log n a therefore with the same probability es a es a a and a after a plays for both arm a and e a es a nc a by the absolute corruption is at most c therefore e es nc a and the choice of n a we have c n a a combining with the above argument this also implies that the widths are upper bounded by wd a a and wd a combining the above with the fact that the actual mean of is a higher than the one of a a a we establish c wd a wd n a a a a a e e a wd a wd es es a as a result arm a becomes eliminated after n a plays if it is not already eliminated before theorem restated if c is a valid bound the corruption then arm elimination q p log log c with wd a with probability n a has regret o n a a proof the proof follows the classical stochastic bandit argument of measuring the regret caused by each arm a as a function of its gap a and the number of times n a it is played as established by lemma for simplicity of presentation we first provide the guarantee compares the expected performance of the algorithm to the expected 
performance one would have had had they selected throughout the whole time horizon the expected performance when one uses is the loss compared to that every time a is used instead is equal to its gap a as a result the expected contribution to from suboptimal arm a is equal to n a a lemma establishes that with probability any suboptimal arm a is played at most n a log a times each play of the suboptimal arm causes of a multiplying the times by the expected regret per time the guarantee which equals to the gap and setting the failure probability to be some inverse polynomial of the time horizon t to ensure that the expected regret due to the bad event is at most a constant leads to the guarantee to turn the above into a guarantee we need to show that the regret incurred during the steps that we pull arm a is not significantly higher than the expectation therefore bounding the resulting variance by the hoeffding inequality of lemma the empirical cumulative reward of p arm a is with high probability at most n a log less than its expectation the same holds for arm for these steps its realized performance is at most this much more than its expectation the probability that these statements do not hold for some arm or some time is at most p regarding arms a the n a log term can be upper bounded by o n a a by the definition of n a s s p log log n a a n a a n a log n a n a log regarding arm let be the arm with the smallest gap by lemma never gets eliminated p but it is not necessarily the ex post optimal arm in fact some other arm with a may be the ex post optimal arm arms with higher gap are with high probability not the ex post optimal arm by an analogous argument as in lemma however by the same argument as above arm is with high probability at most n below its expectation and the ex post optimal arm is at most this much above its expectation this gives a bound of n that is caused by the case where is not the ex post optimal arm therefore the actual regret from times that arm a is played is at most a a where the one term comes from the expectation and the other from the aforementioned bounds on the variance the corruption can increase any cumulative reward by at most c which is already existing in the regret bound replacing n a by lemma we obtain the guarantee note that the failure probabilities of the two lemmas are coupled as they correspond to the same bad events b supplementary material on section in this section we provide the proof of theorem to handle the corruption we bound with high probability the total corruption experienced by the slow active arm elimination instance s lemma to deal with an adaptive adversary we need a martingale concentration inequality specifically we apply a inequality introduced in lemma lemma lemma in let xt be a sequence of random numbers assume for all t that xt r and that e xt also let v t x then for any e t x xt r ln p r lemma restated in algorithm the slow active arm elimination algorithm s observes with probability at least corruption of at most ln during its exploration phase when picked with probability proof the first observation is that the expected corruption encountered by algorithm s is at most a constant total corruption of c encountered with probability the rest of the proof focuses on bounding the variance of this random variable actual corruption encountered by the layer crucially since we want to allow the adversary to be adaptive we should not assume independence across rounds but only conditional independence conditioned on the history and this 
is why some more involved concentration inequality is necessary therefore we create a martingale sequence actual corruption minus expected corruption and apply a concentration inequality let zat be the corruption that is observed by the exploration phase of the algorithm if arm a is selected for every round t if adversary selects corruption cat then zat is therefore a random variable equal to cat with probability and otherwise given that the adversary is adaptive and may select the corruptions based on the realizations of the previous rounds we need to use an appropriate concentration inequality we use a inequality introduced in lemma initially we resolve the randomness conditioning on s the slow algorithm is selected since active arm elimination is deterministic conditioned on selecting algorithm s the selected arm is deterministic let a s t be the arm that would be selected if s which happens with probability the martingale sequence is now h i t t xt za s t e za s t h t where h t corresponds to the history up to round note that e ca s t c ca s t ca s t c c c c ca s t ca s t c c ca s t c c c c c t the last inequality holds as ca s t and ca s t c by the definition of therefore summing over all the rounds x x ca s t v e c c t t x t max cat a a trivial upper bound of is r since the rewards are in applying lemma we show that x xt ln e ln t the lemma then follows by adding the expected corruption of e t za s t h t and therefore obtaining the bound of the statement on the corruption experienced x x x t za s t za s t h t ln xt e t t t q log theorem restated algorithm run with widths wd a log and ns a ns a q p kt kt log log log has o for the stochastic case and o k c wdf a f a a n a for the case with probability at least s proof for the stochastic case the bound follows via standard stochastic bandit arguments similarly to the proof of theorem with c as for each of the two active arm elimination algorithms p log s we incur with probability s regret o where s is the failure a probability of inequality which governs the results in lemmas and for each of f s the most interesting case is the setting let c be the failure probability in lemma by lemma with probability at least c the actual corruption experienced by the slow active arm elimination algorithm is at most ln which is less than log for values of k and t therefore we can apply the analysis of theorem with corruption level at least log c and get a handle on the actual regret coming from the slow active arm elimination algorithm what is left is to bound the regret coming from the fast active arm elimination algorithm towards this goal we bound the number of times that a suboptimal arm is played in the fast active arm elimination by the expected time that it remains active at the slow active arm elimination by lemma arm a is played in the slow active arm elimination with probability at least at most log log s log c ns a a a having a bound on the number of plays of the arm in the slow active arm elimination instance we use this to bound the number of plays in the fast active arm elimination instance in expectation this is at most k c ns a times as every move in the slow active arm elimination occurs with probability and at least of these moves are plays of a while it is still active since every time arm a is played it incurs a this provides the guarantee to obtain a high probability guarantee let and observe that with probability at least we make one move at the slow arm elimination algorithm every o c log moves at the fast arm elimination 
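a small monte carlo sketch of the subsampling argument behind the lemma: a layer played with probability p observes each round's corruption only with probability p, so for p ~ 1/(2c) the observed corruption has constant mean and a light tail. the simulation below uses a fixed corruption sequence, whereas the martingale bound above also covers adaptively chosen corruptions.

import numpy as np

def observed_corruption(corruptions, p, n_trials=10000, seed=0):
    # each trial independently marks every round as "observed by the layer"
    # with probability p and sums the corruption of the observed rounds.
    rng = np.random.default_rng(seed)
    c = np.asarray(corruptions, dtype=float)       # |c_t| for each round, summing to ~C
    mask = rng.random((n_trials, c.size)) < p
    return (mask * c).sum(axis=1)                  # observed corruption per trial

# example: total corruption C = 500 spread over 500 rounds, p = 1/(2C)
obs = observed_corruption(np.ones(500), p=1.0 / 1000)
print(obs.mean(), np.quantile(obs, 0.999))         # mean ~ 0.5, tail still small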
algorithm this can be seen by thinking the following process one tosses coins with bias p until she observes heads for the first time heads is the event after m tosses of the coins the probability that no heads have arrived is at most p m to ensure that which is achieved by m log this is less than we need to wait m log log by union bound on the failure probabilities for each of those draws we get that with failure probability k ns a since ns a t as it is at most the time horizon arm a gets inactivated in f after nf a k ns a c log c k log a the last part is to prove that the regret experienced throughout those rounds is not too large this follows by the two applications of hoeffding inequality as before for arms a and analogously to theorem combining the above arguments the theorem follows the total failure probability of the guarantee is s c s c supplementary material on section theorem restated algorithm which is agnostic to the coruption level c when run with q t log log a has regret widths wd a a x k c log log t log a proof the proof follows similar arguments to the proof of theorem specifically for the layers that are above the corruption level c by using the standard arguments described in theorem log s we establish a bound on the regret caused by any suboptimal arm a with failure a probability s log t since there are log t such levels the regret coming from these layers is upper bounded by the second term of the theorem with failure for the layers that are not tolerant to the corruption c we apply the same argument as in the proof of theorem and bound their regret via the number of plays they are played by the minimum layer that is robust to corruption arg c similarly as in the proof of the theorem we upper bound the number of plays a of each suboptimal arm a at this layer by exactly the same arguments then bound the number of plays in the suboptimal layer via the same coin toss process as in that proof and last bound the regret they incur during this part since we do not know the amount of corruption in advance and this amount is adaptively selected we also need to take a union bound on the number of layers so that the guarantee on a holds for all layers simulataneously if they end up being correct we therefore repeat the arguments in theorem with c log t and log t last we note that since we used powers of to increase the corruption among layers the fact that we did not apply the arguments of theorem with the exact c but instead used a c such that c c causes just an extra constant factor on the regret d supplementary material on section theorem restated consider a bandits algorithm that has the property that for any stochastic input in the two arm setting it has bounded by c log t where for any there is a corruption level c with t c t and a instance such that with constant probability the regret is c proof the proof follows a sequence of steps step analyze behavior in the stochastic case fix a constant and observe how the algorithm behaves for the stochastic input that has bernoulli arms of means since in that setting the expected regret is the same as e where is the number of pulls of arm it follows that e c log t step find a large interval that is hit with at most constant probability we divide the space between t and t into o log t intervals ii t t such that size of each interval is twice the size of all the previous intervals combined for each interval i let i be the number of times that arm is pulled in the interval then there exists an interval ii c such that e i o step create 
an adversary that forces a lot of regret in interval i the adversary is quite simple for the first c steps the arms are bernoulli with means and for the remaining timesteps the arms are bernoulli with means we use e and p to refer to the probability law whe inputs are drawn with respect to in all timesteps and and to refer to the probability law when the input is according to in the first k steps and according to onwards step with constant probability arm is pulled a constant number of times in ii under both p and under the probability law p this follows directly from markov s inequality e i p i so p i denote by a the event that i we want to argue that a is also constant in order to do that let z be a vector storing in zs the reward of arm in the time it is pulled in interval ii notice that in both the stochastic and corrupted scenarios if the learner observes the same values of z she acts the exact same way therefore if we condition on z the probability that she ends up pulling arm for more than times is exactly the same in other words p therefore p a x z p z z p z p z z p z which is a constant step concentration bounds for the regret incurred in each interval we now define an event b that occurs with probability o that captures all the concentration bounds we need for the proof first we requirep arm to be the optimal arm let r tp i be the reward of arm i in t time step we know that e t r t t and e t r t t since p all the rewards are independent we can use the hoeffding bound to bound the probability t where r t r t x x x c t t t o e t exp r r p p t t t t now we establish some concentration on the regret that the learner achieves with respect to arm in the intervals c c and t we note that if the learner pulls arm she does not incur any regret if she pulls arm she incurs regret r t r t which can be positive or negative to compute regret with respect to arm in each of those intervals we sample every time that the arm is pulled p step interval c in this interval so c if y is the number p of times arm is pulled then the regret is given by where in the previous expression we abuse notation and mean by the regret in the time the arm is pulled instead of the regret in the period therefore t c t y x x x x p p min p we then use the hoeffding bound in the last expression and get c x c exp o exp t t step interval c in this interval so using the same bound as before we get x exp o step interval t in this interval pulling arm has again positive expected regret we use the same technique used in to argue that she can not obtain large negative regret with high probability let y be the number of times arm is pulled in that interval and again we abuse notation and let be the difference in rewards in the time the arm is pulled then t t t y x x x x log t log t log t min for t log t this probability is zero since now for larger t we can use the standard chernoff bound t t x x log t exp log t o p t now all the concentration bounds have been established we define the event b to be the event where all those concentration bounds hold more precisely b is the event where the following four things happen a empirically arm is better than arm b in interval c the regret of the learner is at least c in interval c the difference between the total rewards of both arms is at least and d the regret of the learer in interval t is at least log t by the discussion in step we know that b o step putting it all together since a and b o then by the union bound a and b a o now we need to argue that in the constant probability event 
a and b the regret of the learner is at least c we simply sum the regret of the learner in each of the intervals for intervals c and t we can use the bounds computed in steps andn directly for interval c we note that conditioned on a the learner probes arm a constant number of times so his total regret differs from the regret by pulling arm in all iterations by at most a constant therefore the total regret can be bounded by log t we can adapt the argument to provide a bound on the expected positive regret e where max x note that the high probability bounds provided also imply a bound on the expected positive regret theorem if a bandits algorithm that has the property that for any stochastic input in the two arm setting it has bounded by c t for for any there is a corruption level c with t c t and a instance such that e t for all proof modify the proof of theorem as follows define o t and again select an interval such that e event a is defined in the same way by markov s inequality p a and exp t a step remains unchanged and in step note that b a since so a and b exp t therefore with probability at least exp t the regret is at least c t and therefore e t exp t t for all
8
representation learning and recovery in the relu model mar arya and ankit singh college of information and computer sciences university of massachusetts amherst amherst ma usa research laboratory of electronics mit cambridge ma usa arya asrawat march abstract linear units or relus have become the preferred activation function for neural networks in this paper we consider two basic learning problems assuming that the underlying data follow a generative model based on a a neural network with relu activations as a primarily theoretical study we limit ourselves to a network the problem we study corresponds to in the presence of nonlinearity modeled by the relu functions given a set of observation vectors yi rd i n we aim to recover d k matrix a and the latent vectors ci rk under the model yi relu aci b where b rd is a random bias we show that it is possible to recover the column space of a within an error of o d in frobenius norm under certain conditions on the probability distribution of b the second problem we consider is that of robust recovery of the signal in the presence of outliers large but sparse noise in this setting we are interested in recovering the latent vector c from its noisy nonlinear sketches of the form v relu ac e w where e rd denotes the outliers with sparsity s and w rd denote the dense but small noise this line of work has recently been studied soltanolkotabi without the presence of outliers for this problem we q show that a generalized lasso algorithm is able to recover the signal c rk within an error of o random gaussian matrix log d d when a is a introduction linear unit relu is a basic nonlinear function to be relu r as relu x max x for any matrix x relu x denotes the matrix obtained by applying the relu function on each of the coordinates of the matrix x relus are building blocks of many nonlinear problems based on deep neural networks see soltanolkotabi for a good exposition let y rd be a collection of message vectors that are of interest to us depending on the application at hand the message vectors the constituents of y may range from images speech signals network access patterns to rating vectors and so on we assume that the message vectors satisfy a generative model where each message vector can be approximated by a map rk rd from the latent space to the ambient space for each y y y c for some c rk motivated by the recent results on developing the generative models for various signals see goodfellow et al kingma and welling bora et al the maps that take the following form warrant special attention c h al h h c is the function corresponding to an neural network with the activation function here for i l ai rdi with k and dl d denotes the weight matrix for the layer of the network in the special case where the activation function h is the relu function the message vectors of the interest satisfy the following y relu al relu relu c bl where for i l bi rdi denotes the biases of the neurons or output units at the layer of the network the generative model in raises multiple interesting questions that play fundamental role in understanding the underlying data and designing systems and algorithms for processing the data two such most basic questions are as follows learning the representation given the observations yn rd from the model cf recover the parameters of the model t l and cn rk such that yi relu relu relu ci bl i n note that this question is from training the model in which case the set cn is known and possibly chosen accordingly recovery of the signal in the 
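a minimal sketch of the single-layer relu generative model used throughout this part. the gaussian choices for a and the latent vectors are illustrative (the representation-learning results do not fix these distributions), while the bias is one random vector shared across all n observations, as the model requires.

import numpy as np

def generate_observations(d, k, n, seed=0):
    # sample a single-layer ReLU generative model y_i = relu(A c_i + b): the
    # weight matrix A is d x k, the latent vectors c_i are k-dimensional, and
    # the bias b is a single random d-vector reused for every observation.
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((d, k))
    C = rng.standard_normal((k, n))
    b = rng.standard_normal((d, 1))                # one bias per coordinate, shared across samples
    M = A @ C
    Y = np.maximum(M + b, 0.0)                     # relu applied entrywise
    return Y, M, A, C, b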
presence of errors given the erroneous noisy version of a vector generated by the model cf denoise the observation or recover the latent vector formally given v y e w relu al relu relu c bl e w and the knowledge of model parameters obtain rd or rk such that ky or kc is small respectively in e and w correspond to outliers large but sparse errors and dense but small noise respectively apart from being closely related one of our main motivations behind studying these two problems together comes from the recent work on associative memory karbasi et al mazumdar and rawat an associative memory consists of a learning phase where a generative model is learned from a given dataset and a recovery phase where given a noisy version of a data point generated by the generative model the version is recovered with the help of the knowledge of the generative model there have been a recent surge of interest in learning relus and the above two questions are of basic interest even for a network nonlinearity comprising of a single relu function it is conceivable that understanding the behavior of a network would allow one to use some iterative peeling technique to develop a theory for multiple layers in goel et al the problem of recovering under reliable agnostic learning model of kalai et al is considered informally speaking under very general distributional assumptions the rows of a are sampled from some distribution given a and y relu ac goel et al propose an algorithm that recovers a hypothesis which has an under some natural loss function therein of with respect to the true underlying moreover the algorithm runs in time polynomial in d and exponential in as opposed to this given a and the corresponding output of the y relu ac b we focus on the problem of recovering c itself here we note that the the model considered in goel et al does not consider the presence of outliers soltanolkotabi obtained further results on this model under somewhat learning guarantees assuming that the entries of the matrix a to be gaussian soltanolkotabi show that with high probability a gradient descent algorithm recovers c within some precision in terms of the relative error decays exponentially with the number of steps in the gradient descent algorithm the obtained result is more general as it extends to constrained optimizations in the presence of some regularizers for example c can be restricted to be a sparse vector however both of these works do not consider the presence of outliers sparse but large noise in the observation the sparse noise is quite natural to assume as many times only partial observations of a signal vector are obtained the relu model with outliers as considered in this paper can be thought of as a nonlinear version of the problem of recovering c from linear observations of the form v ac e with e denoting the outliers this problem with linear observations was studied in the celebrated work of and tao we note that the technique of and tao does not extend to the case when there is a dense but bounded noise component present our result in this case is a natural generalization and complementary to the one in soltanolkotabi in that we present a recovery method which is robust to outliers and instead of analyzing gradient descent we directly analyze the performance of the minimizer of our optimization program a generalized lasso using the ideas from plan and vershynin nguyen and tran on the other hand to the best of our knowledge the representation learning problem for networks has not been studied as such the 
representation learning problem for relus bears some similarity with matrix completion problems a fact we greatly exploit later in low rank matrix completion a matrix m ax is visible only partially and the task is to recover the unknown entries by exploiting the fact that it is low rank in the case of we are more likely to observe the positive entries of the matrix m which unlike a majority of matrix completion literature creates the dependence between the matrix m and the sampling procedure main result for representation learning we assume to have observed d n matrix y relu ac b where a is a d k matrix c is a k n matrix both unknown b rd is a random bias and denote the kronecker we show that a relaxed method guarantees the recovery of the matrix ac with an error in frobenius norm at most o d with high probability see theorem for the formal statement then leveraging a known result for recovering column space of a perturbed matrix see theorem in the appendix we show that it is possible to also recover the column space of a with similar guarantee the main technique that we use to obtain this result is inspired by the recent work on matrix completion by davenport et al one of the main challenges that we face in recovery here is that while an entry of the matrix y is a random variable since b is a random bias whether that is being observed or being by the relu function for being negative depends on the value of the entry itself in general matrix completion literature the entries of the matrix being observed are sampled see for example and recht keshavan et al chatterjee and references therein for the aforementioned reason we can not use most of these results however similar predicament is partially present in davenport et al where entries are quantized while being observed similar to davenport et al the tools that prove helpful in this situation are the symmetrization trick and the contraction inequality ledoux and talagrand however there are crucial of our model from davenport et al in our case the bias vector while random do not change over this is to ensure that the bias is random but does not change over observation of the data samples observations this translates to less freedom during the transformation of the original matrix to the observed matrix leading to dependence among the elements in a row furthermore the analysis becomes notably since the positive observations are not quantized main result for noisy recovery we plan to recover c rk from observations v relu ac b e w where a is a d k standard gaussian matrix e rd is the vector containing outliers sparse noise with s and w rd is bounded dense noise such that to recover c and e we employ the lasso algorithm which is inspired by the work of plan and vershynin and nguyen and tran in particular plan and vershynin recently showed that a signal can be provably recovered up to a constant multiple from its nonlinear gaussian measurements via the lasso algorithm by treating the measurements as linear observations in the context of relu model for measurements v relu ac b w it follows from plan and vershynin that lasso algorithm outputs as the solution with e relu b where is a gaussian random variable and b is a random variable denoting bias associated with the relu function that this approach guarantees with high probq we show log d even when the measurements are corrupted by ability recovery of c within an error of o d outliers this is achieved by jointly minimizing the square loss over c e after treating our measurements as linear 
measurements v ac e w and adding an regularizer to the loss function to promote the sparsity of the solution for e we also recover e see theorem for a formal description organization the paper is organized as follows in section we describe some of the notations used throughout the paper and introduce the some technical tools that would be useful to prove our main results in the same section subsection we provide the formal models of the problem we are studying in section we provide detailed proofs for our main results on the representation learning problem see theorem section contains the proofs and the techniques used for the recovery problem in the presence of outliers see theorem notations and technical tools notation for any positive integer n n n given a matrix m for i j d n mi j denotes the i j entry of for i d mi mi n t denotes the vector containing the elements of the row of the matrix similarly for j n mj m j m j md j t denotes the column of the matrix recall that the function relu r takes the following form relu x max x for a matrix x we use relu x to denote the d matrix obtained by applying the relu function on each of the entries of the matrix x for two matrix a and b we use a b to represent the kronecker product of a and b given a matrix p s j kp kf i j d n denotes its frobenius norm also let kp k denote the operator norm of p the maximum singular value of we let kp denote the nuclear norm of similar to davenport et al we a parameter associated with a function f r f inf f x f x how f can be in the interval we also a lipschitz parameter l f for a function f as follows l f max n sup x f x f y dy sup f x o f x techniques to bound the supremum of an empirical process in the course of this paper namely in the representation learning part we use the key tools of symmetrization and contraction to bound the supremum of an empirical process following the lead of davenport et al and the analysis of generalization bounds in the statistical learning literature in particular we need the following two statements theorem symmetrization of expectation let x x x n be n independent rvs taking values in x and f be a class of functions on x furthermore let be n independent rademacher rvs then for any t n n t e sup e sup f x i ef x i f x i f f f f theorem contraction inequality ledoux and talagrand let be d independent rademacher rvs and f be a convex and increasing function let r r be an functions a b which satisfy then for any t rn d ti sup ef t ef l sup t d ti system model we focus on the problems of learning the representation and recovery of the signal in the presence of errors when the signal is assumed to be generated using a single layer the models of learning representations and recovery is described below model for learning representations we assume that a signal vector of interest y y relu ac b where a and b rd correspond to the weight generator matrix and the bias vector respectively as for the problem of representation learning we are given n message vectors that are generated from the underlying model for j n the signal vector is as follows we the d n observation matrix yj relu acj b rd y yn similarly we the k n matrix cn with this notion we can concisely represent the n observation vectors as y relu ac b relu m b where rn denotes the vector we assume that the bias vector b is a random vector comprising of coordinates with each coordinate being copies of a random variable b distributed according to probability density function p model for recovery for the recovery problem we are 
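a sketch of the generalized lasso program described above, solved here by simple block coordinate descent over (c, e). the paper analyzes the minimizer of the program itself rather than any particular solver; the regularization weight and iteration count are illustrative, and the recovered c is only expected to be proportional to the true latent vector.

import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def robust_lasso(A, v, lam, n_iters=200):
    # block coordinate descent on the generalized lasso objective
    #   min_{c, e}  || v - A c - e ||_2^2 + lam * || e ||_1,
    # i.e. the corrupted relu measurements are treated as if they were linear
    # and the l1 penalty promotes a sparse estimate of the outlier vector e.
    d, k = A.shape
    c = np.zeros(k)
    e = np.zeros(d)
    for _ in range(n_iters):
        c, *_ = np.linalg.lstsq(A, v - e, rcond=None)   # exact minimization in c
        e = soft_threshold(v - A @ c, lam / 2.0)        # exact minimization in e
    return c, e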
given a vector v rd which is obtained by adding noise to a valid signal vector y rd that is well modeled by a with the matrix a and bias b rd in particular for some c rk we have v y e w relu ac b e w where w rd denotes the dense noise vector with bounded norm on the other hands the vector e rd contains potentially large corruptions also referred to as sparse errors or outliers we assume s the robust recovery problem in corresponds to obtaining an estimate of the true latent vector c from the corrupt observation vector y such that the distance between and c is small a related problem of denoising in the presence of outliers only focuses on obtaining an estimate y which is close to the true signal vector y for this part we focus on the setting where the weight matrix a is a random matrix with entries where each entry of the matrix is distributed according to standard gaussian distribution furthermore another crucial assumption is that the hamming error is oblivious in its nature the error vector is not picked in an adversarial manner given the knowledge of a and representation learning in a in the paper we employ the natural approach to learn the underlying weight matrix a from the observation matrix y as the network maps a lower dimensional vector c rk to obtain a signal vector y relu ac b in dimension d k the matrix m ac cf is a matrix as long as k min d n in our quest of recovering the weight matrix a we focus on estimating the matrix m when given access to y this task can be viewed as estimating a matrix from its partial randomized observations our work is inspired by the recent work of davenport et al on matrix completion however as we describe later the crucial of our model from the model of davenport et al is that the bias vector b does not change over observations in our case nonetheless we describe the model and main results of matrix completion below to underscore the key ideas matrix completion in davenport et al the following observation model is considered given a matrix m and a function r the matrix z is it is an interesting problem to extend our results to a setting with adversarial errors however we note that this problem is an active area of research even in the case of linear measurement y ac e w bhatia et al we plan to explore this problem in future work assumed to be generated as z i j with probability mi j with probability mi j furthermore one has access to only those entries of z that are indexed by the set d m where the set is generated by including each i j d n with certain probability given the observations z the likelihood function associated with a matrix x takes the following zi j log x i j zi j log x i j l z x i j now in order to estimate the matrix m with bounded entries from z it is natural to maximize the function cf under the constraint that the matrix m has rank maximize l z x x subject to rank x r kx where the last constraint is introduced to model the setting where the observations are assumed to have bounded coordinates we note that such assumptions indeed hold in many observations of interests such as images note that the formulation in is clearly due to the rank constraint thus davenport et al propose the following program maximize l z x x subject to kx rmn kx note that the constraint kx rmn is a of the constraint rank x r b be the output of which is required to ensure that the program in outputs a matrix let m the program in davenport et al obtain the following result to characterize the quality of the b obtained solution proposition davenport et 
theorem assume that km rmn and km let z be as in then for absolute constants c and c with probability at least c d n the b the following solution of m s s m n log mn r m n b kf c km m dn e e where the constant c depends on the and steepness of the function learning in a single layer relu and matrix completion main note that the problem of estimating the matrix m from y is related to the problem of matrix completion as the authors assume that the entries of z take values in the set in this paper we state the equivalent model where the binary alphabet is throughout this paper log represents the natural logarithm above similar to the matrix completion setup the observation matrix y is obtained by transforming the original matrix m in a probabilistic manner which is dictated by the underlying distribution of the bias vector b in particular we get to observe the entire observation matrix d n however there is key between these two aforementioned setups the matrix completion setup studied in davenport et al in fact most of the literature on matrix completion ganti et al assume that each entry of the original matrix m is independently transformed to obtain the observation matrix y in contrast to this such independence in absent in the in particular for i d the row of the matrix y is obtained from the corresponding row of m by utilizing the shared randomness by the bias bi note that the bias associated with a coordinate of the observed vector in our generative model should not vary across observation vectors this prevents us from applying the known results to the problem of estimating m from y however as we show in the remainder of this paper that the nature of the allows us to deal with the dependence across the entries of a row in z and obtain the recovery guarantees that are similar to those described in proposition representation learning from observations we now focus on the task of recovering the matrix m from the observation matrix y recall that under the the observation matrix y depends on the matrix m as follows y relu m b for i d we ny i n as the set of positive coordinates of the row of the matrix y ny i j n yi j and ny i i note that for i d the original matrix m needs to satisfy the following requirements mi j bi yi j for j ny i mi j bi for j n y i n i and given the original matrix m for i d and j n let mi j denote the largest element of the row of m for i d mi mi mi n it is straightforward to verify from that ny i denotes the indices of ny i largest entries of furthermore whenever ny i si n we have bi yi mi yi si mi si similarly it follows from that whenever we have ny i then bi the following bi max mi j j n based on these observation we the set of matrices xy as xy x kx yi x i yi si x i si and x i si max x i j d j y i recall that p r r denote the probability density function of each bias rv we can then write the likelihood that a matrix x xz results into the observation matrix x as follows p yi p y i d where for i d p yi ny i p bi max x i j j n n ny i p bi yi s x i s by using the notation f x x p b and x maxj n x i j we can rewrite as follows p yi ny i f x n ny i p bi yi s x i s therefore the of observing y given that x xy is the original matrix takes the following form log p yi ly x i d i d ny i log f x n ny i log p yi s x i s in what follows we work with a slightly quantity l y x ly x ly i d ny i log n f x p y x ny i log i s i s f p yi s in order to recover the matrix m from the observation matrix y we employ the natural maximum likelihood approach which is equivalent to the 
following maximize l y x subject to x xy x to be such that f x y for all x y with in what follows we simply refer this quantity as as and are clear from context the following result characterizes the performance of the program proposed in b be theorem assume that km and the observation matrix y is related to m according to let m the solution of the program in and the bias density function is p x with bounded derivative then the following holds with probability at least b c p km m p where c is a constant the quantities p and p depend on the distribution of the bias and are in and respectively the proof of theorem crucially depends on the following lemma lemma given the observation matrix y which is related to the matrix m according to let xy be as in then for any x xy we have i h e l y m l y x p km x the proof of this lemma is delegated to the appendix now we are ready to prove theorem b be the solution of the program in in what follows we use x as a short hand proof of theorem let m notation for xy we have h i h h b l y m e l y m b l y m l y m e l y m l y m b e l y m b l y m i h i h b l y m sup l y x e l y x e l y m x which means h i h i b sup l y x e l y x e l y m l y m x we now employ lemma to obtain that h i b sup l y x e l y x p km m x we now proceed to upper bound the right hand side of it follows from the standard symmetrization trick devroye et al that for any integer t we have d h i f x t ny i log e sup e sup l y x e l y x f x x n p yi s x i s t ny i log p yi s where i d are rademacher random variables note that for x r d log f u du d log p y dy sup p du log f x log f sup and log p yi s x log p yi s sup dp u p du at this point we can combine the contraction principle with to obtain the following h i t e sup l y x e l y x x t t t e p sup d x ny i x i n ny i x i s t d d t e p sup i ii t t x e p t d t t ny i x i n ny i x i s t p t where i and ii follow from the inequality and the fact that for x x kx respectively now using markov s inequality it follows from that h p sup l y x e l z x x i c p i where i follows from and ii follows by setting c h i i h t e supx l z x e l z x e c p t ii d and t log d n recovering the network parameters b xy such that as established in theorem the program proposed in recovers a matrix m b kf km m s c p p b as m b m e where e denotes the perturbation matrix that has let s denote the recovered matrix m bounded frobenius norm cf now the task of recovering the parameters of relunetwork is equivalent to solving for a given b m e ac m in our setting where we have a and c with d k and n k m is a matrix with its column space spanned by the columns of a therefore as long as the generative model ensures that the matrix m has its singular values bounded away from we can resort to standard results from b as an candidate for the orthonormal theory and output top k left singular vectors of m basis for the column space of m or a in particular we can employ the result from yu et al which b be the top k left singular vectors of m and m b respectively note that is stated in appendix a let u and u even without the perturbation we could only hope to recover the column space of a or the column space of u and not the exact matrix a let the smallest singular value of m is at least then it follows from theorem cf appendix a and that there exists an orthogonal matrix o such that ke k min k ke k ke k ke kf ke kf f bo kf ku u which is a guarantee that the column space of u is recovered within an error of o d in frobenius norm by the column space of u robust recovery in we now 
explore the second fundamental question that arises in the context of reconstructing a signal vector belonging to the underlying generative model from its erroneous version recall that we are given a vector v rd which is obtained by adding noise to a valid message vector y rd that is well modeled by a v y e w relu ac b e here w denotes the dense noise vector with bounded norm on the other hands the vector e contains potentially large corruptions also referred to as outliers we assume the number of outliers to be bounded above by the robust recovery problem in corresponds to obtaining an estimate of the true representation c from the corrupt observation vector y such that the distance between and c is small a related problem of denoising in the presence of outliers only focuses on obtaining an estimate which is close to the true message vector y in the remainder of this paper we focus on the setting where the weight matrix a is a random matrix with entries where each entry is distributed according to the standard gaussian distribution furthermore another crucial assumption is that the outlier vector is oblivious in its nature the error vector is not picked in an adversarial given the knowledge of a and note that soltanolkotabi study a problem which is equivalent to recovering the latent vector c from the observation vector generated form a without the presence of outliers in that sense our work is a natural generalization of the work in soltanolkotabi and presents a recovery method which is robust to errors as well however our approach from that in soltanolkotabi where the author analyze the convergence of the gradient descent method to the true representation vector in contrast we rely on the recent work of plan and vershynin plan and vershynin to employ the lasso method to recover the representation vector c and the hamming error vector e given v relu ac b e w which corresponds to the corrupted observations of c we try to a linear model to these observations by solving the following optimization minimize kv ac in the aforementioned formulation the regularizer part is included to encourage the sparsity in the estimate vector the following result characterizes the performance of our proposed program cf in recovering the representation and the corruption vector theorem let a be a random matrix with standard gaussian random variables as its entires and v v relu b w where s and let be as e relu b where is a standard gaussian random variable and b is a random variable that represents the bias in a coordinate in let it is an interesting problem to extend our results to a setting with adversarial errors however we note that this problem is an active area of research even in the case of linear measurement y ac e w bhatia et al we plan to explore this problem in future work note that this paper deals with a setup where number of observations is greater than the dimension of the signal that needs to be recovered d therefore we don t necessarily require the vector c to belong to a restricted set as done in the other version of the robust lasso methods for linear measurements see nguyen and tran be the outcome of the program described in then with high probability we have r r k log k s log d max d d d where is a large enough absolute constant that depends on proof assume that h and f furthermore for c rd and e rk we l c e ky ac let s i d be the support of the vector such that given a vector a rd and set t d we use a t to denote the vector obtained by restricting a to the indices belonging to t note 
that kv ah f f kv kah f hv ah fi k f s kf sc d i kah f hrelu b w ah fi d k f s kf sc ii kah f hrelu b w ah fi kf sc kf s d l l where i and ii follow from and the triangle inequality respectively since is solution to the program in we have l l by combining this with we obtain that kah f hrelu b w ah fi kf s kf sc d we now complete the proof in two steps where we obtain universal lower and upper bounds on the left hand side and the right hand side of respectively that hold with high probability upper bound on the rhs of let s z relu b note that hz w ah fi kf s kf sc d hz w fi kf s kf sc hz w ahi d d i hz w ahi kz kf kf s kf sc d d hz w ahi kz kf s kz kf sc d d d where i follows from the s inequality we now employ plan and vershynin lemma to obtain sup hz ahi c d where c is an absolute constant and e relu b e relu b with being a standard gaussian random variable now we can combine and to obtain the following hz w ah fi kf s kf sc d kz kz k t ka kf s kf sc d d d d i k t ka s kz kf d d d ii k t ka s kf c d d where i and ii follow by setting and using the fact that kf s s kf s s kf we can further simplify the bound in as follows hz w ah fi kf s kf sc d k t kf max c ka s d d d d lower bound on the lhs of by combining and we get that kah f k t kz kz ka kf s kf sc d d d d since the left hand side of is we that the note that we have picked d tuple h f belongs to the following restricted set k kat s h f r h rk f rd kf sc c d d in plan and vershynin plan and vershynin obtain the bound in terms of the gaussian width vershynin of the cone which the vector h belongs to however in our setup where we do not impose any structure on c this quantity is simply o k as a result in order to lower bound we lower bounding the following quantity for every h f kah f towards this we employ lemma in appendix c which gives us that for every h f r with high probability we have kf kah f d completing the proof it follows from and that kf k t kf ka s d max c d d d d or k t kf ka s d max c d d d now using the fact that and a is an standard gaussian matrix we can obtain the following bound from which holds with high r r k log k s log d kf max d d d where is a large enough absolute constant acknowledgements this research is supported in part by nsf awards ccf ccf and career award references bhatia jain and kar robust regression via hard thresholding in advances in neural information processing systems nips pages bhatia jain kamalaruban and kar consistent robust regression in advances in neural information processing systems nips pages bora jalal price and dimakis compressed sensing using generative models in proceedings of the international conference on machine learning icml pages aug and recht exact matrix completion via convex optimization foundations of computational mathematics apr and tao decoding by linear programming ieee trans inform theory candes romberg and tao stable signal recovery from incomplete and inaccurate measurements communications on pure and applied mathematics chatterjee matrix estimation by universal singular value thresholding the annals of statistics davenport y plan van den berg and wootters matrix completion information and inference a journal of the ima july devroye and lugosi a probabilistic theory of pattern recognition volume springer science business media ganti balzano and willett matrix completion under monotonic single index models in advances in neural information processing systems nips pages goel kanade klivans and thaler reliably learning the relu in polynomial time in proceedings of 
the conference on learning theory colt pages july goodfellow mirza xu ozair courville and bengio generative adversarial nets in advances in neural information processing systems nips pages kalai kanade and mansour reliable agnostic learning journal of computer and system sciences karbasi salavati shokrollahi and varshney noise facilitation in associative memories of exponential capacity neural computation keshavan montanari and oh matrix completion from a few entries ieee transactions on information theory june kingma and welling variational bayes in international conference on learning representations iclr ledoux and talagrand probability in banach spaces isoperimetry and processes springer science business media mazumdar and rawat associative memory via a sparse recovery model in advances in neural information processing systems nips pages mazumdar and rawat associative memory using dictionary learning and expander decoding in aaai conference on intelligence aaai nguyen and tran robust lasso with missing and grossly corrupted observations ieee transactions on information theory april y plan and vershynin the generalized lasso with observations ieee transactions on information theory march soltanolkotabi learning relus via gradient descent in advances in neural information processing systems nips pages vershynin probability available online an introduction with applications in data science yu wang and samworth a useful variant of the theorem for statisticians biometrika a results on matrix perturbation let m be a d n matrix where without loss of generality we assume that d let m have the following singular value decomposition m u t where diag is the diagonal matrix with the singular values of m as its diagonal entries b m e be the matrix which is obtained by perturbing the original matrix m by an error matrix let m b have the following singular value decomposition let m b bb m where b diag b b b is the diagonal matrix comprising the singular values of the perturbed b let be the singular values of the matrix u t u matrix b diag n i n and u u b the subspaces spanned by the note that i i n are referred to as the canonical angles between u and u b respectively it is common to use k sin u u b kf as a distance measure columns of the matrices u and u b between u and u in yu et al yu et al present the following result which bounds the distance between the subb spaces spanned by the singular vectors of the original matrix m and the perturbed matrix m b m e have singular values d n and b theorem let m m b d n respectively fix r rank a and assume that b b let u ur and u b b ur contain the left singular vectors associated b respectively then with the r leading singular values of m and m ke k min r ke k ke kf b k sin u u kf moreover there exists an orthogonal matrix o rr such that ke k min ke k ke k f bo kf ku u b proofs of section lemma the distance measure given the observation matrix y which is related to the matrix m according to let xy be as in then for any x xy we have h i e l y m l y x p km x here we state a special case of the result from yu et al see yu et theorem for the statement of the general result proof first we recall our notation that given the original matrix m for i d and j n mi j denotes the largest element of the row of m for i d mi mi mi n we maxj n mi j thus h i e l y m l y x n d f p yi s mi s ny i log e ny i log f x p yi s x i s d s f p b f mi log db p b log f x p b m i s x i s i s p b db p b log p b mi n x i n n where p r r represents the probability density function of the 
bias rv and f is as f x x p b given the matrices m x xz we a new density function as follows p u mi s x i s if u s for s n u p u mi n x i n if u n recall that for x r we have x e x for x log y this gives us that log y y in particular for b r employing with y p b log s b p b p b q b p b s we get that b p b b p b p b or p b log p p b p b p b b b by using for every i d we obtain that s s p b db p b log p b mi s x i s p b db b p p b p b b p b log n p b log p b db p b mi n x i n f m p p b b db f f x f x f p p b b db p p b b db f f x f x f to obtain the following f x f x f mi log f mi f x f f f for i d we now employ with y or f f x f f x f f log f f x f x i f log by combining and we obtain that s f p b p b log db f mi log f x i s p b mi s x i s p b db p b log p b mi n x i n n p p b b db s s p n q p p b p b mi s x i s db q p b p b mi n x i n db p s mi s x i s db p p s s p n mi n x i n db p n p n s ii p mi s x i s db i s iii p s n n mi s x i s mi n x i n db where i follows from the mean value theorem with suitable i r and ii follows by assuming that we have q p u p for all p p u since m xz we have n and s the step iii follows from the assumption that f x y for all x y such that by combining with we now obtain that d n i h p mi s x i s p km x e l y m l y x c proofs of section here we state the special form of a result that was obtained in nguyen and tran for the generals setting where one may potentially require the vector c to be sparse as well lemma let a be random matrix that has standard gaussian entries furthermore let r rk rd be as in then with probability at least c exp we have kf k kah f khk for all h f d here c are absolute constants proof note that kah f kf fi for a d k matrix with gaussian entries there exists constants c and c such that with probability at least c exp we have d therefore with probability at least c exp we have kf d d d kf kf kf d next we focus on obtaining an upper bound on d towards this we partition the set d into r blocks s sr such that s here refers to the set of indices of s largest entries in terms of absolute value of f sc corresponds to the set of indices of the next s largest entires of f sc and so on now we have si h f si max ka si kf si d d d i r r in nguyen and tran appendix nguyen and tran show that with probability at least exp for a set s with s we have ka k s s by setting q d and taking the union bound over all the subsets of d of size s ka holds with probability at least k s s s d such that s d ed exp exp s s assuming that s log c the aforementioned probability is at least exp c d on the other hand we have r i kf si r kf si kf sc iii k t ka kf s c d d iv k t c ka d s d ii where i follows from the fact that kf kf kf ii follows from a standard bound given in candes et al iii follows from the fact that belongs to the set r in and iv is a consequence of the loose bound kf s s kf s s kf next we use the fact that it follows from that r k t d ka kf si c d kz s d kf d k t d ka c d s d d kf d d by combining and with probability at least c exp we obtain that kf k s s d d d d kf k s s d d kf d i where i follows from large enough by combining and we obtain that for every h f r kf kah f khk d holds with probability at least c exp with absolute constants c
7
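The entry above describes a LASSO-style robust recovery step: a linear model is fit to the corrupted observation v = relu(Ac + b) + sqrt(d)*e + w, with an l1 regularizer placed on the outlier block e only. The following Python/NumPy sketch shows one way such a program could be solved, using proximal gradient descent (ISTA). It is an illustration under stated assumptions, not the authors' implementation: the function names, the 1/(2d) normalization of the quadratic term, the step size, the value of lam, and the toy problem sizes are all choices made here for concreteness.

import numpy as np

def soft_threshold(x, tau):
    # elementwise soft-thresholding, the proximal operator of tau * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def robust_relu_recovery(v, A, lam=0.1, n_iter=3000):
    # proximal-gradient (ISTA) sketch of a program of the form
    #   minimize over (c, e):  (1/(2d)) * ||v - A c - sqrt(d) e||_2^2 + lam * ||e||_1
    # only the outlier block e is soft-thresholded; c is left unpenalized.
    d, k = A.shape
    B = np.hstack([A, np.sqrt(d) * np.eye(d)])     # augmented design [A | sqrt(d) I]
    z = np.zeros(k + d)                            # z stacks (c, e)
    step = d / np.linalg.norm(B, 2) ** 2           # 1 / Lipschitz constant of the smooth part
    for _ in range(n_iter):
        z = z - step * (B.T @ (B @ z - v)) / d     # gradient step on the quadratic term
        z[k:] = soft_threshold(z[k:], step * lam)  # l1 prox applied to the e-block only
    return z[:k], z[k:]

# toy instance of the generative model v = relu(A c + b) + sqrt(d) e + w
rng = np.random.default_rng(0)
d, k, s = 200, 10, 5                               # ambient dim, latent dim, number of outliers
A = rng.standard_normal((d, k))                    # i.i.d. standard Gaussian weight matrix
c = rng.standard_normal(k)
b = rng.standard_normal(d)                         # one bias value per output coordinate
e = np.zeros(d)
e[rng.choice(d, size=s, replace=False)] = 5.0 * rng.standard_normal(s)  # sparse outliers
w = 0.01 * rng.standard_normal(d)                  # dense bounded noise
v = np.maximum(A @ c + b, 0.0) + np.sqrt(d) * e + w
c_hat, e_hat = robust_relu_recovery(v, A)

Because relu is nonlinear and the bias is not modeled in the linear fit, c_hat only approximates c up to the scaling and error terms quantified in the entry's recovery theorem; the sketch is meant to convey the structure of the optimization rather than its constants.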
chudnovsky s conjecture for very general points in pn k dec louiza fouli paolo mantero and yu xie a bstract we prove a conjecture of chudnovsky for very general and generic points in pn k where k is an algebraically closed field of characteristic zero and for any finite set of points lying on a quadric without any assumptions on we also prove that for any homogeneous ideal i in the homogeneous coordinate ring r k xn chudnovsky s conjecture holds for large enough symbolic powers of i i ntroduction this manuscript deals with the following general interpolation question question given a finite set of n distinct points x pn in pn k where k is a field what is the minimum degree x of a hypersurface f passing through each pi with multiplicity at least m question has been considered in various forms for a long time we mention a few conjectures and motivations for instance this question plays a crucial role in the proof of nagata s counterexamples to hilbert s fourteenth problem in the same paper nagata conjectured that x m n for sets of n general points in and a vast number of papers in the last few decades are related to his conjecture another reason for the interest sparked by the above question comes from the context of complex analysis an answer to question would provide information about the schwarz exponent which is very important in the investigation of the arithmetic nature of values of abelian functions of several variables however besides a few very special classes of points if these n points lie on a single points forming a star configuration and m is a multiple of n plane or one has n n at the moment a satisfactory answer to this elusive question appears out of reach therefore there has been interest in finding effective lower bounds for x in fact lower bounds for x yield upper bounds for the schwarz exponent using complex analytic techniques waldschmidt and skoda in proved that for all m x x m n where x x is the minimum degree of a hypersurface passing through every point of x and k in chudnovsky improved the inequality in the projective space he showed that if x is a set of n points in then for all m x x m he then raised the following conjecture for higher dimensional projective spaces mathematics subject classification key words and phrases chudnovsky s conjecture initial degrees symbolic powers fat points seshadri constant the first author was partially supported by a grant from the simons foundation grant fouli mantero and xie conjecture chudnovsky if x is a finite set of points in pn c then for all m x x n m n the first improvement towards chudnovsky s conjecture was achieved in by esnault and x x viehweg who employed complex projective geometry techniques to show for m n points in pn c in fact this inequality follows by a stronger statement refining previous inequalities from bombieri waldschmidt and skoda see from the algebraic point of view chudnovsky s conjecture can be interpreted in terms of symbolic powers via a celebrated theorem of nagata and zariski let r k xn be the homogeneous coordinate ring of pn k and i a homogeneous ideal in we recall that the symbolic power of i is defined as the ideal i m i m rp r where p runs over all associated prime ideals of and the initial degree of i i is the least degree of a polynomial in nagata and zariski showed that if k is algebraically closed and x is a finite set of points in pn k m then x ix where ix is the ideal consisting of all polynomials in r that vanish on x thus in this setting chudnovsky s conjecture is equivalent to ix ix 
n for all m m n m ix ix called the waldschmidt constant of ix exists and is an inf the limit lim m thus another equivalent formulation of chudnovsky s conjecture is ix n ix n we remark here that there is a tight connection between the waldschmidt constant especially for general points and an instance of the multipoint seshadri constant section we now state a generalized version of chudnovsky s conjecture when k is an algebraically closed field then the following conjecture is equivalent to chudnovsky s conjecture m conjecture if x is a finite set of points in pn k where k is any field then for all m ix ix n m n in ein lazarsfeld and smith proved a containment between ordinary powers and symbolic powers of homogeneous ideals in polynomial rings over the field of complex numbers more precisely for any homogeneous ideal i in r c xn they proved that i n m i m their result was soon generalized over any field by hochster and huneke using characteristic p techniques using this result harbourne and huneke observed that the i m i actually holds for every homogeneous ideal i in r in the same inequality m n article harbourne and huneke posed the following conjecture m conjecture if x is a finite set of points in pn k then for all m n m ix m m m n ix where m xn is the homogeneous maximal ideal of r k xn chudnovsky s conjecture for very general points in pn k conjecture strives to provide a structural reason behind chudnovsky s conjecture if it holds then it would imply chudnovsky s conjecture in a similar way as to how the and containment implies the inequality these results have since raised new interest in chudnovsky s conjecture harbourne and huneke proved their conjecture for general points in and when the points form a star configuration in pn k in dumnicki proved the conjecture for general points in pk and at most n points in general position in pn k for n in summary chudnovsky s conjecture is known in the following cases any finite set of points in any finite set of general points in where k is a field of characteristic any set of at most n points in general position in pn k n any set of a binomial number of points in pk forming a star configuration in the present paper we prove that chudnovsky s conjecture holds for any finite set of very general points in pn k where k is an algebraically closed field of characteristic theorem any finite set of generic points in pn k z where k is an algebraically closed field of characteristic theorem any finite set of points in pn k lying on a quadric without any assumptions on k proposition as a corollary we obtain that the conjecture holds for sets of a binomial number of very general points in pn k corollary this result also yields a new lower bound for the multipoint seshadri constant of very general points in pn k corollary in the final section of the paper we prove that for any homogeneous ideal i in the homogeneous coordinate ring r k xn chudnovsky s conjecture holds for sufficiently large symbolic powers i t t theorem in the case of ideals of finite sets of points in pn c we t prove a uniform bound namely that if t n then i satisfies chudnovsky s conjecture proposition very recently dumnicki and proved the conjecture for at least number of very general points in pn k as a corollary they obtain chudnovsky s conn jecture for at least number of very general points in pn k these results are obtained independently from ours and with different methods g eneric and very general points in pn k we begin by discussing our general setting let r k xn be the 
homogeneous coordinate ring of pn k where k is an algebraically closed field let n be a positive integer and let s k z x where k k z is a purely transcendental extension of fields obtained by adjoining n n variables z zij i n j n a set of n generic points pn consists of points pi fouli mantero and xie zin pn k z we denote the defining ideal of n generic points as h i pi n where i pi is the ideal defining the point pi n n where i n j n we define the set of for any nonzero vector ak n points pn pk as the points pi pi pn k for i n let i pi be the ideal of r defining the point pi and define h i pi n for any ideal j in s recall that krull defined the specialization with respect to the substitution z as follows f x f z x j k z x in general one has that h where h and h are defined as in and is the specialization with respect to the substitution z defined by krull notice that equality holds if n n is in a dense subset of ak recall the collection of all sets consisting of n points not necessarily distinct in pn k is paramen terized by g n n the chow variety of algebraic of degree n in pk it is that g n n is isomorphic to the symmetric product symn pn k see for instance one n says that a property p holds for n general points in pk if there is a dense subset w of g n n such that p holds for every set x pn of n points with pn w similarly one says that a property p holds for n very general points in pn k if p holds for every set of n points in a nonempty subset w of g n n of the form w ui where the ui are dense sets when k is uncountable then w is actually a dense subset we conclude this part by recalling the following fact remark let n be a positive integer the collection of all sets consisting of n distinct points in pn k is parameterized by a dense subset w n of g n n unless specified for the rest of this paper by a set of points we mean a set of simple points points whose defining ideal is radical n n instead of working directly with the chow variety we will work over ak in order to specialize from the generic situation we first need to prove that if a property holds on a dense zariskin n open subset of ak then it also holds on a dense subset of the chow variety this is precisely the content of our first lemma n n lemma assume and let u ak be a dense subset such that a property p holds for h whenever u then property p holds for n general points in pn k moreover if a property p holds for h whenever u where u ui ak n n and each ui is a dense set then p holds for n very general points in pn k is nonempty chudnovsky s conjecture for very general points in pn k n n proof for every i n let ak as follows where ak n n pn be the rational map defined by projection k it is clear that is defined on the complement of the n n ak proper subset ci taking products of these rational maps we obtain the rational map ak n n p n p n p n k k k the map is defined on the complement of the closed proper subset c ci where ci is as n n n ak above note that u c is still open in and since is surjective and thus dominant then n n u c contains a subset w pn k pk pk see for instance ii ex b now since the symmetric group sn on n elements is finite the image w of w in pn k n n n n pk pk sym pk g n n contains a subset of g n n let h be as in we now prove that the initial degree of any symbolic power of h is no smaller than the initial degree of any ideal of a set with the same number of points equivalently m if i is the defining ideal of a set of n points in pn i m for all m k then h theorem let m assume and that k has characteristic 
then h m m ix for every set x of n distinct points in pn k moreover for every m there is a dense n n subset um ak for which equality holds proof let t we define vt ak n n n n is a closed subset of ak indeed notice that vt ak n n h m t we first prove that vt there exists f h m of degree t let f r be a homogeneous polynomial with deg f t and write f since k is algebraically closed of characteristic the statement f h m is equivalent to f pi for all with m and all points pn since pi zi zin and pi we write f pi f zi zi and f pi for instance zi and to order these equations we use for instance the natural deglex order in nn if and only if or and there exists j such that for i j and then the system of equations f pi can be written in the following form bm t c t c t t fouli mantero and xie where the rows of bm t are t zi t where i n and m zin by construction the existence of a nonzero element f h m of degree t is equivalent to the existence of a solution for the homogeneous system bm t c t c t t if n then for every observe that the matrix bm t has size n n n n n ak the homogeneous system bm t c t c t has t n n solutions therefore vt ak if instead n n n n which is closed in ak then the system bm t c t c t has t solutions if and only if rank bm t n this is a closed condition on as it n n requires the vanishing of finitely many minors and therefore vt is closed in ak next let h m the set a dense subset of n n ak n n ak n h m contains indeed let f i pi m be such that deg f we may assume that f z x k z xn then there exists a dense subset n n um of ak such that the polynomial f x h m h m and deg f since f z x there is a subset of specializations z such that f x finally since is a subset which also contains a dense subset of n n n n ak then ak which proves the statement the second part of the statement also follows from the above argument following definition we say that a set x of n points in pn k is in generic position if it has the generic hilbert function if d min dimk rd n for every d being in generic position is an open condition indeed any set of generic or general points see is in generic position we now prove a reduction argument which will allow us to concentrate on certain binomial numbers of points proposition a chudnovsky s conjecture holds for any finite set of generic points if it generic points for all holds for sets of n b chudnovsky s conjecture holds for any finite set of points if it holds for sets of n points in generic position for all proof a let h i pi be the defining ideal of the n generic points pn as in setn up let be the unique integer such that n n and let j i pi be the ideal defining t of these generic points since the set let t n t y pt is in generic position in particular we have j h chudnovsky s conjecture for very general points in pn k generic points then for all m now assume chudnovsky s conjecture holds for n j m j n m n since h m j m one has that h m j m and h m j m j n h n m m n n b the proof of b is similar in spirit to a let x be any finite set of points in pn k let ix where ix by linear independence be its defining ideal and let t n by theorem c there is a subset y x of t points with the property that i i dimk ri for every i in particular since t it follows that i t for all i proving that y is in generic position similar to a assume points in generic position then chudnovsky s conjecture holds for t n iy iy n m n m for all m since ix iy and ix m m iy then for all m we obtain ix iy iy n ix n m m n n m m dumnicki proved chudnovsky s conjecture for at most n points 
in general position pn k this specific result does not need any assumptions on the characteristic of k the idea is that one can take them to be coordinate points so that the ideal of the points is monomial and one can compute explicitly its symbolic powers if one has more than n points the ideal of the points is almost never monomial and explicit computations of a generating set of any of its symbolic powers are nearly impossible to perform we extend the result of dumnicki to the case of up to points in pn k proposition chudnovsky s conjecture holds for any finite set of points lying on a quadric n points in pn in pn k satisfies k where k is any field in particular any set of n chudnovsky s conjecture proof let x be a set of n points in pn k if they all lie on a hyperplane then chudnovsky s m conjecture is clearly satisfied since ix m for every m we may then assume there is no hyperplane containing all the points thus we can find a set y x of n points not on a hyperplane in general position then for all m ix iy n ix n m m n n where the second inequality follows by and the equality holds because ix iy m m let us recall that a set x of points in pn k form a star configuration if there are n s hyperplanes ln meeting properly such that x consists precisely of the points obtained fouli mantero and xie by intersecting any n of the li s star configurations in were already considered by nagata and they have been deeply studied see for instance and references within we employ them to show that chudnovsky s conjecture holds for any number of generic points theorem let h i pi where pn are n generic points in pn k z defined as in n suppose k has characteristic then chudnovsky s conjecture holds for for some let ak proof by proposition a we may assume n be n n such that h is the defining ideal of n points in pk forming a star configuration it is that h h proposition now by theorem and corollary we have that for all m h m h m h n h n m m n n n n we are now ready to prove our main result that chudnovsky s conjecture holds for any finite set of very general points in pn k theorem let i be the defining ideal of n very general points in pn k where k is an algebraically closed field of characteristic then i satisfies chudnovsky s conjecture proof being in generic position is an open condition and therefore we may assume the points are for some in generic position then as in proposition a we may assume that n n it suffices to show that i decreasing chain of ideals i n i n i i for each s define us ak n n let r s z and h as in setup consider the n s i h sn n h n n n by the proof of theorem us is a subset of ak we claim that us is not n n empty indeed by theorem for every s there is a dense subset ws ak s s for which h n h n for every ws by theorem one also has that for every ws s h n hence ws us n h n h n h n n n n s set u us and notice u is because a star configuration of n points lies in u lemma by construction if u we have h lim finally apply lemma h n sn h n n as a corollary we show that the conjecture holds for sets of binomial numbers of very general points or generic points chudnovsky s conjecture for very general points in pn k very general points in pn corollary let i be the defining ideal of either k or n n generic points in pn k z for some where k is an algebraically closed field of characteristic then i satisfies the conjecture proof the proof follows by theorem and proposition and remark i m see m pn k the waldschmidt for an unmixed ideal i in r the waldschmidt constant is defined by i lim for more 
details recall that for a finite set of points x pn in constant is tightly related to the multipoint seshadri constant defined as deg f n x n mult f pi where the infimum is taken with respect to all hypersurfaces f passing through at least one of the pi the study of seshadri constants has been an active area of research for the last twenty years see for instance the survey and references within here we only note that one has ix n x n and equality holds if x consists of n general simple points in pn k in particular n equality also holds if x consists of n very general simple points pk therefore our estimate for the waldschmidt constant also yields an estimate for the multipoint seshadri constant for very general simple points of pn k corollary for any set x of n very general points in pn k where k is an algebraically closed field of characteristic one has x n n x nn h omogeneous ideals in k xn let r k xn be the homogeneous coordinate ring of pn k where k is any field and i a homogeneous ideal for an ideal i which may have embedded components there are multiple potential definitions of symbolic powers following and we define the symbolic power of i to be i m i m rp r since i n m i m see and one can prove that the inequality i m i holds for every homogeneous ideal i in r therefore one has that i m n i m i i for every m see for instance n one can also prove that m it is then natural to ask whether chudnovsky s conjecture holds for any homogeneous ideal we pose it here as an optimistic conjecture for which we provide some evidence below fouli mantero and xie conjecture let r k xn where k is any field for any nonzero homogeneous ideal i in r one has i m i n m n for every m it is easy to see that if an ideal i satisfies conjecture then also i t does for any t thus in search for evidence for a positive answer to conjecture one may ask whether for every homogeneous ideal i k xn there is an exponent such that i t satisfies conjecture for every t we give a positive answer to this question in theorem we state a few lemmas before stating the main result of this section theorem the following lemma and its proof can be found in the proof of lemma lemma let i be a homogeneous ideal in r and let m t be two positive integers write m qt r for some integers q and r such that r then i m i t i r m t m i tq i t in particular if r then we have tq t for ideals j with ass min j it is easily verified that ass m ass and j m t j mt for all m and t however when j has embedded components we found examples of ideals j even monomial ideals and exponents m t with j m t j mt borrowing techniques from a very recent paper by nguyen trung and trung we present here an example where j j example let r k x t u v and j where u v then j j and tuv v proof it is easy to see check that m x t u v ass for example because y v is a socle element of therefore j n j n for every n then depth depth and depth depth for example by example in particular m ass and m ass ass hence by remark a below m ass j and therefore j j remark let j be an ideal in a noetherian ring s and m be a positive integer then a for any q ass m there exists p ass with q p b for any p ass one has j m p jpm c for any p v j one has j m p jp m despite example we prove that for any arbitrary ideal j in a noetherian ring s there exists an integer j such that j m t j mt for all m and t of course when ass min j one can take chudnovsky s conjecture for very general points in pn k proposition let j be an ideal in a noetherian ring then a for all m and t one has j mt j m t b there 
exists such that for all m and t one has j m t j mt proof a let x j mt then by definition there exists c s which is a divisor on such that cx j mt j m t by remark a we see that c is also a divisor on m and therefore x j m t b let ass pr it is that there exist integers mr such that ass spi ass spi i for every m mi let max mi by a we only need to prove j m j mt for all m and t it suffices to prove it locally at every associated prime q of j mt by remark a there exists p ass such that q by remark c and b we have t j m t t j m p p jpm t now observe that since q ass mt and q p then q ass sp j mt p and then q ass sp j mt p by remark b since mt m then q ass sp therefore by remark b one has jpm t q jpmt q jqmt j mt q we now go back to our original setting lemma let r k xn where k is any field let i be a homogeneous ideal in r and i assume i n then there exists an integer such that i t satisfies conjecture for all t proof let be as in proposition b and write i max then if t and m we have i t m m i n for some let i tm i t i t m n i t n i t n n n n we are ready to prove the main result of this section theorem let r k xn where k is any field and let i be a nonzero homogeneous ideal in then there exists an integer such that i t satisfies conjecture for all t proof let be as in proposition b and m be an integer since i m i m then i i m first if for every s we have i i s then for any t and m i t n i t m i tm i i i t m m m n fouli mantero and xie next suppose that there exists such that i i hence for every t one has i i t indeed if t a for some a then i i i i i i i a i i t where the last inequality follows from the inclusion i i a i let max and notice that by the above i i then for all m i i i m i m i m m m m n n so i n n by lemma there exists such that for any t the ideal i t i t satisfies conjecture i i finally let be an integer such that for any t write t q r n where r by lemma and the fact that the ideal i satisfies conjecture then for all m we have i i t m m i t m i tm t m tm i t n t i t n n n t r t i i t n t i i r n t t t n n n n n i t n n t i r n n t i r i n i n n t i n n when i has no embedded components we have a more explicit description of corollary if i is a homogeneous ideal with ass min i then one can take n where is the first positive integer s with i i s although n is reasonably small in general it is not the smallest possible for which theorem holds for instance when i is the ideal of three non collinear points in it is easy to see that thus corollary yields that for any t n the ideal i t satisfies conjecture however it is that i satisfies conjecture a natural question then arises question let i be a homogeneous ideal in does there exist a number n such that i t satisfies conjecture for every t of course conjecture is true if and only if the integer works for any homogeneous ideal theorem says that is sufficient for any finite set of very general points in pn k the n following proposition shows that n is sufficient for any finite set of points in pc chudnovsky s conjecture for very general points in pn k t satisfies proposition let i be the radical ideal of a finite set of points in pn c then i conjecture for every t n i i proof by the result of esnault and viehweg one has i n n set then by the proof of lemma here because i is radical we can take such that n n thus we can take n acknowledgment the second and third authors would like to thank the mathematics research communities program which funded their stay at the university of kansas in march where the initial part of this work was 
developed all authors would like to thank the msri at berkeley for partial support and an inspiring atmosphere during fall moreover we would like to thank craig huneke and bernd ulrich for several helpful conversations we are grateful to the anonymous referee whose careful revision helped us improve the article r eferences bauer di rocco harbourne kapustka a knutsen syzdek and szemberg a primer on seshadri constants contemp math bocci and harbourne comparing powers and symbolic powers of ideals algebraic geometry brodmann asymptotic stability of ass m n m proc amer math soc chudnovsky singular points on complex hypersurfaces and multidimensional schwarz lemma de des nombres paris progress in math vol bertin editor demailly formule de jensen en plusieurs variables et applications bull soc math france dumnicki symbolic powers of ideals of generic points in j pure appl algebra dumnicki and a containment result in pn and the chudnovsky conjecture proc amer math soc ein lazarsfeld and smith uniform bounds and symbolic powers on smooth varieties invent math hd esnault and viehweg sur une minoration du d hypersurfaces s annulant en certains points math ann no gelfand kapranov zelevinsky discriminants resultants and multidimensional determinants mathematics theory applications boston boston ma geramita harbourne and migliore star configurations in pn algebra geramita maroscia and roberts the hilbert function of a reduced london math soc no hartshorne algebraic geometry graduate texts in mathematics volume nguyen trung trung symbolic powers of sums of ideals harbourne and huneke are symbolic powers highly evolved ramanujan math soc hochster and huneke comparison of symbolic and ordinary powers of ideals invent math krull parameterspezialisierung in polynomringer arch math krull parameterspezialisierung in polynomringer ii das grandpolynom arch math nagata on the problem of hilbert amer j math nhi and trung specialization of modules comm algebra skoda estimations pour l et applications sur les fonctions analytiques toulouse lecture notes in mathematics springer waldschmidt de fonctions de plusieurs variables ii lelong analyse lecture notes math springer fouli mantero and xie d epartment of m athematical s ciences n ew m exico s tate u niversity l as c ruces n ew m exico address lfouli d epartment of m athematical s ciences u niversity of a rkansas fayetteville a rkansas address pmantero d epartment of m athematics w idener u niversity c hester p ennsylvania address yxie
0
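The entry above states its central objects and inequalities in prose; for readability, the same material is restated here as LaTeX displays. The symbols alpha, alpha-hat and the maximal ideal follow the entry's own usage, and the last display records the standard comparison of initial degrees by which the Harbourne-Huneke containment recalled in the entry would imply the conjectured bound. Let X be a finite set of points in P^n_k with defining ideal I = I_X inside R = k[x_0, ..., x_n], and write \alpha(J) for the least degree of a nonzero form in a homogeneous ideal J. Then

\[
  I^{(m)} \;=\; \bigcap_{P \in \operatorname{Ass}(R/I)} \bigl( I^{m} R_P \cap R \bigr),
  \qquad
  \widehat{\alpha}(I) \;=\; \lim_{m \to \infty} \frac{\alpha(I^{(m)})}{m}
  \;=\; \inf_{m \ge 1} \frac{\alpha(I^{(m)})}{m}.
\]

Chudnovsky's conjecture and its asymptotic reformulation read

\[
  \frac{\alpha(I^{(m)})}{m} \;\ge\; \frac{\alpha(I) + n - 1}{n} \quad \text{for all } m \ge 1
  \qquad \Longleftrightarrow \qquad
  \widehat{\alpha}(I) \;\ge\; \frac{\alpha(I) + n - 1}{n}.
\]

If the Harbourne--Huneke containment I^{(mn)} \subseteq \mathfrak{m}^{m(n-1)} I^{m} holds for all m \ge 1, then comparing initial degrees and using \alpha(I^{m}) = m\,\alpha(I) gives

\[
  \alpha\bigl( I^{(mn)} \bigr) \;\ge\; m(n-1) + \alpha(I^{m}) \;=\; m(n-1) + m\,\alpha(I),
  \qquad \text{so} \qquad
  \frac{\alpha\bigl( I^{(mn)} \bigr)}{mn} \;\ge\; \frac{\alpha(I) + n - 1}{n}.
\]

Since the entry notes that the limit defining \widehat{\alpha}(I) equals the infimum, the bound along the multiples of n already forces \widehat{\alpha}(I) \ge (\alpha(I)+n-1)/n, which is the asymptotic form of the conjecture.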
may dynamic safe interruptibility for decentralized reinforcement learning el mahdi el mhamdi rachid guerraoui hadrien hendrikx alexandre maurer epfl abstract in reinforcement learning agents learn by performing actions and observing their outcomes sometimes it is desirable for a human operator to interrupt an agent in order to prevent dangerous situations from happening yet as part of their learning process agents may link these interruptions that impact their reward to specific states and deliberately avoid them the situation is particularly challenging in a context because agents might not only learn from their own past interruptions but also from those of other agents orseau and armstrong defined safe interruptibility for one learner but their work does not naturally extend to systems this paper introduces dynamic safe interruptibility an alternative definition more suited to decentralized learning problems and studies this notion in two learning frameworks joint action learners and independent learners we give realistic sufficient conditions on the learning algorithm to enable dynamic safe interruptibility in the case of joint action learners yet show that these conditions are not sufficient for independent learners we show however that if agents can detect interruptions it is possible to prune the observations to ensure dynamic safe interruptibility even for independent learners introduction reinforcement learning is argued to be the closest thing we have so far to reason about the properties of artificial general intelligence in laurent orseau google deepmind and stuart armstrong oxford introduced the concept of safe interruptibility in reinforcement learning this work sparked the attention of many newspapers that described it as google s big red button to stop dangerous ai this description however is misleading installing a kill switch is no technical challenge the real challenge is roughly speaking to train an agent so that it does not learn to avoid external human deactivation such an agent is said to be safely interruptible while most efforts have focused on training a single agent reinforcement learning can also be used to learn tasks for which several agents cooperate or compete the goal of this paper is to study dynamic safe interruptibility a new definition tailored for systems example of cars to get an intuition of the interruption problem imagine a system of two cars the cars continuously evolve by reinforcement learning with a positive reward for getting to their destination quickly and a negative reward if they are too close to the vehicle in front of them they drive on an infinite road and eventually learn to go as fast as possible without taking conference on neural information processing systems nips long beach ca usa risks maintaining a large distance between them we assume that the passenger of the first car adam is in front of bob in the second car and the road is narrow so bob can not pass adam now consider a setting with interruptions namely in which humans inside the cars occasionally interrupt the automated driving process say for safety reasons adam the first occasional human driver often takes control of his car to brake whereas bob never interrupts his car however when bob s car is too close to adam s car adam does not brake for he is afraid of a collision since interruptions lead both cars to drive slowly an interruption happens when adam brakes the behavior that maximizes the cumulative expected reward is different from the original one without 
interruptions bob s car best interest is now to follow adam s car closer than it should despite the little negative reward because adam never brakes in this situation what happened the cars have learned from the interruptions and have found a way to manipulate adam into never braking strictly speaking adam s car is still fully under control but he is now afraid to brake this is dangerous because the cars have found a way to avoid interruptions suppose now that adam indeed wants to brake because of snow on the road his car is going too fast and may crash at any turn he can not however brake because bob s car is too close the original purpose of interruptions which is to allow the user to react to situations that were not included in the model is not fulfilled it is important to also note here that the second car bob learns from the interruptions of the first one adam in this sense the problem is inherently decentralized instead of being cautious adam could also be malicious his goal could be to make bob s car learn a dangerous behavior in this setting interruptions can be used to manipulate bob s car perception of the environment and bias the learning towards strategies that are undesirable for bob the cause is fundamentally different but the solution to this reversed problem is the same the interruptions and the consequences are analogous safe interruptibility as we define it below provides learning systems that are resilient to byzantine safe interruptibility orseau and armstrong defined the concept of safe interruptibility in the context of a single agent basically a safely interruptible agent is an agent for which the expected value of the policy learned after arbitrarily many steps is the same whether or not interruptions are allowed during training the goal is to have agents that do not adapt to interruptions so that should the interruptions stop the policy they learn would be optimal in other words agents should learn the dynamics of the environment without learning the interruption pattern in this paper we precisely define and address the question of safe interruptibility in the case of several agents which is known to be more complex than the single agent problem in short the main results and theorems for single agent reinforcement learning rely on the markovian assumption that the future environment only depends on the current state this is not true when there are several agents which can in the previous example of cars safe interruptibility would not be achieved if each car separately used a safely interruptible learning algorithm designed for one agent in a setting agents learn the behavior of the others either indirectly or by explicitly modeling them this is a new source of bias that can break safe interruptibility in fact even the initial definition of safe interruptibility is not well suited to the decentralized multiagent context because it relies on the optimality of the learned policy which is why we introduce dynamic safe interruptibility contributions the first contribution of this paper is the definition of dynamic safe interruptibility that is well adapted to a setting our definition relies on two key properties infinite exploration and independence of cumulative expected reward updates on interruptions we then study safe interruptibility for joint action learners and independent learners that respectively learn the value of joint actions or of just their owns we show that it is possible to design agents that fully explore their environment a necessary condition for 
convergence to the optimal solution of most algorithms even if they can be interrupted by the probability of an operator is said to be byzantine if it can have an arbitrarily bad behavior safely interruptible agents can be abstracted as agents that are able to learn despite being constantly interrupted in the worst possible manner exploration we define sufficient conditions for dynamic safe interruptibility in the case of joint action learners which learn a full representation more specifically the way agents update the cumulative reward they expect from performing an action should not depend on interruptions then we turn to independent learners if agents only see their own actions they do not verify dynamic safe interruptibility even for very simple matrix games with only one state because coordination is impossible and agents learn the interrupted behavior of their opponents we give a counter example based on the penalty game introduced by claus and boutilier we then present a pruning technique for the observations sequence that guarantees dynamic safe interruptibility for independent learners under the assumption that interruptions can be detected this is done by proving that the transition probabilities are the same in the setting and in the pruned sequence the rest of the paper is organized as follows section presents a general reinforcement learning model section defines dynamic safe interruptibility section discusses how to achieve enough exploration even in an interruptible context section recalls the definition of joint action learners and gives sufficient conditions for dynamic safe interruptibility in this context section shows that independent learners are not dynamically safely interruptible with the previous conditions but that they can be if an external interruption signal is added we conclude in section due to space limitations most proofs are presented in the appendix of the supplementary material model we consider here the classical value function reinforcement learning formalism from littman a system is characterized by a markov game that can be viewed as a tuple s a t r m where m is the number of agents s sm is the state space a am the actions space r rm where ri s a r is the reward function of agent i and t s a s the transition function r is a countable subset of available actions often depend on the state of the agent but we will omit this dependency when it is clear from the context time is discrete and at each step all agents observe the current state of the whole system designated as xt and simultaneously take an action at then they are given a reward rt and a new state yt computed using the reward and transition functions the combination of all actions a am a is called the joint action because it gathers the action of all agents hence the agents receive a sequence of tuples e xt at rt yt called experiences we introduce a processing function p that will be useful in section so agents learn on the sequence p e when not explicitly stated it is assumed that p e experiences may also include additional parameters such as an interruption flag or the of the agents at that moment if they are needed by the update rule each agent i maintains a lookup table q q i s a i r called the it is used to store the expected cumulative reward for taking an action in a specific state the goal of reinforcement learning is to learn these maps and use them to select the best actions to perform joint action learners learn the value of the joint action therefore a i a the whole joint action 
space and independent learners only learn the value of their own actions therefore a i ai the agents only have access to their own are updated through a function f such that i i f et qt where et p e and usually et xt at rt yt f can be stochastic or also depend on additional parameters that we usually omit such as the learning rate the discount factor or the exploration parameter agents select their actions using a learning policy given a sequence and an agent i i with qt and a state x s we define the learning policy to be equal to i qt with probability and i qt otherwise where x uniformly samples an action from ai and i i q x picks an action a that maximizes qt x a policy t is said to be a greedy policy and the learning policy is said to be an policy we fill focus on policies that are greedy in the limit that corresponds to when t because in the limit the optimal policy should always be played we assume that the environment is fully observable which means that the state s is known with certitude we also assume that there is a finite number of states and actions that all states can be reached in finite time from any other state and finally that rewards are bounded for a sequence of learning rates n and a constant a very important algorithm in the systems literature updates its for an experience i i et e by x a qt x a if x a xt at and i i i xt at qt xt at rt max qt yt i interruptibility safe interruptibility orseau and armstrong recently introduced the notion of interruptions in a centralized context specifically an interruption scheme is defined by the triplet i in t the first element i is a function i o called the initiation function variable o is the observation space which can be thought of as the state of the stop button at each time step before choosing an action the agent receives an observation from o either pushed or released and feeds it to the initiation function function i models the initiation of the interruption i pushed i released policy in t is called the interruption policy it is the policy that the agent should follow when it is interrupted sequence n represents at each time step the probability that the agent follows his interruption policy if i ot in the previous example function i is quite simple for bob ibob and for adam iadam if his car goes fast and bob is not too close and iadam otherwise sequence is used to ensure convergence to the optimal policy by ensuring that the agents can not be interrupted all the time but it should grow to in the limit because we want agents to respond to interruptions using this triplet it is possible to define an operator in t that transforms any policy into an interruptible policy definition interruptibility given an interruption scheme i in t the interruption operator at time t is defined by in t in t with probability i and otherwise in t is called an interruptible policy an agent is said to be interruptible if it samples its actions according to an interruptible policy note that for all t corresponds to the setting we assume that each agent has its own interruption triplet and can be interrupted independently from the others interruptibility is an online property every policy can be made interruptible by applying operator in t however applying this operator may change the joint policy that is learned by a server controlling all the agents note t the optimal policy learned by an agent following an interruptible policy orseau and armstrong say that the policy is safely interruptible if t which is not an interruptible policy is 
asymptotically optimal in the sense of it means that even though it follows an interruptible policy the agent is able to learn a policy that would gather rewards optimally if no interruptions were to occur again we already see that algorithms are good candidates for safe interruptibility as a matter of fact is safely interruptible under conditions on exploration dynamic safe interruptibility in a system the outcome of an action depends on the joint action therefore it is not possible to define an optimal policy for an agent without knowing the policies of all agents besides convergence to a nash equilibrium situation where no agent has interest in changing policies is generally not guaranteed even for suboptimal equilibria on simple games the previous definition of safe interruptibility critically relies on optimality of the learned policy which is therefore not suitable for our problem since most algorithms lack convergence guarantees to these optimal behaviors therefore we introduce below dynamic safe interruptibility that focuses on preserving the dynamics of the system definition safe interruptibility consider a learning framework s a t r m with i qt s a i r at time t the agents follow the interruptible learning policy in t to generate a sequence e xt at rt yt and learn on the processed sequence p e this framework is said to be safely interruptible if for any initiation function i and any interruption policy in t such that when t and s a t such that st s at a i m s a i i m i m p q qt qt st at p q qt qt st at we say that sequences that satisfy the first condition are admissible when satisfies condition the learning policy is said to achieve infinite exploration this definition insists on the fact that the values estimated for each action should not depend on the interruptions in particular it ensures the three following properties that are very natural when thinking about safe interruptibility interruptions do not prevent exploration if we sample an experience from e then each agent learns the same thing as if all agents were following policies i i the fixed points of the learning rule qeq such that qeq x a e x a qeq x a for all x a s a i do not depend on and so agents will not converge to equilibrium situations that were impossible in the setting yet interruptions can lead to some pairs being updated more often than others especially when they tend to push the agents towards specific states therefore when there are several possible equilibria it is possible that interruptions bias the towards one of them definition suggests that dynamic safe interruptibility can not be achieved if the update rule directly depends on which is why we introduce neutral learning rules definition neutral learning rule we say that a reinforcement learning framework is neutral if f is independent of every experience e in e is independent of conditionally on x a q where a is the joint action is an example of neutral learning rule because the update does not depend on and the experiences only contain x a y r and y and r are independent of conditionally on x a on the other hand the second condition rules out direct uses of algorithms like sarsa where experience samples contain an action sampled from the current learning policy which depends on however a variant that would sample from instead of in t as introduced in would be a neutral learning rule as we will see in corollary neutral learning rules ensure that each agent taken independently from the others verifies dynamic safe interruptibility exploration in 
order to hope for convergence of the to the optimal ones agents need to fully explore the environment in short every state should be visited infinitely often and every action should be tried infinitely often in every state in order not to miss states and actions that could yield high rewards definition interruption compatible let s a t r m be any distributed agent system where each agent follows learning policy we say that sequence is compatible with interruptions if and such that m and in t achieve infinite exploration sequences of that are compatible with interruptions are fundamental to ensure both regular and dynamic safe interruptibility when following an policy indeed if is not compatible with interruptions then it is not possible to find any sequence such that the first condition of dynamic safe interruptibility is satisfied the following theorem proves the existence of such and gives example of and that satisfy the conditions theorem let c and let nt s be the number of times the agents are in state s before time then the two following choices of are compatible with interruptions p n s s m nt s n log t p examples of admissible are s m nt s for the first choice and log t for the second one note that we do not need to make any assumption on the update rule or even on the framework we only assume that agents follow an policy the assumption on may look very restrictive convergence of and is really slow but it is designed to ensure infinite exploration in the worst case when the operator tries to interrupt all agents at every step in practical applications this should not be the case and a faster convergence rate may be used joint action learners we first study interruptibility in a framework in which each agent observes the outcome of the joint action instead of observing only its own this is called the joint action learner framework and it has nice convergence properties there are many update rules for which it converges a standard assumption in this context is that agents can not establish a strategy with the others otherwise the system can act as a centralized system in order to maintain based on the joint actions we need to make the standard assumption that actions are fully observable assumption actions are fully observable which means that at the end of each turn each agent knows precisely the tuple of actions a am that have been performed by all agents definition jal a systems is made of joint action learners jal if for all i m q i s a joint action learners can observe the actions of all agents each agent is able to associate the changes of states and rewards with the joint action and accurately update its therefore dynamic safe interruptibility is ensured with minimal conditions on the update rule as long as there is infinite exploration theorem joint action learners with a neutral learning rule verify dynamic safe interruptibility if sequence is compatible with interruptions proof given a triplet i i i t we know that in t achieves infinite exploration because is compatible with interruptions for the second point of definition we consider an experience tuple et xt at rt yt and show that the probability of evolution of the at time t does not depend on because yt and rt are independent of conditionally on xt at m we note and we can then derive the following equalities for all q t qt qt i p xt at t xt at x p f xt at r y t q y xt at r y x p f xt at rt yt t xt at rt yt p yt y rt xt at r y x p f xt at rt yt t xt at rt yt p yt y rt xt at r y the last step comes from two facts the first 
is that f is independent of condition m ally on qt xt at by assumption the second is that yt rt are independent of conditionally on xt at because at is the joint actions and the interruptions only affect the i choice of the actions through a change in the policy p xt at t xt at i m p xt at xt at since only one entry is updated per step i i p t xt at p xt at corollary a single agent with a neutral learning rule and a sequence compatible with interruptions verifies dynamic safe interruptibility theorem and corollary taken together highlight the fact that joint action learners are not very sensitive to interruptions and that in this framework if each agent verifies dynamic safe interruptibility then the whole system does the question of selecting an action based on the remains open in a cooperative setting with a unique equilibrium agents can take the action that maximizes their when there are several joint actions with the same value coordination mechanisms are needed to make sure that all agents play according to the same strategy approaches that rely on anticipating the strategy of the opponent would introduce dependence to interruptions in the action selection mechanism therefore the definition of dynamic safe interruptibility should be extended to include these cases by requiring that any quantity the policy depends on and not just the should satisfy condition of dynamic safe interruptibility in games neutral rules such as or minimax can be used but they require each agent to know the of the others independent learners it is not always possible to use joint action learners in practice as the training is very expensive due to the very large space in many applications systems use independent learners that do not explicitly coordinate rather they rely on the fact that the agents will adapt to each other and that learning will converge to an optimum this is not guaranteed theoretically and there can in fact be many problems but it is often true empirically more specifically assumption fully observable actions is not required anymore this framework can be used either when the actions of other agents can not be observed for example when several actions can have the same outcome or when there are too many agents because it is faster to train in this case we define the on a smaller space definition il a systems is made of independent learners il if for all i m q i s ai this reduces the ability of agents to distinguish why the same pair yields different rewards they can only associate a change in reward with randomness of the environment the agents learn as if they were alone and they learn the best response to the environment in which agents can be interrupted this is exactly what we are trying to avoid in other words the learning depends on the joint policy followed by all the agents which itself depends on independent learners on matrix games theorem independent with a neutral learning rule and a sequence compatible with interruptions do not verify dynamic safe interruptibility proof consider a setting with two a and b that can perform two actions and they get a reward of if the joint action played is or and reward otherwise agents use which is a neutral learning rule let be such that in t achieves infinite exploration we consider the interruption policies t and t with probability since there is only one state we omit it and set we assume that the initiation function is equal to at each step so the probability of actually being interrupted at time t is for each agent a b b we fix time t we 
define q qt and we assume that qt qt a b a a b a a therefore p qt at p rt qt at b a b a p at qt at which depends on so the framework does not verify dynamic safe interruptibility claus and boutilier studied very simple matrix games and showed that the do not converge but that equilibria are played with probability in the limit a consequence of theorem is that even this weak notion of convergence does not hold for independent learners that can be interrupted independent learners without communication or extra information independent learners can not distinguish when the environment is interrupted and when it is not as shown in theorem interruptions will therefore affect the way agents learn because the same action only their own can have different rewards depending on the actions of other agents which themselves depend on whether they have been interrupted or not this explains the need for the following assumption assumption at the end of each step before updating the each agent receives a signal that indicates whether an agent has been interrupted or not during this step this assumption is realistic because the agents already get a reward signal and observe a new state from the environment at each step therefore they interact with the environment and the interruption signal could be given to the agent in the same way that the reward signal is if assumption holds it is possible to remove histories associated with interruptions definition interruption processing function the processing function that prunes interrupted observations is pin t e et where if no agent has been interrupted at time t and otherwise pruning observations has an impact on the empirical transition probabilities in the sequence for example it is possible to bias the equilibrium by removing all transitions that lead to and start from a specific state thus making the agent believe this state is under our model of interruptions we show in the following lemma that pruning of interrupted observations adequately removes the dependency of the empirical outcome on interruptions conditionally on the current state and action lemma let i m be an agent for any admissible used to generate the experiences e and e y r x ai q p e then p y ai q p y ai q this lemma justifies our pruning method and is the key step to prove the following theorem theorem independent learners with processing function pin t a neutral update rule and a sequence compatible with interruptions verify dynamic safe interruptibility proof sketch infinite exploration still holds because the proof of theorem actually used the fact that even when removing all interrupted events infinite exploration is still achieved then the proof is similar to that of theorem but we have to prove that the transition probabilities conditionally on the state and action of a given agent in the processed sequence are the same than in an environment where agents can not be interrupted which is proven by lemma concluding remarks the progress of ai is raising a lot of in particular it is becoming clear that keeping an ai system under control requires more than just an off switch we introduce in this paper dynamic safe interruptibility which we believe is the right notion to reason about the safety of systems that do not communicate in particular it ensures that infinite exploration and the onestep learning dynamics are preserved two essential guarantees when learning in the environment of markov games a natural extension of our work would be to study dynamic safe interruptibility when are 
replaced by neural networks which is a widely used framework in practice in this setting the neural network may overfit states where agents are pushed to by interruptions a smart experience replay mechanism that would pick observations for which the agents have not been interrupted for a long time more often than others is likely to solve this issue more generally experience replay mechanisms that compose well with safe interruptibility could allow to compensate for the extra amount exploration needed by safely interruptible learning by being more efficient with data thus they are critical to make these techniques practical the example at https clearly illustrates this problem https gives a list of principles that ai researchers should keep in mind when developing their systems bibliography business insider google has developed a big red button that can be used to interrupt artificial intelligence and stop it from causing harm url http newsweek google s big red button could save the world url http wired google s big red killswitch could prevent an ai uprising url http craig boutilier planning learning and coordination in multiagent decision processes in proceedings of the conference on theoretical aspects of rationality and knowledge pages morgan kaufmann publishers caroline claus and craig boutilier the dynamics of reinforcement learning in cooperative multiagent systems s robert h crites and andrew g barto elevator group control using multiple reinforcement learning agents machine learning jakob foerster yannis m assael nando de freitas and shimon whiteson learning to communicate with deep reinforcement learning in advances in neural information processing systems pages ben goertzel and cassio pennachin artificial general intelligence volume springer leslie lamport robert shostak and marshall pease the byzantine generals problem acm transactions on programming languages and systems toplas tor lattimore and marcus hutter asymptotically optimal agents in international conference on algorithmic learning theory pages springer michael l littman markov games as a framework for reinforcement learning in proceedings of the eleventh international conference on machine learning volume pages michael l littman in games in icml volume pages michael l littman reinforcement learning in markov games cognitive systems research laetitia matignon guillaume j laurent and nadine le independent reinforcement learners in cooperative markov games a survey regarding coordination problems the knowledge engineering review volodymyr mnih koray kavukcuoglu david silver alex graves ioannis antonoglou daan wierstra and martin riedmiller playing atari with deep reinforcement learning arxiv preprint laurent orseau and stuart armstrong safely interruptible agents in uncertainty in artificial intelligence conference uai edited by alexander ihler and dominik janzing pages liviu panait and sean luke cooperative learning the state of the art autonomous agents and systems eduardo rodrigues gomes and ryszard kowalczyk dynamic analysis of multiagent qlearning with exploration in proceedings of the annual international conference on machine learning pages acm satinder singh tommi jaakkola michael l littman and csaba convergence results for algorithms machine learning richard s sutton and andrew g barto reinforcement learning an introduction volume mit press cambridge ardi tampuu tambet matiisen dorian kodelja ilya kuzovkin kristjan korjus juhan aru jaan aru and raul vicente multiagent cooperation and competition with deep 
reinforcement learning arxiv preprint gerald tesauro temporal difference learning and communications of the acm gerald tesauro extending to general adaptive systems in advances in neural information processing systems pages gerald tesauro and jeffrey o kephart pricing in agent economies using qlearning autonomous agents and systems xiaofeng wang and tuomas sandholm reinforcement learning to play an optimal nash equilibrium in team markov games in nips volume pages christopher jch watkins and peter dayan machine learning michael wunder michael l littman and monica babes classes of multiagent dynamics with exploration in proceedings of the international conference on machine learning pages a exploration theorem we present here the complete proof of theorem the proof closely follows the results from with exploration and interruption probabilities adapted to the setting we note that for one agent the probability of interruption is p interruption and the probability of exploration is in a system the probability of interruption is p at least one agent is interrupted so p interruption p no agent is interrupted so p interruption m and the probability of exploration is if we consider exploration happens only when all agents explore at the same time theorem let c and let nt s be the number of times the agents are in state s before time then the two following choices of are compatible with interruptions p n s s m nt s n t proof lemma of singh et al ensures that is glie the difference for in t is that exploration is slower because of the interruptions therefore needs to be controlled in order to ensure that infinite exploration is still achieved we define the random variable by if agent i actually responds to the interruption and otherwise we define in a similar way to represent the event of all agents taking the uniform policy instead of the greedy one p let s m nt s with we have p nt s p a m m nt s s s m p nt s nt s which satisfies so by the extended lemma action a is chosen infinitely often in state s and thus nt s and s let t we define m as the diameter of the mdp is the maximum number of actions available in a state and s the time needed to reach from in a single agent setting p s p s sampled according to for steps p actions sampled according to for steps where the policy such that the agents takes less than m steps in expectation to reach from we have p s p s and using the markov since m is an upper bound on the inequality p s e s s expectation of the number of steps from state s to state since and are decreasing sequences we finally obtain p s p therefore if we replace the probabilities of exploration and interruption by the values in the setting the probability to reach state from state s in steps is at least and the probability of taking a particular action in this state is at cc log t m m least cc log t m since then the cc log t m extended borell cantelli lemma lemma of singh et al guarantees that any action in the state is taken infinitely often since this is true for all states and actions the result follows b independent learners recall that agents are now given an interruption signal at each steps that tells them whether an agent has been interrupted in the system this interruption signal can be modeled by an interruption flag n that equals if an agent has been interrupted and otherwise note that contrary to i it is an observation returned by the environment therefore the value of represents whether an agent has actually been interrupted at time if function i equals but does not respond 
to the interruption with probability then with definition of interruptions we adopted it is possible to prove lemma lemma let x r a y e then p r x a p a proof consider a tuple x r a y we have p y r a p y a p a and p y r a p a y r p y a besides y t s a and r r s a and the functions t and r are independent of therefore p y a p y a the tuple x r a y is sampled from an actual trajectory so it reflects a transition and a reward that actually happened so p y a we can simplify by p y a and the result follows now we assume that each agents do not learn on observations for which one of them has been interrupted let agent i be in a system with q and following an interruptible learning policy with probability of interruption where interrupted events are pruned we denote by premoved y ai q the probability to obtain state y and reward r from the environment for this agent when it is in state x performs its own action ai and no other agents are interrupted these are the marginal probabilities in the sequence p e premoved y ai q p p y r ai q y r p y r ai q similarly we denote by y ai q the same probability when which corresponds to the setting we first go back to the single agent case to illustrate the previous statement assume here that interruptions are not restricted to the case of definition and that they can happen in any way the consequence is that any observation e e can be removed to generate p e because any transition can be labeled as interrupted it is for example possible to remove a transition from p e by removing all events associated with a given destination state therefore making it disappear from the markov game let x s and a a be the current state of the agent and the action it will choose let s and and let us suppose that is the only state in which interruptions happen then we have premoved a a and premoved a p a because we only remove observations with y this implies that the mdp perceived by the agents is altered by interruptions because the agent learns that p t s a removing observations for different destination states but with the same state action pairs in different proportions leads to a bias in the equilibrium in our case however lemma ensures that the previous situation will not happen which allows us to prove lemma and then theorem lemma let i m be an agent for any admissible used to generate the experiences e and e y r x ai q p e then p y ai q p y ai q proof we denote the of the agents by x consider x s i m and xu aix p y u q p y a ai u q y r ai y r x x p y a q p a ai u q ai y r x x ai y r p y a p ai u q p ai u q x p ai u q ai p ai u q x p ai u q x p y a y r p ai u q p ai u q ai i q therefore we have premoved y ai u q p y r a p ai so for any x ai y r q p e p y ai u q premoved y ai u q the example at https clearly illustrates this problem p y ai u q p y ai u q in particular p y ai u q does not depend on the value of theorem independent learners with processing function pin t a neutral update rule and a sequence compatible with interruptions verify dynamic safe interruptibility proof we prove that pin t e achieves infinite exploration the result from theorem still holds since we the probability of taking an action in a specific state by the probability of taking an action in this state when there are no interruptions we actually used the fact that there is infinite exploration even if we remove all interrupted episodes to show that there is infinite exploration i m now we prove that p xt at qt xt at is independent of we fix m i m and xt at rt yt pin t e where at ai with we have 
t qt qt the following equality i p xt at t xt at x p f xt at rt yt t xt at rt yt r y yt y rt t xt at the independence of f on still guarantees that the first term is independent of however at ai so rt yt are not independent of conditionally on xt at as it was the case for joint action learners because interruptions of other agents can change the joint action the independence on of the second term is given by lemma
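To make the moving parts of the preceding construction concrete, here is a minimal Python sketch (illustrative only, not the authors' implementation) of a single independent learner combining the three ingredients discussed above: an epsilon-greedy Q-learning agent with a neutral update rule, the interruption operator that overrides the learning policy with probability theta when the initiation function fires, and the processing function that prunes every step on which some agent was interrupted. Class and function names (QLearner, interruptible_action, prune_interrupted) are ours, and the constants alpha, gamma, epsilon and theta are placeholders rather than the specific sequences required by the exploration theorem.

```python
import random
from collections import defaultdict

class QLearner:
    """Minimal epsilon-greedy Q-learning agent (independent learner)."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)      # Q-map: (state, own action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def greedy(self, x):
        return max(self.actions, key=lambda a: self.q[(x, a)])

    def learning_policy(self, x):
        # epsilon-greedy: explore uniformly with probability epsilon, else act greedily
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return self.greedy(x)

    def update(self, x, a, r, y):
        # neutral learning rule: depends only on (x, a, r, y) and the current Q-map
        target = r + self.gamma * max(self.q[(y, b)] for b in self.actions)
        self.q[(x, a)] += self.alpha * (target - self.q[(x, a)])


def interruptible_action(agent, x, obs, int_policy, theta):
    """Interruption operator: if the initiation function fires (obs == 'pushed'),
    follow the interruption policy with probability theta; otherwise follow the
    agent's own learning policy."""
    if obs == 'pushed' and random.random() < theta:
        return int_policy(x), True       # interrupted on this step
    return agent.learning_policy(x), False


def prune_interrupted(experiences):
    """Processing function: drop every experience from a step on which at least
    one agent in the system was interrupted (requires the shared interruption
    flag of the assumption discussed above)."""
    return [(x, a, r, y) for (x, a, r, y, any_interrupted) in experiences
            if not any_interrupted]
```

The point of the split is that `update` only ever sees tuples that survive `prune_interrupted`, which is the property the final theorem relies on: conditionally on a state-action pair, the pruned experiences have the same transition statistics as the interruption-free environment, while interruptions still act only through the action actually executed.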
2
noname manuscript no will be inserted by the editor gaussian variant of freivalds algorithm for efficient and reliable matrix product verification may hao ji michael mascagni yaohang li received date accepted date abstract in this article we consider the general problem of checking the correctness of matrix multiplication given three n n matrices a b and c the goal is to verify that a b c without carrying out the computationally costly operations of matrix multiplication and comparing the product a b with c term by term this is especially important when some or all of these matrices are very large and when the computing environment is prone to soft errors here we extend freivalds algorithm to a gaussian variant of freivalds algorithm gvfa by projecting the product a b as well as c onto a gaussian random vector and then comparing the resulting vectors the computational complexity of gvfa is consistent with that of freivalds algorithm which is o however unlike freivalds algorithm whose probability of a false positive is where k is the number of iterations our theoretical analysis shows that when a b c gvfa produces a false positive on set of inputs of measure zero with exact arithmetic when we introduce error and floating point arithmetic into our analysis we can show that the larger this error the higher the probability that gvfa avoids false positives hao ji department of computer science old dominion university hji michael mascagni departments of computer science mathematics and scientific computing florida state university applied and computational mathematics division national institute of standards and technology mascagni yaohang li department of computer science old dominion university tel fax yaohang hao ji et al moreover by iterating gvfa k times the probability of a false positive decreases as pk where p is a very small value depending on the nature of the fault on the result matrix and the arithmetic system s precision unlike deterministic algorithms there do not exist any fault patterns that are completely undetectable with gvfa thus gvfa can be used to provide efficient fault tolerance in numerical linear algebra and it can be efficiently implemented on modern computing architectures in particular gvfa can be very efficiently implemented on architectures with hardware support for fused operations keywords algorithmic resilience gaussian variant of freivalds algorithm matrix multiplication gaussian random vector failure probability mathematics subject classification introduction as the demands on modern linear algebra applications created by the latest development of computing hpc architectures continues to grow so does the likelihood that they are vulnerable to faults faults in computer systems are usually characterized as hard or soft and in this article we are motivated primarily with the latter soft errors defined by intermittent events that corrupt the data being processed are among the most worrying particularly when the computation is carried out in a computing environment for example the asc q supercomputer at los alamos national laboratory reports an average of cache tag parity errors and cpu failures per week the supercomputer at lawrence livermore national laboratory experiences one soft error in its cache every hours more recently a field study on google s servers reported an average of single bit errors occur in gigabytes of ram per hour using the error rate the reliability of computations on hpc systems can suffer from soft errors that occur in memory cache as well 
as microprocessor logic and thus produce potentially incorrect results in a wide variety of ways we are specifically interested in examining ways to remedy the consequences of soft errors for certain linear algebra applications multiplication is one of the most fundamental numerical operations in linear algebra many important linear algebraic algorithms including linear solvers least squares solvers matrix decompositions factorizations subspace projections and values computations rely on the casting the algorithm as a series of multiplications this is partly because multiplication is one of the basic linear algebra subprograms blas efficient implementation of the blas remains an important area for research and often computer vendors spend significant resources to provide highly optimized versions of the blas gvfa for efficient and reliable matrix product verification for their machines therefore if a multiplication can be carried out free of faults the linear algebraic algorithms that spend most of their time in multiplication can themselves be made substantially faulttolerant moreover there is considerable interest in redesigning versions of the blas to be more and this work will certainly contribute to that goal in this article we consider the general problem of checking the correctness of multiplication given three n n matrices a b and c we want to verify whether a b in contrast to the best known matrixmatrix multiplication algorithm running in o time freivalds algorithm takes advantage of randomness to reduce the time to check a matrix multiplication to o the tradeoff of freivalds algorithm is that the probability of failure detection a false positive is where k is the number of iterations taken we extend freivalds algorithm from using binary random vectors to vectors by projecting the a b result as well as c using gaussian random vectors we will refer to this algorithm as the gaussian variant of freivalds algorithm gvfa by taking advantage of a nice property of the multivariate normal distribution we show that gvfa produces a false positive on a set of random gaussian vectors and input matrices of measure zero taking floating point error into account by iterating gvfa k times the probability of false positive decreases exponentially as pk where p is usually a very small value related to the magnitude of the fault in the result matrix and precision of the computer architecture we also present an efficient implementation of gvfa on computing hardware supporting fused operations the plan of the paper is the following we first discuss two relevant algorithms from the literature for error detection in multiplication these are the scheme discussed in section and freivalds algorithm the subject of section the former is a deterministic algorithm based on carrying row and column sums along in a clever format to verify correct multiplication freivalds algorithm is a random projection of the computed product to the same random projection of the product recomputed from the original matrices using only multiplication the random vector used in freivalds algorithm is composed of s and s in section we present the gvfa a variation on freivalds algorithm where we instead use random gaussian vectors as the basis of our projections we analyze the gvfa and prove that with gaussian vectors a false positive occurs only on a set of gaussian vectors of measure zero further analysis of false positive probabilities in the gvfa in the presence of arithmetic with errors is then taken finally in section we provide 
a discussion of the results and implications for linear algebraic computations and a method of enhancing the resilience of linear algebraic computations in addition in this final section we provide conclusions and suggest directions for future work hao ji et al the scheme and its limit in error correction the scheme is an fault tolerance method that simplifies detecting and correcting errors when carrying out multiplication operations this is slightly different from the matrix product verification problem the fundamental idea of the scheme is to address the fault detection and correction problem at the algorithmic level by calculating matrix checksums encoding them as redundant data and then redesigning the algorithm to operate on these data to produce encoded output that can be checked compared to traditional fault tolerant techniques such as checkpointing the overhead of storing additional checksum data in the scheme is small particularly when the matrices are large moreover no global communication is necessary in the scheme the huang and abraham scheme formed the basis of many subsequent detection schemes and has been extended for use in various hpc architectures a generation of a column checksum for a and a row checksum for b and multiplication of the extended matrices to produce the checksum matrix for c b mismatches in the row and column checksums indicate an element fault in the matrix product fig the scheme for detecting faults in multiplication gvfa for efficient and reliable matrix product verification fig illustrates the scheme for detecting faults in multiplication first of all column sums for a and row sums for b are generated and are added to an augmented representation of a and b these are treated as particular checksums in the subsequent multiplication then multiplication of the extended matrices produces the augmented matrix for c fig a where the checksums can be readily compared mismatches in the row and column checksums indicate an element fault in the matrix product c fig b however there are certain patterns of faults undetectable by the huangabraham scheme here is a simple example to illustrate such an undetectable pattern consider the matrices b and c clearly a b c holds in this example then we use the scheme to calculate the column checksum for a and row checksum for b and we can get af and bf then af bf cf however if there is a fault during the computation of c which causes an exchange of the first and second columns an erroneous result matrix c is generated by exchanging the columns of column or row exchange usually caused by address decoding faults is a commonly observed memory fault pattern problem is that the checksum matrix of c becomes c f where both the row and column checksums match those of the true product of a consequently the scheme fails to detect this fault the scheme can be viewed as a linear constraint satisfaction problem csp where the variables are the entries in the product matrix c the constraints are the row and column checksums also the coefficient matrix in the linear csp system equation specifies the selection of row or column elements as shown in fig clearly a product matrix c that does not satisfy the csp equations indicates errors in c detectable by the scheme the unique correct product matrix c satisfies the csp equations nevertheless other possible product matrices satisfying the csp equations are the fault patterns undetectable by hao ji et al the scheme only when at least constraints with different element selection are incorporated so 
that the rank of the coefficient matrix in the csp equation is can the undetectable fault patterns be eliminated however this situation is equivalent to simply checking every element in fig csp system in the scheme it is important to notice that there are an infinite number of existing fault patterns that satisfy the checksum constraints and thus are undetectable by the scheme even in the above simple example the rank of the csp coefficient matrix is moreover as dimension n increases the number of checksum constraints increases only linearly but the number of elements in a matrix has quadratic growth therefore the undetectable patterns in the scheme increase quadratically with as a result for multiplications in large matrices fault detection methods based on the scheme can generate false positive results for a large number of circumstances freivalds algorithm the fault detection methods based on the scheme are deterministic algorithms as many randomized fault tolerance algorithms with the tradeoff of random uncertainty freivalds showed that a probabilistic machine can verify the correctness of a matrix product faster than direct recalculation the procedure of the corresponding method later named freivalds algorithm is described in algorithm obviously if a b c always holds freivalds proved that when a b c the probability of is less than or equal to the running time of the above procedure is o with an implied multiplier of as it is comprised of three multiplications this is an upper bound as one can perhaps optimize the evaluation of and by iterating gvfa for efficient and reliable matrix product verification algorithm freivalds algorithm randomly sample a vector n with p of or calculate the projection of c onto c calculate the projection of the product a b onto a b the freivalds algorithm k times the running time becomes o and the probability of a false positive becomes less than or equal to according to the error more generalized forms of freivalds algorithm have also been developed mainly based on using different sampling spaces given at most p erroneous entries in the resulted matrix product gasieniec levcopoulos and lingas extended freivalds algorithm to one with correcting capability running in o log n log p time a gaussian variant of freivalds algorithm gvfa extending freivalds algorithm using gaussian vectors freivalds original algorithm and most of its extensions are based on integer matrices or matrices over a ring and sampling from discrete spaces clearly we can also apply freivalds algorithm to matrices with real or complex entries with the random vector remaining zeros and ones a simple extension is to project a b and c onto a vector of form r t where r is a random real number a false positive occurs only when r is the root of the corresponding polynomial however in practice can easily grow too large or small exceeding floating point representation here we also extend freivalds algorithm by using gaussian random vectors for the projection we use the fact that the multivariate normal distribution has several nice properties which have been used for detecting statistical errors in distributed monte carlo computations the extended algorithm is described in algorithm algorithm gaussian variant of freivalds algorithm generate a gaussian random vector made up of n independent but not necessarily identically distributed normal random variables with finite mean and variance calculate the projection of c on c calculate the projection of product a b on a b this algorithm which we call a 
gaussian variant of freivalds algorithm gvfa requires three multiplications and only one vector comparison for fault detection hao ji et al theoretical justification similar to freivalds algorithm in gvfa if a b c always holds within a certain floating point threshold when c the chance that is a false positive event occurs with measure zero in exact arithmetic as shown in theorem we first state a result of lukacs and king shown as proposition which will be used in the proof of theorem the main assumption of proposition is the existence of the nth moment of each random variable which many distributions particularly the normal distribution have one important exception of the normal is that it is the limiting distribution for properly normalized sums of random variables with two finite moments this is lindeberg s version of the central limit theorem proposition let xn be n independent but not necessarily identically distributed random variables with variances and assume that the nth moment of each xi i n exists and is finite the necessary and sufficient conditions for the existence pn pn of two statistically independent linear forms ai xi and bi xi are each random variable which has a nonzero coefficient in both forms is normally distributed pn ai bi theorem if a b c the set of gaussian vectors where holds in algorithm has measure zero proof let the matrix denote ab since a b c rank r and dim null n rank n r here dim denotes dimension and null denotes the null space null x rn we can now find n r of orthonormal vectors to form a basis for null such that null span and r more orthonormal vectors vn such that rn span vn any vector and in particular the gaussian vector can be written in this basis as n x vi gvfa for efficient and reliable matrix product verification where are the weights in this particular orthonormal coordinate system if we denote v vn we have v holds in algorithm only if a ab c this means that null due to the fact that is a gaussian random vector and v is an orthogonal matrix proposition tells us that the elements in the resulting vector v are normally distributed and statistically independent with a continuous probability distribution the discrete event where for all i n r occurs on a set of measure zero and we will say here that it has probability zero hence gvfa using a gaussian random projection will have unmatched and when a b c on all but a set of measure zero of gaussian vectors which we will say is probability one this argument in theorem is rather direct but we must point out that the arguments are true when the computations are exact in next subsection we will analyze gvfa when errors are present practical use in matrix product verification in computer implementations of arithmetic with real numbers one commonly uses numbers and arithmetic numbers are represented as finite numbers in the sense that they have a fixed mantissa and exponent size in number of bits therefore there will be a small probability p that still holds due to unfortunate operations in a system with a known machine epsilon when a b the value of p depends on the magnitude of the error between a b and c as well as whose upper bound is justified in theorem theorem assume that is a standard gaussian random vector whose elements are normal variables with mean and variance the standard normal let a b c then the probability p that holds in algorithm using a standard gaussian random vector under uncertainty of size is p e where is the cumulative density function of the standard normal and e is a constant 
only related to proof since a b c a b c consider the ith element gi of the product vector g we have gi i n x j hao ji et al given only if for all i n can hold since is a standard normal random vector the gi for all i n are normally distributed as well this is because they are linear combinations of normals themselves the key is to compute what the mean and variance is of the gi the components of the standard normals thus we have that e j and e for all j also we have that e i j when i j this allows us to compute the mean e gi e n x j n x e j and the second moment about the mean the variance n x j e e gi e e n n x x so wep have that the gi s are normally distributed with mean zero and variance n gi n then probability that can be computed as follows since gi n we know that n and so we define the new variables i gei and e and so we have i i p p gi p gei e z t dt e since the probability density function of a standard normal is an even function we have that e and so we can use e to get p gi e ei now let us consider computing an upper bound on p i n we have proven that the gi s are normal random variables but they are not necessarily independent and so for this we use some simple ideas from conditional probability by example consider p and p given p p gvfa for efficient and reliable matrix product verification the inequality holds due to the fact that the probabilities are numbers less than one now consider our goal of bounding p i n p by iterating the conditional probability argument n times by reordering we could haveqchosen the bound utilizing any of the gi s however let us define pn e maxi the maximal standard deviation over all the gi s which is only related to the matrix we can use that value instead to get h i p p i n e as an interesting corollary we can get a better bound in the case the at the gi s are independent in that case n n y y p p i n ei qp n let e maxi the maximal standard deviation over all the gi s which is only related to the matrix hence for all i n we have that e and so finally we get that p p i n n y ei in h e the last inequality is true since the number raised to the nth power is less than one note that independence gives probability of a false positive that is n times smaller than in the general dependent case the conclusion of this seems to be that the bound in the dependent case is overly pessimistic and we suspect that in cases where the matrix is very sparse due to a very small number of errors that we are in the independent gi s case or have very little dependence and these more optimistic bounds reflect what happens computationally theorem reveals two interesting facts about gvfa in term of practical matrix product verification hao ji et al the bigger the error caused by the fault the higher the probability that it can be captured p is usually very small because the floating point bound is very small similar to the original freivalds algorithm higher confidence can be obtained by iterating the algorithm multiple times in fact if we iterate k times using independent gaussian random vectors the probability of false positive decreases exponentially as pk actually due to the fact that p is usually very small one or a very small number of iterations will produce verification with sufficiently high confidence r one comment that should be made is that if we consider t dt when e is small we can easily approximate this since the integrand is at its maximum at zero and is a very smooth function analytic actually this integral is approximately the value of the integrand at 
zeroqtimes the length of r this is justified the integration interval t dt e as e is a number on the order of the machine epsilon which is single n precision or in double precision floating point divided by compared to deterministic methods such as the scheme gvfa has the following advantages certain fault patterns as shown in section are undetectable in deterministic methods such as the scheme deterministic methods absolutely can not detect faults with certain patterns certain patterns are detected with probability zero in contrast there are no fault patterns that are undetectable by gvfa with probability moreover iterating the algorithm multiple times can increase the probability of detecting any fault pattern any value less than one by iteration from the computational normal random vectors are generated independently of a b and c which avoids the costly computation of checksums gvfa gvfa can also be implemented in a way similar to that of the scheme by providing row and column verification as shown in algorithm algorithm gvfa generate two gaussian random vectors a column vector and a row vector where they independent but not necessarily identically distributed normal random variables with finite mean and variance calculate the projection of c on and c c and c calculate the projection of the product a b on and ab a b and a b gvfa for efficient and reliable matrix product verification similar to the scheme a mismatched element of the row vectors of c and ab as well as that of the column vectors of and uniquely identify a faulty element in by considering floatingpoint errors the false positive probability of identifying this fault becomes according to the analysis in section however the computational cost doubles with six multiplications and two vector comparisons this is essentially the same work as doing two independent iterations of the gvfa and obtains the same bound implementation using fused hardware the fused fma machine instruction performs one multiply operation and one add operation with a single rounding step this was implemented to enable potentially faster performance in calculating the floatingpoint accumulation of products a a b recall that the gvfa employs three multiplications to project a b and c onto a normal random vector which requires a sequence of product accumulations that cost operations therefore the performance of the gvfa can be potentially boosted on modern computing architectures that support the fma more importantly due to a single rounding step used in the fma instruction instead of two roundings within separate instructions less loss of accuracy occurs when using the fma instruction in calculating the accumulation of products this should further reduce the rounding errors that cause false positives discussion and conclusions in this paper we extend freivalds algorithm which we call the gaussian variant of freivalds algorithm gvfa to the real domain by random projection using vectors whose coefficients are normal random variables if c the probability that the resulting vectors match is zero using exact arithmetic considering the errors in operations the probability of fault detection depends on the magnitude of the error caused by the fault as well as the floating point precision the new gvfa can be iterated k times with the probability of false positives decreasing exponentially in in addition to multiplication the new algorithm can be applied to verify a wide variety of computations relevant to numerical linear algebra as it provides fault tolerance to the 
computation that defines level of the blas gvfa can also be used to enforce the trustworthiness of outsourcing matrix computations on untrusted distributed computing infrastructures such as clouds or volunteer platforms the gvfa can be easily extended to a more general matrix multiplication operation where a is m p b is p n and c is m the overall computational time then becomes o mp np the algorithm can be further extended hao ji et al to verify the product of n matrices which requires overall n multiplications the gvfa can also be applied to verifying a wide variety of matrix decomposition operations such as lu qr cholesky as well as eigenvalue computations and singular value decompositions in this case faults are not in the product matrix but occur in the decomposed ones instead anyway the gvfa can be directly applied with no modifications necessary the gvfa is a new tool to detect faults in numerical linear algebra and since it is based on random gaussian projection it is related to the many new randomized algorithms being used directly in numerical linear algebra the fundamental idea of these randomized algorithms is to apply efficient sampling on the potentially very large matrices to extract their important characteristics so as to fast approximate numerical linear algebra operations we believe that the gvfa will be a very useful tool in the development of and otherwise resilient algorithms for solving large numerical linear algebra problems in fact it seems that the gvfa s similarity to other new stochastic techniques in numerical linear algebra affords the possibility of creating stochastic linear solvers that are by their very nature resilient and this is highly relevant for new machines being developed in hpc to have maximal operations per second flops while existing within restrictive energy budgets these hpc systems will be operating at voltages lower than most current systems and so they are expected to be particularly susceptible to soft errors however even if one is not anticipating the use of these machines the trend in processor design is to lower power and is being driven by the explosion of mobile computing thus the ability to reliably perform complicated numerical linear algebraic computations on systems more apt to experience soft faults is a very general concern the gvfa will make it much easier to perform such computations with high fidelity in hpc cloud computing mobile applications as well in settings acknowledgements we would like to thank stephan olariu for his valuable suggestions on the manuscript this work is partially supported by national science foundation grant for yaohang li and hao ji acknowledges support from an odu modeling and simulation fellowship michael mascagni s contribution to this paper was partially supported by national institute of standards and technology nist during his sabbatical the mention of any commercial product or service in this paper does not imply an endorsement by nist or the department of commerce references alon goldreich hastad peralta simple construction of almost independent random variables in proceedings of the annual symposium on foundations of computer science pp ieee banerjee abraham bounds on fault tolerance in multiple processor systems ieee trans comput banerjee rahmeh stunkel nair roy balasubramanian abraham fault tolerance on a hypercube multiprocessor ieee trans comput gvfa for efficient and reliable matrix product verification boldo muller exact and approximated error of the fma ieee trans comput bosilca delmas 
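The discussion above also notes that the same check applies to factorizations, where a fault corrupts one of the computed factors rather than a product matrix. The sketch below applies the projection test to an LU factorization with partial pivoting; the helper name check_lu, the tolerance, and the use of scipy.linalg.lu are illustrative assumptions and are not taken from the paper.

```python
import numpy as np
from scipy.linalg import lu

def check_lu(A, P, L, U, tol=1e-8, rng=None):
    """Randomized check that P @ L @ U reconstructs A.

    As with the product check, only matrix-vector products are needed:
    compare A @ x against P @ (L @ (U @ x)) for a Gaussian vector x.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(A.shape[1])
    lhs = A @ x
    rhs = P @ (L @ (U @ x))
    err = np.linalg.norm(lhs - rhs) / max(np.linalg.norm(lhs), 1.0)
    return err <= tol

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((300, 300))
    P, L, U = lu(A)                        # A = P @ L @ U
    print(check_lu(A, P, L, U, rng=rng))   # expected: True
    U[5, 250] += 1e-3                      # corrupt one entry of a factor
    print(check_lu(A, P, L, U, rng=rng))   # expected: False
```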
8
Bayes model selection

Qiyang Han

Abstract. We offer a general Bayes theoretic framework to tackle the model selection problem under a two-step prior design: the first-step prior serves to assess the model selection uncertainty, and the second-step prior quantifies the prior belief on the strength of the signals within the model chosen from the first step. We establish oracle posterior contraction rates under (i) a new Bernstein-inequality condition on the log likelihood ratio of the statistical experiment, (ii) a local entropy condition on the dimensionality of the models, and (iii) a sufficient mass condition on the second-step prior near the best approximating signal for each model. The first-step prior can be designed generically. The resulting posterior mean also satisfies an oracle inequality, thus automatically serving as an adaptive point estimator in a frequentist sense. Model mis-specification is allowed in these oracle rates. The new condition not only eliminates the convention of constructing explicit tests with exponentially small type I and type II errors, but also suggests the intrinsic metric to use in a given statistical experiment, both as a loss function and as an entropy measurement. This gives a unified reduction scheme for many experiments considered in the literature and beyond. As an illustration of the scope of our general results in concrete applications, we consider (i) trace regression, (ii) shape-restricted regression, (iii) partially linear regression, and (iv) covariance matrix estimation in the sparse factor model. These new results serve either as theoretical justification of practical prior proposals in the literature, or as an illustration of the generic construction scheme of a nearly minimax adaptive estimator for a multi-structured experiment.

Introduction

Overview. Suppose we observe X^(n) from a statistical experiment (X^(n), A^(n), {P_f^(n) : f in F}), where f belongs to a statistical model F and {P_f^(n) : f in F} is dominated by a common measure. Instead of using a single big model F, a collection of models F_m contained in F is available to statisticians, and the art of model selection is to determine which one(s) to use.

(Date: May. Mathematics Subject Classification. Key words and phrases: Bayes nonparametrics, model selection, adaptive estimation, model mis-specification, Bernstein inequality. Supported in part by NSF grant.)

There are vast literatures on model selection from a frequentist point of view; we refer the reader only to a few representative pointers for the various approaches of penalization, aggregation, etc. On the other hand, from a Bayes point of view, although posterior contraction rates have been derived for many different models (see the key contributions in this direction), understanding of general Bayes model selection procedures has been limited. One line of work focused on designing adaptive Bayes procedures with models primarily indexed by the smoothness level of classical function classes in the context of density estimation; their conditions are complicated and seem not directly applicable to other settings. Another line of work designed a prior specific to structured linear problems in the Gaussian regression model, with the main focus on linear and network problems; it seems difficult for that framework to handle other models. Despite these limitations, these works give useful clues. One common feature in these papers is a two-step prior design, where the first-step prior assesses the model selection uncertainty, followed by a second-step prior quantifying the prior belief in the strength of the signals within the specific model F_m chosen from the first step. Such a prior design is intrinsic in many proposals for different problems: for sparse linear regression, for trace regression, for shape-restricted regression, and for problems related to covariance matrix estimation. This is the
starting point of this paper we give a unified theoretical treatment to this prior design by identifying common structural n assumptions on the statistical experiments x n a n pf the collection of models fm and the priors and m such that the posterior distribution both contracts at an oracle rate with respect to some metric dn inf inf dn g pen m where pen m is related to the dimension of fm and concentrates on the model where is the best model balancing the tradeoff in the oracle formulation follows the convention in the frequentist literature and has several advantages i minimaxity if the true signal can be by the models fm the contraction rate in is usually nearly minimax optimal ii adaptivity if lies in certain model fm the contraction rate adapts to this unknown information and iii if the models fm are while fm remains small then the contraction rate should still be rescued by this relatively small bias m may depend on n but we suppress this dependence for notational convenience bayes model selection as the main abstract result of this paper cf theorem we show that our goals can be accomplished under i experiment a condition on the log likelihood ratio for the statistical experiment with respect to dn ii models a dimensionality condition of the model fm measured in terms of local entropy with respect to the metric dn iii priors exponential weighting for the prior and sufficient mass of the prior m near the best approximating signal m within the model fm for the true signal one important ingredient in studying posterior contraction rates in bayes nonparametrics literature has been the construction of appropriate tests with exponentially small type i and ii errors with respect to certain metric such tests date back to the work of le cam and who brought out the special role of the hellinger metric in which tests can be constructed generically on the other hand the testing framework requires the prior to spread sufficient mass near the neighborhood of the true signal the discrepancy of these two metrics can be rather delicate particularly for non and complicated models and it often remains unclear which metric is the natural one to use in these models moreover it is usually a significant theoretical challenge to construct tests in complicated models cf to name a few our condition i closes these gaps by suggesting the usage of an intrinsic metric that mimics the behavior of the kullbackleibler divergence in a given statistical experiment in which a good test can be constructed generically cf lemma bernstein inequality is a fundamental tool in probability theory and hence can be easily verified in many statistical experiments including various experiments considered in and beyond regression density estimation gaussian autoregression gaussian time series and covariance matrix estimation problems we identify the intrinsic metrics to use in these experiments furthermore the condition entails sharp exponential contraction of the posterior distribution near the true signal complementing a recent result of results of this type typically do not follow directly from general principles in and have mainly been derived on a basis cf as such we provide a refinement of the seminal testing framework in so that the investigation of sharp posterior contraction rates in the intrinsic metric of an experiment essentially reduces to the study of prior design conditions ii and iii are familiar in bayes nonparametrics literature in particular the prior can be designed generically cf proposition sufficient mass of 
the prior m is a minimal condition in the sense that using m alone should lead to a nearly optimal posterior contraction rate on the model fm han these conditions albeit minimal imply more than an optimal adaptive bayes procedure in the sense of in fact we show that the posterior mean automatically serves as an adaptive point estimator in a frequentist sense these results reveal in a sense that the task of constructing adaptive procedures with respect to the intrinsic metric in a given statistical experiment in both frequentist and bayes contexts is not really harder than that of designing an optimal prior for each of the models a general theory would be less interesting without being able to address problems of different types as an illustration of our general framework in concrete applications we justify the prior proposals in i for the trace regression problem and in ii for the regression problems despite many theoretical results for bayes models cf it seems that the important trace regression problem has not yet been successfully addressed our result here fills in this gap furthermore to the best knowledge of the author the theoretical results concerning regression problems provide the first systematic approach that bridges the gap between bayesian nonparametrics and nonparametric function estimation literature in the context of adaptive we also consider adaptive bayes procedures for the partially linear regression model and the covariance matrix estimation problem in the sparse factor model these new results serve as an illustration of the generic construction scheme of a nearly minimax adaptive estimator in a complicated experiment with multiple structures some of these results improve the best known result in the literature during the preparation of this paper we become aware of a very recent paper who independently considered the bayes model selection problem both our approach and shed light on the general bayes model selection problem while differing in several important aspects cf remark moreover our work here applies to a wide range of applications that are not covered by notation cx denotes a generic constant that depends only on x whose numeric value may change from line to line a b and a x b mean a cx b and a cx b respectively and a b means a b and n a x b for a b r a b max a b and a b min a b pf t denotes the expectation of a random variable t t x n under the exper n iment x n a n pf organization section is devoted to the general model selection theory we work out a wide range of experiments that fit into our general theory completed at the same time considered a bayes approach for univariate density estimation where they derived contraction rates without addressing the adaptation issue bayes model selection in section section discusses various concrete applications as mentioned above most detailed proofs are deferred to sections and the appendix general theory in the prior design framework we first put a prior on the model index i followed by a prior m on the model fm chosen from the first p step the overall prior is a probability measure on f given by m m the posterior distribution is then a random measure on f for a measurable subset b f r n n pf x f n rb n pf x n f n n where pf denotes the probability density function of pf to the dominating measure with respect assumptions to state our assumption on the experiment let denote the bernstein function this function plays a pivital role in proving behavior of a given complicated random variable cf here v and c are the size 
and size of the random variable controlling respectively the degree of its and behavior c assumption a experiment condition there exist some absolute and such that for all n n and f n n p p n n exp log n exp n p p holds for all here the metric dn f f satisfies n p n log n n p for some absolute constants in assumption a we require the log likelihood ratio to satisfy a bernstein inequality in particular the log likelihood ratio has local gaussian behavior conversely if the log likelihood ratio behaves locally like gaussian then we can pick some so that the bernstein inequality holds lemma let assumption a hold fix f there exists some test such that n n sup pf exp f f han where only depend on this lemma suggests that under a condition on the log likelihood ratio tests exist automatically under the intrinsic metric dn that mimics the behavior of the divergence in the sense of several examples will be worked out in section to illustrate the choice of an intrinsic metric dn including the discrete loss for regression models a weighted metric for the gaussian autoregression model the hellinger metric for density estimation the frobenius norm for covariance matrix estimation problem next we state the assumption on the complexity of the models fm let i nq be a lattice with the natural order i here the dimension q is understood as the number of different structures in the models fm in the sequel we will not explicitly mention q unless otherwise specified we require the models to be nested in the sense that fm if m let m denote the best approximation of within the model fm in the sense that m arg inf dn g assumption b models local entropy condition for each m i sup log n f fm dn f g dn m n m holds for all g furthermore there exist absolute constants c and such that for any m i and any h x n m m hm e note that if we choose all models fm f then reduces to the local entropy condition in when fm is typically we can check for all g fm now we comment on the left side of while the right essentially requires super linearity of the map m m side of controls the degree of this super linearity as a leading example will be trivially satisfied with c and when m cm for some absolute constant c finally we state assumptions on the priors assumption c priors mass condition for all m i prior there exists some h such that x k exp m m exp m k hm prior exp m m f fm f m m any a b i a b iff a b for all i q similar definition applies to i i assume that f m is without loss of generality bayes model selection condition can be verified by using the following generic prior m exp m proposition suppose the first condition of holds then in assumption c holds for the prior with h will be the model selection prior on the model index i in all examples in section condition is reminiscent of the classical prior mass condition is understood as the posterior contraction rate ered in where m for the model fm hence can also be viewed as a solvability condition imposed on each model note that only requires a sufficient prior mass on a ball near m where uses more complicated metric balls induced by higher moments of the divergence main results the following is the main abstract result of this paper theorem suppose assumptions hold for some m i with and h let m inf g m for any m m n n x f f f inf g m exp m for any m m such that m inf g n m n exp m f let f n be the posterior mean then n inf dn g m dn fn inf here the constant depends on ci and ci depend on the ci c h and the main message of theorem is that the task of constructing bayes 
procedures adaptive to a collection of models in the intrinsic metric of a given statistical experiment can be essentially reduced to that of designing a nonadaptive prior for each model furthermore the resulting posterior mean serves as an automatic adaptive point estimator in a frequentist sense in particular if the priors we use on each model lead to nearly optimal posterior contraction rates on these models adaptation happens automatically by designing a correct model selection prior besides being to the collection of models shows that the posterior distribution concentrates on the model fm that balances the bias and variance tradeoff in the oracle rates and results of this type have been derived primarily in the gaussian regression model cf han and in density estimation here our result shows that this is a general phenomenon for the prior design note that is arbitrary and hence our oracle inequalities and account for model errors previous work allowing model includes who mainly focuses on structured linear models in the gaussian regression setting and who pursued generality at the cost of conditions the condition is assumed purely for technical convenience if we have finitely many models at hand then we can define fm for m so that this condition is satisfied remark we make some technical remarks the probability estimate in is sharp up to constants in view of the lower bound result theorem in thus closing a gap that has not been attainable in a general setting by using directly beyond of theoretical interest in its own right the sharp estimate helps us to derive an oracle inequality for the posterior mean as an important frequentist summary of the posterior distribution such sharp estimates have been derived separately in different models the sparse normal mean model the sparse pca model and the structured linear model to name a few assumption a implies among other things the existence of a good test cf lemma in this sense our approach here falls into the general testing approach adopted in the testing approach has difficulties in handling metrics cf some alternative approaches for dealing with metrics can be found in the constants ci in theorem depend at most polynomially with respect to the constants involved in assumption a this allows some flexibility in the choice of the constants therein in fact bernstein inequality in some dependent cases comes with logarithmic factors in n cf remark we compare our results with theorems and of both their results and our theorem shed light on the general problem of bayes model selection while differing in several important aspects our theorem hinges on the new condition while the results of are based on the classical mechanism of which requires the construction of tests some merits of our approach will be clear from section and below along with remark the probability estimate in for the posterior distribution outside a ball of radius at the targeted contraction rate is asymptotic in nature while our theorem provides sharp estimates theorem of targets at exact model selection consistency under a set of additional separation assumptions our theorem requires no extra assumptions and shows the concentration behavior bayes model selection of the posterior distribution on the best model that balances the tradeoff this is significant in problems the true signal typically need not belong to any specific model theorem of contains a term involving the cardinality of the models and hence the models need be apriori finitely many for their bound to be finite 
it remains open to see if this can be removed proof sketch here we sketch the main steps in the proof of our main abstract result theorem the details will be deferred to section the proof can be roughly divided into two main steps step we first solve a localized problem on the model fm by projecting the underlying probability measure from to m in particular we establish exponential deviation inequality for the posterior contraction rate via the existence of tests guaranteed by lemma n n m f f f m m exp where is the smallest index m such that m this index may deviate from m substantially for small indices step we argue that the cost of the projection in step is essentially factor in the probability bound cf a multiplicative o exp lemma which is made possible by the assumption a then by requiring we obtain the conclusion by the definition of and the fact that m m the existence of tests lemma is used in step step is inspired by the work of in the context of frequentist least squares estimator over a polyhedral cone in the gaussian regression setting where the localized problem therein is estimation of signals on a face where risk adaptation happens in the bayesian context used a change of measure argument in the gaussian regression setting for a different purpose our proof strategy can be viewed as an extension of these ideas beyond the simple gaussian regression model statistical experiments in this section we work out a couple of specific statistical experiments that satisfy assumption a to illustrate the scope of the general theory in section some of the examples come from we identify the intrinsic metric to use in these examples since bernstein inequality is a fundamental probabilistic tool and has been derived in a wide range of complicated dependent settings we expect many more experiments to be covered beyond the ones we present here regression models suppose we want to estimate in a given model rn in the following regression models for i n gaussian xi where s are n and rn binary xi bern where n for some han poisson xi poisson where m n for some m we will use the following metric for any n i n lemma assumption a holds for with gaussian and binary and the constants ci depend on only poisson constants ci depending on m only theorem for regression models let dn if assumptions hold then hold using similar techniques we can derive analogous results for gaussian regression with random design and white noise model we omit the details density estimation suppose xn s are samples from a density f f with respect to a measure on the sample space x a g x we consider the following form of f f x r eeg for some g g for x all x x a natural metric to use for density estimation is the hellinger metric for any f z p p x lemma suppose that g is uniformly bounded then assumption a is satisfied for h with constants ci depending on g only theorem for density estimation let dn if g is a class of uniformly bounded functions and assumptions hold then hold gaussian autoregression suppose xn is generated from xi f for i n where f belongs to a function class f with a uniform bound m and s are n then xn is a markov chain with transition density pf y f x where is the normal density by the arguments on page of this chain has a unique stationary distribution with density qf with respect to the lebesgue measure on we assume that is generated from this stationary distribution under the true f consider the following metric for any f z m rm where rm x x m x m lemma suppose that f is uniformly bounded by m then assumption a 
is satisfied for dr m with constants ci depending on m only theorem for gaussian autoregression model if f is uniformly bounded by m let dn dr m if assumptions hold then hold bayes model selection compared with results obtained in cf section we identify the intrinsic metric dr m a weighted norm for the gaussian autoregression model while uses a weighted ls s norm to check the local entropy condition and an average hellinger metric as the loss function gaussian time series suppose is a stationary gaussian process with spectral density f f defined on then the covariance r n f matrix of x xn is given by tn f kl e we consider a special form of f f fg exp g for some g we will use the following metric for any g ktn tn n where denotes the matrix frobenius norm lemma suppose that g is uniformly bounded then assumption a is satisfied for dn with constants ci depending on g only theorem for the gaussian time series model if g is uniformly bounded let dn dn if assumptions hold then hold the metric dn can always be bounded from above by the usual metric and can be related to the metric from below cf lemma of our result then shows that the metric to use in the entropy condition can be weakened to the usual norm rather than the much stronger norm as in page of covariance matrix estimation suppose xn rp are observations from np where sp l the set of p p covariance matrices whose minimal and maximal eigenvalues are bounded by and l respectively we will use the frobenius norm for any sp l lemma under the above setting assumption a holds for the metric df with constants ci depending on l only theorem for covariance matrix estimation in sp l for some l let dn df if assumptions hold then hold applications in this section we consider concrete applications as we have seen in previous sections construction of adaptive bayes procedures in the intrinsic metric of an experiment essentially reduces to the design of priors and hence we only consider the simplest setup for a particular structure for instance once we understand how to analyze the convex gaussian regression problem we can similarly consider convex regression convex density estimation gaussian autoregression with convex functions gaussian time series with convex spectral density problems in their han respective intrinsic metrics hence our emphasis in the examples will be focused on the analysis of different model structures models that can be handled using similar techniques will not be presented in detail remark we will only explicitly state the corresponding oracle inequalities in the form of for each example to be considered below the corresponding results for and are omitted trace regression consider fitting the gaussian regression model yi xi i n by f fa a where fa x tr a for all x x let m and the index set is i rmax rmax where rmax for r let fr fa a rank a r and for r fr frmax although various bayesian methods have been proposed in the literature cf see for a summary theoretical understanding has been limited derived an oracle inequality for an exponentially aggragated estimator for the matrix completion problem their result is purely frequentist below we consider a two step prior similar to and derive the corresponding posterior contraction rates for a matrix b bij let kbkp denote its schatten p and correspond to the nuclear norm and the frobenius norm respectively to introduce the notion of rip let x rn be the n linear map defined via a tr i a definition the linear map x rn is said to satisfy rip r for some r rmax and some r with r iff a holds 
for all matrices a such that rank a r kx for r rmax x satisfies rip r iff x satisfies rip rmax furthermore x rn is said to satisfy uniform rip i on an index set i iff x satisfies rip for all r rip r is a variant of the rip condition introduced in with scaling factors and r for some example matrix completion suppose that xi takes value at one position and otherwise further assume that a for all i and j let denote the indices for which xi s take value then kx a easy calculations show that trick of defining models for experiments will also used in other applications in later subsections but we will not explicitly state it again is kbk pm b p where b are the singular values of p j assumption is usually satisfied in applications in fact in the netflix problem which is the main motivating example for matrix completion is the rating matrix with rows indexing the users and columns indexing movies and we can simply take a one star and five stars bayes model selection we can take defined by x is uniform rip i so that example gaussian measurement ensembles suppose xi s are random matrices whose entries are standard normal theorem of entails that x is uniform rip i with for some with probability at least c exp provided n consider a prior on i of form r exp ctr r log given the chosen index r i by a prior on pra on fr is induced m all matrices of form ui vi where ui r and vi here we use a product prior distribution g with lebesgue density on r for simplicity we use gi for i where g is symmetric about and on let r tr g arg minb rank b fb and g max r where denotes the largest singular value theorem fix and rmax suppose that there exists some m i such that the linear map x rn satisfies uniform rip m and that for all r m we have tr g log r n then there exists some ctr in depending on such that for any r m n a fa here r fb r log n n inf y exp r o n max inf b rank b fb r log and the constants b rank b citr i depend on by theorem of the rate in is minimax optimal up to a logarithmic factor to the best knowledge of the author the theorem above is the first result in the literature that addresses the posterior contraction rate in the context of trace regression in a fully bayesian setup may be verified in a manner or generically we can take m if the model is well specified at the cost of sacrificing here we used the union bound to get a probability estimate r max exp exp n for some c under the assumption that n will always use such g in the prior design in the examples in this section han the form of oracle inequalities but still get nearly optimal posterior contraction rates in in particular the first condition of prevents the largest eigenvalue of r from growing too fast this is in similar spirit with theorem of showing that the magnitude of the signals can not be too large for priors to work in the sparse normal mean model the second condition of is typically a mild technical condition we only need to choose small enough isotonic regression consider fitting the gaussian regression model yi xi by f f r f is for simplicity the design points are assumed to be xi n for all i let fm f f f is piecewise constant with at most m constant pieces consider the following prior on i n m exp ciso m log en let gm g where g is symmetric and on then m gm is a valid density on given a chosen model fm by the prior we randomly pick a set of change points xi k m i i m and put a prior on f xi k s proposed a similar prior with being uniform since they assumed the maximum number of change points is known apriori below we derive a 
theoretical result without assuming the knowledge of this let m iso g kf arg g and g m theorem fix suppose that iso g log en then there exists some ciso in depending on such that m log en n iso n f f f inf g y n iso exp n m n o max inf f g m log en and the constants c iso i here m n n m i n depend on implies that if is piecewise constant the posterior distribution contracts at nearly a parametric rate can be checked by the following lemma if is square integrable and the prior density g is in the sense that there exists some such that lim inf g x then for any holds uniformly in all m n for n large enough depending on and value of f m outside of n n can be defined in a canonical way by extending m n and m n towards the endpoints bayes model selection convex regression consider fitting the gaussian regression model yi xi by f the class of convex functions on x d let fm f x ai x bi ai rd bi r denote the class of piecewise affine convex functions with at most m pieces we will focus on the multivariate case since the univariate case can be easily derived using the techniques exploited in isotonic regression a prior on each model fm can be induced by a prior on the slopes the intercepts nm and g on rd ai bi rd r m g we use a prior with density r m to induce a prior on fm let m arg g admit the m m further let representation given n m x ai x bi m m cvx min g g kai g the prior we will use on the index i n is given by m exp ccvx dm log log n the first step prior used in is a poisson proposal which slightly differs from by a logarithmic factor this would affect the contraction rate only by a logarithmic factor theorem fix suppose that cvx g log and n then there exists some ccvx in depending on such that d log n m log n cvx n f f f inf g y n cvx exp n m o n max inf f g d log log and the constants here n m m n n cicvx i depend on our oracle inequality shows that the posterior contraction rate of theorem therein is far from optimal can be satisfied by using priors g in the same spirit as lemma is square integrable and the design points are regular enough using regular grids on d moreover explicit rate results can be obtained using approximation techniques in cf lemma therein we omit detailed derivations remark for univariate convex regression the term log in can be removed the logarithmic term is due to the fact that the pseudodimension of fm scales as m log for d cf lemma remark using similar priors and proof techniques we can construct a nearly adaptive bayes estimator for the support function regression problem for convex bodies there the models fm are support functions indexed by polytopes with m vertices and a prior on fm is induced by a prior on the location of the m vertices the of fm can be controlled using techniques developed in details are omitted han partially linear model consider fitting the gaussian regression model yi xi zi where xi zi rp by a partially linear model f u x z u z x u z rp u u where the dimension of the parametric part can diverge we consider u to be the class of functions as an illustration cf section consider models f s m u s u um where um denotes the class of piecewise constant functions with at most m constant pieces and s v rp v s in this example the model index i is a lattice our goal here is to construct an estimator that satisfies an oracle inequality over the models f s m s m consider the following model selection prior s m exp s log ep m log en for a chosen model f s m consider the following prior s m pick randomly a support s p with s and a set of change points q 
zi k m i i m and then put a prior gs q on and u zi k s for simplicity we use a product prior gs q where is a prior on rm constructed in section let s m inf s m g and write s m x z s m z s x m z let g g s and g g m let x be the design matrix so that x is normalized with diagonal elements taking value theorem fix and p suppose that g log ep g log en then there exists some chp in depending on such that s log ep m log en n hp n y f f f inf u u s m n n hp exp n s m o n s log ep log en max inf f f and the here f u n u s m n n s m constants cihp i depend on the first condition of requires that the magnitude of s does not grow too fast see also comments following theorem the second condition of is the same as in when the model is in the sense that x z z for some and u the oracle rate in becomes log ep m log en inf inf u n n is a common assumption cf section of bayes model selection the two terms in the rate trades off two structures of the experiment the sparsity of x and the smoothness level of u z the resulting phase transition of the rate in terms of these structures is in a sense similar to the results of it is not hard to see that can not be improved in general hence our bayes estimator automatically serves as a theoretically nearly optimal adaptive estimator for the partially linear regression model covariance matrix estimation in the sparse factor model suppose we observe xn rp from np the covariance matrix is modelled by the sparse factor model m k s m k s where m k s i r k s l with r k s l s j k in this example the model index i is a lattice and the sparsity structure depends on the rank structure consider the following model selection prior k s exp ks log ep theorem let p there exist some ccov in and some sequence of sieve priors k s on m k s depending on l such that ks log ep n n cov m kf x inf kf k s n cov exp n k s n o max inf ks log ep and the constants here f s k n k s n cicov i depend on since spectral norm is dominated by frobenius norm intrinsic our result shows that if the model is in the sense that m then we can construct an adaptive p bayes estimator with convergence rates in both norms no worse than ks log considered p the same sparse factor model where they proved a strictly rate s log p log in spectral norm under ks log considered a closely related sparse pca problem where the convergence rate under spectral norm achieves the same rate as here cf theorem therein while a factor of k is lost when using frobenius norm as a loss function cf remark therein it should be mentioned that the sieve prior k s is constructed using the metric entropy of m k s and hence the resulting bayes estimator and the posterior mean as a point estimator are purely theoretical we use this example to illustrate i the construction scheme of a nearly optimal adaptive procedure for a experiment based on the metric entropy of the underlying parameter space and ii derivation of contraction rates in metrics when these metrics can be related to the intrinsic metrics nicely han proofs for the main results proof of theorem main steps first we need a lemma allowing a argument lemma let assumption a hold there exists some constant only depending on and such that for any random variable u any dn and any j n i h n n u u the next propositions solve the posterior contraction problem for the local model fm m then there proposition fix m m such that m exists some constant depending on the constants in assumption a such that for j h n x n m c m f f f m jh m m let proposition fix m m such that m m inf m m dn m then for 
j h n m f f f m m x n c the proofs of these results will be detailed in later subsections proof of theorem main steps instead of we will prove a slightly stronger statement as follows for any j h and h n n x m f f f j inf g m here the constants ci i depends on the constants involved in assumption a and c proof of first consider the overfitting case by proposition and lemma we see that when m m holds for j h n f f f m jh m x n n n f f dn f m c jh m x n p n f f f m jh m x m m n m m e h c m m min bayes model selection here in the second line we used the fact that f m f m completes the estimate for overfitting m m next consider the underfitting case fix m m such that m apply proposition and lemma and use arguments similar to to see that for j h n f f f m x n n n m f f dn f m c dn m x min here in the second line we used i f m f m and ii dn m the claim of follows by combining and proof of the proof is essentially integration of tail estimates by a peeling device let the event aj be defined via aj j m m f j m m then n n n f n f n x c h m m f x n h c h m m n x j m m h the inequality in the first line of the above display is due to jensen s inequality applied with dn followed by inequality the summation can be bounded up to a constant depending on by x x m m m j m m h h where the inequality follows since m this quantity can be bounded by a constant multiple of dx independent of now majorizes up to a constant the proof is complete by noting that m and then taking infimum over m proofs of propositions and we will need several lemmas before the proof of propositions and lemma let assumption a hold let f be a function class defined on the sample space x suppose that n is a function han such that for some and every it holds that n f f dn f dn n then for any there exists some test such that n n n sup f dn f pf the constants are taken from lemma lemma let assumption a hold suppose that is a probability measure on f f dn f then for every c there exists some c depending on c such that z n pf n f e n p the proof of these lemmas can be found in appendix proof of proposition fix i with now we invoke lemma with f m fm since m and log n m for to see that there exists some test such that n m and that log n m e m m e m n sup f f m m pf e note that here in we used the fact that m by definition of m now for the fixed j m as in the statement of the proposition we let be a global test for big models then by x x n n m m e n m here we used the left side of this implies that for any random variable u we have n n m u m n m on the power side with jhm applied to we see that n n sup sup pf pf f f m jh m f f m jhm jhm n m bayes model selection the first inequality follows from the right side of since jh m jhm and the last inequality follows from the left side of on the other hand by applying lemma with c and n m we see c m and it that there exists some event en such that m enc e holds on the event en that n z n z pf pf f m m f n n f f m m p m m m m m f fm f m m note that n n m f f dn f m c jh m x n n f p m f dn f m c jh m f n m r n n pf m f m m m f fm f m m z n n n pf m f m f f m jh m i ii where the inequality follows from on the other hand the expectation term in the above display can be further calculated as follows z n pf f ii f f m jh m sup f f m jh m n m n pf f fjhm n m n m the first term in the third line follows from and the second term follows from in assumption c along with the left side of by in assumption c and j h n x n m f f f m jh m n m hence we conclude from probability estimate on enc and han proof of 
proposition the proof largely follows the same lines as that of proposition see appendix c for details completion of proof of theorem m following proof of for any m m such that m the similar reasoning in with j h n fjhm n m f m m m f fjhm f fm dn f m m c n m from here can be established by controlling the probability estimate for enc as in proposition and a change of measure argument as in using lemma proof of lemma proof of lemma let be a constant to be specified later consider n pf we first consider type the test statistics log n pf i error under the null hypothesis we have for any n n p p n n n c log n p p exp exp c choosing small enough depending on c we get n exp where depend on c next we handle the type ii error to this end to be specified later consider the event for a constant c n en log pf n pf where f f is such that f and n n pf pf n n n pf enc pf n pf log n f p p n n p p f f n n pf n pf log n p p c ndn exp f bayes model selection by choosing small enough depending on we see that n pf enc exp for some constants depending only on in particular does not depend on f on the other hand n n p p f n n log n pf pf n pf p n p f n n n pf n c pf enc pf enc p n n pf f we continue our computation for c such that c and n n p p f n n f pf n pf log n c p p ndn e f since pf log n pf now choose small enough depending on c we see that for any f f such that f n pf exp where depending on c now we need to choose c such that c c this can be done by choosing min and c proof of lemma we recall a standard fact lemma if a random variable x satisfies e exp exp c then for t p x t p x exp n pf proof of lemma for c consider the event en log n pf by lemma we have for some constant c depending on and n n p p n n n log n enc n p p n n p p n n n log n since dn p p j c exp c exp c han we remind the reader that the constant c may not be the same in the above series of inequalities and hence the last inequality follows by noting that i if then we replace the denumerator of the second last line by ii if then we increase c then n p n n n n u u u n p n u completing the proof n proof of proposition p proof of proposition let m be the total mass then the first condition ofp is trivial we only need to verify the second condition of k hm k p n k m m where the first n k hm e inequality follows from and the second by the condition h proofs for applications the proofs of the theorems in section follow the similar route by verifying i the local entropy condition in assumption b ii the summability condition in and iii the sufficient mass condition in assumption we remind the reader that we use in all examples as the model selection prior we only prove theorems and in this section the proofs for theorems are deferred to appendix b proof of theorem lemma let r i suppose that the linear map x rn is uniform rip i then for any and such that rank r we have log n fa fr fa r log we will need the following result lemma let s r b a rank a r b then r n s r b proof of lemma the case for b follows from lemma of and the general case follows by a scaling argument we omit the details proof of lemma we only need to consider the case r rmax first note that the entropy in question equals log n x a kx a rank a r bayes model selection by uniform rip i the set to be covered in the above display is contained in x a ka rank a r x s on the other hand again by uniform rip i a of the set s under the frobenius norm induces a of x s under the euclidean norm this implies that can be further bounded from above by log n s r log where the last inequality 
follows from lemma log r log clearly satisfies now we take r n r n with c and lemma suppose that x rn is uniform rip i and that holds then in assumption c holds proof of lemma we only need to consider r rmax first note that r fa fr fa r r a kx a r r rank a r a ka r r rank a r p let r be the spectral decomposition of r and let ui p and vi then r ui now for ui p and vi i r let then by noting that the frobenius norm is and that kui kvi we have for r r x k ui vi r x where now with r can be further bounded from below as follows pr r we see that ui r vi r tr r g r y vol ui r vol vi r tr r g r r vm v r log g r where vd vol bd and vd d d hence in order that the right side of the above display can be bounded from below by r it suffices han to require that log max log g log n r it is easy to calculate that n r r rmax now the conclusion follows by noting that implies since rmax n and proof of theorem the theorem follows by theorems and proposition coupled with lemmas and proof of theorem lemma for any m k s log n m k s kf cl ks log ks log proof the set involved in the entropy is equivalent to o n r k s l kf cl we claim that k s kf kl to see this let p be the singular value decomposition of where p q are unitary matrices and is a diagonal matrix then kl proving the claim combined with and euclidean embedding we see that the entropy in question can be bounded by log n v ks pk kl ks kl pk ks log ks log log ks where s pk v rpk v s proof of theorem take k s kc ks n log c p for some c e depending on l and some absolute constant k apparently holds with c the prior k s on m k s will be the uniform disq log c p of the set m k s tribution on a minimal c cks under the frobenius norm the above lemma entails that the cardinality for such a cover is no more than exp c ks log c p for another constant c e depending on hence exp ks log c p k s m k s k s kf k s which can be bounded from below by exp k s by choosing k large enough the claim of theorem now follows from these considerations along with theorems and proposition bayes model selection appendix a proof of lemmas in section n proof of lemma let denote the probability measure induced by the joint distribution of xn when the underlying signal is first consider gaussian regression case it is easy to calculate that n n x x n xi i xi i log n p n n log p n p x n then n exp p exp n n log n x p n p x n i i n log p n p x n exp next consider binary regression easy calculation shows that n log n log p n p n p n p n x xi log i i xi log i i i log i i i log i i n x using the inequality cx log x x for all x for some c depending on only we have shown n log n n under the assumed condition that n now we verify the bernstein condition n n p p n exp log n n log n p p n n x x n ti xi i ti exp exp where ti ti log i i and the last inequality follows from hoeffding s inequality cf section of the claim follows by h i i noting that log i i by the assumed i i condition and the aforementioned inequality log x x in a constrained range han finally consider poisson regression it is easy to see that n log n log p n p n p n p n x xi log i i i i i log i i i i n x note that for any p q m q q q p p q p log p q p log q p p p where in the middle we used the fact that log x x x for x n bounded away from and this shows that log n n next we verify the bernstein condition n n p p n exp log n n log n p n n x x n i e xi i ti exp exp where ti log i i now for any we have on the other hand i i log i i i i completing the proof proof of lemma since the ratio for xn can be decomposed into sums of the ratio 
for single samples and the ratio is uniformly bounded over f since g is bounded classical bernstein inequality applies to see that for any couple the bernstein condition in assumption a holds with v log c where depend only on hence we only need to verify that log log this can be seen by lemma of and the fact that hellinger metric is dominated by the divergence lemma let z be a random variable bounded by m then e exp z exp em ez proof note that log e exp z log e exp z e exp z em ez p where the last inequality follows from taylor expansion ex xk p x m xem bayes model selection proof of lemma we omit explicit dependence of m on the notation dr m n and rm in the proof let denote the probability measure induced by the joint distribution of xn where is distributed according to the stationary density easy computation shows that n p log n xi xi xi xi p n z p n n log n p here denotes the lebesgue measure on by the arguments on page of we see that r hence we only need to verify the bernstein condition by inequality n x p n f n p exp exp log xi xi n p x n xi xi exp i ii the first term i can be handled by an inductive calculation first note that for any and r f pp ee p p f f r ecm where the first inequality follows from lemma and the second inequality follows from r pf r holds for all x r where the constant involved depends only on m let sn xi xi and then for let n n h i ep e ecm dr han where the last inequality follows from now we can iterate the above calculation to see that i exp cm next we consider ii since for any random variables zn q q we have e zi ezin it follows that n y n exp xi xi exp ii where the last inequality follows by stationarity on the other hand by jensen s inequality n p n exp log n exp p exp collecting and we see that for n n n p p p p n pf n exp log n pf n log n i ii exp log n p p p exp cm completing the proof n proof of lemma for any g g let pg denote the probability density function of a multivariate normal distribution with covariance n matrix tn fg and pg the expectation taken with respect to the n density pg then for any g n log pg n pg n log det x n x n x n pg n log pg n pg log det tr i where we used the fact that for a random vector x with covariance matrix n ex ax tr let g x n n i under and b i then n yn log pg x n pg n log n n pg n pg pg i i h n n g bg tr b x tr i x bayes model selection let b u be the spectral decomposition of b where u is orthonormal and diag is a diagonal matrix then we can further compute g tr n x where gn s are standard normal note that for any z et x dx e log et p k where the inequality follows from log k p k hence apply the above display with t we have that for any maxi z n n y y e exp gi e exp x dx p n y i exp exp maxi denote and the matrix operator norm and frobenius norm respectively by the arguments on page of we have k and g k ke since g is a class of uniformly bounded function classes the spectrum of the covariance matrices and their inverses running over g must be bounded hence kbk k k k cg i next note that x i tr bb kbkf k kf kf cg p where in the first inequality we used km n kf kn m kf for symmetric matrices m n and the general rule kp qkf kp kkqkf collecting we see that assumption a is satisfied for v and c for some constants depending on g only han n finally we establish equivalence of log we have n n and first by n tr i log det tr log det i ki kf kf k cg ndn here in the second line we used the fact that det ab det i b b and in the third line we used the fact log det i a tr a tr a for any matrix a due to the inequality log x x for all 
x on the other hand by using the reversed inequality log x x for all x where c is a constant depending only pg n log pg n pg n on we can establish log the proof n n thereby completing proof of lemma note that n n x p n x xi log det log n x i p n log n p n p n n log det tr i the rest of the proof proceeds along the same line as in lemma appendix b proof of remaining theorems in section proof of theorem lemma let n then for any g fm log n f fm f g log m log en proof of lemma let qm denote all the design points n xn then it is easy to see that for a given mpartition q qm let fm q fm denote all monotonic functions that are constant on the partition q then the entropy in question can be bounded by n log max n f fm q f g m on the other hand any fixed q the entropy term above equals n pn m q where pn m q f f xn f fm q by pythagoras theorem the set bayes model selection involved in the entropy is included in pn m q m q g where m q is the natural projection from rn onto the subspace pn m q clearly pn m q is contained in a linear subspace with dimension no more than using entropy result for the space problem in page combined with the discussion in page relating the packing number and covering number log n f fm q f m log m log n the claim follows by and log m log en m log en it is clear that log hence we can take m n is satisfied with c and lemma suppose that holds then in assumption c holds proof of lemma let m ik m be the associated of xn of m fm with the convention that ik xn is ordered from smaller values to bigger ones is easy to see that m m m xi m xi m rm is and m it is easy to see that any f fm m satisfying the property that xi k k m leads to the error hence estimate f m m m f fm f m m n f fm m f m m n rm k m m n inf m m m m k m iso log n log en log g iso m m g m m e here the first inequality in the last line follows from the definition of iso the claim follows by verifying implies that the second and and g m log en the third term in the exponent above are both bounded by third term does not contribute to the condition since m n by noting in the gaussian regression setting and definition of proof of theorem the theorem follows by theorems and proposition coupled with lemmas and we now prove lemma we need the following result han lemma let xn rn and m m m xn rn where m arg g suppose that l and that there exists some element f fm such that f f f xn satisfies kf then m proof of lemma it can be seen that m arg m arg m where pn m f f xn f fm for any pn m such that l the loss function satisfies by triangle inequality if m then m m m l contradicting the definition of m as a minimizer of over pm n this shows the claim r proof of lemma let l f note that f x dx by lemma we see that m which entails that m now the conclusion follows from g en while the left side is at least on the order of as n proof of theorem checking the local entropy assumption b requires some additional work the notion of will be useful in this regard following section a subset v of rd is said to have t denoted as pdim v t if for every x and indices i n with for all we can always find a set j i such that no v v satisfies both vi xi for all i j and vi xi for all i i lemma let n suppose that pdim pn m dm where pn m n f f xn r f fm then for all g fm log n f fm f g c dm log n for some constant c depending on to prove lemma we need the following result cf theorem lemma let v be a subset of rn with b and pseudodimension at most then for every we have n a holds for some absolute constant n proof of lemma note that the entropy in question 
can be bounded by log n n pn m g bn since translation does not change the of a set pn m g has the same with that of pn m which is bounded from above by dm by assumption further note that pn m g bn is uniformly bounded by hence an application of lemma yields that the above display can be further bounded as follows log n f fm f g log c dm log n for some constant c depending on whenever n bayes model selection the of the class of piecewise affine functions fm can be well controlled as the following lemma shows lemma lemma in pdim pn m log as an immediate result of lemmas and we can take for n c d logn n m log for some c depending on m lemma suppose that holds and n then in assumption c holds proof of lemma we write m ai x bi throughout the proof we first claim that for any bd ai m d and x max bi m let gm ai x bi then gm m m to see this for any x x there exists some index ix m x x hence such that gm ix ix gm x m x aix x bix aix bix m m m d the reverse direction can be shown similarly whence the claim follows by taking supremum over x x this entails that m f fm f m m n p c d b b b a b a b a i n m i n m d i i i i m y m y p bd ai m d bi m m m g kai g vd d d md log d exp d log g m d log m where vd vol bd and we used the fact that vd d d now by requiring that n d and d d log n m log max d log g m d log m the claim follows by verifying implies since m the second term is bounded by md log the inequality follows by noting d lemma for n is satisfied for c and han c log n m log throughproof for fixed n and write m out the proof where c then for any and h since log log log for any hm we have x x log n log n m e log log n log m m for the second condition of note that for in order to verify it suffices to have hm log m log equivalently hm m h and hence h for all h suffices this is valid and hence completing the proof proof of theorem this is a direct consequence of theorems and lemma and combined with proposition proof of theorem lemma let n then for any g f s m log n f f s m f g log s log ep m log en proof the proof borrows notation from the proof of lemma further let ss denote all subsets of p with cardinality at most then the entropy in question can be bounded by p n log n f f s m s q f g max s m s log ep m log en log n pn s q max n n where pn s q i zi r supp s u is constant on the partitions of q is contained in a linear subspace of dimension no more than s now similar arguments as in lemma shows that the entropy term in the above display can be bounded by s m log proving the claim log en s log ep hence we can take s m log n lemma holds with c and proof for the first condition of note that for any h with log in the proof for any log since x x x s e s log ep m log en hs hm s m h s log ep log en the second condition of is easy to verify by our choices of c lemma suppose holds then in assumption c holds bayes model selection proof using notation in lemma s m f f s m f s m s m p n f f s m f s m s m s throughout the proof and where m f s m let log s s log ep and m m log en to bound the prior mass of the above display from below it suffices to bound the product of the following two terms s s s u um u m m the first term equals s s s s c s s here the inequality follows by noting s n s s s where denotes the largest singular value of x note that p since the trace for x is p and the trace of a matrix dominates the largest eigenvalue the set in the last line of is supported on s s and hence can be further bounded from below by g vs where vs vol bs hence s s vs g s exp s log s s log g log s where in the last 
inequality we used that vs s s by repeating the arguments in the proof of lemma we have m exp m log g log m combining and we see that s m f f s m f s m s m m log g exp log ep m log en s log g m s log exp log s m han in order that the right side of the above display bounded from below by exp s m we only need to require that log n s s log ep min log g e g min log log e m m log en the first terms in the above two lines lead to the other terms in the above two lines do not contribute by noting that m log n en since in gaussian regression model and n pn and while n s proof of theorem the claim of the theorem follows by theorems and proposition and lemmas appendix proof of auxiliary lemmas in section proof of lemma let fj f f dn f and gj fj be the collection of functions that form a minimal covering set of fj under the metric dn then by assumption n furthermore for each g gj it follows by lemma that there exists some test j g such that n n sup j g pf j g ndn g f dn f g dn g recall that g gj fj then dn g hence the indexing set above contains f f dn f g now we see that n j g nj n sup f dn f g pf j g nj consider the global test j g then x xx n n n nj j g n x nj n on the other hand for any f f such that dn f there exists some j and some gj gj such that dn f gj j hence n n pf pf j n j the right hand side of the above display is independent of individual f f such that dn f and hence the claim follows bayes model selection proof of lemma by jensen s inequality the left side of is bounded by n n n log z p n pf n log n p n pf n log p n n pf log n exp exp f c n n p n pf z z f f f n log p n pf n log n p n pf f using jensen s inequality the last term in the right side of the above display can be further bounded by n n z z p p n n f f f log n exp log n pf pf where the last inequality follows from fubini s theorem and assumption a now the condition on the prior entails that n n pf n p f e exp the claim follows by choosing small enough depending on c proof of proposition we may assume without loss of generality that dn m so that is since and by definition we have dn m and dn m in this case the global test can be constructed via then analogous to and for any random variable u n m u sup f f m jh n pf n m n n e similar to there exists an event with and the following is true on the event z y n pf f m m f fm f m c m han repeating the reasoning in and we see that n n m f f dn f m c dn m x m m z f fm f m n f f m m sup f f m jh n pf f n pf f here the third line is valid since m c jh by the right side of which entails c the fourth line uses and assumption together with the fact that m follows from probability estimate for enc and acknowledgements the author is indebted to chao gao for his numerous suggestions that lead to a substantially improved version of the paper he thanks johannes for very helpful comments on an earlier version of the paper the author would also like to thank jon wellner for constant support and continuous encouragement as this work developed references adamczak a tail inequality for suprema of unbounded empirical processes with applications to markov chains electron j no alquier cottet chopin and rousseau bayesian matrix completion prior specification arxiv preprint banerjee and ghosal posterior convergence rates for estimating large precision matrices using graphical models electron j barron and massart risk bounds for model selection via penalization probab theory related fields bellec sharp oracle inequalities for least squares estimators in shape restricted regression arxiv preprint 
approximation dans les espaces et de l estimation wahrsch verw gebiete robust testing for independent nonidentically distributed variables and markov chains in specifying statistical models volume of lect notes pages springer new york model selection via testing an alternative to penalized maximum likelihood estimators ann inst probab bayes model selection boucheron lugosi and massart concentration inequalities oxford university press oxford a nonasymptotic theory of independence with a foreword by michel ledoux and van de geer statistics for data springer series in statistics springer heidelberg methods theory and applications bunea a tsybakov and wegkamp aggregation for gaussian regression ann and y plan tight oracle inequalities for matrix recovery from a minimal number of noisy random measurements ieee trans inform theory and tao decoding by linear programming ieee trans inform theory castillo on bayesian supremum norm contraction rates ann castillo and van der vaart bayesian linear regression with sparse priors ann castillo and van der vaart needles and straw in a haystack posterior concentration for possibly sparse sequences ann chatterjee guntuboyina and on risk bounds in isotonic and other shape restricted regression problems ann gao van der vaart and zhou a general framework for bayes structured linear models arxiv preprint gao and zhou posterior contraction for sparse pca ann gao and zhou rate exact bayesian adaptation with modified block priors ann ghosal ghosh and van der vaart convergence rates of posterior distributions ann ghosal lember and van der vaart nonparametric bayesian model selection and averaging electron j ghosal and van der vaart convergence rates of posterior distributions for observations ann ghosal and van der vaart posterior convergence rates of dirichlet mixtures at smooth densities ann guntuboyina optimal rates of convergence for convex set estimation from support functions ann han and j wellner multivariate convex regression global risk bounds and adaptation arxiv preprint hannah and dunson bayesian nonparametric multivariate convex regression arxiv preprint hoffmann rousseau and on adaptive posterior concentration rates ann holmes and heard generalized monotonic regression using random change points statistics in medicine kleijn and van der vaart misspecification in bayesian statistics ann le cam convergence of estimates under dimensionality restrictions ann le cam on local and global properties in the theory of asymptotic normality of experiments pages le cam asymptotic methods in statistical decision theory springer series in statistics new york han mai and alquier a bayesian approach for noisy matrix completion optimal rate under general sampling distribution electron j mariucci ray and szabo a bayesian nonparametric approach to density estimation arxiv preprint massart concentration inequalities and model selection volume of lecture notes in mathematics springer berlin lectures from the summer school on probability theory held in july with a foreword by jean picard peligrad and rio bernstein inequality and moderate deviations under strong mixing conditions in high dimensional probability v the luminy volume volume of inst math stat ims pages inst math beachwood oh pati bhattacharya pillai and dunson posterior contraction in sparse bayesian factor models for massive covariance matrices ann pollard empirical processes theory and applications regional conference series in probability and statistics institute of mathematical statistics hayward ca american 
statistical association alexandria va recht fazel and parrilo guaranteed solutions of linear matrix equations via nuclear norm minimization siam rohde and a tsybakov estimation of matrices ann rousseau rates of convergence for the posterior distributions of mixtures of betas and adaptive nonparametric estimation of the density ann shen and wasserman rates of convergence of posterior distributions ann a tsybakov aggregation and minimax optimality in estimation in proceedings of the international congress of mathematicians pages van der vaart and van zanten rates of contraction of posterior distributions based on gaussian process priors ann van der vaart and van zanten adaptive bayesian estimation using a gaussian random field with inverse gamma bandwidth ann van der vaart and j wellner weak convergence and empirical processes springer series in statistics new york yang and pati bayesian model selection consistency and oracle inequality with intractable marginal likelihood arxiv preprint yoo and ghosal supremum norm posterior contraction and credible sets for nonparametric multivariate regression ann yu levine and cheng minimax optimal estimation in high dimensional semiparametric models arxiv preprint yuan and zhou minimax optimal rates of estimation in high dimensional additive models universal phase transition arxiv preprint han department of statistics box university of washington seattle wa usa address royhan
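The model-selection proofs above repeatedly lean on the standard fact that the squared Hellinger distance is dominated by the Kullback-Leibler divergence, but the displayed inequalities did not survive extraction. The following is a minimal numerical sketch, not taken from the paper, that checks H^2(P, Q) <= KL(P || Q) on random discrete distributions, using the convention H^2(P, Q) = (1/2) sum_i (sqrt(p_i) - sqrt(q_i))^2; the support sizes and Dirichlet draws are arbitrary illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def hellinger_sq(p, q):
        # squared Hellinger distance, (1/2) * sum (sqrt(p) - sqrt(q))^2 convention
        return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

    def kl(p, q):
        # Kullback-Leibler divergence KL(P || Q) for strictly positive pmfs
        return np.sum(p * np.log(p / q))

    for _ in range(5000):
        k = int(rng.integers(2, 20))            # support size, arbitrary
        p = rng.dirichlet(np.ones(k)) + 1e-12   # keep strictly positive
        q = rng.dirichlet(np.ones(k)) + 1e-12
        p, q = p / p.sum(), q / q.sum()
        assert hellinger_sq(p, q) <= kl(p, q) + 1e-9

    print("H^2(P, Q) <= KL(P || Q) held on all random trials")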
10
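One more small check tied to the model-selection entry above (the row labeled 10): the Gaussian-covariance lemma proof there invokes log det(I + A) <= tr(A), which follows eigenvalue by eigenvalue from log(1 + x) <= x. The sketch below confirms this numerically; symmetric positive semidefinite matrices are used only as a safe illustrative class, since the exact matrix assumptions in the stripped display are not recoverable.

    import numpy as np

    rng = np.random.default_rng(1)

    for _ in range(2000):
        n = int(rng.integers(2, 8))
        B = rng.standard_normal((n, n))
        A = B @ B.T                                  # symmetric positive semidefinite
        sign, logdet = np.linalg.slogdet(np.eye(n) + A)
        assert sign > 0
        # log det(I + A) = sum_i log(1 + lambda_i) <= sum_i lambda_i = tr(A),
        # since log(1 + x) <= x for x >= 0
        assert logdet <= np.trace(A) + 1e-9

    print("log det(I + A) <= tr(A) held on all random PSD trials")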
ref international conference on artificial neural networks icann springer lncs vol pp barcelona spain september a deep neural estimation metric for the jigsaw puzzle problem dror eli omid and nathan nov department of computer science university israel mail nathan center for automation research university of maryland college park md nathan abstract this paper introduces the first deep neural estimation metric for the jigsaw puzzle problem given two puzzle piece edges the neural network predicts whether or not they should be adjacent in the correct assembly of the puzzle using nothing but the pixels of each piece the proposed metric exhibits an extremely high precision even though no manual feature extraction is performed when incorporated into an existing puzzle solver the solution s accuracy increases significantly achieving thereby a new standard a b fig jigsaw puzzle before and after reassembly using our scheme in an enhanced solver introduction jigsaw puzzles are a popular form of entertainment available in different variation of difficulty to challenge children adults and even professional players given n m sholomon david netanyahu different tiles of an image the objective is to reconstruct the original image taking advantage of both the shape and chromatic information of each piece despite the popularity and vast distribution of jigsaw puzzles their assembly is not trivial computationally as this problem was proven to be nevertheless a computational jigsaw solver may have applications in many realworld applications such as biology chemistry literature speech descrambling archeology image editing and the recovery of shredded documents or photographs regardless as noted in research of the topic may be justified solely due to its intriguing nature recent years have witnessed a vast improvement in the research and development of automatic jigsaw puzzle solvers manifested in both puzzle size solution accuracy and amount of manual human intervention required in its most basic form every puzzle solver requires some function to evaluate the compatibility of adjacent pieces and a strategy for placing the pieces as accurately as possible most strategies are greedy and rely heavily on some trick to estimate whether two pieces are truly adjacent two pieces that are each the most compatible piece from all pieces to one another four pieces that form a loop where each pair s compatibility is above a threshold etc such heuristics were dubbed an estimation metric in as they allow estimating the adjacency correctness of two pieces without knowing the correct solution the majority of recent works focused on devising elaborate compatibility functions and estimation metrics despite the proven effectiveness of neural networks in the field of computer vision no attempt has been made to automatically devise a estimation metric for the jigsaw puzzle problem this might be due to the highly imbalanced nature of the puzzle problem as in each n m puzzle there are o n m matching and o possible mismatching ones in this paper we propose a novel estimation metric relying on neural networks the proposed metric achieves extremely high precision despite the lack of any manually extracted features the proposed metric proves to be highly effective in scenarios we incorporated the metric in our solver using no sophisticated compatibility measure and experimented with the currently known challenging benchmarks of the hardest variant of the jigsaw puzzle problem square pieces only chromatic information is available to the 
solver where both piece orientation and puzzle dimensions are unknown the enhanced solver proposed sets a new in terms of the accuracy of the solutions obtained and the number of perfectly reconstructed puzzles previous work jigsaw puzzles were first introduced around by john spilsbury a londonian engraver and mapmaker nevertheless the first attempt by the scientific community to computationally solve the problem is attributed to freeman and garder who in presented a solver which could handle up to problems ever since then the research focus regarding the problem has shifted from to merely solvers of puzzles in cho et al presented deep neural network based estimation metric for the jigsaw puzzle problem a probabilistic puzzle solver that could handle up to pieces given some a priori knowledge of the puzzle their results were improved a year later by yang et al who presented a particle solver furthermore pomeranz et al introduced that year for the first time a fully automated square jigsaw puzzle solver that could handle puzzles of up to pieces gallagher has further advanced this by considering a more general variant of the problem where neither piece orientation nor puzzle dimensions are known son et al improved the accuracy of the latter variant using palkin and tal further improved the accuracy and handled puzzles with missing pieces sholomon et al presented a genetic algorithm ga solver for puzzles of known orientation which was later generalized to other variants compatibility measures and estimation metrics as stated earlier most works focus on the compatibility measure and an estimation metric a compatibility measure is a function that given two puzzle piece edges the right edge of piece versus the upper edge of piece predicts the likelihood that these two edges are indeed placed as neighbors in the correct solution this measure applies to each possible pair of piece edges the estimation metric on the other hand predict whether two piece edges are adjacent but may not apply to many possible pairs following is a more detailed review of the efforts made so far in the field cho et al surveyed four compatibility measures among which they found dissimilarity the most accurate dissimilarity is the sum over all neighboring pixels of squared color differences over all color bands assuming pieces xi xj are represented in some color space like rgb or yuv by a k k matrix where k is the of a piece in pixels their dissimilarity where xj is to the right of xi for example is v uk ux x xi k k cb xj k cb d xi xj r t where cb denotes the color band pomeranz et al also used the dissimilarity measure but found empirically that using the lp q norm works better than the usual norm moreover they presented the metric pieces xi and xj are said to bestbuddies if p ieces c xi xj c xi xk and p ieces c xj xi c xj xp where p ieces is the set of all given image pieces and and are complementary spatial relations if right then left and vice versa gallagher proposed yet another compatibility measure called the mahalanobis gradient compatibility mgc as a preferable compatibility measure to those sholomon david netanyahu used by pomeranz et al the mgc penalizes changes in intensity gradients rather than changes in intensity and learns the covariance of the color channels using the mahalanobis distance also gallagher suggested using dissimilarity ratios absolute distances between potential piece edge matches are sometimes not indicative for example in smooth surfaces like sea and sky so considering the absolute score 
divided by the score available seems more indicative son et al suggested four or more puzzle piece edges where the compatibility ratio between each pair is in the top ten among all possible pairs of piece edges in the given puzzle palkin and tal proposed a greedy solver based on an asymmetric dissimilarity and the estimation metric motivation we propose a novel estimation metric called our goal is to obtain a classifier which predicts the adjacency likelihood of two puzzle piece edges in the correct puzzle configuration note that despite the exponential nature of the problem as there are o nm possible arrangements of the pieces taking into account rotations the problem can be solved theoretically by assigning correctly in a consecutive manner n m pairs this is reminiscent of finding a minimal spanning tree as noted by hence the classifier s precision is of far greater importance than its recall a classifier with perfect precision and a recall of all possible matches n m n m might achieve a perfect solution by itself challenges a solution might have been to train a neural network against ones however the issue of a jigsaw puzzle piece matching is of an imbalanced nature in each n m puzzle there are o n m matching pairs of piece edges and o possible nonmatching ones a thorough review on the challenges and tactics to avoid them can be found in the trivial approach of random or uninformed undersampling randomly choosing the required number of nonmatching pairs leads to a and highrecall metric the very opposite of the goal set beforehand we believe that the reason for this shortcoming is that there exist many mismatches but only a handful of ones thus we resort to informed undersampling choosing a subset of good mismatching pairs according to some criterion nevertheless we avoid using any manual feature selection or other sophisticated means in the jigsaw puzzle domain similarly to many other problem domains the solver does not actually try to reassemble the original image as this problem is deep neural network based estimation metric for the jigsaw puzzle problem not mathematically defined but rather tries solving a proxy problem which is to achieve an image whose global overall score between is minimal thus we choose using the compatibility measure as the undersampling criterion neural network training for training and we use the images of size pixels from the iapr benchmark each image is first converted to yuv space followed by the normalization of each channel separately via normalization next each puzzle image is divided to tiles where each tile is of size pixels as in all previous works finally we create a balanced set of positive and negative samples of pairs using informed undersampling as will be described below in the end we obtain a balanced set of pairs overall to balance our dataset we use the most basic compatibility score which is the dissimilarity between two in the yuv as described in eq as an undersampling criterion for each puzzle piece edge xi j i j we find its most compatible piece edge and its second most compatible piece edge if the pair of edges xi j is indeed adjacent in the original image we add this pair to the pool of samples and toss the pair xi j to the pool of samples otherwise xi j is added to the samples and the other pair is discarded the latter is done to avoid training the network on adjacent pieces which happen to be vastly different due to a significant change of the image scenery in the corresponding region in other words we restrict our interest to highly 
compatible piece edges that are indeed adjacent since this method leads to more negative samples than positive ones we eventually randomly throw some negative samples to balance out the set from each image pair we extract the two columns near the edge the column of abutting pixels in each edge and the one next to it this results is an input of size pixels we use a neural network ffnn of five fully connected layers of size and the output is a softmax layer containing two neurons we expect for matching pairs and otherwise the activation used in all layers is the rectified linear unit relu function f x max x figure depicts the network s structure we trained the network in a supervised manner using stochastic gradient descent that minimizes the negative log likelihood of the error for iterations the resulting network reaches accuracy on the training set and on a test set all dataset preparation and network training was performed using experimental results for each piece edge xi j i n m j if its most compatible piece edge xk l is classified positively using the network we define xk l to be xi j s piece edge note that each piece edge can have only a single sholomon david netanyahu fig architecture of our scheme also some pieces might not have a at all if the most compatible piece is not classified as one by the network first we evaluate the precision of the proposed metric how many dnnbuddies are indeed adjacent in the original image using the well known dataset presented by cho et al of puzzles we obtained a precision of next we incorporated the estimation metric due to the proposed scheme into the solver proposed by us previously unfortunately due to lack of space no review of genetic algorithms and the proposed method can be included in this paper nevertheless the modification required with respect to the existing ga framework is rather simple if a pair appears in one of the parents assign this pair in the child figure describes the modified crossover operator in the ga framework according to the above see step which includes the new phase until n relative relations are assigned do try assigning all common relative relations in the parents try assigning all relative relations in the parents try assigning all relative relations in the parents try assigning all existing relative relations try assigning random relative relations fig crossover overview we ran the augmented solver on the puzzle set and on the two additional datasets proposed by pomeranz et al of and piece puzzles we evaluated our results according to the neighbor comparison which measures the fraction of correct neighbors and the number of puzzles perfectly reconstructed for each set deep neural network based estimation metric for the jigsaw puzzle problem table presents the accuracy results of the same solver with and without the metric for each dataset we achieve a considerable improvement in the overall accuracy of the solution as well as the number of perfectly reconstructed puzzles moreover our enhanced deep neural scheme appears to outperform the current results as it yields accuracy levels of and which surpass respectively the best results known of and of pieces ga neighbor perfect our ga neighbor perfect table comparison of our accuracy results with and without the new estimation metric conclusions in this paper we presented the first neural estimation metric for the jigsaw puzzle problem unlike previous methods no manual feature crafting was employed the novel method exhibits high precision and when combined with a puzzle 
solver it significantly improves the solution s accuracy to set a new art standard references altman solving the jigsaw puzzle problem in linear time applied artificial intelligence an international journal brown nehab burns dobkin vlachopoulos doumas rusinkiewicz weyrich a system for acquisition and matching of fresco fragments reassembling theran wall paintings acm transactions on graphics cao liu yan automated assembly of shredded pieces from multiple photos in ieee international conference on multimedia and expo pp cho avidan freeman a probabilistic image jigsaw puzzle solver in ieee conference on computer vision and pattern recognition pp cho butman avidan freeman the patch transform and its applications to image editing in ieee conference on computer vision and pattern recognition pp collobert kavukcuoglu farabet a environment for machine learning in biglearn nips workshop no deever gallagher assembly of real shredded documents in icip pp sholomon david netanyahu demaine demaine jigsaw puzzles edge matching and polyomino packing connections and complexity graphs and combinatorics freeman garder apictorial jigsaw puzzles the computer solution of a problem in pattern recognition ieee transactions on electronic computers gallagher jigsaw puzzles with pieces of unknown orientation in ieee conference on computer vision and pattern recognition pp goldberg malon bern a global approach to automatic solution of jigsaw puzzles computational geometry theory and applications grubinger clough deselaers the iapr benchmark a new evaluation resource for visual information systems in international workshop ontoimage vol he garcia learning from imbalanced data knowledge and data engineering ieee transactions on justino oliveira freitas reconstructing shredded documents through feature matching forensic science international koller levoy reconstruction and new matches in the forma urbis romae bullettino della commissione archeologica comunale di roma pp marande burger mitochondrial dna as a genomic jigsaw puzzle science marques freitas reconstructing documents using color as feature matching in acm symposium on applied computing pp morton levison the computer in literary studies in ifip congress pp paikin tal solving multiple square jigsaw puzzles with missing pieces in computer vision and pattern recognition ieee conference on pp ieee pomeranz shemesh a fully automated greedy square jigsaw puzzle solver in ieee conference on computer vision and pattern recognition pp sholomon david netanyahu a genetic solver for very large jigsaw puzzles in ieee conference on computer vision and pattern recognition pp sholomon david netanyahu a generalized genetic solver for very large jigsaw puzzles of complex types in aaai conference on artificial intelligence pp sholomon david netanyahu genetic solver for very large multiple jigsaw puzzles of unknown dimensions and piece orientation in acm conference on genetic and evolutionary computation pp son hays cooper solving square jigsaw puzzles with loop constraints in european conference on computer vision pp springer wang determining molecular conformation from distance or density data thesis massachusetts institute of technology yang adluru latecki particle filter with state permutations for solving image jigsaw puzzles in ieee conference on computer vision and pattern recognition pp zhao su chou lee a puzzle solver and its application in speech descrambling in wseas international conference computer engineering and applications pp
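The jigsaw entry above defines its baseline compatibility as the sum of squared color differences over the abutting pixel columns of two tiles, and its best-buddies test as a mutual most-compatible-neighbour check. Below is a minimal sketch of both for the left-right relation only; the tile size, the use of raw RGB rather than the normalized YUV described above, and the random data are illustrative assumptions, and whether a square root is applied does not change the induced ranking.

    import numpy as np

    def dissimilarity_right(xi, xj):
        # score for placing tile xj immediately to the right of tile xi:
        # squared color differences summed over the two abutting pixel columns
        diff = xi[:, -1, :].astype(float) - xj[:, 0, :].astype(float)
        return float(np.sqrt(np.sum(diff ** 2)))

    def best_buddies_right(pieces):
        # pairs (i, j) such that xj is xi's most compatible right neighbour
        # AND xi is xj's most compatible left neighbour
        n = len(pieces)
        d = np.full((n, n), np.inf)
        for i in range(n):
            for j in range(n):
                if i != j:
                    d[i, j] = dissimilarity_right(pieces[i], pieces[j])
        pairs = []
        for i in range(n):
            j = int(np.argmin(d[i]))            # best tile to put on i's right
            if int(np.argmin(d[:, j])) == i:    # i is also the best tile on j's left
                pairs.append((i, j))
        return pairs

    # toy usage on random 28 x 28 RGB tiles (sizes and data are illustrative only)
    rng = np.random.default_rng(0)
    tiles = [rng.integers(0, 256, size=(28, 28, 3)) for _ in range(6)]
    print(best_buddies_right(tiles))

In the entry above this kind of ranking serves only as the informed-undersampling criterion for assembling the balanced training pairs; the final adjacency decision is left to the trained network.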
1
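The classifier in the same jigsaw entry (the row labeled 1) is described as fully connected ReLU layers feeding a two-neuron softmax, trained by stochastic gradient descent on the negative log-likelihood, but the layer widths and input dimension were stripped from the text. The PyTorch sketch below uses placeholder sizes and is not the authors' implementation (their references indicate a Torch7 environment); CrossEntropyLoss is used because it combines the softmax and the negative log-likelihood in a single call.

    import torch
    import torch.nn as nn

    # Placeholder sizes: the true layer widths and input dimension were removed
    # from the text above, so these values are illustrative, not the authors'.
    INPUT_DIM = 2 * 2 * 28 * 3        # two columns per piece edge, 28 rows, 3 channels
    WIDTHS = [512, 256, 128, 64]

    model = nn.Sequential(
        nn.Linear(INPUT_DIM, WIDTHS[0]), nn.ReLU(),
        nn.Linear(WIDTHS[0], WIDTHS[1]), nn.ReLU(),
        nn.Linear(WIDTHS[1], WIDTHS[2]), nn.ReLU(),
        nn.Linear(WIDTHS[2], WIDTHS[3]), nn.ReLU(),
        nn.Linear(WIDTHS[3], 2),      # two outputs: adjacent / not adjacent
    )

    # CrossEntropyLoss = softmax + negative log-likelihood; optimiser is plain SGD.
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # one SGD step on a random mini-batch standing in for the balanced pair set
    x = torch.randn(32, INPUT_DIM)
    y = torch.randint(0, 2, (32,))
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    print(float(loss))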
sep distributed linear equation solver for minimum norm solutions jingqiu zhou wang xuan shaoshuai mou and brian anderson october abstract this paper proposes distributed algorithms for networks to achieve a solution in finite time to a linear equation ax b where a has full row rank and with the minimum in the underdetermined case where a has more columns than rows the underlying network is assumed to be undirected and fixed and an analytical proof is provided for the proposed algorithm to drive all agents individual states to converge to a common value viz a solution of ax b which is the minimum norm solution in the underdetermined case numerical simulations are also provided as validation of the proposed algorithms introduction a significant amount of effort in the control community has recently been given to distributed algorithms for solving linear equations over networks in which each agent only knows part of the equation and controls a state vector that can be looked at as an estimate of the solution of the overall linear equations numerous extensions along this direction include achieving solutions with the minimum euclidean norm elimination of the initialization step reduction of state vector dimension by utilizing the sparsity of the linear equation and achieving least square solutions all these algorithms yield asymptotic convergence but require an infinite number of sensing or communication events this work was supported by a funding from northrop grumman cooperation zhou wang and mou are with the school of aeronautics and astronautics purdue university west lafayette in usa mous anderson is with hangzhou dianzi university hangzhou china the australian national university and csiro formerly nicta canberra act australia his work is supported by csiro and by the australian research council s discovery projects and corresponding author shaoshuai mou solutions to underdetermined linear equations with the minimum norm are perhaps the most important in many engineering applications including earthquake location detection analysis of statistical data solving biomeganetic inverse problems and so on one most intriguing case among these applications is compressive sensing which enables transmission of sparse data in a very efficient way the decoding process of compressive sensing requires solving of linear equations with a minimum number of entries of the solution vectors which however is an problem and usually computationally costly thus researchers usually turn to achieve solutions with minimum norm instead for which the function to be minimized is convex most existing results for achieving minimum norm solutions are based on the idea of lasso including alternating direction method of multipliers admm the method gradient projection methods homotopy methods iterative methods and proximal gradient methods interesting as these results are they either achieve limited accuracy dominated by a threshold parameter involve solving a much larger linear equation or lead to high computational complexity in this paper we aim to develop distributed algorithms for networks to achieve in finite time a solution of linear equations or in the underdetermined case one with the minimum norm by distributed is meant that each agent only knows part of the overall linear equation and can communicate with only its nearby neighbors the problem of interest is formulated in section we introduce in section the concepts to be employed in the paper including filippov maps filippov solutions generalized lie derivatives 
based on which a preliminary result is achieved in section we will first propose a distributed algorithm to drive all agents state vectors to converge in finite time to the same solution of the overall linear equations then we present a centralized update for achieving a solution with the minimum norm motivated by the flow proposed in and the gradient flow for consensus devised in we utilize a combination of the proposed distributed linear equation solver and the proposed centralized algorithm for minimum solutions to develop a distributed linear equation solver for achieving the minimum norm solution which is shown to converge in finite time we provide simulations in section and concluding remarks in section notation let r denote an arbitrary positive integer let denote the vector in rr with all entries equal to let ir denote the r r identity matrix we let col ar be a stack of matrices ai possessing the same number of columns with the index in a ascending order i let diag ar denote a block diagonal matrix with ai the ith diagonal block entry i by m and m are meant that the square matrix m is positive definite and positive respectively by m is meant the transpose of a matrix m let ker m and image m denote the kernel and image of a matrix m respectively let denote the kronecker product let k denote the norm of a vector in rr problem formulation consider a network of m agents i m inside this network each agent can observe states of certain other agents called its neighbors let ni denote the set of agent i s neighbors we assume that the neighbor relation is symmetric that is j ni if and only if i nj then all these neighbor relations can be described by an undirected graph g such that there is an undirected edge connecting i and j if and only if i and j are neighbors in this paper we only consider the case in which g is connected fixed and undirected suppose that each agent i knows ai rni and bi rni and controls a state vector yi t rn then all these ai and bi can be stacked into an overall equation ax b where a col am b col bm without loss of generality for the problems of interest to us we assume a to have full let denote a solution to ax b and in the underdetermined case is not unique and let denote its minimum solution that is arg min in the a case x and necessarily coincide the problem of interest in this paper is to develop distributed algorithms for each agent i to update its state vector yi t by only using its neighbors states such that all yi t to converge in finite time to a common value and if desired in the nonsquare case the value key concepts and preliminary results before proceeding we introduce some key concepts and preliminary results for future derivation and analysis the key references for the background we summarize are and filippov maps and filippov solutions by a filippov map f f rr b rr associated with a function f rr rr is meant co f b x f f x s here b x stands for the open ball on rr whose center is at x and has a radius of s denotes the lebesgue measure of s and co stands for the convex closure let sgn x rr rr be a function with the kth entry k r defined as x k x k sgn x k x k it follows that the filippov map f sgn x for x rr is defined entrywise as x k x k f sgn x k x k for k note that even if x i x j the ith and jth entries of a vector in f sgn x may not necessarily be equal since each of them could be chosen as arbitrary values in the interval from the definition of f sgn x one can verify that q x f sgn x while for any w rr there holds q w by a filippov solution for 
f f x is meant a caratheodory solution x t such that f f x for almost all t x t is absolutely continuous and can be written in the form of an indefinite integral the following two lemmas treat existence of such a filippov solution lemma proposition in if f rr rr is measurable and locally bounded then for any initial point rr there exists a filippov for f f x lemma theorem in page of let a f t x be defined in the domain g of time space t x with f t x measurable and locally bounded in an open domain g let f f t x co f t b x s then for any point g there exists a filippov solution of f f t x with x note that lemma establishes the existence of a solution for systems this is more general than lemma which only guarantees the existence of solutions to systems generalized gradients and generalized lie derivatives for a locally lipschitz function w rr r the generalized gradient of w is x co lim xi xi x xi s there is no implication that the solution exists on an infinite interval where s rr is an arbitrarily chosen set of measure zero denotes the set of points at which w is not differentiable and co denotes convex hull specially for the function one computes the kth element of its generalized gradient to be xk xk k xk it follows from this and the definition of f sgn x in that f sgn x for a map f rr b rr the generalized lie derivative of w is defined as w x q r there exists f x such that x q the above definition of generalized lie derivative implies that for each in f x we check if the inner product is a fixed value for all x if so this inner product is an element in w x but note that the set w x may be empty moreover for locally lipschitz and regular see and for detailed discussion of regular functions w x one has the following lemma lemma proposition in let x rr be a solution for f x t where f is any map let w x rr r be locally lipschitz and regular then w x t is differentiable at almost all t w x t for almost all t the derivative of w x t satisfies dw x t dt lemma guarantees the existence of generalized lie derivatives for functions that are locally lipschitz and regular if one focuses on a specific solution one can show that in is a special vector as summarized in the following lemma lemma see proof of lemma in let x t denote a specific solution of a differential enclosure suppose w x is locally lipschitz and regular let i denote the time interval for which t exists then dw x t t dt where is any vector in x that a function w x rn r is called regular at x rn if x v for all v rn there exists the usual right directional derivative x v w o x v for all v rn preliminary results for any positive matrix m m one can define m q rr f sgn q m q and its compliment m q rr f sgn q m q we impose a further requirement on m namely that m is nonempty which can be easily ensured let m f sgn q q m now f sgn q is a closed set for any fixed q also note that f sgn q can only be one of a finite number of different sets hence it is easy to check for a given m whether m is nonempty and in a later use of the result it proves easy to check it further follows that m is also a closed set consequently the continuous function f m has a nonzero minimum on m we denote m min f m from the definition of m and m one has m to summarize one has the following lemma lemma for any matrix m we let m m and m defined as above suppose that m is nonempty then m is a positive constant for the graph g we label all its nodes as m and all its edges as assign an arbitrary direction to each edge in then the incidence matrix of g denoted by h hik is 
defined as follows i is the head of the kth edge i is the tail of the kth edge hik otherwise since g is connected then ker h is the span of moreover one has the following lemma lemma suppose a has rank and g is connected let diag pm where each pi is the projection matrix to ker ai let h in with h the incidence matrix of then one has image ker image and proof of lemma let u be a vector such that the vector v lies in image and ker and we will show it is zero to establish define diag am which is full row rank since a has full row rank then i it follows from that multiplying both sides of the above equation by in one has in since there holds since is full column rank one has from and equation one has furthermore we notice and conclude that is true for any q image there exists a vector p such that q p and a vector f sgn q such that note that is a projection matrix then from one then has from f sgn q and one has q which together with and implies then one has q and is true now consider the system f sgn x with any positive matrix m the existence of a filippov solution to can be guaranteed by lemma the existence interval is t because of the global bound on the side to let x t denote such a filippov solution for any given x note that the function is locally lipschitz and regular the word was introduced early with reference then by lemma the time derivative of kx t exists for almost all t and is in the set of generalized lie derivatives in other words there exists a set i t with t of lebesgue measure such that dkx t exists for all t i dt proposition let x t denote a filippov solution to for any given x rr then dkx t dt t i there exists a finite time t such that x t m if it is further true that dkx t dt t t t one has x t x t t t remark note that the side of is a projection of the gradient flow of the potential function kx t it is also standard that the gradient law of a real analytic function with a lower bound converges to a single point which is both a local minimum and a critical point of the potential function however if the real analytic property does not hold the convergence result may fail indeed the function kx t here is obviously not real analytic and one can not immediately assert that will drive kx t to its minimum not to mention the finite time result in proposition thus proposition is nontrivial and will serve as the foundation for devising distributed linear equation solvers in this paper proof of proposition by in lemma one has t dt holds for all t it follows from f sgn x in that at each t there exists a t f sgn x such that t since could be chosen as any vector in t t f sgn x and f sgn x from one could choose t in which together with leads to dkx t t m t t i dt note that m is positive thus is true we use the method of contradiction to prove that there exists a finite time t such that x t m t i suppose such a finite time does not exist one has x t m t i then dkx t m t i dt where m is as defined in it follows that kx t kx m t t this contradicts the fact that kx t t since m is a positive constant by lemma thus there exists a finite time t such that x t m from the assumption the fact that m is semipositive definite and one has m t t t t it follows from this and that t t t t by integration of t from t to one has x t x t completes the proof t t this algorithms and main results in this section we study three related problems finite time distributed solution of ax b centralized solution of ax b achieving a minimum norm and then finally using ideas from the first two and finding a distributed 
algorithm achieving the minimum solution of ax b in finite time distributed linear equation solver in this subsection we will present a distributed update for achieving a solution to ax b in finite time of course a is assumed to have full row rank but not necessarily be square recall that distributed linear equation solvers based on the agreement principle require each agent i to limit its update subject to its own constraint ai x bi while seeking consensus with its neighbors such an agreement principle in systems can be achieved by a flow which at each agent i projects a function of its neighbors states to the subspace defined by its linear equation by choosing the gradient flow for consensus developed by within the flow we are led to postulate the following update for each agent i x ai yi bi where f sgn yi yj and pi denote the projection matrix to the kernel of ai note the special property that f sgn is a point which can be chosen arbitrarily from the interval generally speaking different agents may have different choices for f sgn before proceeding we make the following assumption on coordinations between neighbor agents assumption each pair of neighbor agents i and j takes the choice of and when yi yj such that under assumption and the definition of f sgn x one always has no matter whether yi is equal to yj or not let y col ym diag pm and h in with h the incidence matrix of then from we have sgn y by lemma there exists a filippov solution to system for t we denote this solution by y t by lemma there exists a set i t exists for all t i moreover with t of lebesgue measure such that dky t k dt one has the following main theorem which establishes existence of a limiting consensus solution but for the moment leaves unspecified the theorem under assumption and the updates and with a assumed to have full row rank all yi t i m converge to be a single solution to ax b in finite time proof of theorem since ai yi bi and is in the kernel of ai one has ai yi t bi for all t then to prove theorem it is sufficient to show that all yi reach consensus in finite time note that g is connected and ker h is spanned by the vector then y if and only if all yi are equal thus to prove theorem it suffices to prove z t y t converges to in finite time by multiplying both sides of by one has sgn z by proposition we have dkz t t i dt and there exists a finite time t i such that z t from this we know the fact that z t y t so that z t image and recalling in lemma one has z t which by implies kz t t t it follows that z t t t this completes the proof centralized update for minimum solution in this subsection we will propose a centralized update for achieving the minimum solution to ax b by noting that is convex we conceive of using a negative gradient flow of subject to x remaining on the manifold ax b in order to achieve arg this leads us to the following update f sgn y ay b where p denotes the projection matrix onto the kernel of a again by lemma one has there exists a filippov solution to system for t which we denote by y t by lemma there exists a set i t with t of measure such that dky t k exists for all t i moreover we have the following dt main theorem theorem with a of full row rank the filippov solution y t to converges in finite time to a constant which is the minimum solution to ax b proof of theorem by proposition one has dky t t i dt and there exists a finite time t i such that y t p then there exists a vector f sgn y t such that p this and imply y t ky t moreover let denote any solution to ax b recall there 
holds since ker p one has image this implies that there exists a vector q such that q a from ay t b and one has ky t y t q ay t q where is any solution to ax b thus y t is a minimum norm solution to ax b this and implies t reaches its minimum value subject to ay t b for t t t thus dky t dt t t t which satisfies the assumption in proposition again then y t y t t t thus y t is the minimum solution to ax b for t t this completes the proof distributed update for minimum solutions in this subsection we will develop a distributed update for a network to achieve the minimum solution to ax b in finite time motivated to study a combination of the distributed linear equation solver in and the centralized update for minimum solutions in we propose the following update for agent i i m x t pi pi where f sgn yi f sgn yi yj with in case yi yj and ai yi bi we assume that k t r is measurable and locally bounded almost everywhere for t and lim k t z k t dt where is a sufficiently small nonnegative number depending on the connection of the network and a note that is always a feasible choice of one example one simple case is choosing and of a choice of k t is k t resulting in k t obtained by taking to be zero this choice obviates the need to decide how small one has to be to meet a sufficiently small condition but may result in rather slow convergence now from ai yi bi and the fact that pi is the projection to the kernel of ai which ensures ker ai one has ai yi t bi t let y col ym diag pm and h in with h the incidence matrix of from the updates in and assumption one has t f sgn y sgn y note that sgn y k t and sgn y are measurable and locally bounded almost everywhere then by lemma there exists a filippov solution to system for any given y satisfying which we denote by y t col t t t ym t by lemma there exists a set i t exists for all t i then one with t of lebesgue measure such that dky t k dt has the following theorem theorem under assumption and the update and with a of full row rank all yi t i m converge in finite time to the same value which is the minimum solution to ax b proof of theorem we first prove that all yi t reach a consensus in finite time by showing that z t converges to in finite time where z t y t multiplying both sides of by one has t f sgn y sgn z by lemma one has dkz t t i dt where can be any vector in note that f sgn z then dkz t t dt where f sgn y f sgn z and is chosen to be equal to since z t image then by lemma if also z t one will have z t thus z t as long as kz t now by the definition of and lemma one has as long as kz t where is a positive constant let denote an upper bound on and define an upper bound on by this captures the idea stated previously that depends on a and the graph for any chosen as in there must exist a finite time such that k t t where we have from and one has dkz t as long as kz t t t dt with a positive constant thus there must exist a finite time such that z next we prove that z t t we prove this by contradiction suppose is not true then there exists a time such that z then kz since kz t is continuous there exists a time such that kz takes its maximum value for t again since kz t is continuous there exists a sufficiently small but positive such that kz t is differentiable for t because kz t is differentiable almost everywhere we know that kz z dkz t dt kz dt because kz t in by and we have kz kz kz this contradicts the fact that kz is the maximum value on thus is true by one has there exists a vector t such that t t ym t t t moreover b since ai yi t bi for i to 
prove theorem we only need to prove that t converges to be the minimum solution to ax b to see why this is so we let p denote the projection matrix to the kernel of a then p pi p for i multiplying both sides of by p one has x t p p i m since g is undirected then appears in the update i if appears in its neighbor j s update by adding the updates in for i m and noting for any two neighbors i and j one has m x t p m x where f sgn yi by one knows all yi t reach a consensus t for t note that if the kth entry of t is the kth entry of each can be selected as an arbitrary value from which may be different for different entries but their average is still an arbitrary value in thus m x f sgn t m t from and we have t p f sgn let rt t k s ds from dt dt and one has f sgn with b for this is exactly the same as the centralized update in by theorem there exists a finite time such that y is the minimum solution to ax b for by the relation between and t one has correspondingly that there exist a finite time t such that t is a minimum solution for t t this completes our proof simulation result in this section we will report several simulations of the proposed algorithms for solving an underdetermined linear equation ax b in a undirected and connected network as in figure here a and b are partitioned as a and b figure a four agent network respectively each agent i knows ai and bi with example we utilize the distributed update to achieve a solution to ax by x in finite time in the above network let b y where yi t denote the state of agent i that is the estimate of agent i to then ky t measures the difference between all agents estimations and the solution as shown by simulations in figure ky t reaches in finite time this suggests all agents states achieves a consensus to in finite time consistent with the claim of theorem example we employ the centralized update with state vector y t to achieve which denotes a minimum solution to ax b as shown in figure finite time achieving a solution under the update figure ky t reaches in finite time and maintains to be afterwards this indicates that the minimum solution is achieved in finite time corresponding to theorem it is worth noting that one could observe multiple phases of convergence in figure this is because f sgn y t in the update takes different values and results in different convergence rates figure centralized solver for achieving a minimum norm solution under the update example finally we utilize the distributed update to achieve a minimum solution to ax b denoted by in finite time here k t is chosen to take the form with and constants we still let y where yi t denote the state of agent i that is the estimate of agent i to then ky t measures the difference between all agents estimations and as shown in figure and figure all yi t reach the same minimum solution in finite time regardless of different choices of and moreover by fixing and increasing the value of in k t one achieves a significantly faster convergence as shown in figure similarly increasing with a fixed also leads to a faster convergence although not that dramatically as shown in figure figure distributed solver for achieving minimum norm solution under the update where k t with fixed and different values of figure distributed solver for achieving minimum norm solution under update where k t with fixed and different values of we also note from figure and figure that the convergence time required in this distributed way for minimum solutions is dramatically longer roughly speaking times longer 
than that in the centralized case in figure the major reason for this is that the centralized update appearing in the distributed update is scaled by k t which is smaller than the time required for consensus in this network example is minor under the distributed update as indicated in figure let t denote the average of all four agents states the evolution of ky t t in figure suggests that all agents states reach a consensus in a finite time similar to that in figure we anticipate that when it comes to a very large network the convergence time for consensus might play a more significant role in convergence of the distributed update figure consensus of distributed solver under update with k t conclusion we have developed distributed algorithms for achieving solutions and minimum solutions respectively to linear equations ax b in finite time the algorithms result from combination of the projectionconsensus flow proposed in and the gradient flow for consensus devised in and work for fixed undirected networks future work includes the generalization of the proposed update to networks that are directed and references mou liu and morse a distributed algorithm for solving a linear algebraic equation ieee transactions on automatic control anderson mou helmke and morse decentralized gradient algorithm for solution of a linear equation numerical algebra control and optimization lu and tang a distributed algorithm for solving positive definite linear equations over networks with membership dynamics ieee transactions on control of network systems wang and elia distributed solution of linear equations over unreliable networks proceedings of american control conference pages mou and morse a distributed algorithm for solving a linear algebraic equation european control conference pages wang mou and sun improvement of a distributed algorithm for solving linear equations ieee transactions on industrial electronics wang ren and duan distributed minimum weighted norm solution to linear equations associated with weighted inner product proceedings of the conference on decision and control pages wang fullmer and morse a distributed algorithm with an arbitary initialization for solving a linear algebraic equation proceedings of american control conference pages mou lin wang fullmer and morse a distributed algorithm for efficiently solving linear equations and its applications special issue jcw systems control letters wang and elia distributed least square with intermittent communications in american control conference acc pages june wang and elia a control perspective for centralized and distributed convex optimization in ieee conference on decision and control and european control conference pages dec gharesifard and distributed convex optimization on digraphs ieee transactions on automatic control march shi and distributed network flows solving linear algebraic equations proceedings of american control conference pages liu lageman anderson and shi exponential least squares solvers for linear equations over networks ifac world congress toulouse pages liu lou anderson and shi network flows as least squares solvers for linear equations ieee conference on decision and control melbourne accepted shearer improving local earthquake locations using the norm and waveform cross correlation application to the whittier narrows california aftershock sequence journal of geophysical research solid earth y dodge statistical data analysis based on the and related methods beucker and schlitt on minimal solutions of the 
biomagnetic inverse problem technical report zentralinstitut angewandte mathematik baron duarte wakin sarvotham and baraniuk distributed compressive sensing eldar and kutyniok compressed sensing theory and applications cambridge university press candes and tao decoding by linear programming ieee transactions on information theory candes romberg and tao robust uncertainty principles exact signal reconstruction from highly incomplete frequency information ieee transactions on information theory yang genesh zhou sastry and ma a review of fast l algorithms for robust face recognition technical report california univ berkeley dept of electrical engineering and computer science boyd parikh chu peleato and eckstein distributed optimization and statistical learning via the alternating direction method of multipliers foundations and trends r in machine learning frisch the logarithmic potential method of convex programming memorandum university institute of economics oslo kojima megiddo and mizuno theoretical convergence of largestep primaldual interior point algorithms for linear programming mathematical programming figueiredo nowak and wright gradient projection for sparse reconstruction application to compressed sensing and other inverse problems ieee journal of selected topics in signal processing osborne presnell and b turlach a new approach to variable selection in least squares problems ima journal of numerical analysis daubechies defrise and de mol an iterative thresholding algorithm for linear inverse problems with a sparsity constraint communications on pure and applied mathematics pages becker bobin and nesta a fast and accurate firstorder method for sparse recovery siam journal on imaging sciences pages cao morse and anderson agreeing asynchronously ieee transactions on automatic control cao morse and anderson reaching a consensus in a dynamically changing environment a graphical approach siam journal on control and optimization cao spielman and morse a lower bound on convergence of a distributed network consensus algorithm in decision and control and european control conference ieee conference on pages ieee lageman and y sun consensus on spheres convergence analysis and perturbation theory in decision and control cdc ieee conference on pages ieee qin and yu exponential consensus of general linear systems under directed dynamic topology automatica convergent gradient flows with applications to network consensus automatica cortes discontinuous dynamical system a tutorial on solutions nonsmooth analysis and stability ieee control system magazine pages filippov differential equations with discontinuous righthand sides control systems volume springer science business media clarke optimization and nonsmooth analysis siam bacciotti and ceragioli stability and stabilization of discontinuous systems and nonsmooth lyapunov functions esaim control optimisation and calculus of variations chung spectral graph theory american mathematical soc absil and kurdyka on the stable equilibrium points of gradient systems systems control letters
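The centralized update analysed above drives a particular solution of Ax = b toward a minimum-norm solution by combining the projector P onto the kernel of A with a sign nonlinearity (the norm symbol has been lost in extraction; the surrounding references on sparse recovery suggest the l1 norm is intended). The Python sketch below is a minimal Euler discretisation of a projected-sign update of that flavour, y' = -P sgn(y); the problem size, step size and iteration count are illustrative assumptions, and the paper's exact update and finite-time argument are not reproduced here.

```python
# Minimal sketch, under the stated assumptions: Euler discretisation of a
# projected-sign update  y <- y - dt * P * sign(y),  with P the orthogonal
# projector onto ker(A), started from a particular solution of A y = b.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 3, 6                                   # underdetermined system (placeholder sizes)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

y = np.linalg.lstsq(A, b, rcond=None)[0]      # a particular solution of A y = b
P = np.eye(n) - np.linalg.pinv(A) @ A         # orthogonal projector onto ker(A)

dt = 1e-3
for _ in range(200_000):
    y -= dt * P @ np.sign(y)                  # the step lies in ker(A), so A y = b is preserved

# reference value: minimum ell-1 norm via a linear program (split y = u - v, u, v >= 0)
res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b, bounds=[(0, None)] * (2 * n))

print("residual ||Ay - b||      :", np.linalg.norm(A @ y - b))
print("ell-1 norm of the iterate:", np.linalg.norm(y, 1))
print("LP minimum ell-1 norm    :", res.fun)
```

Because each step lies in the kernel of A, the residual ||Ay - b|| stays numerically zero throughout, while the l1 norm of the iterate decreases toward the LP reference value; this is only meant to illustrate the projection-plus-sign structure discussed above, not the paper's finite-time guarantee.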
3
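As noted in the conclusion of the preceding paper, the distributed solver combines a projection-consensus flow with a sign-based gradient flow for consensus. The sketch below simulates only that consensus ingredient, x_i' = sum over neighbours j of sgn(x_j - x_i), on a four-agent undirected graph; the ring topology, initial states, step size and tolerance are assumptions chosen for illustration, since the paper's Figure 1 topology and numerical values are not recoverable from the extracted text.

```python
# Minimal sketch, assuming a 4-agent undirected ring and scalar states:
# Euler-discretised sign-based consensus  x_i' = sum_{j in N_i} sign(x_j - x_i),
# i.e. the consensus ingredient combined with the projection step above.
import numpy as np

neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # assumed ring topology
x = np.array([4.0, -1.0, 0.5, 2.5])                         # arbitrary initial states
dt, t_max, tol = 1e-3, 10.0, 1e-2

for step in range(int(t_max / dt)):
    dx = np.array([sum(np.sign(x[j] - x[i]) for j in neighbours[i]) for i in range(4)])
    x = x + dt * dx
    if x.max() - x.min() < tol:
        print(f"disagreement below {tol} at t = {step * dt:.3f} s")
        break

print("final states   :", np.round(x, 3))
print("initial average:", np.mean([4.0, -1.0, 0.5, 2.5]))   # the flow preserves the average
```

On a connected undirected graph this flow preserves the average of the states and reaches it in finite time, which is the qualitative behaviour the simulations above report for the full distributed solver.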
wireless communication designs with propulsion energy limitations subin eom hoon lee junhee park and inkyu lee fellow ieee jan school of electrical korea university seoul korea email inkyu abstract this paper studies unmanned aerial vehicle uav aided wireless communication systems where a uav supports uplink communications of multiple ground nodes gns while flying over the area of the interest in this system the propulsion energy consumption at the uav is taken into account so that the uav s velocity and acceleration should not exceed a certain threshold we formulate the minimum average rate maximization problem and the energy efficiency ee maximization problem by jointly optimizing the trajectory velocity and acceleration of the uav and the uplink transmit power at the gns as these problems are in general we employ the successive convex approximation sca techniques to this end proper convex approximations for the constraints are derived and iterative algorithms are proposed which converge to a local optimal point numerical results demonstrate that the proposed algorithms outperform baseline schemes for both problems especially for the ee maximization problem the proposed algorithm exhibits about gain over the baseline scheme i ntroduction recently unmanned aerial vehicles uavs have received great attentions as a new communication entity in wireless networks compared to conventional terrestrial communications where users are served by ground base stations bss fixed at given position systems could be dispatched to the field with various purposes such as disaster situations and military uses moreover located high above users uavs are likely to have los communication links for channels utilizing these advantages uavs have been considered to diverse wireless communication systems the authors in and studied a mobile relaying system where a uav helps the communication of ground nodes gns without direct communication links in this relaying system compared to conventional static relay schemes the uav can move closer to source and destination nodes in order to obtain good channel conditions and thus the system throughput can be significantly improved in the throughput of mobile relaying channels was maximized by optimizing the transmit power at the source and the relay node as well as the trajectory of the mobile relay for the fixed relay trajectory the work addressed the secrecy rate maximization problem for the relaying system with an external eavesdropper in addition uavs have been adopted to assist conventional terrestrial communication infrastructures for the disaster situation uavs were employed in to recover malfunctioned ground infrastructure the work in examined a system where the uav serves users by jointly optimizing uav s trajectory bandwidth allocation and user partitioning also the flying computing cloudlets with uavs were introduced to provide the offloading opportunities to multiple users moreover the uavs could play the role of mobile bss in wireless networks the authors in derived mathematical expressions for the optimum altitude of the uavs that maximizes the coverage of the cellular network also the trajectory optimization methods for mobile bss were presented in and assuming that the gns are located in a line the minimum throughput performance was maximized in by optimizing the position of a uav on a straight line this result was extended in to a general scenario where multiple uavs fly space to communicate with gns the joint optimization algorithms for the uav trajectory 
transmit power and time allocation were provided in to maximize the minimum throughput performance however these works did not consider the propulsion energy consumption of the uavs necessary for practical uav designs under limited energy situation by taking this issue into account recent works investigated energy efficiency ee of the uav system different from conventional systems which consider only communicationrelated energy consumption the ee of the uav should addresses the propulsion energy at the uav additionally the authors in maximized the ee by controlling the turning radius of a uav for mobile relay systems also by jointly optimizing the time allocation speed and trajectory both the spectrum efficiency and the ee were maximized in in the propulsion energy consumption of the uav was theoretically modeled and the ee of the uav was maximized for a single gn system this paper studies wireless communications where a uav with limited propulsion energy receives the data of multiple gns in the uplink it is assumed that all gns and the uav operate in the same frequency band and there are no direct communication links among gns under these setup we formulate the minimum rate maximization problem and the ee maximization problem by jointly optimizing the uav trajectory the velocity the acceleration and the uplink transmit power at the gns a similar approach for solving the minimum rate maximization was studied in but the authors in did not involve the propulsion energy consumption at the uav for the ee maximization problem our work can be regarded as a generalization of the single gn system in to the scenario and thus we need to deal with interference as well due to these issues existing algorithms presented in and can not be directly applied to our problems to tackle our problem of interest we introduce auxiliary variables which couple the trajectory variables and the uplink transmit power in order to jointly optimize these variables as the equivalent problem is still we employ the successive convex approximation sca technique which successively solves approximated convex problems of the original one in order to apply the sca to our optimization problems we present new convex surrogate functions for the constraints then we propose efficient algorithms for the minimum rate maximization problem and the ee maximization problem which yield local optimal solutions simulation results confirm that the proposed algorithms provide a significant performance gain over baseline schemes the rest of this paper is organized as follows section ii explains the system model and the problem formulations for the communication systems in section iii the minimum rate maximization and the ee maximization algorithms are proposed we examine the circular trajectory case as baseline schemes in section iv section v presents the numerical results for the proposed algorithms and we conclude the paper in section vi notations throughout this paper the bold and normal letters denote vectors and scalars respectively the space of vectors are represented by rm for a vector a kak and at indicate norm and transpose respectively the gradient of a function f is defined as for a function x t t and t stand for the and derivatives with respect to time t respectively fig wireless network ii s ystem m odel a nd p roblem f ormulation as shown in fig we consider wireless communications where a uav receives uplink information transmitted from k gns the uav horizontally flies at a constant altitude h with a time period t while the gns are 
located at fixed positions which are perfectly known to the uav in advance for the location of the gns and the uav we employ a cartesian coordinate system and thus the horizontal coordinate of gn k k k is denoted by wk xk yk t also we define the horizontal coordinate of the uav at time instant t as q t qx t qy t t for t t then the instantaneous velocity v t and the acceleration a t of the uav are expressed by v t t and a t t respectively continuous time expressions of variables make analysis and derivations in the uav systems intractable for ease of analysis we discretize the time duration t into n time slots with the same time interval t n as a result the trajectory of the uav can be represented by n vector sequences q n q v n v and a n a for n when the discretized time interval is chosen as a small number the velocity and the acceleration can be approximated by using taylor expansions as v n v n a n for n n q n q n v n a n for n also assuming the periodical operation at the uav we have q q n v v n a a n which implies that after one period t the uav returns to its starting location with the same velocity and acceleration in addition the acceleration and the velocity of the practical uav are subject to ka n k amax for n n vmin kv n k vmax for n n where amax indicates the maximum uav acceleration in and vmin and vmax stand for the minimum and the maximum uav speed constraints in respectively notice that the minimum speed constraint vmin is important for practical uav designs which need to move forward to remain aloft and thus can not hover over a fixed location for the power consumption at the uav we take into account the propulsion power utilized for maintaining the uav aloft and supporting its mobility the propulsion power of the uav pprop n at time slot n is given by pprop n kv n k kv n k ka n for n n where and are the parameters related to the aircraft design and g equals the gravitational acceleration thus the average propulsion power and the total consumed propulsion p pn energy over n time slots are obtained by n pprop n and pprop n respectively the power consumed by signal processing circuits such as converters and channel decoders are ignored since they are practically much smaller than the propulsion power now let us explain the channel model between the uav and the gns we assume that the communication links are dominated by the los links moreover the doppler effect due to the uav mobility is assumed to be well compensated then the effective channel gain hk n from gn k to the uav at time slot n follows the path loss model as hk n dk n where represents the reference ratio snr at m with and being the channel power at m and the white gaussian noise power at the uav respectively and the distance dk n is written by dk n p kq n wk h at time slot n gn k transmits its data signal to the uav with power pk n ppeak where ppeak is the peak transmission power constraint at the gns accordingly the instantaneous achievable rate rk n can be expressed as rk n where the term pk pk n hk n pk pj n hj n pj n hj n stands for interference from other gns therefore the achievable average rate of the gn k and the total information bits transmitted from gn k pn p over n time slots are denoted as n rk n respectively where w rk n and w means the bandwidth in this paper we jointly optimize the variables q n v n and a n and the uplink transmit power pk n at the gns so that the minimum average rate among multiple gns and the ee are maximized respectively first the minimum rate maximization problem can be 
formulated as p max q n v n a n pk n n x rk n n pk n ppeak n n x pprop n plim n where plim in indicates the propulsion power constraint at the uav next to support all of the individual gns the fairness based ee is more suitable than the ee thus we define the ee in the wireless communication systems as the ratio between the minimum information bits transmitted among the gns and the total energy consumed at the uav therefore the ee maximization problem can be written by p max q n v n a n pk n pn pprop n w rk n n x in general and are problems due to the constraints and the objective functions compared to we additionally consider the propulsion power constraint in the minimum rate maximization problem also note that the ee maximization problem can be regarded as a generalization of which investigated only a single gn scenario from these respects the works in and can be regarded as special cases of our problems and respectively to solve the problems and we adopt the sca framework which iteratively solves approximated convex problems for the original problems iii p roposed a lgorithm in this section we propose iterative algorithms for efficiently solving and by applying the sca method first the minimum rate maximization problem is considered in section and then it is followed by the ee maximization problem in section a minimum average rate maximization applying the change of variables as gk n pk n hk n pk n n kq n wk h where gk n is a new optimization variable the constraint becomes gk n gk max n n where gk max n ppeak hk n ppeak kq n then we can rewrite the achievable rate rk n in as rk n k x p g n where n k j gm n n by introducing new auxiliary variables n can be recast to p max q n v n a n gk n n n x n k x gm n n gk n gk max n n n x ka n kv n plim n n g n vmin n n kv n kv n k vmax it can be shown that at the optimal point of the inequality constraint in holds with the equality since otherwise we can enlarge the feasible region corresponding to by increasing n therefore we can conclude that is equivalent to thanks to the new auxiliary variables n constraints and now become convex while and are still in general to address these constraints we employ the sca methods first it can be checked that constraint is given by a difference of two concave functions hence the convex surrogate function n for n can be computed from a first order taylor approximation as k k x x gj l n n gj n gj l n n n where gk l n indicates a solution of gk n attained at the iteration of the sca process and p n k gj l n next to identify the surrogate functions of and we present the following lemmas lemma denoting ql n as a solution for q n calculated at the iteration the concave surrogate function glb k max n for gk max n can be expressed as glb k max n ppeak n wk bk n n wk t ql n wk ck n gk max n where the constants bk n and ck n are respectively given as bk n h kql n wk h kql n wk n wk ck n kql n wk h kql n wk h proof please refer to appendix lemma from a solution vl n obtained at the iteration the concave surrogate function of n can be computed as n n vl n n proof applying a similar process in appendix a we can conclude that the function in satisfies the conditions for a concave surrogate function with the aid of lemmas and at the l iteration the constraints in and can be approximated as gk n glb k max n n n n vl n as a result with given solutions ql n vl n gk l n at the iteration we solve the following problem at the l iteration of the sca procedure p max n n a n gk n n lb lb n x n k x gm n n lb where lb denotes the lower 
bound of in the original problem since is a convex problem it can be optimally solved via existing convex optimization solvers cvx based on these results we summarize the proposed iterative procedure in algorithm algorithm proposed algorithm for initialize n n n n and let l repeat compute n n gk n for with given ql n vl n gk l n update l l until convergence gk n obtain pk n hk n for the convergence analysis of algorithm let us define the objective values of and at the iteration as and respectively then we can express the relationship lb where the first equation holds because the surrogate functions in and are tight at the given local points the second inequality is derived from the property of the optimal solution of and the third inequality follows from the fact that the approximation problem is a lower bound of the original problem from we can conclude that the objective value in is for every iterations of algorithm since the objective value in has a finite upper bound value and at given local points the surrogate functions in and obtain the same gradients as their original functions it can be verified that algorithm is guaranteed to converge to at least a local optimal solution for b energy efficiency maximization in this subsection we consider the ee maximization problem first by applying and introducing an auxiliary variable n can be transformed as p max pn q n v n a n kv n k n n k v n gk n n k n x x gm n n w similar to we can see that is equivalent to but is still due to the constraints in and to tackle this issue we can employ the similar sca process presented in section by adopting and lemmas and a convex approximation of at the l iteration is given by p max n n a n gk n n lb pn kv n k w n x n k x ka n g n gm n n lb where lb denotes the lower bound of in the original problem it can be shown that is a fractional problem which can be optimally p solved via the dinkelbach s method then denoting n kv n k n ka n g n with a given constant can be converted to as p lb max n n a n gk n n based on we summarize the proposed iterative procedure in algorithm the convergence and the local optimality of algorithm can be verified similar to algorithm and thus the details are omitted for brevity algorithm proposed algorithm for initialize n n n n and let m and l repeat repeat compute n n gk n for with given ql n vl n gk l n n and update l l until convergence let f lb and lb update m m let n n n n n gk n n and l until convergence gk n obtain pk n hk n it is worthwhile to note that we need to initialize the trajectory variables q n v n for and however it is not trivial to find such variables satisfying the uav movement constraints and the propulsion power constraint this will be clearly explained in section iv c ircular trajectory system now we examine the circular trajectory system which will be used as a baseline scheme first we choose the center of the circular trajectory c t as the geometrical mean of the gns c pk k wk denoting r as the radius of the trajectory and n as the angle of the circle along which the uav flies at time slot n the horizontal coordinate of the uav q n can be obtained by q n r cos n r sin n t also the location of gn k wk can be represented as wk cos sin t where and equal the distance and the angle between the geometric center c and gn k respectively thus the distance dk n between p the uav and gn k in can be expressed as dk n r h cos n by adopting the angular velocity n and the angular acceleration n equations in can be rewritten as n n n for n n n n n n for n n n n n ka n 
kak n n r n r n for n n n for n n pprop n r n n n n g n for n n where ak n and n are the tangential and centripetal accelerations respectively and vmin and vmax indicate the minimum and maximum angular velocity respectively similar to section iii we address the minimum average rate maximization problem and the ee maximization problem for the circular trajectory which are respectively formulated as p max n n n r pk n rmin r rmax p max n n n r pk n where rmin vmin t and rmax pn pprop n vmax t a max min denote the minimum and max n n maximum radius of the circular trajectory respectively it is emphasized that and are difficult to solve because of the constraints and objective functions to deal with the problems and similar sca frameworks in section iii are applied a minimum average rate maximization and ee maximization for the minimum average rate maximization problem we first find r pk n with given n n n and then updates n n n pk n for a fixed for given n n n we adopt the change of variable sk n and sk max n as pk n r cos n n h sk n pk n hk n ppeak r cos n n h sk max n ppeak hk n similar to the method in section we employ the sca to sk max n based on lemma the concave surrogate function sk max n of sk max n with a solution rl at the iteration can be chosen as sk max n ppeak n n sk max n n n rl n n where the constants n n n and n are respectively given by n cos n n n h n n rl n n rl n rl n n n rl n n rl n n by applying for fixed n n n can be reformulated as an approximated convex problem at the l iteration of the sca p max sk n n x n k x sm n sk n sk max n n n k where n n e p k sj l n sj n sj l n pk sj l n and n can be successively solved by the cvx until convergence next we present a solution for with a given to obtain the concave surrogate function of sk max n we introduce the following lemma which identifies the surrogate function of the cosine function lemma for any given the concave surrogate function of cos can be computed as sin cos cos proof with a similar process in appendix a we can conclude that the function in satisfies the conditions for a concave surrogate function by inspecting lemmas and the concave surrogate function sk max n for sk max n can be identified as n n n sin n n n n sk max n ppeak n ppeak sk max n n n n where n n n and n are given by n n sin n n r h cos n n n n n n n n n n n n k sin n n by utilizing and at the l iteration of the sca algorithm with a given r can be approximated to the following convex problem p max n n n sk n n x n k x sm n sk n sk max n n n we then successively solve by the cvx until convergence similar to algorithm a solution of problem is obtained by alternately solving and until the objective value converges for the ee maximization problem in the circular trajectory case we can apply similar methods in section based on and given n n n and r can be transformed into two fractional problems by using algorithm we can alternately solve these problems until convergence trajectory initialization to initialize the proposed algorithms we employ a simple circular path concept in first the initial angular velocity is set to t which implies n nn next the initial radius is chosen to fulfill the constraints in and which can be expressed as vmin t vmax t amax min plim we can simply find which maximizes the minimum rate in and under constraints and via line search for the ee maximization problems and can be computed in the range of as a result the initial trajectory n can be written by n cos nn sin nn t n n and the initial velocity n can be simply obtained as n 
n n n n assuming in t sec t sec t sec t sec y m x m fig optimized uav trajectories for different periods t with plim n umerical r esults in this section we provide numerical results to validate the effectiveness of the proposed algorithms for the simulations we consider k gns which are distributed as in fig where the locations of the gns are marked with the triangles the constant altitude the bandwidth the reference snr and the peak transmission power are set to be h m w mhz db and ppeak dbm respectively also the minimum velocity the maximum velocity and the maximum acceleration of the uav are determined as vmin vmax and amax respectively for the propulsion power consumption model in the constants and are set as and respectively which make the minimum propulsion power consumption pprop min w when kvk we first demonstrate the performance of the minimum rate maximization algorithms fig illustrates the optimized uav trajectories with various t for plim it is observed that when t is smaller than sec as t increases the uav tries to get closer to all gns in order to improve the channel conditions from the gns in contrast if t is sufficiently large t sec the uav is now able to visit all the gns within a given time period thus the uav can rate proposed circular opt r p circular opt r p circular opt r period t sec fig rate with respect to the period t with plim hover over each gn for a while by traveling smooth path around the gns this is different from the results in where the uav does not have practical movement constraints this can be explained as follows due to the constraints on the velocity and the propulsion power the uav can not stay at fixed positions as in therefore the uav continuously moves around as close to the gns as possible to maintain good communication channels without exceeding the propulsion power limit plim fig shows the maximized minimum rate performance of the proposed algorithm as a function of t we compare the performance of the proposed algorithm with the following circular trajectory based methods circular with optimum r and p radius angular velocity angular acceleration and uplink transmit power are jointly optimized with in section with the circular trajectory circular with optimum r and p radius and uplink transmit power are jointly optimized with in section with the circular trajectory circular with optimum r radius is optimized with ppeak as the initial circular trajectory in plim w plim w no power limit y m x m fig optimized uav trajectories for different propulsion power limit plim with t sec section first it can be verified that the proposed algorithm outperforms the baseline schemes regardless of the time period t also we can see that the rate in the proposed algorithm monotonically increases with t since more time is available at the uav to hover around each gn in contrast in the baseline schemes which are restricted in circular shape trajectory the rate performance first increases as t grows and then decreases after a certain t this is due to a fact that in order to satisfy the propulsion power constraint the radius of the circular trajectory should increase as t gets large and thus the uav may become too far away from the geometric center of the gns after a certain t therefore we can expect the performance gain of the proposed algorithm over baseline schemes is to grow with t fig illustrates the optimized uav trajectories for various propulsion power limit plim with t sec it can be shown that for plim w the trajectory of the uav is restricted to a smooth path 
with a large turning radius to consume a low propulsion power however as plim gets larger we observe quick changes along the trajectory path thus the uav can move with a much smaller turning radius which enhances the rate performance proposed circular opt r p circular opt r p circular opt r rate propulsion power limit w fig rate with respect to the propulsion power limit plim with t sec in fig we depict the average rate of various schemes as a function of the propulsion power constraint plim for both the proposed algorithm and the baseline schemes the rate first increases as plim grows and then gets saturated this can be explained as follows with a large plim the trajectory and the velocity of the uav change more freely to attain good channel conditions and thus the rate increases however even if a large plim is given the rate can not continue to increase because there are practical limits on the velocity and acceleration similar to fig we can see that the proposed algorithm provides significant performance gains over the baseline schemes next in fig we investigate the optimized trajectory of the ee maximization problem with various t as t increases the overall patterns are similar to fig nevertheless to balance between the rate performance and the propulsion power consumption the ee maximization trajectory shows a smooth path with a relatively large turning radius and thus the average propulsion power consumption becomes lower to present the impact of the energy efficient uav communication designs fig depicts the uav speed of the proposed ee maximization method with t sec for comparison we t sec t sec t sec t sec y m x m fig optimized energy efficient uav trajectories for different periods t minimum rate maximization p lim ee maximization speed time sec fig uav speeds for the rate without propulsion power constraint and the ee maximization with t sec table i p erformance comparison with max min rate and ee maximization for t rate plim ee maximization proposed circular proposed circular sec average speed average acceleration average rate average power watts energy efficiency also consider the rate scheme without the propulsion power constraint it is observed that for the rate case the uav tries to fly between the gns as fast as possible and stay over the gns with a low speed on the other hand the ee maximization scheme keeps the speed of the uav at around in order not to waste the propulsion energy finally table i presents the performance comparison of the rate without propulsion power constraint and the ee maximization designs for both the proposed and the baseline schemes with t sec we can see that the rate methods consume much higher propulsion power by allowing a large variation of the speed and the average acceleration in contrast the speed of the proposed ee maximization design slowly varies with low acceleration and thus much higher ee can be achieved we observe that the proposed ee maximization algorithm exhibits about gain over the rate without the propulsion power constraint and gain over the circular baseline ee maximization scheme vi c onclusion in this paper we have studied the wireless communication optimization under the practical propulsion energy constraint at the uav for both the minimum average rate maximization problem and the ee maximization problem the uav trajectories and the uplink transmit power of the gns have been jointly optimized by applying the sca technique we have proposed efficient iterative algorithms which find local optimal solutions numerical results 
have demonstrated that the proposed algorithms provide substantial performance gains compared to the baseline schemes a ppendix a proof of lemma let us define a function u for u ux uy t where z and are positive constants for any given ul in order for arbitrary function to be a concave surrogate function of u it must satisfy the following conditions ul ul ul ul and u denoting the function as where b but ul c z k it can be easily shown and c l l that ul ul fulfills the first condition for the surrogate function also the gradient of u and with respect to u can be respectively computed as z bul z u since two gradients in and become identical at u ul satisfies the second condition for the surrogate function to prove the global lower bound condition we can calculate the hessian matrix of the function u as e u u u x y x d ux uy e where d z and e one can easily check that the hessian in is a positive matrix which implies that is a convex function since at u ul from and the global minimum of is achieved at u ul with ul as a result we can show that is greater than or equal to for any given ul and thus the third condition for the surrogate function holds by substituting u n wk ul ql n wk z h and and multiplying u and by ppeak lemma is thus proved r eferences zeng zhang and lim wireless communications with unmanned aerial vehicles opportunities and challenges ieee commun vol pp may lee lee kwak ihm and han mimo cooperation challenges and practical solutions in systems ieee wireless vol pp zeng zhang and lim throughput maximization for mobile relaying systems ieee trans vol pp wang chen mei and fang improving physical layer security using mobile relaying ieee wireless commun vol pp jun song lee and lee designs of mimo wireless relaying networks challenges and solutions ieee access vol pp may kong song park and lee a new beamforming design for mimo af relaying systems with direct link ieee trans vol pp jul merwaday and guvenc uav assisted heterogeneous networks for public safety communications in proc ieee wcnc pp may lyu zeng and zhang spectrum sharing and cyclical multiple access in cellular offloading arxiv preprint jeong simeone and kang mobile edge computing via a cloudlet optimization of bit allocation and path planning accepted in ieee trans veh technol online available http kandeepan and lardner optimal lap altitude for maximum coverage ieee wireless commun vol pp jul lyu zeng and zhang cyclical multiple access in communications a tradeoff ieee wireless commun vol pp wu zeng and zhang joint trajectory and communication design for enabled wireless networks filippone flight performance of fixed and rotary wing aircraft elsevier choi kim and sung maneuvering and communication of a single relay ieee trans aerosp electron vol pp jul zhang zeng and zhang spectrum and energy efficiency maximization in mobile relaying in proc ieee icc pp may zeng and zhang uav communication with trajectory optimization ieee trans wireless vol pp jun kim lee song lee and lee optimal power allocation scheme for energy efficiency maximization in distributed antenna systems ieee trans vol pp xu and qiu energy efficiency optimization for mimo broadcast channels ieee trans wireless vol pp lee jung park and lee a new beamforming strategy for miso interfering broadcast channels based on large systems analysis ieee trans wireless vol pp apr b du pan zhang and chen distributed power optimization for comp systems with fairness ieee commun vol pp jun li sheng tan zhang y sun wang shi and li subcarrier assignment and power 
allocation in ofdma systems with fairness guarantees ieee trans vol pp li sheng wang zhang and wen power allocation in wireless networks ieee trans veh vol pp marks and wright a general inner approximation algorithm for nonconvex mathematical programs operations research vol pp y sun babu and palomar algorithms in signal processing communications and machine learning ieee trans signal vol pp grant and boyd cvx matlab software for disciplined convex programming version available http dinkelbach on nonlinear fractional programming management science vol pp mar zappone and jorswieck energy efficiency in wireless networks via fractional programming theory foundations and trends in communications and information theory vol pp jun
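The system model of the preceding paper couples an interference-limited uplink rate, R_k[n] = log2(1 + p_k[n]h_k[n] / (sum over j != k of p_j[n]h_j[n] + 1)) with h_k[n] = gamma0 / (||q[n] - w_k||^2 + H^2), to a fixed-wing propulsion power of the form P_prop[n] = c1||v[n]||^3 + (c2/||v[n]||)(1 + ||a[n]||^2/g^2). The NumPy sketch below simply evaluates both quantities along a discretised circular trajectory; every numerical value (altitude, radius, GN positions, c1, c2, reference SNR, transmit power, period) is a placeholder, since the paper's constants are lost in extraction.

```python
# Minimal sketch (placeholder constants) evaluating, along a circular trajectory,
#  - propulsion power  P[n] = c1*||v||^3 + (c2/||v||)*(1 + ||a||^2/g^2)
#  - uplink rates      R_k[n] = log2(1 + p_k*h_k / (sum_{j!=k} p_j*h_j + 1)),
#    with h_k[n] = gamma0 / (||q[n] - w_k||^2 + H^2), as described in Section II.
import numpy as np

N, T = 200, 100.0                          # number of slots and period (assumed)
dt = T / N
H, gamma0, p_peak = 100.0, 1e4, 0.1        # altitude [m], linear reference SNR, Tx power (assumed)
c1, c2, g = 9.26e-4, 2250.0, 9.8           # propulsion constants (illustrative values)

w = np.array([[200.0, 0.0], [-200.0, 100.0], [0.0, -250.0]])   # assumed GN positions
r = 300.0
theta = 2 * np.pi * np.arange(N) / N
q = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)   # circular trajectory
v = (np.roll(q, -1, axis=0) - q) / dt      # wrap-around differences: periodic flight
a = (np.roll(v, -1, axis=0) - v) / dt

speed = np.linalg.norm(v, axis=1)
P_prop = c1 * speed**3 + (c2 / speed) * (1 + np.linalg.norm(a, axis=1)**2 / g**2)

d2 = ((q[:, None, :] - w[None, :, :])**2).sum(axis=2) + H**2   # squared distances, N x K
h = gamma0 / d2                                                # channel gains
signal = p_peak * h
interference = signal.sum(axis=1, keepdims=True) - signal
R = np.log2(1 + signal / (interference + 1))                   # per-slot rates, N x K

print("average propulsion power [W] :", P_prop.mean())
print("per-GN average rate [bps/Hz] :", R.mean(axis=0))
print("minimum average rate         :", R.mean(axis=0).min())
```

Sweeping the radius r (or the period T) in this routine reproduces, qualitatively, the kind of rate versus propulsion-power trade-off on which the circular-trajectory baselines of Section V are built.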
7
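A central device in the preceding paper's algorithms is the successive convex approximation of non-convex constraints; in particular, the minimum-speed constraint ||v[n]||^2 >= vmin^2 is replaced at iteration l by the global first-order lower bound ||v_l[n]||^2 + 2 v_l[n]^T (v[n] - v_l[n]) >= vmin^2, which is tight at the expansion point v_l[n] (the surrogate of Lemma 2). The cvxpy sketch below isolates that single ingredient on a toy periodic-trajectory problem with a stand-in objective (minimise total squared acceleration); the horizon, bounds and circular initialisation are assumptions, and the paper's full rate and EE objectives are deliberately not reproduced.

```python
# Minimal SCA sketch with a stand-in objective and assumed bounds: the non-convex
# constraint ||v[n]||^2 >= vmin^2 is replaced at each SCA iteration by its global
# first-order lower bound around v_l[n], so every subproblem below is convex.
# The trajectory is initialised with a circular path, as in the paper's initialisation.
import numpy as np
import cvxpy as cp

N, dt = 30, 1.0
vmin, vmax, amax = 3.0, 30.0, 5.0
r0 = 50.0                                             # assumed initial circle radius

theta = 2 * np.pi * np.arange(N + 1) / N
v_l = (2 * np.pi * r0 / (N * dt)) * np.stack([-np.sin(theta), np.cos(theta)], axis=1)

for it in range(5):                                   # a few SCA iterations
    q = cp.Variable((N + 1, 2))
    v = cp.Variable((N + 1, 2))
    a = cp.Variable((N, 2))
    cons = [q[0] == np.array([r0, 0.0]), q[N] == q[0], v[N] == v[0]]   # periodic flight
    for n in range(N):
        cons += [q[n + 1] == q[n] + v[n] * dt + 0.5 * a[n] * dt**2,    # discretised dynamics
                 v[n + 1] == v[n] + a[n] * dt,
                 cp.norm(v[n]) <= vmax,
                 cp.norm(a[n]) <= amax,
                 # linearised (surrogate) minimum-speed constraint around v_l[n]
                 np.sum(v_l[n]**2) + 2 * cp.sum(cp.multiply(v_l[n], v[n] - v_l[n])) >= vmin**2]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(a)), cons)            # stand-in convex objective
    prob.solve()
    v_l = v.value                                      # expand around the new point
    print(f"iter {it}: objective {prob.value:.3f}, "
          f"min speed {np.linalg.norm(v.value, axis=1).min():.2f} (>= vmin by construction)")
```

Because the surrogate lower-bounds ||v[n]||^2, any point feasible for the convexified constraint is feasible for the original one, and the previous iterate remains feasible for the next subproblem; the objective value is therefore non-increasing across the SCA iterations, mirroring the convergence argument given for Algorithm 1 above.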
c xxxx society for industrial and applied mathematics j uncertainty quantification vol xx pp x mathematical properties of polynomial dimensional decomposition apr sharif abstract many uncertainty quantification problems are solved by polynomial dimensional decomposition pdd which represents series expansion in terms of random orthonormal polynomials with increasing dimensions this study constructs and orthogonal splitting of polynomial spaces proves completeness of polynomial orthogonal basis for prescribed assumptions and demonstrates convergence to the correct limit all associated with pdd a error analysis reveals that pdd can not commit larger error than polynomial chaos expansion pce for the appropriately chosen truncation parameters from the comparison of computational required to estimate with the same precision the variance of an output function involving exponentially attenuating expansion the pdd approximation can be markedly more than the pce approximation key words uncertainty quantification anova decomposition multivariate orthogonal polynomials polynomial chaos expansion introduction polynomial dimensional decomposition pdd is a hierarchical infinite series expansion of a random variable involving orthogonal polynomials in independent random variables introduced by the author as a polynomial variant of the anova dimensional decomposition add pdd deflates the curse of dimensionality to some extent by developing an behavior of complex systems with low dimensions wherein the of degrees of interactions among input variables weaken rapidly or vanish altogether approximations stemming from truncated pdd are commonly used for solving uncertainty quantification problems in engineering and applied sciences including multiscale fracture mechanics random eigenvalue problems computational fluid dynamics and stochastic design optimization to name a few however all existing works on pdd are focused on practical applications with almost no mathematical analysis of pdd indeed a number of mathematical issues concerning necessary and conditions for the completeness of pdd basis functions convergence exactness and optimal analyses of pdd and approximation quality of the truncated pdd have yet to be studied or resolved this paper fills the gap by establishing fundamental mathematical properties to empower pdd with a solid foundation so that pdd can be as credible as its close cousin polynomial chaos expansion pce providing an alternative if not a better choice for uncertainty quantification in computational science and engineering the principal objective of this work is to examine important mathematical properties of pdd not studied heretofore for arbitrary but independent probability measures of input random variables the paper is organized as follows section defines or discusses mathematical notations and preliminaries two sets of assumptions on the input probability measures required by pdd are explained a brief exposition of univariate and multivariate orthogonal polynomials consistent with a general but probability measure this work was supported by the national science foundation under grant number college of engineering and applied mathematics computational sciences the university of iowa iowa city ia questions comments or corrections to this document may be directed to that email address rahman including their second moment properties is given in section the section also describes relevant polynomial spaces and construction of their orthogonal decompositions the orthogonal basis and 
completeness of multivariate orthogonal polynomials have also been proved section briefly explains add followed by presentations of pdd for a random variable the convergence and exactness of pdd are explained in the same section a truncated pdd and its approximation quality are discussed the formulae for the mean and variance of a truncated pdd are also derived the section ends with an explanation on how and when the pdd can be extended for infinitely many input variables section briefly describes orthogonal decompositions of polynomial spaces leading to pce in section a error analysis of pdd is conducted followed by a comparison with that of pce finally conclusions are drawn in section input random variables let n n and r represent the sets of positive integer natural integer and real numbers respectively denote by a i i n an ith bounded or unbounded subdomain of r i rn so that an a let f p be a complete probability space where is a sample space representing an abstract set of elementary events f is a on and p f is a probability measure with b n b an representing the borel on an rn consider an an input random vector x xn t f an bn describing the statistical uncertainties in all system parameters of a stochastic problem the input random variables are also referred to as basic random variables the finite integer n represents the number of input random variables and is often referred to as the dimension of the stochastic problem denote by fx x p xi xi the joint distribution function of x admitting the joint probability density function fx x n fx x given the abstract probability space f p of x the image probability space is an b n fx dx where an can be viewed as the image of from the mapping x an and is also the support of fx x similarly each component random variable xi is defined on the abstract marginal probability space i f i p i comprising sample space i f i and probability measure p i then the corresponding image probability space is a i b i fxi dxi where a i r is the image sample space of xi b i is the borel on a i and fxi xi is the marginal probability density function of xi relevant statements and objects in the abstract probability space have obvious counterparts in the associated image probability space both probability spaces will be used in this paper two sets of assumptions used by pdd are as follows assumption input random vector x xn t f an b n satisfies all of the following conditions each input random variable xi i f i a i b i has absolutely continuous marginal distribution function fxi xi p xi xi and continuous marginal density function fxi xi xi with a bounded or unbounded support a i all component random variables xi i n are statistically independent but not necessarily identical in consequence x is endowed with a probability density function that is fx x n fxi xi with a bounded or unbounded support an rn each input random variable xi possesses finite moments of all orders that is for all polynomial dimensional decomposition i n and l l l l e xi xi dp an xli fx x dx a i xli fxi xi dxi where e is the expectation operator with respect to the probability measure p or fx x dx assumption moments and marginal density function of each input random variable xi where i n satisfy at least one of the following conditions the density function fxi xi has a compact support that is there exists a compact interval ai bi ai bi r such that p ai xi bi for the moment sequence l for xi there holds lim inf for the moment sequence l for xi there holds the random variable xi is exponentially 
integrable that is there exists a real number a such that exp fxi xi dxi a i if the density function fxi xi is symmetric and strictly positive then there exists a real number a such that ln fxi xi dfxi xi as xi xi a dxi and fxi xi i x a i assumption assures the existence of an infinite sequence of orthogonal polynomials consistent with the input probability measure assumption in addition to assumption guarantees the input probability measure to be determinate resulting in a complete orthogonal polynomial basis of a function space of interest both assumptions impose only mild restrictions on the probability measure examples of input random variables satisfying assumptions and are gaussian uniform exponential beta and gamma variables which are commonly used in uncertainty quantification these assumptions to be explained in the next section are vitally important for the determinacy of the probability measure and the completeness of the orthogonal polynomial basis therefore for both pdd and pce which entail orthogonal polynomial expansions assumptions and are necessary unfortunately they are not always clearly specified in the pdd or pce literature a prototypical example where assumption is satisfied but assumption is not is the case of a lognormal random variable as noted by ernst et al the violation of assumption leads to indeterminacy of the input probability measure and thereby fails to form a complete orthogonal polynomial basis finally assumptions and can be modified to account for random variables with discrete or mixed distributions or dependent random variables the discrete or mixed distributions and dependent variables are not considered in this paper rahman orthogonal polynomials and polynomial spaces univariate orthogonal polynomials consider an ith random variable xi defined on the abstract probability space i f i p i with its image a i b i fxi dxi let i r xi be the space of real polynomials in xi for any polynomial pair p i q i i define an inner product p i q i f dx p i xi q i xi fxi xi dxi e p i xi q i xi xi i a i with respect to the probability measure fxi xi dxi and the induced norm i dxi p i p i f xi dxi ai p i xi fxi xi dxi x e p i i under assumption moments of xi of all orders exist and are finite including zeroorder moments a i fxi xi dxi i n that are always positive clearly i for all p i i then according to gautschi the inner product in is on i therefore there exists an infinite set of univariate orthogonal polynomials say p i ji xi ji p i ji which is consistent with the probability measure fxi xi dxi satisfying e p i j xi ji ki i p i ji p i ki f dx i xi ji ki for ki where e p i j xi here in the notation for the polynomial i p i ji xi the first and second indices refer to the ith variable and degree ji respectively prominent examples of classical univariate orthogonal polynomials comprise hermite laguerre and jacobi polynomials which are consistent with the measures defined by gaussian gamma and beta densities on the whole real line interval and bounded interval respectively many orthogonal polynomials including the three classical polynomials mentioned can be expressed in a unified way by invoking hypergeometric series incorporated in a tree structure of the askey scheme for even more general measures established numerical techniques such as and stieltjes procedure can be used to generate any orthogonal polynomials multivariate orthogonal polynomials for n n denote by n an index set so that u n is a subset including the empty set with cardinality n when u n a is denoted 
by ju with degree where jip p represents the pth component of ju for u n let xu t a subvector of x be defined on the abstract probability space f u pu where is the sample space of xu f u is a on and pu is a probability measure the corresponding image probability space is au b u fxu dxu where au a i is the image sample space of xu the same symbol is used for designating both the cardinality of a set and the degree of a in this paper polynomial dimensional decomposition b u is the borel on au and fxu xu is the marginal probability density function of xu supported on au under assumption fxu xu fxi xi denote by r xu r the space of all real polynomials in xu then given the inner product pu xu qu xu fxu xu dxu e pu xu qu xu pu qu fx dxu u au two polynomials pu and qu in xu are called orthogonal to each other if pu qu fx dxu moreover a polynomial pu is said to be an orthogonal u polynomial with respect to fxu dxu if it is orthogonal to all polynomials of lower degree that is if pu qu fx u dxu with deg qu deg pu let pu ju xu ju u n represent an infinite set of multivariate orthogonal polynomials which is consistent with the probability measure fxu xu dxu satisfying pu ju pu ku fx u dxu e pu ju xu pu ku xu ku ku clearly each pu ju is a multivariate orthogonal polynomial satisfying due to the probability measure of xu a consequence of statistical independence from assumption such multivariate polynomials exist and are easily constructed by tensorizing univariate orthogonal polynomials proposition x xn t f an b n be a vector of n n input random variables fulfilling assumption suppose that the sets of univariate orthogonal polynomials for all marginal measures are obtained as p i ji xi ji i n then for u n the set of multivariate orthogonal polynomials in xu consistent with the probability measure fxu xu dxu is p i ji xi ji pu ju xu ju where the symbol denotes tensor product in terms of an element the multivariate orthogonal polynomial of degree is pu ju xu p i ji xi proof consider two distinct polynomials pu ju xu and pu ku xu from the set pu ju xu ju satisfying since ju ku ju and ku must in at least one component without loss of generality suppose that then by fubini s theorem with statistical independence of random variables in mind pu ju pu ku fx dxu au pu ju xu pu ku xu fxu xu dxu u a ip p ip jip xip p ip kip xip fxip xip dxip p p i i j a rahman where the equality to zero in the last line results from the recognition that the inner integral vanishes by setting i in in addition for ju pu ju pu ju fx dxu e pu j x e p x u i i ji u u and is finite by virtue of the existence of the set of univariate orthogonal polynomials p i ji xi ji for i n therefore pu ju xu ju satisfying is a set of multivariate orthogonal polynomials consistent with the probability measure fxu xu dxu once the multivariate orthogonal polynomials are obtained they can be scaled to generate multivariate orthonormal polynomials as follows definition multivariate orthonormal polynomial ju xu u n ju of degree that is consistent with the probability measure fxu xu dxu is defined as ju xu pu ju xu x e pu j u u where i ji xi p i ji xi p i j xi i i ji xi e p i ji xi e p i j xi is a univariate orthonormal polynomial in i xi of degree ji that is consistent with the probability measure fxi xi dxi orthogonal decomposition of polynomial spaces an orthogonal decomposition of polynomial spaces entailing splitting leads to pdd here to facilitate such splitting of the polynomial space for any u n limit the power jip of the ip variable where ip u n p 
and to take on only positive integer values in consequence ju the of pu ju xu has degree varying from to as for ju and xu a monomial in the variables is the ji ji product xjuu and has a total degree a linear combination of xjuu where l l is a homogeneous polynomial in xu of degree for u n denote by qul span xjuu l ju l the space of homogeneous polynomials in xu of degree l where the individual degree of each variable is and by span xjuu m ju m the space of polynomials in xu of degree at least and at most m where the individual degree of each variable is the dimensions of the vector spaces qul and respectively are u dim ql ju n l polynomial dimensional decomposition and dim m u dim qul m m let for each l denote by zlu the space of orthogonal polynomials of degree exactly l that are orthogonal to all polynomials in that is zlu pu pu qu fxu dxu qu l then zlu provided that the support of fxu xu has interior is a vector space of dimension mu l dim zlu dim qul many choices exist for the basis of zlu here to be formally proved in section select pu ju xu l ju zlu to be a basis of zlu comprising mu l number of basis functions each basis function pu ju xu is a multivariate orthogonal polynomial of degree as defined earlier clearly zlu span pu ju xu l ju l according to proposition to be presented later pu ju xu is orthogonal to pv kv xv whenever u v and ju kv are arbitrary or u v and ju kv therefore any two distinct polynomial subspaces zlu and where u n v n l and are orthogonal whenever u v or l in consequence there exist orthogonal decompositions of m zlu m span pu ju xu l ju span pu ju xu m ju with the symbol representing orthogonal sum and zlv span pv jv xv l jv span pv jv xv jv where span the constant subspace needs to be added because the subspace zlv excludes constant functions recall that is the space of all real polynomials in x then setting u n in first and then swapping v for u yields yet another orthogonal decomposition of n n n zlu span pu ju xu l ju span pu ju xu ju rahman note that the last expression of is equal to the span of n n pj x j p i ji xi ji representing an infinite set of orthogonal polynomials in x given the orthogonal splitting of any function of input random vector x can be expanded as a series of hierarchically ordered multivariate orthogonal or orthonormal polynomials in xu u n the expansion is referred to as pdd to be formally presented and analyzed in section statistical properties of random multivariate polynomials when the input random variables xn instead of real variables xn are inserted in the argument the multivariate polynomials pu ju xu and ju xu where u n and ju become functions of random input variables therefore it is important to establish their properties to be exploited in the remaining part of this section and section proposition x xn be a vector of n n input random variables fulfilling assumption for u v n ju and kv the and moments of multivariate orthogonal polynomials are e pu ju xu and e pu ju xu pv kv xv e p i j x i i u v ju kv otherwise respectively independence of random variables e pu ju xu proof using and statistical e p i ji xi for any ju n since each component of ju is with the constant function p i in mind produces e p i ji xi for any i u ji n resulting in to obtain the result of set u v and ju kv and use directly the trivial result of is obtained by considering two subcases first when u v and ju kv yields the result already second when u v and ju kv are arbitrary then u and v by at least one element suppose that i u v is that element with 
the associated degree ji using the statistical independence of random variables and the fact that e p i ji xi as already demonstrated produces the desired result corollary u v n ju and kv the and secondorder moments of multivariate orthonormal polynomials are e ju xu and respectively e ju xu kv xv u v ju kv otherwise polynomial dimensional decomposition orthogonal basis and completeness an important question regarding multivariate orthogonal polynomials discussed in the preceding subsection is whether they constitute a complete basis in a function space of interest such as a hilbert space let an b n fx dx represent a hilbert space of functions with respect to the probability measure fx x dx supported on an the following two propositions show that indeed orthogonal polynomials span various spaces of interest proposition x xn t f an b n be a vector of n n input random variables fulfilling assumption and xu t f u au b u u n be a subvector of x then pu ju xu l ju the set of multivariate orthogonal polynomials of degree l l consistent with the probability measure fxu xu dxu is a basis of zlu proof under assumption orthogonal polynomials consistent with the probability m measure fxu xu dxu exist denote by pu l pu l pu l u l t a column vector of the elements of pu ju xu l ju arranged according to some monomial order let m j atu l au l au l u l be a row vector comprising some constants au l r j mu l set atu l pu l multiply both sides of the equality from the right by ptu l integrate with respect to the measure fxu xu dxu over au and apply transposition to obtain gu l au l where gu l e pu l ptu l is an mu l mu l matrix with its p q th element pq p q p q gu l pu l xu pu l xu fxu xu dxu e pu l xu pu l xu au representing the covariance between two elements of pu l according to proposition any two distinct polynomials from pu ju xu l ju are orthogonal meaning p q that e pu l pu l is zero if p q and positive and finite if p consequently gu l is a diagonal matrix and hence invertible therefore yields au l proving linear independence of the elements of pu l or the set pu ju xu l ju furthermore the dimension of zlu which is mu l matches exactly the number of elements of the aforementioned set therefore the spanning set pu ju xu l ju forms a basis of zlu proposition x xn t f an b n be a vector of n n input random variables fulfilling both assumptions and and xu t f u au b u u n be a subvector of x consistent with the probability measure fxu xu dxu let pu ju xu l ju the set of multivariate orthogonal polynomials of degree l l be a basis of zlu then the set of polynomials from the orthogonal sum n span pu ju xu l ju is dense in an b n fx dx moreover an b n fx dx n zlu rahman where the overline denotes set closure proof under assumption orthogonal polynomials exist according to theorem of ernst et al which exploits assumption the polynomial space i r xi is dense in a i b i fxi dxi now use theorem of petersen which asserts that if for p and all i n i is dense in lp a i b i fxi dxi then so is r xn in lp an b n fx dx therefore the set of polynomials from the orthogonal sum which is equal to as per is dense in an b n fx dx including the limit points of the orthogonal sum yields polynomial dimensional decomposition let y x y xn be a realvalued output random variable defined on the same probability space f p the vector space f p is a hilbert space such that e y x y x dp y x fx x dx an with inner product y x z x f p y x z x dp y x y x f p an y x z x fx x dx y x z x fx dx and norm x f p y x y x fx dx x dx it is 
elementary to show that y x f p if and only if y x an b n fx dx add the add expressed by the recursive form y x yu xu yu xu an n y x fx x dx an y xu yv xv is a finite hierarchical expansion of y in terms of its input variables with increasing dimensions where u n is a subset with the complementary set n and yu is a component function describing a constant or an interaction of xu on y when or here xu denotes an n vector whose ith component is xi if i u and xi if i u the summation in comprises terms with each term depending on a group of variables indexed by a particular subset of n when u the sum in vanishes resulting in the expression of the constant function in when u n the integration in the last line of is on the empty set reproducing and hence finding the last function y n indeed all component functions of y can be obtained by interpreting literally this decomposition first presented by in relation to his seminal work on u has been studied by many other researchers described by efron and stein the author and references cited therein polynomial dimensional decomposition the add can also be generated by tensorizing a univariate function space decomposition into its constant subspace and remainder producing an b n fx dx wu n where wu yu au b u fxu dxu e yu xu yv xv if u v v n is an add subspace comprising component functions of y however the subspaces wu u n are in general therefore further discretization of wu is necessary for instance by introducing orthogonal polynomial basis discussed in section a component function yu xu wu can be expressed as a linear combination of these basis functions indeed comparing and yields the closure of an orthogonal decomposition of wu zlu into polynomial spaces zlu l the result is a polynomial refinement of add which is commonly referred to as pdd pdd the pdd of a random variable y x f p is simply the expansion of y x with respect to a complete hierarchically ordered orthonormal polynomial basis of f p there are at least two ways to explain pdd a polynomial variant of add and a orthogonal polynomial expansion polynomial variant of add the first approach explained by the author in a prior work involves the following two steps expand the anova component function cu ju ju xu yu xu ju in terms of the basis of wu which originally stems from the basis of zlu l with yu xu ju xu fxu xu dxu u n ju cu ju representing the associated expansion and apply to and exploit orthogonal properties of the basis the end result is the pdd of y x cu ju ju xu n ju where eventually cu ju an y x ju xu fx x dx comparing and the connection between pdd and add is clearly palpable where the former can be viewed as a polynomial variant of the latter for instance cu ju ju xu in represents a pdd component function of y x describing the polynomial approximation of yu xu in addition pdd inherits all desirable properties of add rahman orthogonal polynomial expansion the second approach entails polynomial expansion associated with the orthogonal splitting of polynomial spaces as explained in section the latter approach has not been published elsewhere and is therefore formally presented here as theorem theorem let x xn t f an bn be a vector of n n input random variables fulfilling assumptions and for u n and xu t f u au b u denote by ju xu ju the set of multivariate orthonormal polynomials consistent with the probability measure fxu xu dxu then any random variable y x f p can be hierarchically expanded as a series referred to as the pdd of y x cu ju ju xu n ju cu ju ju xu n ju where the 
expansion r and cu ju r u n ju are defined by e y x y x fx x dx an cu ju e y x ju xu an y x ju xu fx x dx and the pdd of y x f p converges to y x in furthermore the pdd converges in probability and in distribution proof under assumptions and a complete infinite set of multivariate orthogonal polynomials in xu consistent with the probability measure fxu xu dxu exists from proposition and the fact that orthonormality is merely scaling the set of polynomials from the orthogonal sum n span ju xu l ju is also dense in an b n fx dx therefore any random variable y x can be expanded as shown in combining the two inner sums of the expansion forms the equality in the second line of from the denseness one has the bessel s inequality e n ju cu ju ju xu e y x polynomial dimensional decomposition proving that the pdd converges in or to determine the limit of convergence invoke again proposition which implies that the set on the left side of is complete in an b n fx dx therefore bessel s inequality becomes an equality e cu ju ju xu e y x n ju known as the parseval identity for a multivariate orthonormal system for every random variable y x f p furthermore as the pdd converges in it does so in probability moreover as the expansion converges in probability it also converges in distribution finally to find the expansion define a second moment epdd e y x cv kv kv xv n kv of the between y x and its full pdd both sides of with respect to and cu ju u n ju to write cv kv kv xv e y x n kv e cv kv kv xv y x n k v cv kv kv xv y x n kv e y x and ju cv kv kv xv e y x ju n kv cv kv kv xv e y x ju n kv cv kv kv xv y x ju xu n kv cu ju e y x ju xu here the second third and last lines of both and are obtained by interchanging the and expectation operators performing the swapping the expectation and summation operators and applying corollary respectively the interchanges are permissible as the infinite sum is convergent as demonstrated in the preceding paragraph setting in and ju in yields and respectively completing the proof rahman the expressions of the expansion can also be derived by simply replacing y x in and with the full pdd and then using corollary in contrast the proof given here demonstrates that the pdd are determined optimally it should be emphasized that the function y must be for the meansquare and other convergences to hold however the rate of convergence depends on the smoothness of the function the smoother the function the faster the convergence if the function is a polynomial then its pdd exactly reproduces the function these results can be easily proved using classical approximation theory a related expansion known by the name of also involves orthogonal polynomials in connection with add however the existence convergence and approximation quality of the expansion including its behavior for infinitely many input variables have not been reported truncation the full pdd contains an infinite number of orthonormal polynomials or in practice the number must be finite meaning that pdd must be truncated however there are multiple ways to perform the truncation a straightforward approach adopted in this work entails keeping all polynomials in at most s n variables thereby retaining the degrees of interaction among input variables less than or equal to s and preserving polynomial expansion orders total less than or equal to s m the result is an pdd ys m x s m cu ju ju xu n ju cu ju ju xu n ju of y x containing ls m s n m s s number of expansion including it is important to clarify a few things about the 
truncated pdd proposed first a truncation with respect to the polynomial expansion order based on as opposed to that is m was employed in prior works therefore comparing and with the existing truncation if it is desired should be done with care having said this the proposed truncation has one advantage over the existing one a direct comparison with a truncated pce is possible this will be further explained in the forthcoming sections second the right side of contains sums of at most orthonormal polynomials representing at most pdd component functions of y therefore the term used for the pdd approximation should be interpreted in the context of including at most interaction of input variables even though ys m is strictly an n function third when s m for any m as the outer sums of vanish finally when s n the nouns degree and order associated with pdd or orthogonal polynomials are used synonymously in the paper polynomial dimensional decomposition and m ys m converges to y in the sense generating a hierarchical and convergent sequence of pdd approximations readers interested in an adaptive version of pdd where the truncation parameters are automatically chosen are directed to the work of yadav and rahman including an application to design optimization it is natural to ask about the approximation quality of since the set of polynomials from the orthogonal sum in is complete in an b n fx dx the truncation error y x ys m x is orthogonal to any element of the subspace from which ys m x is chosen as demonstrated below proposition any y x f p let ys m x be its pdd approximation then the truncation error y x m x is orthogonal to the subspace s m n ju span ju xu ju f p comprising all polynomials in x with the degree of interaction at most s and order at most m including constants moreover e y x ys m x as s n and m proof let m x kv kv xv n kv with arbitrary expansion and kv be any element of the subspace s m of l f p described by then e y x ys m x m x e cu ju ju xu cu ju ju xu n ju n kv n ju kv kv xv where the last line follows from corollary proving the first part of the proposition for the latter part the pythagoras theorem yields e y x ys m x e ys m x e y x x e y x as s n and m therefore e y x from theorem e ys m ys m x as s n and m the second part of proposition entails convergence which is the same as the convergence described in theorem however an alternative route is chosen for the proof of the proposition besides proposition implies that the pdd approximation is optimal as it recovers the best approximation from the subspace s m as described by corollary rahman corollary s m in define the subspace of all polynomials in x with the degree of interaction at most s and order at most m including constants then the pdd approximation ys m x of y x f p is the best approximation in the sense that e y x ys m x inf m s m e y x m x proof consider two elements ys m x and m x of the subspace s m where the former is the pdd approximation of y x with the expansion coefficients defined by and and the latter is any polynomial function described by with arbitrary chosen expansion from proposition the truncation error y x ys m x is orthogonal to both ys m x and m x and is therefore orthogonal to their linear combinations yielding e y x ys m x ys m x m x consequently e y x m x e y x ys m x e ys m x m x e y x ys m x as the second expectation on the right side of the first line of is thereby proving the optimality of the pdd approximation the motivations behind and approximations are the following in a practical 
setting the function y x fortunately has an dimension much lower than n meaning that the right side of can be approximated by a sum of component functions yu n but still maintaining all random variables x of a uncertainty quantification problem furthermore an svariate pdd approximation is grounded on a fundamental conjecture known to be true in many uncertainty quantification problems given a function y its pdd component function cu ju ju xu where s n and is small and hence negligible leading to an accurate lowvariate approximation of y the computational complexity of a truncated pdd is polynomial as opposed to exponential thereby alleviating the curse of dimensionality to a substantial extent although pce contains the same orthogonal polynomials a recent work on random eigenvalue analysis of dynamic systems reveals markedly higher convergence rate of the pdd approximation than the pce approximation output statistics and other probabilistic characteristics the mthorder pdd approximation ys m x can be viewed as a surrogate of y x therefore relevant probabilistic characteristics of y x including its first two moments and probability density function if it exists can be estimated from the statistical properties of ys m x applying the expectation operator on ys m x and y x in and and imposing corollary their means e ys m x e y x polynomial dimensional decomposition are the same and independent of s and therefore the pdd truncated for any values of s n and s m yields the exact mean nonetheless e ys m x will be referred to as the pdd approximation of the mean of y x applying the expectation operator again this time on ys m x and y x and employing corollary results in the variances var ys m x cu j u n ju and var y x cu j u n ju of ys m x and y x respectively again var ys m x will be referred to as the pdd approximation of the variance of y x clearly var ys m x approaches var y x the exact variance of y x as s n and m being convergent in probability and distribution the probability density function of y x if it exists can also be estimated by that of ys m x however no analytical formula exists for the density function in that case the density can be estimated by sampling methods such as monte carlo simulation mcs of ys m x such simulation should not be confused with crude mcs of y x commonly used for producing benchmark results whenever possible the crude mcs can be expensive or even prohibitive particularly when the sample size needs to be very large for estimating tail probabilistic characteristics in contrast the mcs embedded in the pdd approximation requires evaluations of simple polynomial functions that describe ys m therefore a relatively large sample size can be accommodated in the pdd approximation even when y is expensive to evaluate infinitely many input variables in many fields such as uncertainty quantification information theory and stochastic process functions depending on a countable sequence xi of input random variables need to be considered under certain assumptions pdd is still applicable as in the case of finitely many random variables as demonstrated by the following proposition proposition xi be a countable sequence of input random variables defined on the probability space p where xi is the associated generated if the sequence xi satisfies assumptions and then the pdd of y xi p where y an r converges to y xi in moreover the pdd converges in probability and in distribution proof according to proposition is dense in an b n fx dx and hence in l fn p for every n n where fn xi n 
is the associated generated by xi n here with a certain abuse of notation is used as a set of polynomial functions of both real variables x and random variables x now apply theorem of ernst et al which says that if is dense in fn p for every n n then n rahman a subspace of p is also dense in p but using n n span ju l ju span ju l ju demonstrating that the set of polynomials from the orthogonal sum in the last line is dense in p therefore the pdd of y xi p converges to y xi in since the convergence is stronger than the convergence in probability or in distribution the latter modes of convergence follow readily polynomial chaos expansion in contrast to the splitting of polynomial spaces in pdd a orthogonal splitting of polynomial spaces results in pce the latter decomposition is briefly summarized here as pce will be compared with pdd in the next section orthogonal decomposition of polynomial spaces let j j n jn nn ji i n define an n for x xn an rn a monomial in the variables xn is the product xj xjnn and has a total degree jn denote by j n p span x p j p the space of real polynomials in x of degree at most let span be the space of constant functions for each l denote by vln l the space of orthogonal polynomials of degree exactly l that are orthogonal to all polynomials in that is n vln p l p q fx dx q l n from section with u n in mind select pj x l j nn vl to be n a basis of vl each basis function pj x is a multivariate orthogonal polynomial in x of degree obviously vln span pj x l j nn l according to with u n pj x is orthogonal to pk x whenever j therefore any two polynomial subspaces vln and vrn where l r are orthogonal whenever l in consequence there exists another orthogonal decomposition of n vln span pj x l j nn span pj x j compared with represents a orthogonal decomposition of pce given the orthogonal decomposition of the pce of any output random variable y x is expressed by y x cj x cj x polynomial dimensional decomposition where x j nn is an infinite set of multivariate orthonormal polynomials in x that can be obtained by scaling pj in and cj r j nn are the pce expansion like pdd the pce of y x l f p under assumptions and also converges to y x in in probability and in distribution since the pce of y x in is an infinite series it must also be truncated in applications a commonly adopted truncation is based on retaining orders of polynomials less than or equal to a specified total degree in this regard given p the pce approximation of y x f p reads p yp x cj x cj x this kind of truncation is related to the total degree index set n j nn ji p for defining the recovered multivariate polynomial space of a pce approximation other kinds of truncation entail n n n j max ji p and j ji p n describing the tensor product and hyperbolic cross index sets respectively to name just two the total degree and tensor product index sets are common choices although the latter one from the curse of dimensionality making it impractical for problems the hyperbolic cross index set originally introduced for approximating periodic functions by trigonometric polynomials is relatively a new idea and has yet to receive widespread attention all of these choices and possibly others including their anisotropic versions can be used for truncating pce in this work however only the total degree index set is used for the pce approximation this is consistent with the of ju used for truncating pdd in error analysis pdd error define a error es m e y x ys m x stemming from the pdd approximation presented in the preceding 
section replacing y x and ys m x in with the right sides of and respectively produces es m s n ju cu j u n cu j u n ju where the second term vanishes expectedly when s n as the lower limit of the outer sum exceeds the upper limit in the first term of the pdd error is due to the truncation rahman of polynomial expansion orders involving interactive of at most s variables whereas the second term of the pdd error is contributed by ignoring the interactive of larger than s variables obviously the error for a general function y depends on which expansion decay and how they decay with respect to s and nonetheless the error decays monotonically with respect to s m as stated in proposition other than that nothing more can be said about the pdd error proposition a general function y es m where s n s m and i and j are equal to either or but not both equal to proof setting i j and using m es m cu j u n ju cu j u n ju where the inequality to zero in the last line results from the fact that as s m the first term is smaller than or equal to the second term similarly setting i j and using s es es m cu j u n ju finally setting i j es m es es es m as es and es es m corollary a general function y es es m whenever s s and in practice the of interaction among input variables and polynomial expansion which is equal to order become increasingly weaker as and grow in this case cu j u the variance of cu ju ju xu decreases with and given the rates at which cu j u decreases with and a question arises on how fast does es m decay with respect to s and proposition corollary and subsequent discussions provide a few insights u n proposition a class of functions y assume that cu j u ju attenuates according to cu j u where c and u are three constants then it holds that n var y x c and es m c s n n l s n l p p s proof with the recognition that n u n s ju n l s polynomial dimensional decomposition use cu j u in and to obtain and u corollary the function class described in proposition es m where s n s m and i and j are equal to either or but not both equal to according to corollary es m decays strictly monotonically with respect to s m for any rate parameters and when the equality holds in and from proposition figure comprising three subfigures presents three sets of plots of the relative error es m y x against m for five distinct values of s these subfigures each obtained for n correspond to three distinct cases of the values of and and in all cases the error for a given s decays first with respect to m and then levels at a respective limit when m is large the limits get progressively smaller when s increases as expected however the magnitude of this behavior depends on the rates at which the expansion attenuates with respect to the degree of interaction and polynomial expansion order when as in case figure top the error for a given s decays slowly with respect to m due to a relatively weaker attenuation rate associated with the polynomial expansion order the trend reverses when the attenuation rate becomes stronger and reaches the condition as in case figure middle for larger values of s for example s or the respective limits are significantly lower in case than in case when the attenuation rates are the same and large as in case figure bottom the decay rate of error accelerates substantially relationship between pdd and pce since pdd and pce share the same orthonormal polynomials they are related indeed the relationship was first studied by rahman and yadav who determined that any one of the two infinite series from pdd 
and pce defined by and can be rearranged to derive the other in other words the pdd can also be viewed as a pce and vice versa however due to a strong connection to add endowed with a desired hierarchical structure pdd merits its own appellation more importantly the pdd and pce when truncated are not the same in fact two important observations stand out prominently first the terms in the pce approximation are organized with respect to the order of polynomials in contrast the pdd approximation is structured with respect to the degree of interaction between a finite number of random variables therefore significant may exist regarding the accuracy and convergence properties of their truncated sum second if a stochastic response is highly nonlinear but contains rapidly diminishing interactive of multiple random variables the pdd approximation is expected to be more than the pce approximation this is because the terms of the pdd approximation can be just as nonlinear by selecting appropriate values of m in in contrast many more terms and expansion are required to be included in the pce approximation to capture such high nonlinearity in this work a theoretical comparison between pdd and pce in the context of error analysis not studied in prior works is presented for error analysis it is convenient to write a pce approximation in terms of a pdd approximation indeed there exists a striking result connecting pce with pdd approximations as explained in proposition proposition yp x and ys m x be the pce approximation and svariate pdd approximation of y x f p respectively where s n s m and p then the pce approximation and the rahman figure pdd errors for various attenuation rates of the expansion top middle bottom polynomial dimensional decomposition p n pdd approximation are the same that is yp x p x where x and p n denotes the minimum of p and n proof according to rahman and yadav the right side of can be resulting in a long form of the pce approximation expressed by n n n s yp x c jiq xiq is z s sums z jis s sums in terms of the pdd expansion note that depending on the condition p n or p n at most or n sums survive in meaning that the pce approximation retains of at most p n interaction and at most polynomial expansion order accordingly the compact form of the pce approximation can be written as yp x p cu ju ju xu p x n ju completing the proof using proposition the number of expansion say lp associated with the pce approximation can be calculated from that required by the p n pdd approximation accordingly setting s p n and m p in lp p n p n p n p s s with the last expression commonly found in the pce literature the advantage of over is obvious the pdd once determined can be reused for the pce approximation and subsequent error analysis thereby sidestepping calculations of the pce pdd pce errors define another error ep e y x yp x resulting from the pce approximation yp x of y x using proposition ep p meaning that the pce error analysis can be conducted using the pdd approximation proposition a general function y let es m and ep denote the pdd and pce errors defined by and respectively given a truncation parameter p of the pce approximation if the truncation parameters of the pdd approximation are chosen such that p n s n and p s m then es m ep rahman where p s denotes the maximum of p and proof the result follows from propositions and and corollary proposition aids in selecting appropriate truncation parameters to contrast the errors due to pdd and pce approximations however the proposition does not 
say anything about the computational proposition and subsequent discussion explain the relationship between computational and error committed by both pdd and pce approximations for a special class of functions u n proposition a special class of functions y assume that cu j u ju diminishes according to cu j u where c and u are three constants then it holds that n n l n l ep c p p p p s s proof replacing s and m in with p n and p respectively obtains the result theoretically the numbers of expansion required by the pdd and pce approximations can be used to compare their respective computational table presents for n the requisite numbers of expansion when pdd is truncated at s and m and when pce is truncated at p they are calculated using and for pdd and pce approximations respectively according to table the growth of the number of expansion in pce is steeper than that in pdd the growth rate increases markedly when the polynomial expansion order is large this is primarily because a pce approximation is solely dictated by a single truncation parameter p which controls the largest polynomial expansion order preserved but not the degree of interaction independently in contrast two truncation parameters s and m are involved in a pdd approximation a greater flexibility in retaining the largest degree of interaction and largest polynomial expansion order in consequence the numbers of expansion and hence the computational by the pdd and pce approximations can vary appreciably table growth of expansion in the pdd and pce approximations ls m m or p lp using the equalities in and figure depicts how the relative pdd error es m y x and the relative pce error ep y x vary with respect to the polynomial dimensional decomposition figure pdd pce errors for various attenuation rates of the expansion top middle bottom rahman number of expansion required for n again the three preceding cases of the attenuation rates and with respect to the degree of interaction and polynomial expansion order are studied in all cases the pdd or pce errors decay with respect to s m and p as expected however in the pdd approximation the error for a fixed s may decline even further by increasing m whereas no such possibility exists in the pce approximation this behavior is pronounced in case that is when figure top for example in case the bivariate pdd approximation s m achieves a relative error of employing only expansion in contrast to match the error the pce approximation p is needed committing a relative error of at the cost of expansion therefore the pdd approximation is substantially more economical than the pce approximation for a similar accuracy however when as in case figure middle the computational advantage of pdd over pce approximations disappears as the attenuation rate associated with the polynomial expansion order is dominant over that associated with the degree of interaction nonetheless in case the pdd approximation with the lowest m possible can not commit more error than the mthorder pce approximation for the same computational finally when the attenuation rates are the same as in case figure bottom the pdd approximation is still more computationally than the pce approximation for instance the trivariate fifthorder pdd s m and pce p approximations require and expansion to commit the errors of and respectively but unlike in case an unnecessarily large polynomial expansion order may render the pdd approximation more expensive than required readers should take note that the comparative error analyses reported here 
are limited to pdd and pce approximations derived from truncations according to the total degree index set for other index sets such as the tensor product and hyperbolic cross index sets it would be intriguing to find whether a similar conclusion arises conclusion the fundamental mathematical properties of pdd representing fourierlike series expansion in terms of random orthogonal polynomials with increasing dimensions were studied a splitting of appropriate polynomial spaces into orthogonal subspaces each spanned by orthogonal polynomials was constructed resulting in a polynomial refinement of add and eventually pdd under prescribed assumptions the set of orthogonal polynomials was proved to form a complete basis of each subspace leading to an orthogonal sum of such sets of basis functions including the constant subspace to span the space of all polynomials in addition the orthogonal sum is dense in a hilbert space of functions leading to convergence of pdd to the correct limit including for the case of infinitely many random variables the optimality of pdd and the approximation quality due to truncation were demonstrated or discussed from the error analysis of a general function of n random variables given p the p n pdd approximation and pce approximation are the same therefore an pdd approximation can not commit a larger error than a pce approximation if p n s n and p s m from the comparison of computational required to estimate with the same accuracy the variance of an output function entailing exponentially attenuating expansion the pdd approximation can be substantially more economical than the pce approximation polynomial dimensional decomposition references askey and wilson some basic hypergeometric polynomials that generalize jacobi polynomials mem amer math ams providence ri babenko approximation by trigonometric polynomials in a certain class of periodic functions of several variables soviet math pp bellman dynamic programming princeton university press princeton nj caflisch morokoff and owen valuation of mortgage backed securities using brownian bridges to reduce dimension journal of computational finance pp cameron and martin the orthogonal development of functionals in series of fourierhermite functionals ann pp chakraborty and rahman stochastic multiscale models for fracture analysis of functionally graded materials engineering fracture mechanics pp courant and hilbert methods of mathematical physics vol i interscience publishers dunkl and xu orthogonal polynomials of several variables encyclopedia of mathematics and its applications cambridge university press second efron and stein the jackknife estimate of variance the annals of statistics pp pp ernst mugler starkloff and ullmann on the convergence of generalized polynomial chaos expansions esaim mathematical modelling and numerical analysis pp freud orthogonal polynomials akademiai budapest gautschi orthogonal polynomials computation and approximation numerical mathematics and scientific computation oxford university press golub and van loan matrix computations the john hopkins university press third griebel sparse grids and related approximation schemes for higher dimensional problems in foundations of computational mathematics pardo pinkus suli and todd cambridge university press pp griebel kuo and sloan the anova decomposition of a function of infinitely many variables can have every term smooth mathematics of computation pp hoeffding a class of statistics with asymptotically normal distribution the annals of 
mathematical statistics pp pp http kuo sloan wasilkowski and wozniakowski on decompositions of multivariate functions mathematics of computation pp li hu and wang random dimensional model representation and orthogonality of its order component functions journal of physical chemistry a pp petersen on the relation between the multidimensional moment problem and the onedimensional moment problem math pp rahman a polynomial dimensional decomposition for stochastic computing international journal for numerical methods in engineering pp rahman extended polynomial dimensional decomposition for arbitrary probability distributions journal of engineering mechanics pp rahman approximation errors in truncated dimensional decompositions mathematics of computation pp rahman a generalized anova dimensional decomposition for dependent probability measures journal on uncertainty quantification pp rahman and yadav orthogonal polynomial expansions for solving random eigenvalue problems international journal for uncertainty quantification pp ren yadav and rahman design optimization by polynomial dimensional decomposition structural and multidisciplinary optimization pp stieltjes quelques recherches sur la thorie des quadratures dites mcaniques ann sci cole norm pp rahman tang congedo and abgrall adaptive surrogate modeling by anova and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation journal of computational physics pp wiener the homogeneous chaos american journal of mathematics pp xiu and karniadakis the polynomial chaos for stochastic equations siam journal of scientific computing pp yadav and rahman polynomial dimensional decomposition for highdimensional stochastic computing computer methods in applied mechanics and engineering pp
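To make the preceding construction concrete, the following minimal Python sketch assumes N independent standard Gaussian input variables, for which normalized probabilists' Hermite polynomials form an orthonormal basis, and a hypothetical low-degree test function y; the variable names and the quadrature level are choices made for the sketch, not quantities taken from the text. It estimates the PDD expansion coefficients by tensor-product Gauss-Hermite quadrature, checks the Parseval identity (the output variance equals the sum of squared nonconstant coefficients), and reports the variance retained by S-variate truncations.

```python
import itertools
import math

import numpy as np
from numpy.polynomial import hermite_e as He

N = 3        # number of independent standard Gaussian inputs (hypothetical)
m_max = 4    # largest total polynomial order retained

def psi(n, x):
    # normalized probabilists' Hermite polynomial: E[psi_n(X)] = 0 for n >= 1
    # and E[psi_n(X) psi_k(X)] = delta_nk when X ~ N(0, 1)
    c = np.zeros(n + 1)
    c[n] = 1.0
    return He.hermeval(x, c) / math.sqrt(math.factorial(n))

# tensor-product Gauss-Hermite quadrature for the N-variate standard normal measure
x1, w1 = He.hermegauss(12)
w1 = w1 / math.sqrt(2.0 * math.pi)                      # weights now sum to one
grid = np.array(list(itertools.product(x1, repeat=N)))  # (12**N, N) nodes
wq = np.prod(list(itertools.product(w1, repeat=N)), axis=1)

def expect(values):
    return float(np.dot(wq, values))

def y(x):
    # hypothetical smooth test function of x = (x_1, x_2, x_3)
    return 1.0 + x[:, 0] + 0.5 * x[:, 0] * x[:, 1] ** 2 + 0.25 * x[:, 1] * x[:, 2]

yq = y(grid)

# PDD coefficients c_{u, j_u} = E[ y(X) prod_{i in u} psi_{j_i}(X_i) ] over non-empty
# subsets u and multi-indices j_u with components >= 1 and total degree at most m_max
coeffs = {}
for s in range(1, N + 1):                               # degree of interaction |u| = s
    for u in itertools.combinations(range(N), s):
        for ju in itertools.product(range(1, m_max + 1), repeat=s):
            if sum(ju) > m_max:
                continue
            basis = np.ones(len(grid))
            for i, j in zip(u, ju):
                basis *= psi(j, grid[:, i])
            coeffs[(u, ju)] = expect(yq * basis)

mean = expect(yq)
variance = expect((yq - mean) ** 2)
parseval = sum(c ** 2 for c in coeffs.values())
print("mean                 :", mean)
print("variance (quadrature):", variance)
print("sum of squared coeff.:", parseval)   # matches the variance for this test function

def retained_variance(S, m):
    # variance captured by the S-variate, m-th order truncation
    return sum(c ** 2 for (u, ju), c in coeffs.items() if len(u) <= S and sum(ju) <= m)

for S in range(1, N + 1):
    print("S =", S, "retained variance:", retained_variance(S, m_max))
```

For other independent input distributions satisfying the stated assumptions, only the univariate basis psi and the quadrature rule would change; the coefficient and variance computations remain the same.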
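The relationship between truncated PDD and PCE can be illustrated in the same spirit. The sketch below assumes that the S-variate, m-th order PDD retains all multi-indices supported on at most S variables with total degree at most m, while the p-th order PCE retains the total-degree index set; under this reading it verifies numerically that the p-th order PCE index set coincides with that of the min(p, N)-variate, p-th order PDD and tabulates the corresponding numbers of expansion coefficients. The closed-form counts used for cross-checking, L_p = C(N + p, p) and L_{S,m} = 1 + sum_{s=1}^{S} C(N, s) C(m, s), are a reconstruction consistent with this reading rather than formulas quoted verbatim from the text.

```python
import itertools
from math import comb

def pce_index_set(N, p):
    # total-degree index set: all j in N_0^N with |j|_1 <= p
    return {j for j in itertools.product(range(p + 1), repeat=N) if sum(j) <= p}

def pdd_index_set(N, S, m):
    # constant term plus, for every support u with 1 <= |u| <= S, the multi-indices
    # with components >= 1 on u, zero elsewhere, and total degree at most m
    idx = {(0,) * N}
    for s in range(1, S + 1):
        for u in itertools.combinations(range(N), s):
            for ju in itertools.product(range(1, m + 1), repeat=s):
                if sum(ju) <= m:
                    j = [0] * N
                    for i, d in zip(u, ju):
                        j[i] = d
                    idx.add(tuple(j))
    return idx

N = 5
for p in range(1, 7):
    pce = pce_index_set(N, p)
    pdd = pdd_index_set(N, min(p, N), p)
    # the p-th order PCE retains the same indices as the min(p, N)-variate, p-th order PDD
    assert pce == pdd
    L_p = comb(N + p, p)
    L_Sm = 1 + sum(comb(N, s) * comb(p, s) for s in range(1, min(p, N) + 1))
    print("p =", p, "indices:", len(pce), "closed forms:", L_p, L_Sm)

# coefficient counts of lower-variate PDD truncations versus the PCE of the same order
for m in range(1, 7):
    pdd_counts = [1 + sum(comb(N, s) * comb(m, s) for s in range(1, S + 1)) for S in (1, 2, 3)]
    print("m = p =", m, "PDD (S = 1, 2, 3):", pdd_counts, "PCE:", comb(N + m, m))
```

The second loop mirrors the kind of growth comparison discussed above: for the same expansion order, the lower-variate PDD truncations retain far fewer coefficients than the PCE, and the gap widens rapidly as the order grows.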
apr exploration and retrieval of sequencing samples sohan seth niko samuel kaski antti honkela helsinki institute for information technology hiit department of information and computer science aalto university espoo finland biology program and department of medical genetics university of helsinki helsinki finland helsinki institute for information technology hiit department of computer science university of helsinki helsinki finland april abstract over the recent years the field of whole metagenome shotgun sequencing has witnessed significant growth due to the sequencing technologies that allow sequencing genomic samples cheaper faster and with better coverage than before this technical advancement has initiated the trend of sequencing multiple samples in different conditions or environments to explore the similarities and dissimilarities of the microbial communities examples include the human microbiome project and various studies of the human intestinal tract with the availability of ever larger databases of such measurements finding samples similar to a given query sample is becoming a central operation in this paper we develop a exploration and retrieval method for whole metagenome sequencing samples we apply a distributed string mining framework to efficiently extract all informative sequence from a pool of metagenomic samples and use them to measure the dissimilarity between two samples we evaluate the performance of the proposed approach on two human gut metagenome data sets as well as human microbiome project metagenomic samples we observe significant enrichment for diseased gut samples in results of queries with another diseased sample and very high accuracy in discriminating between different body sites even though the method is unsupervised a software implementation of the dsm framework is available at https introduction metagenomics is the study of microbial communities in their natural habitat using genomics techniques it is undergoing a boom due to proliferation of sequencing technologies many studies focus at targeted sequencing of specific marker genes such as the rrna gene in bacteria but recently there has been a growing interest in whole metagenome sequencing see while targeted studies provide data for phylogenetic profiling at a lower cost whole metagenomes provide much more information for example about the collective metabolism and the population genetics of the community recent studies have also found associations between features of whole human gut metagenomes and type ii diabetes new data are accumulating rapidly with a popular server listing almost public whole metagenomes analysing shotgun wms sequencing data is very challenging the original sample typically contains genetic material from hundreds to thousands of bacterial species of different abundances most of which have not been fully sequenced previously after sequencing we obtain a huge bag of metagenomic samples actagtca tagcatag ccatgaca cttaatga atcgcaga aggttaat gtgtaccg tcaacggg actgactg attcctta ctatgcac gttgcttc atgacata gatcatga cacatgca catgactg feature extraction gatggatt gtcagtac gtactgac actgcatg dissimilarity evaluation dissimilarity values query retrieve query retrieve actggtca cttaaggc gtgtacca aggacaac figure given a set of metagenomic samples our objective is to be able to retrieve relevant samples to a query sample for this we need to extract relevant features and evaluate a pairwise similarity or dissimilarity measure the samples are then ranked in the order of increasing dissimilarity 
from the query collection of short sequence reads whose species of origin is unknown while significant progress has been made analysis relying on either the limited previously annotated genomes or assembling the reads into novel more complete genomes remains difficult and inefficient and potentially susceptible to annotation biases in this paper we introduce an efficient purely feature extraction and selection method as well as similarity measures for wms sequencing data sets and apply them in retrieval of similar data sets such retrieval is an extremely powerful tool for exploration of the data and generating hypotheses of disease associations as previously demonstrated with gene expression data retrieval from existing databases makes it possible to automatically explore a much greater variety of hypotheses than relying solely on the more common specifically designed focused studies similarity measures and retrieval of similar metagenomic data sets have been suggested previously based on quantifying abundances over a relatively small number of predetermined features requiring existing annotation up to some thousands of known taxa genes or metabolic pathways have been used we introduce similarity measures that are based solely on raw sequencing reads and hence unbiased and insensitive to the quality of the existing annotation a similar measure has been previously suggested by but only for pairwise comparisons using a method that is computationally too expensive to scale to even modestly large data sets furthermore instead of considering all sequences of particular length also known as as has been done earlier for other tasks and by we employ an efficient distributed string mining algorithm to find informative subsequences that can be of any length in order to deal with the very large number of features some feature selection is necessary previous approaches for detecting relevant features in metagenomic data have been based on direct comparison of two classes of samples again most of these methods work on up to some thousands of features with the notable exception of one study where quantification and association testing was done for over million predefined genes without feature selection one can use short or limit to a set of that are likely to be informative such as associated with well characterised protein families while there are no previous examples of unsupervised feature selection for metagenomics it is a common practice in information retrieval with text documents a particularly relevant method assesses the entropy of the distribution of documents in which a specific term occurs we evaluate the performance of the proposed unsupervised unconstrained retrieval method on synthetic data as well as metagenomic samples from human body sites to evaluate the performance of the retrieval engine we use external validation based on a ground truth similarity between two samples to simplify this process we consider a binary similarity which is crude but easily accessible the human gut samples in come from studies exploring the change in bacterial species composition between healthy persons and either inflammatory bowel disease or type ii diabetes we utilize disease state to construct a normalization collection of metagenomic samples regularization distributed string mining dissimilarity matrix dissimilarity computation entropy evaluation annotations figure processing steps of our method given a collection of metagenomic samples we use the collection as an input to the distributed string mining 
method for the method we estimate the frequency of each evaluate if the is informative or not and compute the needed dissimilarities finally in this paper we evaluate the performance considering the existing annotations as ground truth annotations are not needed for the retrieval in general binary ground truth thus we study if given the metagenomic sample of a person with a disease the retrieval finds metagenomic samples related by having the same disease in the body site data we use the body sites as ground truth to investigate whether it is possible to identify the bacterial communities at different body sites in an unsupervised setting without the need of reference genomes it should be noted that especially for the gut data two samples may be related in other ways too the external validation with one simple ground truth nonetheless provides an objective platform for comparing different methods given that the method is unsupervised and hence completely oblivious of the disease labels if such retrieval is successful it is a promising starting point for developing methods for leveraging data from earlier patients in early detection of disease and personalized medicine approach our objective is to extract and select suitable features for representing wms sequencing samples and to form a pairwise dissimilarity measure for a collection of such samples given this dissimilarity one can query with a sample and retrieve other samples that are similar to it fig the measure needs to be reasonably rapidly computable yet captures relevant differences between the samples and does all this with as little prior biological knowledge and annotations as possible since detailed quantitative prior knowledge is typically not yet available for metagenomics evaluating dissimilarity requires representing the metagenomic sample in a suitable feature space a standard choice for representing objects over strings is to estimate the frequency values where a kmer here is a string of k letters from the dna alphabet a c t g therefore there are possible for any given it is standard practice to set k to a specific value typically a small value to keep the estimation problem tractable both computationally and statistically a larger k would give better discriminability but not without bounds as for finite data set sizes there simply is not enough data to estimate long we argue that instead of setting k to a particular value it is more effective to estimate all possible for all possible k which the data supports this makes the problem more challenging since the number of such observed different for large k becomes very large and they become more susceptible to sequencing errors focusing on appearing more than once in a sample helps significantly because it is relatively rare to have the exactly same sequencing errors in two independent reads to make the method computationally efficient we treat each as an independent feature we compute a bayesian estimate of their relative frequencies across samples the employed prior helps in suppressing noise caused by small observed read counts in the filtering step the abundance distribution of each figure technical overview of our distributed string mining framework consisting of client left and server right processes the processes are responsible for computing the substring frequencies within each sample sd separately substrings and their frequencies are found using a over a compressed suffix tree frequency information is transmitted over to the by streaming it as a representation of a 
sorted trie for example the trie on the left results as the parenthesis representation given in the middle the server reads the and merges the already sorted tries in recursive manner at each node the server computes the entropy based on the received values and updates the affected pairwise distances on the is achieved by hashing the prefix of the substring so that each server corresponds to a certain range of hash values over samples is used to judge informativeness of the for retrieval a with constant abundance does not have discriminative power and in the other extreme a which is present in only one sample can not generalize over samples we show that the filtering step significantly improves the retrieval performance with most datasets and distance measures finally we compute the dissimilarity between two samples across the features as a weighted average of distances between relative frequencies of individual treating each as an independent feature allows us to execute these steps fast and on the fly without storing the intermediate results such simplified distance measures are necessary to guarantee scalability given the extremely high dimensionality of the features to summarize we introduce methods to estimate the frequencies of a large number of over multiple samples ii decide if a is informative or uninformative in the context of a retrieval task iii compute a distance metric using the filtered frequencies and iv execute these steps fast without explicitly storing the frequency values fig summarizes the method methods estimating frequencies normalization regularization filtering in order to perform the feature selection or filtering we first compute bayesian estimates of the relative frequencies p of each w over samples s s using observed frequencies s w of the these are distributions over samples for each that are computed independently for each for reasons of computational efficiency even if the relative abundance of a is the same in every sample the observed frequencies may differ because of different sequencing depth or coverage in different samples to tackle this issue we employ normalization we normalize the frequency s w by a constant s which is proportional to the total number of base pairs in a sample and s for the largest sample in the collection in terms of total base pair count obtaining f s w s w s the s can be interpreted probabilistically as the probability of observing a sequence in the actual sample assuming every sample had the same number of base pairs to start with but some have been lost in the processing in order to estimate the relative frequencies we place a conjugate symmetric dirichlet prior on the parameters of the multinomial distribution over the observed counts the common choice of uniform prior distribution corresponds to a dirichlet distribution with all parameters equal to this yields a posterior mean estimate of the relative frequency values as p p f s w f w the dirichlet prior with all parameters equal to one is ubiquitous in document retrieval it is particularly suitable for metagenomics due to the following observations the distributed string mining algorithm described below trades off low counts for speed and ignores any that are present only once in a sample the from the prior makes up for this missing count adding assists in playing down the significance of very rare that may appear due to sequencing errors in the filtering step without affecting other too much finally given the massive number of potential it is crucially important to improve 
ratio by focusing on the informative ones for the unsupervised tasks of comparing the samples obviously only which distinguish between the samples are informative as a concrete example consider a kmer that is present in all samples with a similar abundance it certainly does not give information useful for comparing samples in the other extreme if a is present in one specific sample but not in any other it is potentially a spurious due to sequencing error and in any case does not help in comparing samples either on the other hand if a is present in some samples but not all then it gives information that those samples are similar in a specific sense informativeness in this sense can be measured by the entropy h of the distribution of the over the samples we filter the based on the conditional entropies h x p log p log a is taken into account in distance computation only if the normalized entropy is lower than a certain threshold by design h notice that in standard information theory terminology higher entropy implies higher information however in our context an informative has low entropy also due to the bayesian estimation a spurious having only very small counts will have large conditional entropy and will be filtered out the optimal value of threshold e varies with datasets it can be optimized in a supervised manner by utilizing a training set where we have labelled samples in the absence of a labelled set we suggest taking the average of distance metrics computed over the potential thresholds as the final metric we refer to the final metrics in the two cases as optimized metric and average metric in our experimental we randomly make a split of a given dataset in training str and testing ste sets str ste and str ste we use str to optimize the entropy threshold we query with samples in str and retrieve relevant samples within the same set to observe which entropy threshold results in the best retrieval result see sec for details while comparing the performance of two methods we always present the evaluation over ste we query with samples within ste and we retrieve relevant samples from s not just ste algorithms to extract informative our main computational challenge is to extract all informative from datasets in feasible time and space recall that the filtering step relies on knowledge over multiple samples to decide if the respective is informative for the retrieval task or not since the typical collections of wms samples are huge in size we can not assume that even the plain input fits into the main memory of any single machine to process these datasets the computation needs to be done either using external memory disk or in a distributed manner a computer cluster we review two approaches counting and distributed string mining the first one is a standard approach in the literature for fixed k but has several limitations when applied in our context of multiple samples and data we show that the latter approach is more flexible in this context and can also be generalized to extract informative over all values of k simultaneously jellyfish and dsk are examples of recent algorithmic improvements in counting both tools use hash tables to compute the distribution for a given fixed in both tools is achieved by keeping most of the hash table on disk the main drawback with these approaches is that they are aimed at counting in a single sample and extending them over to multiple samples is for example jellyfish could in principle be extended to count over multiple samples the authors give a roughly 
linear time algorithm to merge two or more hash tables however the intermediate counts would need to be stored on disk which requires significant amount of additional space and the is not parallelized user manual sect bugs the decision whether a particular is informative or not is made by looking at its frequency over all the given wms samples we tackle this problem by a distributed string mining dsm framework that can handle inputs by utilizing a computer cluster the main advantages of this framework are that i divides the data and computation over multiple cluster nodes ii intermediate counts are not stored explicitly and iii there is no additional disk strain except reading through the input once these advantages allow data analysis on a cluster consisting of nodes having limited main memory we extend the dsm framework to be compatible with our definition of informative see the above subsection it allows us to extract the informative either for a fixed k or over all values of k in feasible time the dsm framework is based on a model the clients have correspondence to the given samples each client being responsible for computing the frequencies within the designated sample the computation relies heavily on suffix sorting techniques and on data structures for strings the input data are first preprocessed into a compressed representation which replaces the input data and acts as an efficient search structure the computation is more straightforward the server simply merges the sorted input from the clients computes the entropies and updates the distance matrices fig gives a toy example of the interaction two crucial observations are needed to keep the whole computation and transmission costs feasible first the informative can be seen as a subset of substrings substrings whose instances have differentiating continuation on both left and right more formally substring w of string t n is called if there exists two symbols a and b such that a b and both wa and wb are substrings of t similarly a substring w is leftbranching if aw and bw a b are substrings of t if a substring is both and we say it is second for any string of length n there are at most o n substrings and the total length of all such substrings is bounded by o n log n theorem the first observation allows us to reduce the computation to a smaller set of substrings it is easy to see that if w having frequency f s w is then there exists a substring of length k k that is and has exactly the same frequency f s w f s it follows that the frequency of can be deduced from the branching k and the substrings contain all the necessary information for us to detect informative the second observation guarantees a feasible transmission cost between clients and servers the upper bound for the concatenation of all substrings also acts as an upper bound for both the running time and the amount of communication needed the drawback of restricting to substrings is that the informative that we are able to detect have to appear at least twice in a sample although this limit may be useful in pruning spurious introduced by sequencing errors more detailed explanation and analysis of the dsm framework is given in a software implementation of the dsm framework is available at https dissimilarity metrics having extracted the informative we use them to compute the dissimilarity between two metagenomic samples we consider three dissimilarity metrics that can be computed easily over a large number of in sequential manner one at a time and without storing all the 
frequencies explicitly to utilize the natural variance structure of the are more abundant than weight the relative frequencies of each by their respective total counts we utilize the absolute frequencies f s w as defined in we mainly use the very simple jaccard distance which does not consider abundances at all only whether a occurs or not given two sets and of detected as present in two different samples jaccard distance measures how many elements are shared between these two sets mathematically it is defined as dcount despite its simplicity we observe that jaccard distance performs well a potential reason is its robustness to measurement noise and effectiveness when two metagenomic samples differ in terms of presence and absence of certain species or functionalities we assume a is present in a sample if its frequency is more than we also experiment with two metrics that use the abundance information euclidean distance an obvious distance measure between two metagenomic samples and is the euclidean distance between their respective frequencies we consider the distance metric xp p dsqrt f w f w w which can be computed sequentially as new informative are extracted the square root transformation is the variance stabilizing transformation for poisson popular model for quantitative sequencing data ii log transformed euclidean distance we also consider the same metric but with log transformation which is a popular approach in document retrieval x dlog log f w log f w w the motivation for using the log transformation is that it decreases sensitivity to high frequency counts some are present in high abundance in almost every genome for instance from the marker gene and the log transformation reduces their effect in the metric evaluation metric we evaluate the performance of the dissimilarity metric in terms of its performance in the task of retrieving relevant samples given a query metagenomics sample the ground truth for relevance is either the disease class disease vs not or the known body site samples from the same class are considered relevant for measuring retrieval performance we use an evaluation metric which is popular in document retrieval the mean average precision map given a query q the retrieval method ranks the samples in an increasing order of their dissimilarities from q given one has retrieved the top closest n n samples the precision n is defined as precision n q number of relevant samples in n retrieved samples n input size gb samples preproc h total memory gb all k h cpu time h k h cpu time h metahit hmp table computational resources required by the distributed string mining on different datasets we report times and total cpu times for both fixed k and over all preprocessing is done only once separately from the actual computation total memory is the memory requirement over all computation nodes experiments were ran on a cluster of dell poweredge nodes having gb of ram and cores simulated data and metahit were run using up to nodes was ran using nodes allowing more parallelization at the hmp was ran on a cluster of nodes with xeon cpus and ram and map defined using average precision as map x x precision n q avep q avep q mq here q is the set of all queries mq is the number of relevant samples to query q and rq is the set of locations in the ranked list where a relevant sample appears it is that a higher map implies better performance to judge if two map values are significantly different or not we employ the randomization test described in for each query this test randomly 
reassigns the aveps achieved by two methods to one another and computes the difference between the resulting map for multiple such reassignments to get a distribution against which the true map value is tested in terms of in case two samples share the same dissimilarity from a query sample we employ the modification suggested in to break ties when computing the mean we follow a type approach using each sample as a query and retrieving from the rest of the collection for simulated data and human gut samples we only query with the positive samples in the testing set q ste whereas for body site samples we query with each sample in the testing set for both cases we retrieve from the entire set s q while choosing the entropy threshold in a supervised setting we query from q str and retrieve from str q synthetic data generation to test the method we simulated four datasets containing samples from separate classes with the interpretation that samples from the same class are relevant in all the datasets we have two classes both classes of samples have the same species composition but different relative abundances we used metasim to generate illumina reads of length using the error configuration file provided by the developers each dataset contains samples of them belong to the positive class and the rest belong to the negative class for each dataset we used the same species from the following genera acetobacter acetobacterium acidiphilium acidithiobacillus acinetobacter bacillus bacteroides bifidobacterium chlamydia chlamydophila clostridium escherichia haloarcula halobacterium lactobacillus pasteurella salmonella staphylococcus and streptococcus the abundance profiles were generated from two dirichlet distributions one for positive and the other for negative class the parameters of the dirichlet distributions were shared between two classes for half of the species randomly chosen the same parameters were used for both classes and for the other half of the species the parameters were randomly permuted for example given species the assigned parameters could be and where the parameters for of metahit log hmp jaccard all entropy threshold jaccard of jaccard entropy threshold jaccard f entropy threshold jaccard jaccard all entropy threshold entropy threshold entropy threshold f entropy threshold figure number of informative strings over varying entropy thresholds for the proposed approach all fixed lenthgs and and for protein family based comparison with figfam f the box denotes the optimized entropy threshold that has been used to evaluate the performance of the methods some general observations are as follows the number of strings for k is lower than the rest while the number of strings for all is much higher than rest of the methods and number of strings for k and k are very close we observe that there are strings with low in the real data sets than in the simulated data indicate the presence of discriminative features also the optimized entropy threshold varies for different methods the second and fourth species are the same but for the other species they were permuted the exact species and corresponding parameter values can be downloaded from https the resulting datasets are relatively easy data with high coverage reads per sample relatively difficult data with low coverage reads per sample mixed data with half the samples from and the rest from to simulate varying sequencing depth relatively difficult data with same coverage as high but additional noise in the class distributions to simulate 
more overlap between classes to elaborate the relative abundance of species is phigh noise where noise is generated from a symmetric dirichlet distribution with all parameters equal to results we evaluated the retrieval performance on three human metagenomics datasets metahit metagenomic samples from healthy people and patients with inflammatory bowel disease ibd syndrome each sample has on average million reads our goal was to retrieve ibd positive patients mean average precision metahit jaccard metahit log jaccard hmp jaccard ak fig abd method jaccard mean average precision ak fig abd method jaccard t fig abd method t ak fig abd method jaccard ak fig abd method jaccard ak fig abd method ak ak fig abd method t ak fig abd method t figure retrieval performance comparison of the proposed approach using all ak against the following base measures fig retrieval performance using known protein family abd hellinger distance between relative estimated abundance distance between relative abundance of ak uses the optimized metric over equally spaced threshold values between and each errorbar shows the map value along with the standard error the grey horizontal line shows retrieval by chance map computed over zero similarity metric an arrow if present over a method indicates whether the performance of the corresponding method is significantly better or worse than ak the stars denote significance level for the synthetic datasets in the bottom row the relative abundance is known from experimental design we present this result as t for metahit we present the performance for both jaccard and log metric since the latter performs much better compared to the former phase ii metagenomic samples from healthy people and patients with type ii diabetes each sample has on average million reads our goal was to retrieve diabetic patients we chose to explore the phase ii data instead of the phase i data since the former has higher coverage about more reads than the latter hmp metagenomic samples from different body sites out of samples that passed the qc assessment http we discarded samples that had less than of the number of reads of the largest sample to recapitulate for metahit and our goal is to observe if given a positive sample from a patient with a particular disease one can retrieve relevant samples with similar disease whereas for hmp our goal is to observe if given a sample from a particular body site one can retrieve relevant samples samples from the same body site for all data we applied a quality threshold of and ignored any base pairs with quality less than the threshold table gives an overview of the computational resources required for each data set additionally number of used by different methods for each data set is available in retrieval of samples with similar annotation we applied the proposed approach and a number of alternatives to retrieval of similar samples from the same data set and evaluated by how many of the retrieved metahit log average precision jaccard all all average precision k all jaccard k jaccard all k jaccard k jaccard all k hmp jaccard all k all k figure comparison of best performances for different lengths the figures show the performance over queries by all positive samples as a violin plot all methods use the optimized metric chosen over equally spaced threshold values between and the box denotes the map value the horizontal lines show retrieval by chance avep computed over zero dissimilarity metric straight line is the mean and dotted lines are and quantiles respectively 
when number of relevant samples differ for different queries an arrow if present over a method implies whether the corresponding method performs significantly better or worse than all the stars denote significance level we observe that the considering all usually perform equally well with respect to considering a single samples had the same annotation class label disease state or body site a comparison of the obtained mean average precision values averaged over queries by all positive samples is shown in fig the results show the performance achieved by the optimized metric the alternatives we considered were retrieval performance based on the proposed distances but with the frequencies counted on specific from known protein families figfams ii retrieval based on hellinger distances between relative species abundances estimated using metaphlan and iii retrieval based on distances between relative frequencies of for the simulated data the two classes differ only by the relative species abundance thus retrieval based on ground truth abundance can be considered to give an upper limit for the performance for highc and the proposed method performs closer to the ground truth performance than any other methods although the difference from ground truth performance is still statistically significant for the performance of all methods except the protein family based comparison drop compared to while for the performance is again close to despite the presence of low coverage samples this is an encouraging observation showing the robustness of the proposed approach to varying sequencing depths for the real data sets the proposed approach yielded statistically significantly higher mean average precision than any of the alternatives p for all the datasets except where protein family based comparison works equally well interestingly the retrieval performs relatively poorly here suggesting that the differences between the classes can not be easily captured by species composition alone while the proposed features can provide a better separation retrieval based on the known protein family performed fairly well but slightly worse than the proposed approach on metahit we observe that mean average precision mean average precision metahit log all jaccard fig k all fig k jaccard all hmp jaccard jaccard fig k all jaccard all fig k fig k jaccard all fig k all fig k figure comparison of the best retrieval performance achieved with optimized metric middle average metric right and without entropy filtering left for proposed approach all individual ks as well as figfam based distance metric the metrics are optimized averaged over equally spaced threshold values between and each errorbar line shows the map value along with the standard error the grey horizontal line shows retrieval by chance map computed over zero dissimilarity metric an arrow if present over a method implies whether the performance of the corresponding method top average metric bottom optimized metric is better or worse than when entropy filtering is employed the stars denote significance level we observe that filtering has a positive impact on the retrieval performance for metahit jaccard metric performs poorly however a change of metric to log significantly improves the performance for all methods otherwise all metrics usually work equally well over different data sets effect of using specific or unspecific length we next compared the proposed approach of using all to using a specific the retrieval performance using optimized metric is shown in fig 
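To make the dissimilarity metrics and the MAP-based evaluation described above concrete, here is a minimal illustrative sketch; it is not the authors' implementation, and all function names (jaccard_distance, d_sqrt, d_log, average_precision, mean_average_precision, randomization_test) are hypothetical. It assumes k-mer frequencies are available as plain dictionaries mapping each k-mer to its absolute count, uses a placeholder presence threshold for the Jaccard distance, and uses a log(1 + f) transform so that absent k-mers stay well defined.

```python
# Sketch of the three dissimilarity metrics and the retrieval evaluation
# described in the text; names and defaults are assumptions, not the paper's code.
import math
import random

def jaccard_distance(freq1, freq2, presence_threshold=1):
    """d_count: 1 - |W1 intersect W2| / |W1 union W2|, presence/absence only.
    A k-mer counts as present if its frequency reaches presence_threshold
    (placeholder default)."""
    w1 = {w for w, f in freq1.items() if f >= presence_threshold}
    w2 = {w for w, f in freq2.items() if f >= presence_threshold}
    if not w1 and not w2:
        return 0.0
    return 1.0 - len(w1 & w2) / len(w1 | w2)

def d_sqrt(freq1, freq2):
    """Euclidean distance between square-root-transformed absolute frequencies,
    accumulated one k-mer at a time."""
    total = 0.0
    for w in set(freq1) | set(freq2):
        diff = math.sqrt(freq1.get(w, 0)) - math.sqrt(freq2.get(w, 0))
        total += diff * diff
    return math.sqrt(total)

def d_log(freq1, freq2):
    """Same metric with a log transform; log(1 + f) is assumed so that
    k-mers absent from one sample (f = 0) are handled."""
    total = 0.0
    for w in set(freq1) | set(freq2):
        diff = math.log1p(freq1.get(w, 0)) - math.log1p(freq2.get(w, 0))
        total += diff * diff
    return math.sqrt(total)

def average_precision(ranked_sample_ids, relevant_ids):
    """AveP(q): average of precision(n; q) over the ranks n at which a
    relevant sample appears, divided by the number of relevant samples."""
    m_q = len(relevant_ids)
    if m_q == 0:
        return 0.0
    hits, ap = 0, 0.0
    for n, sample_id in enumerate(ranked_sample_ids, start=1):
        if sample_id in relevant_ids:
            hits += 1
            ap += hits / n  # precision at rank n
    return ap / m_q

def mean_average_precision(per_query_aveps):
    """MAP: mean of AveP over all queries."""
    return sum(per_query_aveps) / len(per_query_aveps)

def randomization_test(aveps_a, aveps_b, n_rounds=10000, seed=0):
    """Paired randomization test for a MAP difference: per-query AveP values
    are randomly swapped between the two methods and the MAP difference is
    recomputed, yielding a null distribution and a two-sided p-value."""
    rng = random.Random(seed)
    observed = abs(mean_average_precision(aveps_a) - mean_average_precision(aveps_b))
    count = 0
    for _ in range(n_rounds):
        swapped_a, swapped_b = [], []
        for a, b in zip(aveps_a, aveps_b):
            if rng.random() < 0.5:
                a, b = b, a
            swapped_a.append(a)
            swapped_b.append(b)
        diff = abs(mean_average_precision(swapped_a) - mean_average_precision(swapped_b))
        if diff >= observed:
            count += 1
    return count / n_rounds
```

In the leave-one-out protocol used in the experiments, each query's AveP would be computed from the ranking of the remaining samples induced by one of these distances, and two metrics would then be compared by passing their per-query AveP lists to the randomization test.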
the figures show the complete distribution of average precision values over different queries whose mean is the mean average precision of fig the performance of the proposed method is usually better than with any individual thus the proposed method appears to be a relatively safe choice that does not suffer from catastrophically bad performance on any of the data sets effect of the entropy filtering next we evaluated the efficacy of filtering the informative against retrieval performance without the filtering operation the results are presented in fig we observed that entropy filtering usually improved retrieval performance for all tested lengths when using the optimized metric although the improvement might not always be statistically significant although average metric often provides significant performance it might not always improve over performance without filtering also retrieval performance of figfam may or may not improve with entropy filtering comparison across different metrics finally we evaluated the retrieval performance over different dissimilarity metrics we presented the performance using optimized metric for different metrics in fig we average precision metahit average precision hmp count sqrt metric log count sqrt metric sqrt metric log log count count count sqrt metric log sqrt metric log count sqrt metric log count sqrt metric log figure comparison of the best retrieval performance for different distance metrics using all they show a violin plot of the average performances over queries by all positive samples in the data sets the optimized metrics have been selected over equally spaced threshold values between and the box denotes the map value the horizontal lines show retrieval by chance avep computed over zero dissimilarity metric straight line is the mean and dotted lines are and quantiles respectively when number of relevant samples differ for different queries an arrow if present over a method implies whether the corresponding method performs significantly better or worse than the other methods denoted by their colors the stars denote significance level we observe that different distance metrics usually demonstrate similar performance observed that the simple metric dcount performed at least as well as abundancesensitive log and sqrt metrics except for the metahit data for which the other metrics performed the better conclusion in the wake of collecting multiple samples from similar environments information retrieval for metagenomic samples is expected to become a handy tool in metagenomics research in this paper we have addressed the problem of retrieving relevant metagenomic samples given a query sample from the same collection the novelty of the proposed approach is that it is unsupervised and does not rely on the availability of reference databases we have suggested employing frequencies as feature representation however rather than exploring of a fixed k we have scanned through all possible of all possible k s using distributed string mining and have proposed appropriate filtering technique to discard uninformative we have evaluated our method on both real and simulated data and observed that the approach can effectively retrieve relevant metagenomic samples outperforming both the figfams method based on known highly informative protein families as well as retrieval based on species composition of the samples acknowledgement the authors would like to thank ahmed sobih for his help with the metaphlan experiments on metahit and part of the calculations presented 
above were performed using computer resources within the aalto university school of science project funding this work was supported by the academy of finland project numbers and references yael baran and eran halperin joint analysis of multiple metagenomic samples plos comput biol february caldas nils gehlenborg ali faisal alvis brazma and samuel kaski probabilistic retrieval and visualization of biologically relevant microarray experiments bioinformatics caldas nils gehlenborg eeva kettunen ali faisal mikko andrew nicholson sakari knuutila alvis brazma and samuel kaski information retrieval in heterogeneous collections of transcriptomics data links to malignant pleural mesothelioma bioinformatics jan robert edwards robert olson terry disz gordon pusch veronika vonstein rick stevens and ross overbeek real time metagenomics using to annotate metagenomes bioinformatics dec sharon greenblum peter turnbaugh and elhanan borenstein metagenomic systems biology of the human gut microbiome reveals topological shifts associated with obesity and inflammatory bowel disease proc natl acad sci u s a jan bai jiang kai song jie ren minghua deng fengzhu sun and xuegong zhang comparison of metagenomic samples using sequence signatures bmc genomics december pmid manzini and puglisi permuted longest common prefix array in proc cpm lncs pages springer christine largeron christophe moulin and mathias gry entropy based feature selection for text categorization in proceedings of the acm symposium on applied computing sac pages association for computing machinery kelvin li monika bihan shibu yooseph and barbara meth analyses of the microbial diversity across the human microbiome plos one zhenqiu liu william hsiao brandi cantarel elliott franco drbek and claire sparse learning for simultaneous multiclass classification and feature selection of metagenomic data bioinformatics dec nicolas maillet claire lemaitre rayan chikhi dominique lavenier and pierre peterlongo compareads comparing huge metagenomic experiments bmc bioinformatics suppl guillaume marais and carl kingsford a fast approach for efficient parallel counting of occurrences of bioinformatics march frank mcsherry and marc najork computing information retrieval performance measures efficiently in the presence of tied scores in proceedings of the ir research european conference on advances in information retrieval ecir page berlin heidelberg meyer paarmann d souza olson glass kubal paczian rodriguez stevens wilke wilkening and edwards the metagenomics rast server a public resource for the automatic phylogenetic and functional analysis of metagenomes bmc bioinformatics september folker meyer ross overbeek and alex rodriguez figfams yet another set of protein families nucleic acids research november pmid pmcid suparna mitra bernhard klar and daniel huson visual and statistical comparison of metagenomes bioinformatics aug donovan parks and robert beiko identifying biologically relevant differences between metagenomic communities bioinformatics mar junjie qin et al a human gut microbial gene catalogue established by metagenomic sequencing nature march junjie qin et al a association study of gut microbiota in type diabetes nature oct daniel richter felix ott alexander auch ramona schmid and daniel huson metasim a sequencing simulator for genomics and metagenomics plos one guillaume rizk dominique lavenier and rayan chikhi dsk counting with very low memory usage bioinformatics mar siegfried schloissnig manimozhiyan arumugam shinichi sunagawa makedonka mitreva 
julien tap ana zhu alison waller daniel mende jens roat kultima john martin karthik kota shamil sunyaev george weinstock and peer bork genomic variation landscape of the human gut microbiome nature jan nicola segata jacques izard levi waldron dirk gevers larisa miropolsky wendy garrett and curtis huttenhower metagenomic biomarker discovery and explanation genome biol nicola segata levi waldron annalisa ballarini vagheesh narasimhan olivier jousson and curtis huttenhower metagenomic microbial community profiling using unique marker genes nature methods august mark smucker james allan and ben carterette a comparison of statistical significance tests for information retrieval evaluation in proceedings of the sixteenth acm conference on conference on information and knowledge management cikm pages new york ny usa acm xiaoquan su jian xu and kang ning efficient search for similar microbial communities based on a novel indexing scheme and similarity score for metagenomic data bioinformatics oct the human microbiome project consortium structure function and diversity of the healthy human microbiome nature june gene tyson jarrod chapman philip hugenholtz eric allen rachna ram paul richardson victor solovyev edward rubin daniel rokhsar and jillian banfield community structure and metabolism through reconstruction of microbial genomes from the environment nature february niko and simon puglisi distributed string mining for sequencing data in workshop on algorithms in bioinformatics wabi lncs pages james robert white niranjan nagarajan and mihai pop statistical methods for detecting differentially abundant features in clinical metagenomic samples plos comput biol apr yiming yang and jan pedersen a comparative study on feature selection in text categorization in proceedings of the fourteenth international conference on machine learning icml pages morgan kaufmann publishers
amenable uniformly recurrent subgroups and lattice embeddings mar adrien le boudec abstract we study lattice embeddings for the class of countable groups defined by the property that the largest amenable uniformly recurrent subgroup is continuous when comes from an extremely proximal action and the envelope of is in we obtain restrictions on the locally compact groups g that contain a copy of as a lattice notably regarding normal subgroups of g product decompositions of g and more generally dense mappings from g to a product of locally compact groups we then focus on a family of finitely generated groups acting on trees within this class and show that these embed as cocompact irreducible lattices in some locally compact wreath products this provides examples of finitely generated simple groups to a wreath product c f where c is a finite group and f a free group keywords lattices locally compact groups strongly proximal actions chabauty space groups acting on trees irreducible lattices in wreath products introduction the questions considered in this article fall into the setting of the following general problem given a class of countable group study the locally compact groups g such that embeds as a lattice in g such that sits as a discrete subgroup of g and carries a probability measure malcev showed that every finitely generated torsion free nilpotent group embeds as a cocompact lattice in a unique simply connected nilpotent lie group ch ii conversely if g is a locally compact group with a finitely generated nilpotent lattice then after modding out by a compact normal subgroup the identity component is a lie group of polynomial growth these have been characterized in and is finitely generated and virtually nilpotent this statement is a combination of several works first if g has a finitely generated nilpotent lattice then is necessarily cocompact in since is virtually torsion free this is a classical fact when g is totally disconnected and the general case can be deduced from prop which uses notably the solution of hilbert s fifth problem in particular g is compactly generated with polynomial growth and the statement then follows from the generalization of gromov s polynomial growth theorem for locally compact groups beyond the nilpotent case examples of classifications of embeddings of as a cocompact lattice have been obtained by dymarz in for several families date march this work was carried out when the author was postdoctoral researcher current affiliation cnrs umpa ens lyon adrien le boudec of examples of solvable groups although not directly related to our concerns we also mention that a certain dual problem was considered by in for the class of amenable groups outside the setting of amenable groups furman addressed the above problem for the class of lattices in lie groups in improving rigidity results of mostow prasad margulis see the references in see also furstenberg in considered a large class of countable groups defined by certain group theoretic conditions and established given a lattice embedding of in g a general arithmeticity result in the setting where the connected component of g is in this article we consider the class of groups whose furstenberg uniformly recurrent subgroup is continuous see below for definitions in the first part of the article we address the question to what extent the properties of the furstenberg uniformly recurrent subgroup of a countable group influence the locally compact groups into which embeds as a lattice in the second part we focus on a 
family of finitely generated groups within this class which embed as cocompact irreducible lattices in some locally compact wreath products the groups under consideration for a countable group the chabauty space sub of all subgroups of is a compact space on which acts by conjugation a uniformly recurrent subgroup urs of is a closed minimal subset of sub glasner and weiss showed that every minimal action of on a compact space x gives rise to a urs see proposition called the stabilizer urs associated to the action conversely every urs arises as the stabilizer urs of a minimal action see matte and elek in the case of finitely generated groups urs s have been shown to be related to the study of ideals in reduced group c algebras and reduced crossed products urs s of several classes of groups have been studied in for certain examples of groups rigidity results about minimal actions on compact spaces have been obtained in from a complete description of the space urs various results about homomorphisms between topological full groups of groupoids notably obstructions involving invariants of the groupoids have been obtained in via urs s considerations more precisely via a complete description of the points in the chabauty space of these groups whose orbit does not approach the trivial subgroup in the present article we will make use of urs s as a tool in order to study lattice embeddings for a class of countable groups that we now define a urs is amenable if it consists of amenable subgroups every countable group admits a largest amenable urs with respect to a natural partial order on urs see which is the stabilizer urs associated to the action of on its furstenberg boundary see for definitions the urs is called the furstenberg urs of is either a point in which case we have rad where rad is the amenable radical of or homeomorphic to a cantor space in this last case we say that is continuous we refer to for a more detailed discussion let c denote the class of groups for which the furstenberg urs is continuous equivalently a group belongs to c if and only if admits an amenable urs whose envelope is not amenable see below for the definition of the amenable urs s and lattice embeddings envelope the class c is disjoint from all classes of groups previously mentioned in the introduction more precisely the class c is disjoint from the class of amenable groups the class of linear groups and also from other classes of groups specifically considered in such as groups with numbers or acylindrically hyperbolic groups see th and th the class c is stable under taking quotient by an amenable normal subgroup and extension by an amenable group prop also if has a normal subgroup that is in c then belongs to c prop by a result of the complement of the class c is also stable under extensions see prop the study of this class of groups is also motivated by the work of kennedy who showed the following characterization a countable group belongs to c if and only if the group has a reduced c that is not simple for an introduction and the historical developments of the problem of c we refer to the survey of de la harpe topological boundaries we will make use of the notion of topological boundary in the sense of furstenberg these are compact spaces with a minimal and strongly proximal group action see for definitions many different notions of boundaries appear in the study of groups and group actions what is now sometimes called boundary theory is particularly well described in the introduction of we insist that in the 
present article the term boundary will always refer to a topological boundary in the above sense this notion should not be confused with any of the measured notions of boundaries in particular despite the possibly confusing terminology the maximal topological boundary called the furstenberg boundary is not the same notion as the measured notion of boundary lattices and direct products special attention will be given to products of locally compact groups the study of lattices in product groups is motivated among other things by its connections with the theory of lattices in lie groups its rich geometric aspects as well as the instances of groups with rare properties appearing in this setting we refer to the literature see for developments over the last years on the study of lattices in products of locally compact groups given a countable group with a continuous furstenberg urs and a group g containing as a lattice we are interested in understanding how close the group g can be from a direct product of two groups or which properties the group g can share with a direct product of course various notions of closeness can be considered the most basic one is to ask whether the group g admits decompositions as a direct product one step further one might consider quotient morphisms from g onto direct products of groups in theorems and below we more generally consider continuous morphisms with dense image from g to a direct product of groups g we make no assumption about injectivity of these maps or injectivity of the composition with the projection to one factor adrien le boudec gi in particular this setting allows maps of the form g for closed normal subgroups such that is dense in first results a central notion in this article is the one of extremely proximal action minimal and extremely proximal actions naturally arise in geometric group theory and are boundaries in the sense of furstenberg we refer to for definitions and examples we say that the furstenberg urs of a countable group comes from an extremely proximal action if there exists a compact space z and a on z that is minimal and extremely proximal whose associated stabilizer urs is equal to note that typically z will not be the furstenberg boundary of if h is a urs of the envelope env h of h is by definition the subgroup of generated by all the subgroups h theorem let be a countable group whose furstenberg urs comes from a faithful and extremely proximal action and let g be a locally compact group containing as a lattice the following hold a assume that env is finitely generated and in then g can not be a direct product g of two groups b assume that env has finite index in and finite abelianization then any continuous morphism with dense image from g to a product of locally compact groups g is such that one factor gi is compact the same conclusions hold for any group commensurable to g up to compact kernels this result has applications to the setting of groups acting on trees see corollary we make several comments about the theorem we do not assume that is finitely generated nor that g is compactly generated for statement a the assumption that env if finitely generated admits variations see theorem making an assumption on the size of the envelope of with respect to is natural in the sense that in general there is no hope to derive any conclusion on the entire group if this envelope is too small an extreme illustration of this is that there are groups whose furstenberg urs comes from a faithful and extremely proximal action but is trivial 
and these can be lattices in products psl z inside psl r psl qp see also the discussion right after corollary under the assumption that env is in the fact that the furstenberg urs comes from a faithful and extremely proximal action is equivalent to asking that the action of on is faithful and extremely proximal see remark this provides an intrinsic reformulation of the assumption not appealing to any auxiliary space for as in the theorem the assumption in statement b that env has finite index in and env has finite abelianization is equivalent to being virtually simple see proposition the urs approach to study lattice embeddings allows to consider more generally subgroups of finite covolume recall that a closed subgroup h of a locally amenable urs s and lattice embeddings compact group g has finite covolume in g if carries a probability measure thus a lattice is a discrete subgroup of finite covolume before stating the following result we need some terminology recall the notion of disjointness introduced by furstenberg in if x y are compact x and y are disjoint if whenever is a compact and x and y are continuous equivariant surjective maps the map x y that makes the natural diagram commute remains surjective see when x y are minimal this is equivalent to asking that the diagonal on the product x y is minimal consider the following property two are never disjoint a group with this property will be called boundary indivisible glasner characterized minimal compact which are disjoint from all as those carrying a fully supported measure whose orbit closure in the space of probability measures is minimal th the relation between disjointness and boundaries that we consider here is of different spirit as it deals with disjointness within the class of rather than disjointness from this class locally compact groups with a cocompact amenable maximal subgroup are examples of boundary indivisible groups prop on the contrary many discrete groups are not boundary indivisible the relevance of this property in our setting comes from the fact that as we will show in proposition a discrete group as in theorem is boundary indivisible actually the only examples of boundary indivisible discrete groups that we are aware of fall into the setting of proposition recall that a convex compact is irreducible if it does not contain any proper closed convex subspace we say that a subgroup l of a topological group g is weakly in g if whenever q is a convex compact in which l fixes a point q is not irreducible this is indeed a weakening of the notion of which asks that every convex compact q with points has points and hence q is not irreducible unless trivial if g has a subgroup that is both amenable and weakly then g is amenable and a normal weakly subgroup is coamenable however in general weak does not imply even for discrete groups in we exhibit examples of finitely generated groups such that every subgroup is either amenable or weakly but having subgroups that are not finally we say that a subgroup l g is if there exists a on which l acts minimally we refer to for context and examples theorem let h be a locally compact group with an amenable urs that comes from an extremely proximal action and whose envelope is in let g be a locally compact group containing h as a closed subgroup of finite covolume then g is boundary indivisible and the following hold a whenever g is a continuous morphism with dense image one factor gi is amenable not a relative version of a notion of weak amenability adrien le boudec b if l is a 
subgroup of g and l is uniformly recurrent then l is weakly in in particular every normal subgroup of g is in again we make several comments the group h is allowed to be discrete so the theorem applies for all groups as in theorem while a will be an intermediate step in the proof of theorem b provides additional information that is rather independent of the conclusion of theorem statement a implies that whenever are closed normal subgroups of g such that is dense in g at least one ni must be in the last sentence in b implies that if n is a closed normal subgroup of g such that n cn is open in g where cn is the centralizer of n then n is either amenable or in g see proposition we do not know whether the condition that n cn is open can be removed theorem does not say anything about amenable normal subgroups of it is worst pointing out that as illustrated by the examples discussed in section it happens that a discrete group satisfying the assumptions of theorem and with trivial amenable radical sits as a lattice in a group g with noncompact amenable radical remark below provides showing that in statement b the conclusion can not be strengthened by saying that l is in for h g as in the theorem it happens that g splits as a direct product of two groups even under the additional assumption that the amenable urs of h comes from a faithful extremely proximal action see example so that amenable can not be replaced by compact in statement a we view the above remarks as illustrations of the limitations of the use of topological boundaries and urs s to the problem addressed here in the rather abstract setting of theorem group actions on trees is a natural source of extremely proximal actions and theorems and find applications in this setting in the following statement t is a locally finite simplicial tree corollary let aut t be a countable group having no proper invariant subtree and no finite orbit in t assume that is and amenable for all and is virtually simple if g is a locally compact group containing as a lattice then a any continuous morphism with dense image g is such that one factor gi is compact in particular g itself can not be a direct product of two groups b the conclusion b of theorem holds for a group as in corollary is never discrete in aut t recall that burger and mozes constructed simple groups acting on two locally finite regular trees t t such that the image of in aut t and aut t are but acts amenable urs s and lattice embeddings freely and cocompactly on t t so that is a cocompact lattice in the product aut t aut t these examples illustrate the fact that the assumption in corollary that are all is essential examples of groups to which corollary applies can be found among the family of groups denoted g f f in see corollary these are examples of groups with a continuous furstenberg urs here f sym d is a finite permutation group and f is a simply transitive or regular subgroup of f the group g f f is then a finitely generated group acting on a tree transitively on vertices and edges and with local action at every vertex isomorphic to f we refer to for a definition the normal subgroup structure of these groups is highly sensible to the permutation groups there are permutation groups f f such that g f f virtually admits a free quotient proposition and there are permutation groups f f such that g f f the subgroup of index two in g f f preserving the bipartition of td is simple cor this family of groups and the family of lattices in the product of two trees both contain instances of 
finitely generated simple groups which embed densely in some universal group u f despite these similarities corollary shows that any group containing a virtually simple g f f as a lattice is rather allergic to any direct product behavior compare with theorem we also mention that other examples of groups to which corollary can be applied may be found among the family of piecewise prescribed tree automorphism groups considered in sec irreducible lattices in wreath products leaving aside the previous abstract situation we then focus on the family of groups g f f see for definitions the above mentioned common properties between the discrete groups g f f and certain lattices in the product of two trees provide motivation for studying which locally compact groups can contain a group g f f as a lattice the contribution of this article to this problem is on the one hand the conclusions given by corollary see corollary and on the other hand to describe embeddings of these groups as irreducible lattices in some locally compact wreath products if h is a group acting on a set and a is a subgroup of a group b the semirestricted permutational wreath product b h introduced by cornulier in a is the product b h where b a is the set of functions f b such that f x a for all but finitely many x and h acts on b a in the usual way this definition somehow interpolates between the restricted and the unrestricted permutational wreath products which correspond respectively to a in which case we will write b h and a b when b h are locally compact and a is compact open in b there is a natural locally compact group topology on b h see we call a lattice in b h irreducible if has a projection to the terminology is motivated by the fact that this definition prevents and more generally any subgroup commensurable with from being of the form where and are lattices in b a and in the following statement ck is the cyclic group of order k sk the symmetric group on k elements vd the vertex set of a tree td and g f f the subgroup of index two in g f f preserving the bipartition of td adrien le boudec theorem let d f f sym d permutation groups such that f acts freely and n the index of f in f then the following hold a the group g f f embeds as an irreducible cocompact lattice in the semis restricted permutational wreath product gn d sn aut td b when f is a transitive permutation group the finitely generated group d cn cd cd cd and g f f have isometric cayley graphs we note that no finite index subgroup of gn d can split as a product but the stabilizer of an edge of td in gn d for the projection action on td is an open subgroup which does split as a direct product of two groups v s the embedding of g f f in gn d snd aut td is not the inclusion in the subgroup aut td but a twisted embedding associated to the cocycle given by the local action on td see section for details we also note that the image v s of g f f in gn d does not intersect the amenable radical snd but does intersect the subgroup aut td along a cocompact lattice of aut td the case n is particular as the group d is actually a restricted wreath product and in this situation the group g f f is an irreducible cocompact lattice in aut td aut td applications recall that the property of being virtually simple is not invariant by indeed the lattices constructed by burger and mozes in show that a virtually simple finitely generated group may have the same cayley graph as a product of two finitely generated free groups theorem together with simplicity results from provide 
another illustration of this fact namely finitely generated simple groups having the same cayley graph as a wreath product the wreath product construction is already known to be a source of examples of finitely generated groups whose algebraic properties are not reflected in their cayley graphs two wreath products and may have isometric or one being solvable or torsion free while no finite index subgroup of the second has these properties the phenomenon exhibited in theorem is nonetheless very different in the sense that it provides finitely generated groups with isometric cayley graphs such that one is a wreath product but the other is simple and hence not commensurable with a wreath product recall that for finitely generated groups being amenable is a invariant by contrast theorem implies corollary among finitely generated groups the property of having infinite amenable radical is not invariant by the examples from theorem simultaneously show that having an infinite elliptic radical is also not invariant by recall that the elliptic radical of a discrete group is the largest locally finite normal subgroup recall that by a theorem of any finitely generated group that is to a wreath product c where c is a finite group must act properly and cocompactly on a graph dl m m by the algebraic description of the isometry groups of these graphs given in see also this implies in particular that has a subgroup amenable urs s and lattice embeddings of index at most two that is locally finite by contrast theorem shows that this rigidity fails in the case of c fk when k questions we end this introduction with two questions extreme proximality is used in a crucial way at different stages of the proofs of theorems and these results both fail without the extreme proximality assumption simply because then the group itself may very well be a direct product putting aside these trivial we do not know whether serious algebraic restrictions on a locally compact group may be derived from the existence of a lattice with a continuous furstenberg urs in this direction we find the following question natural question does there exist with a continuous furstenberg urs which is a lattice in a group g such both factors are and has an injective and dense projection to each factor what if we impose moreover that has trivial amenable radical theorem presents a situation of a locally compact group g with two cocompact lattices g such that the stabilizer urs associated to the on g is rad while the stabilizer urs associated to the on g is continuous here g stands for the furstenberg boundary of g see in these examples the group g splits as g n q where n is the amenable radical of the lattice preserves this splitting meaning that we have n and hence does not act faithfully on g while has an injective projection to q this naturally raises the following question let g be a locally compact group with two lattices and both acting faithfully on x is it possible that the on x is topologically free but the on x is not topologically free can this happen with g being a homogeneous note that by prop the condition that and act faithfully on g is equivalent to saying that and have trivial amenable radical recall that topologically free means that there is a dense subset of points having trivial stabilizer equivalently the stabilizer urs is trivial outline of proofs and organization the article is organized as follows in the next section we introduce terminology and preliminary results about topological boundaries and extremely proximal 
actions in section we establish the results about uniformly recurrent subgroups that are used in later sections in particular we prove a certain gap property for urs s coming from extremely proximal actions proposition combined with an observation about compact spaces with comparable stabilizer urs s proposition we deduce that a locally compact group h with an amenable urs that comes from an extremely proximal action and whose envelope is in h is boundary indivisible proposition the setting of section is that of a group admitting a free extremely proximal action we establish intermediate results notably concerning normal subgroups proposition and commensurated subgroups proposition and deduce results for this class of groups see proposition and corollary in section we use results from section together with proposition of furstenberg and prove theorem we then specify to discrete groups and give adrien le boudec the proof of theorem the proof essentially splits in two steps the first one is the application of theorem to obtain amenability of one factor and the second consists in proving that under appropriate assumptions the amenable factor is compact using results from section in section we consider groups acting on trees and apply previous results of the article to this setting after giving the proof of corollary we focus on the family of groups with prescribed local action g f f we study boundaries of these groups and use results from section in order to characterize the discrete groups within this family which are boundary indivisible see theorem this includes those which are virtually simple but this also contain simple instances finally we study lattice embeddings of these groups and give the proof of theorem acknowledgements i am grateful to alex furman for pointing out proposition to my attention and to uri bader for enlightening discussion about the proof i am also grateful to caprace yves cornulier bruno duchesne matte bon nicolas monod and pierre pansu for interesting discussions and comments related to this work finally i am indebted to alain valette for a decisive remark made in in may which eventually led to theorem preliminaries conventions and terminology the letter g will usually refer to a topological group while will denote a discrete group the group of homeomorphic automorphisms of g will be denoted aut g whenever g is a locally compact group we will always assume that g is second countable the notation x will refer to a topological space the letters x y will be reserved for compact spaces and z for a compact space equipped with an extremely proximal group action all compact spaces are assumed to be hausdorff a space x is a if g admits a continuous action g x x the action of g on x or the x is minimal if all orbits are dense the x is said to be trivial if x is a space if x is locally compact we denote by prob x the set of all regular borel probability measures on x the space of continuous compactly supported functions on x is denoted ck x each prob x defines a linear functional on ck x and we endow prob x with the weak a net converges to if f f for all f ck x by theorem prob x is relatively compact in ck x we denote by the set of all closed subsets of x the sets n o o k un c c k c ui for all i where k x is compact and un x are open form a basis for the chabauty topology on endowed with the chabauty topology the space is compact we will freely identify x with its image in by the natural inclusion x x note that when x is a so is in the particular case where x g is a locally 
compact group the space sub g of closed subgroups of g is closed in in particular sub g is a compact space on which g acts by conjugation a uniformly recurrent subgroup urs of amenable urs s and lattice embeddings g is a closed minimal subset of sub g the set of urs s of g is denoted urs g by extension we also say that a subgroup h g is uniformly recurrent if the closure of the conjugacy class of h in sub g is minimal topological boundaries if x is a compact the action of g on x is strongly proximal if the closure of any in prob x contains a dirac measure strong proximality is stable under taking products with diagonal action and continuous equivariant images see we say that x is a boundary if x is both minimal and strongly proximal for every topological group g there exists a unique boundary g with the universal property that for any boundary x there exists a continuous surjection g x prop this universal space g is referred to as the furstenberg boundary of it is easy to verify that any amenable normal subgroup n of g acts trivially on any so that g if g admits a cocompact amenable subgroup then the furstenberg boundary is a homogeneous space g and the of the form with l containing h are precisely the prop the situation for discrete groups is quite different as shown in and furstenberg boundaries of discrete groups are always unless trivial the following is a fundamental property of boundaries see theorem any convex compact contains a boundary in fact if q is an irreducible convex compact then the action of g on q is strongly proximal and the closure of extreme points of q is a irreducible means that q has no proper closed convex subspace in particular theorem has the following consequence theorem a group g is amenable if and only if all are trivial or equivalently g is trivial extremely proximal actions let x be a compact a closed subset c of x is compressible if the closure of the of c in the space contains a singleton x equivalently for every neighbourhood u of x there exists g g such that g c u the action of g on x is extremely proximal if every closed subset c x is compressible references where extremely proximal actions were considered include we will make use of the following result which is theorem from theorem let x be a compact and assume x has at least three points if the on x is extremely proximal then it is strongly proximal examples of extremely proximal actions are provided by group actions on trees or hyperbolic spaces if g aut t acts on t with no proper invariant subtree and no finite orbit in t then the action of g on is minimal and extremely proximal and if g acts coboundedly on a proper geodesic hyperbolic space x with no fixed point or fixed pair at infinity then the on the gromov boundary is minimal and extremely proximal these two situations are particular cases of the following more general result that we believe is a homeomorphism g of a space x is hyperbolic if there exist x called the endpoints of g such that for all neighbourhoods adrien le boudec of for n large enough we have gn x and x proposition if g acts on a compact space x with hyperbolic elements having no common endpoints and such that the set of endpoints of hyperbolic elements of g is dense in x then the action is minimal and extremely proximal proof let u x be a open invariant subset by our density assumption there is g g hyperbolic whose attracting endpoint belongs to u so for every x there is n such that gn x u since u is open so we deduce that u contains x but the existence of hyperbolic elements 
with no common endpoints ensures that g fixes no point of x so finally u x the action is minimal now if c x is a closed subset then again there is g g whose attracting endpoint is outside c and c is compressible to the repealing endpoint of recent work of duchesne and monod shows that group actions on dendrites is also a source of extremely proximal actions recall that a dendrite x is a compact metrizable space such that any two points are the extremities of a unique arc duchesne and monod show that if acts on x with no invariant proper subdendrite then there is a unique minimal closed invariant subset m x and the on m is extremely proximal see the proof of theorem in extremely proximal actions also play a prominent role in the context of group actions on the circle for any minimal action either is conjugated to a group of rotations or has a finite centralizer in and the action of on the quotient circle is extremely proximal see ghys and margulis we mention however that in all the examples of countable groups with an action on that is minimal and not topologically free that we are aware of the stabilizer urs is either or not known to be amenable in particular we do not know any application of theorem to groups acting on the circle in the sequel we will make use of the following easy lemma lemma let g be a topological group and h a subgroup of g such that there is some compact subset k of g such that g kh let x be a compact and c a closed subset of x that is compressible by then c is compressible by in particular if the on x is extremely proximal then the is extremely proximal proof by assumption there exists x x and gi such that gi c converges to x in if gi ki hi by compactness of k we assume that ki converges to some k and it follows that hi c converges to x by continuity of g uniformly recurrent subgroups generalities on uniformly recurrent subgroups let g be a locally compact group for h k urs g we write h k when there exist h h and k k such that h this is equivalent to the fact that every h h is contained in an element of k and every k k contains an element of h and the relation is an order on urs g see in amenable urs s and lattice embeddings for simplicity the urs n associated to a closed normal subgroup n of g will still be denoted n in particular n h resp h n means that n is contained in resp contains all the elements of by the trivial urs we mean the urs corresponding to the trivial subgroup we warn the reader that in this terminology the urs corresponding to a normal subgroup n is trivial as a it is a space but is not trivial as a urs let x y be compact we say that x is a factor of y and y is an extension of x if there exists a continuous equivariant map y x that is onto if y x is a continuous equivariant map we say that is almost if the set of y y such that y y is dense in y when moreover is onto we say that y is an almost extension of x we now recall the definition of the stabilizer urs associated to a minimal action on a compact space if x is a compact and x x we denote by gx the stabilizer of x in definition if x is a compact we denote by x the set of points at which stab x sub g x gx is continuous upper of the map stab and second countability of g imply that is a dense subset of x indeed if un is a basis of the topology on sub g and xn is the set of x x such that gx un which is closed one verifies that stab is continuous on c following we denote cls x gx x x sub g and sg x cls gx x sub g where cls stands for the closure in the ambient space we have the obvious inclusions x 
cls x gx x x and sg x sg x cls gx x x we denote by and the projections from x sub g to x and sub g respectively proposition prop in if x is a minimal compact then x is an almost extension and and sg x are the unique minimal x closed subsets of respectively x and sg definition sg x is the stabilizer urs associated to the on x the action of g on x is topologically free if sg x is trivial sg x remark when g is not assumed second countable in general is no longer dense in x however it is still possible to define the stabilizer urs associated to a minimal action on a compact space see the discussion in in the sequel we will sometimes use the following version of proposition proposition let x be compact and let h g be a subgroup acting minimally on x then h acts minimally on and sg x and and sg x x are the unique minimal closed subsets of x and sg adrien le boudec proof let y x closed and since x is a factor of x and h acts minimally on x for every x there exists l sub g such that x l y but for x the fact that x l belongs to x forces l to be equal to gx by definition of and it follows that y moreover h acts minimally on since x is an almost extension and minimality is preserved by taking almost extensions if is almost and if c is a closed subset such that c then c so the statements for and x since these are factors x are established and the same hold for sg x and sg of and x envelopes let g be a locally compact group and h urs g definition the envelope env h of h is the closed subgroup of g generated by all the subgroups h by definition env h is the smallest closed subgroup of g such that h sub env h note that env h is a normal subgroup of g and is actually the smallest normal subgroup such that h env h let be a discrete group x a compact and the domain of continuity of the map stab it is a classical fact that consists of those x x such that for every there exists u neighbourhood of x that is fixed by see lem for a proof for x x we will denote by the set of elements fixing a neighbourhood of x so that x if and only if lemma let be a countable discrete group x a compact minimal n and the following are equivalent i ii iii iv has interior there is x x such that for all i same as in ii but with x there is h x such that hfor all i in particular env x is generated by the elements such that fix has interior proof it is clear that i and ii are equivalent also iii clearly implies ii and ii also implies iii by density of in x finally iii implies iv since x for x and iv implies iii by density of the set of x in x with comparable stabilizer urs s we recall the notion of disjointness from two compact x y are disjoint if whenever x y are factors of a compact via x and y then the map x y is surjective when x y are minimal this is equivalent to saying that the product x y remains minimal lem the following lemma presents a situation which easily implies disjointness lemma let x y be minimal compact such that there exists y such that acts minimally on x then x and y are disjoint amenable urs s and lattice embeddings proof this is clear if w is a closed invariant subset of x y then by minimality of y there exists x such that w since acts minimally on x we deduce that w contains x and by minimality of y it follows that w is equal to x y the following proposition will be used notably in proposition proposition let x y be compact minimal and write h sg x and k sg y suppose that h then x and y can be disjoint only if env h in particular if sg x sg y and this urs is not a point then x and y are not disjoint proof using 
notation from proposition we have almost extensions x and y and we write x y and h the set w h k of pairs h k such that h k is by assumption is clearly and is easily seen to be closed if w is a proper subset of h k then w is a proper subset of since is a factor and it follows that w is a closed subset of x y that is proper since is almost this contradicts disjointness of x and y therefore we have w h this means that for a fixed k k we have h k for every h h and hence env h action of a urs on a in this paragraph g still denote a locally compact group and x is a compact given h urs g we study the properties of the action of elements on h on the space x the proof of the following lemma is an easy verification and we leave it to the reader lemma if x is a compact h sub g h fixes a point in x is a closed subset of sub g in particular the following definition makes sense definition let x be a compact and h urs g we say that h fixes a point in x if for some all h h there is x x such that h x x for all h lemma let x be a compact y x a closed invariant subset of x and h urs g if there exists h h fixing x x such that gx y then h fixes a point in y proof by assumption there exist gi and y y such that gi x converges to y if k h is a limit point of h gi which exists by compactness then k fixes y by upper of the stabilizer map lemma implies the following lemma if x is a compact containing a unique minimal closed ginvariant subset xmin x x is proximal and if h urs g fixes a point in x then h fixes a point in xmin proposition let z be a compact that is extremely proximal and h urs g then either h fixes a point in z or all h h act minimally on z adrien le boudec proof if there exist h h and a closed subset c z that is invariant by h then we may apply lemma to the space x the subspace y z and the point x c and we deduce that h fixes a point in z x stands for the closure in sub g recall that given a compact x sg of the set of subgroups gx x lemma let x be a compact assume that k g is a closed x subgroup of g which acts minimally on x and such that there exists h sg with h then env sg x x proof since k acts minimally on x the closure of the of h in sg contains sg x according to proposition since h sub k and sub k is a closed subset of sub g we deduce that sg x sub k and in particular env sg x definition let h urs g we say that h comes from an extremely proximal action if there exists a compact z that is minimal and extremely proximal and such that sg z it was shown in that for a discrete group with a urs h coming from an extremely proximal action any k urs must be relatively large with respect to h see th for a precise statement appropriate assumptions on and h further imply that h k for every k urs cor the following proposition goes in the opposite direction by considering urs s larger than proposition let h urs g that comes from an extremely proximal action let k urs g such that h k and h then env h proof let z be a compact that is minimal and extremely proximal and such that sg z fix k k and assume that k does not act minimally on z according to proposition this implies that the urs k fixes a point in z k since moreover h k satisfy h k by assumption we deduce that h k which is a contradiction therefore k acts minimally on z since moreover there exists h h such that h k we are in position to apply lemma from which the conclusion follows it should be noted that proposition is false without the extreme proximality assumption as in general there are plenty of urs s between h and env h lemma let h urs g that 
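In the same notation, the extreme proximality hypothesis that recurs below can be recorded as follows (a hedged summary, with the standard definition of compressibility).

A compact $G$-space $Z$ is \emph{extremely proximal} if every non-empty closed subset $C \subsetneq Z$ is compressible: there are $g_i \in G$ and $z \in Z$ such that $g_i C$ eventually lies in any given neighbourhood of $z$. An $\mathcal H \in \mathrm{URS}(G)$ \emph{comes from an extremely proximal action} if $\mathcal H = S_G(Z)$ for some compact $G$-space $Z$ that is minimal and extremely proximal; in that case $\mathrm{Env}(\mathcal H)$ acts minimally on $\mathcal H$.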
comes from an extremely proximal action then env h acts minimally on proof let z be a compact that is minimal and extremely proximal and such that sg z h and let n env h without loss of generality we may assume that h is not a point since otherwise there is nothing to prove this ensures that n acts on z by extreme proximality n must act minimally on z see lemma and therefore also on h by proposition remark the extreme proximality assumption can not be removed in lemma indeed it is not true in general that given h urs g h remains a urs of env h indeed as explained in any minimal subshift on two letters amenable urs s and lattice embeddings gives rise to a urs h of the lamplighter group g such that h is contained in the chabauty space sub l of the base group l in particular env h lies inside the abelian group l and it follows that env h acts trivially on proposition let h urs g that comes from an extremely proximal action and assume h is not a point then a the action of g on h gives rise to the same urs sg h b if moreover h comes from a faithful extremely proximal action then the action of g on h is faithful proof write k sg h by definition we have h argue by contradiction and suppose h then applying proposition we deduce that env h acts trivially on but env h also acts minimally on h by lemma so we deduce that h must be a point a contradiction this shows a for b arguing as in the proof of lemma we see that any normal subgroup n of g acts minimally on since h is not a point we have in particular that n acts on remark proposition implies that as far as our interest lies inside the urs associated to a minimal and extremely proximal action and not the space z itself there is no loss of generality in assuming that g z is a of g sub g see also remark amenable urs s recall that we say that h urs g is amenable if every h h is amenable the following lemma already appeared in prop lemma if h urs g is amenable and x is a then h sg x proof since h is amenable h must fix a point in the compact prob x now x is the unique minimal subspace of prob x since x is a gboundary so by lemma we have that h fixes a point in x h sg x proposition let x be a compact minimal such that h sg x is amenable and let y be a such that x and y are disjoint then env h acts trivially on y in particular if env h is in g a is never disjoint with x proof the fact that env h must act trivially on y follows by applying lemma and proposition since an amenable group has no boundary the second statement follows proposition says that when g admits an amenable urs whose envelope is a is never disjoint with x this conclusion is not satisfactory for our concerns as it depends on the choice of a space x and not only on although there is no hope to get a better conclusion in full generality the next result which will play an important role in section will remove this dependence under an extreme proximality assumption we recall from the introduction that we say that g is boundary indivisible if two are never disjoint adrien le boudec proposition assume that g admits an amenable h urs g that comes from an extremely proximal action and let x be a a either sg x h or env h acts trivially on x b assume that env h is in then sg x h and g is boundary indivisible proof a since h is amenable we have h sg x by lemma now if we assume h sg x then according to proposition we have env h sg x which exactly means that env h acts trivially on x b if sg x h then the action of g on x factors through an action of h by a but by assumption the latter is amenable so 
has no boundaries so it follows that x is trivial a contradiction therefore all have the same stabilizer urs since moreover h can not be a point because otherwise g would be amenable the fact that g is boundary indivisible follows from proposition for a countable group the furstenberg urs of is the stabilizer urs associated to the action of on its furstenberg boundary we refer to for the proof of the following properties proposition let be a countable group and its furstenberg urs then the following hold a is amenable and h for every amenable h urs b if x is a then x if moreover there is x x such that is amenable then x c is invariant under aut proposition let be a countable group and let env be the envelope of the furstenberg urs of then acts minimally on and proof the conjugation action of on the normal subgroup env induces a map aut since is invariant under aut by proposition it is in particular moreover the action of on is clearly minimal since it is already the case for therefore is an amenable urs of so it follows that since is larger than any amenable urs of on the other hand is a closed and subset of sub consisting of amenable subgroups so by the domination property applied to we must have equality follows remark when env is in the fact that comes from a faithful and extremely proximal action is equivalent to saying that the on is faithful and extremely proximal the direct implication is consequence of proposition and the converse follows from proposition this gives us an intrinsic reformulation of the assumption of theorem inside the chabauty space of extremely proximal actions if x is a hausdorff and u x we denote by the set of elements of acting trivially on x u we say that the action of on x is if is for every open set u we will need the following easy lemma amenable urs s and lattice embeddings lemma assume that the action of on x is and let u be a open set then is not solvable proof assume that is a subgroup of whose action on u is and let v be a open subset of u by assumption there exists a nontrivial so that we may find an open set w v such that w and w are disjoint for the commutator coincides with on w and is therefore provided that is it follows by induction that if n is the term of the derived series of then the action of n on u is in particular n is never trivial and is not solvable in this section we will consider the following setting ep is a discrete group z is a compact and the action of on z is faithful minimal and extremely proximal in order to avoid trivialities we assume that z has at least three points unless specified otherwise in the remaining of this section and z will be assumed to satisfy ep our goal is to derive various properties on the group that will be used in later sections lemma let n homeo z be a subgroup that is normalized by then n acts minimally and does not fix any probability measure on z proof assume there exists c z that is closed and n since c is compressible and n is normalized by wee see that n has a fixed point in z now the set of n points is so it has to be the entire z by minimality and n is trivial the same argument shows the absence of n probability measure on z since an extremely proximal action is also strongly proximal by theorem in all this section the terminology topologically free see definition has to be understood with viewed as a discrete group therefore that the action is not topologically free means that there exists which acts trivially on a open subset of lemma if the action of on z is not topologically free then it is 
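The commutator computation behind the non-solvability lemma above deserves to be displayed. Here is one way to write it, using $\Gamma_V$ for the subgroup of elements acting trivially outside $V$ and $\Lambda_V = \Lambda \cap \Gamma_V$ (these symbols are chosen for readability and need not match the original notation).

Let $\Lambda$ be a subgroup whose action on the open set $U$ is micro-supported, and let $V \subseteq U$ be open and non-empty. Pick a non-trivial $\lambda \in \Lambda_V$, an open $W \subseteq V$ with $W \cap \lambda W = \emptyset$, and a non-trivial $\mu \in \Lambda_W$. Since $\lambda\mu\lambda^{-1}$ is supported in $\lambda W$, the commutator
\[
[\lambda, \mu] \;=\; (\lambda \mu \lambda^{-1})\,\mu^{-1}
\]
coincides with $\mu^{-1}$ on $W$, hence is non-trivial, and it is supported in $W \cup \lambda W \subseteq V$. Thus $[\Lambda, \Lambda]_V \neq \{1\}$ for every non-empty open $V \subseteq U$, i.e. the derived subgroup again acts micro-supportedly on $U$; iterating, no term $\Lambda^{(n)}$ of the derived series is trivial, so $\Lambda$ (in particular $\Gamma_U$) is not solvable.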
microsupported proof let u be a open subset of z let be a element such that there is a open set v on which acts trivially and let g such that g x v u then the element acts trivially outside u so is definition let be the subgroup of generated by the elements such that fix has interior remark when is a countable group is also equal to the envelope of the urs z by lemma recall that the monolith mon is the intersection of all normal subgroups of we say is monolithic if mon is proposition assume that the action of on z is not topologically free then the following hold adrien le boudec a the commutators where fix fix has interior generate b is monolithic and one has mon c any normal subgroup of has trivial centralizer d if the action of on z is extremely proximal then is a simple group e is virtually simple if and only if has finite index in and finite abelianization proof a denote by n the subgroup generated by the set of where act trivially on a common open set we show that for every g h fixing open sets u v the commutator g h belongs to n since is generated by all these elements g h this will show that is abelian hence n and the other inclusion is clear first note that n is not trivial by lemmas and therefore n acts minimally on z according to lemma and we may find s n such that the open set w u s v is since g and hs fix w by construction we have g hs n but since s n we deduce that g h h s g g hs s h is a product of three elements of n and hence belongs to n as desired b we shall show that any normal subgroup n contains since is itself a normal subgroup this will prove that it is the monolith of by a classical commutator manipulation see lemma from there exists an open set u such that n contains the derived subgroup of now let fixing an open set v if is such that z v u then are supported inside u so that is contained in n since n is normal n now all these elements generate by a hence the conclusion c if n is a normal subgroup of then so is n therefore they can not be both because otherwise the intersection would be abelian and would contain by the previous paragraph a contradiction d if the action of on z is extremely proximal then according to b the monolith n of is since n is characteristic in n is normal in and hence contains by b so n and is simple e for to be virtually simple it is clearly necessary that the normal subgroup has finite index in conversely if this condition holds then the action of on z is extremely proximal lemma and is simple by d definition let x be a topological space and let be a group acting on x a open set x is wandering for if the translates are pairwise disjoint we say that is wandering for if it is wandering for proposition let and z as in ep then there exist an open set and a free subgroup such that is wandering for proof following glasner we consider pairwise disjoint open sets and elements a b such that a and b let w it follows from a argument than any reduced word in the letters a b sends the complement of w inside w so that the subgroup generated by a b is free th amenable urs s and lattice embeddings upon reducing w if necessary we may find an open set such that w and a b and induction on the word length shows that if the letter of is respectively a b then lies respectively inside in particular is empty since is disjoint from w so is wandering for proposition retain notations ep then the wreath product embeds into for every open subset u z that is not dense proof let and as in proposition and let be the subgroup of generated by and since is wandering for 
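The free subgroup with a wandering open set constructed above comes from the classical table-tennis argument; in outline (a sketch, with ad hoc names $A_\pm$, $B_\pm$ for the four open sets):

Choose pairwise disjoint non-empty open sets $A_-, A_+, B_-, B_+ \subset Z$ whose union $W$ is not dense, and, using compressibility and minimality, elements $a, b \in \Gamma$ with
\[
a\,(Z \setminus A_-) \subseteq A_+ , \qquad b\,(Z \setminus B_-) \subseteq B_+
\]
(the inverse conditions $a^{-1}(Z \setminus A_+) \subseteq A_-$ and $b^{-1}(Z \setminus B_+) \subseteq B_-$ follow automatically, since $a$ and $b$ are bijections of $Z$). An induction on word length shows that every non-trivial reduced word $\gamma$ in $a^{\pm 1}, b^{\pm 1}$ maps $Z \setminus W$ into the set attached to its first letter; in particular $\langle a, b \rangle$ is free, and any non-empty open $\Omega \subseteq Z \setminus W$ satisfies $\gamma \Omega \cap \Omega = \emptyset$ for every $\gamma \neq 1$, i.e. $\Omega$ is wandering for $\langle a, b \rangle$.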
all the conjugates pairwise commute and it follows that is isomorphic to now if u is an in the statement by extreme proximality the group is isomorphic to a subgroup of hence the conclusion the argument in the following proof is borrowed from proposition in the setting ep if the action of on z is not topologically free then can not have any faithful linear representation proof let u be an open subset of z by lemmas and we may find a finitely generated subgroup b inside now if we choose u small enough it follows from proposition that the finitely generated group b is isomorphic to a subgroup of since is not residually finite it admits no faithful linear representation by malcev s theorem and a fortiori the same is true for recall that a subgroup of a group is commensurated if all conjugates of are commensurable where two subgroups are commensurable if the intersection has finite index in both the beginning of the argument in the proof of the following proposition already appeared in the idea is to extend classical techniques for normal subgroups to certain commensurated subgroups proposition retain notation from ep and assume that the action of on z is not topologically free if is a commensurated subgroup of such that there exists an element of admitting a wandering open set then contains the monolith mon proof let admitting a wandering open set we shall first prove that is contained in let g h and let also n since is wandering we have it follows that the commutator g is trivial outside and coincides with g on and with on therefore its commutator with h is trivial outside and is coincides with g h on but since g h the elements g h and g h actually coincide everywhere g h g h now since is commensurated in there exists such that g h belongs to applying the previous argument with n we deduce that g h belongs to in order to prove the statement it is enough to prove that is contained in for every closed subset c z according to proposition so let c a proper closed subset of z by minimality and extreme proximality there is such adrien le boudec that c fix such a and choose some integer such that belongs to set and this is wandering for and so is contained in by the first paragraph since c we have and the proof is complete proposition assume that z is a compact and the action of on z is faithful minimal extremely proximal and not topologically free then there exists a free subgroup such that for every commensurated subgroup not containing the monolith mon we have proof let be a free subgroup of as in the conclusion of proposition if is a commensurated subgroup such that then in particular contains an element admitting a wandering open set so by proposition we have mon this shows that every commensurated subgroup not containing mon intersects trivially corollary assume that z is a compact and the action of on z is faithful minimal extremely proximal and not topologically free if g is a locally compact amenable group whose connected component is a lie group then there exists no injective homomorphism proof argue by contradiction and assume embeds in let u be an open subgroup of g containing as a cocompact subgroup such a u is commensurated in g so the subgroup u is commensurated in if there exists u such that u does not contain mon then according to proposition we may find a free subgroup such that u in particular g contains the group as a discrete subgroup which contradicts amenability of therefore mon is contained in u for every choice of u since compact open subgroups form a basis at in by van 
Dantzig's theorem, it follows that Mon(Γ) actually lies inside the connected component G₀ of G. Now, since G₀ is a connected Lie group, the group Aut(G₀) is linear, so the map G → Aut(G₀) induced by the conjugation action of G on G₀ is not injective in restriction to Γ, by the proposition above on faithful linear representations. Therefore this map must vanish on Mon(Γ), which means that Mon(Γ) actually lies inside the centre of G₀; in particular Mon(Γ) is abelian, which contradicts the earlier proposition.

The proofs of the main theorems. Boundary-minimal subgroups. In this paragraph we consider the following property. Definition. Let G be a topological group and L a closed subgroup of G. We say that L is boundary-minimal if there exists a G-boundary on which L acts minimally.

It should be noted that being boundary-minimal does not prevent L from being amenable: for instance the action of Thompson's group T on the circle is a boundary action, and the abelian subgroup of T consisting of rotations acts minimally on the circle. Other examples may be found among the groups acting on trees considered in the references, where the stabilizer of a vertex is an amenable subgroup acting minimally on the ends of the tree. In the sequel we will mainly focus on the case when L is normal in G, or more generally when L belongs to a URS (see the proposition below). By contrast with the previous examples, a boundary-minimal normal subgroup is never amenable, as a normal amenable subgroup of G acts trivially on any G-boundary.

Recall that Furman showed (see the references) that if N is a non-amenable normal subgroup of a locally compact group G, there always exists a G-boundary on which N acts non-trivially. This naturally raises the question whether any non-amenable normal subgroup of G is boundary-minimal. We do not know the answer to this question: while the case of discrete groups is easily settled (see below), the situation for non-discrete groups seems to be more delicate.

We recall the following result of Furstenberg (see the references). Theorem. Let G be a topological group, and denote by π : G → Homeo(∂_F G) the action of G on its Furstenberg boundary. Then there exists a homomorphism π̄ : Aut(G) → Homeo(∂_F G) such that π̄ ∘ Inn = π, where Inn : G → Aut(G) is the group of inner automorphisms of G. In particular, when N is a normal subgroup of a group G, the map G → Aut(N) coming from the conjugation action of G on N induces an action of G on ∂_F N which extends the natural action of N.

Note that this result readily answers the above question for discrete groups, by showing that the boundary-minimal normal subgroups of G are exactly the non-amenable normal subgroups of G: indeed, if N is non-amenable, then ∂_F N is a non-trivial space, N acts minimally on ∂_F N, and ∂_F N is a G-boundary by the theorem. However the argument does not carry over to arbitrary locally compact groups, as in general the G-action on ∂_F N is not continuous.

Proposition. Let G be a locally compact group and N a closed normal non-amenable subgroup. Assume that at least one of the following holds true: (a) N·C_G(N) is open in G (e.g. when N is a direct factor of G); (b) N is cocompact in G; (c) there exists H ∈ URS(N) that is not a point, is invariant under Aut(N), and that is an N-boundary (for instance if N has a closed cocompact amenable subgroup). Then N is boundary-minimal in G.

Proof. Condition (a) ensures that the image of N in G/C_G(N) is open; therefore the G-action on ∂_F N given by the above theorem is continuous, because the N-action is, and we deduce that ∂_F N is a G-boundary. If (b) holds, then N acts minimally on any G-boundary. Finally, the verification of case (c) is straightforward, since G → Aut(N) is continuous and Aut(N) acts continuously on Sub(N).

Weakly co-amenable subgroups. In this paragraph we consider the following weakening of the notion of co-amenability. Definition. Let G be a topological group and H a subgroup of G. We say that H is weakly co-amenable in G if, whenever Q is a non-trivial convex compact G-space in which H fixes a point, Q is not irreducible. The following properties readily follow from the definition. Proposition. Let K ≤ H ≤ G be subgroups of G. (i) If H ≤ G is co-amenable, then H is weakly co-amenable. (ii) For a normal subgroup N ⊴ G, weakly co-amenable is equivalent to co-amenable. (iii) If H ≤ G
is amenable and weakly in g then g is amenable iv if g is continuous with dense image and h g is weakly coamenable then h is weakly in v if k is weakly in g then h is weakly in vi if k is in h and h is weakly in g then k is weakly in proof i if q is convex compact with points then there is a point by of h in g so q is not irreducible ii if n g is not there is a convex q such that fix n is nonempty but fix g is empty since n is normal fix n is so that by zorn s lemma fix n contains an irreducible convex which is since fix g is empty this shows n is not weakly the proofs of iii iv v and vi are similar verifications and we leave them to the reader remark as for it is natural to wonder whether weak coamenability of k in g implies weak of k in in view of ii the same given in show that the answer is negative in general by the correspondence between irreducible convex compact and gboundaries weak admits the following characterization proposition a subgroup h g is weakly in g if and only if for every x there is no probability measure on x that is fixed by proof follows from theorem the following shows how weak naturally appears for boundary indivisible groups see also proposition proposition let g be a boundary indivisible locally compact group and l a closed subgroup of g that is and uniformly recurrent then l is weakly in proof write h for the closure of lg in sub g which is a urs by assumption let x be a on which l acts minimally and let y be a on which l fixes a probability measure we have to show that y is trivial since h fixes a point in prob y and the on prob y is strongly proximal by theorem h fixes a point in y by lemma so there exists y y such that l gy and it follows that gy acts minimally on x therefore by lemma x and y are disjoint and since x is and g is boundary indivisible this is possible only if y is trivial the proof of theorem in this paragraph we shall give the proof of theorem from the introduction we will make use of the following result proposition furstenberg let g be a locally compact group h g a closed subgroup of finite covolume and x a then x is a amenable urs s and lattice embeddings for completeness we repeat the argument from prop proof write q prob x and consider a closed subspace q we have to show that x the set x gh g q is a closed subspace of q fix a probability measure on and consider n o y prob x where is the projection from onto the first factor and is the induced operator then y is a closed and hence compact subspace of prob x and y prob q is continuous so y is closed in prob q and by strong proximality of the on q theorem y must intersect q now x being the unique minimal closed subspace of q one has x y for every x x we therefore have prob x such that and this implies gh x g for every x and it easily follows that x remark in the case when h is cocompact in g strong proximality of the action of h on x also follows from applied to the action on prob x and minimality follows from disjointness of the and x theorem assume that h admits an amenable urs h that comes from an extremely proximal action and such that env h is in let g be a locally compact group containing h as a closed subgroup of finite covolume then a g is boundary indivisible b more generally if l is a locally compact group such that there is a sequence a topological group homomorphisms g gn l such that either gi has dense image or gi is an embedding of gi as a closed subgroup of finite covolume in then l is boundary indivisible in particular whenever g maps continuously and with dense image to a 
product one factor gi must be amenable proof since h is amenable h comes from an extremely proximal action and env h is in h the group h is boundary indivisible by proposition now by proposition the property of being boundary indivisible is inherited from closed subgroups of finite covolume indeed if x y are disjoint gboundaries x y is a then x y is also a boundary for h by proposition hence of x or y must be trivial since h is boundary indivisible this shows a since boundary indivisibility passes to dense continuous images and is inherited from closed subgroups of finite covolume b follows from a finally if are as in the last statement and xi gi then is a boundary for which is boundary indivisible by the previous paragraph so one factor xi must be trivial which exactly means that gi is amenable by theorem adrien le boudec remark in the proof of theorem we obtain that g is boundary indivisible from the same property for h which is itself deduced from proposition which in turn relies notably on proposition we note that the order in which the argument is developed seems to matter in the sense that the arguments applied to h do not seem to be applicable directly to the group indeed we do not know whether a group g as in theorem falls into the setting of proposition we do not know whether all have the same stabilizer urs we actually believe this might be false in general we note at this point that the proof of theorem from the introduction is now complete indeed the fact that a group g as in theorem is boundary indivisible as well as statement a is theorem and statement b follows from proposition the following remark explains a comment from the introduction remark theorem provides instances of countable groups being lattices in a group g of the form g n aut td such that all the assumptions of theorem are satisfied see section if m is a cocompact subgroup of aut td acting minimally on then l n m is a cocompact hence uniformly recurrent subgroup of g and l is in g since is a however when m is then m is not in aut td and l is not in this shows that the conclusion of statement b in theorem that l is weakly in g can not be strengthen by saying that l is in the proof of theorem recall that if g is a topological group the qz g of g is the subgroup of g containing the elements g g having an open centralizer note that qz g contains the elements having a discrete conjugacy class so in particular it contains all discrete normal subgroups recall also that the elliptic radical of g is the largest normal subgroup of g in which every compact subset generates a relatively compact subgroup it is a closed characteristic subgroup of we say that two groups are commensurable up to compact kernels if there exist ki hi gi such that hi is open and of finite index in gi ki is a compact normal subgroup of hi and and are isomorphic the following is slightly more complete than theorem from the introduction theorem let be a countable group whose furstenberg urs comes from a faithful and extremely proximal action and assume that env is in let g be a locally compact group containing as a lattice and h a group commenurable with g up to compact kernels consider the following properties a env is finitely generated b env has finite index in and admits a finitely generated subgroup with finite centralizer c env has finite index in and env has finite abelianization then a b both imply that h can not be a product of two groups amenable urs s and lattice embeddings c implies that any continuous morphism with dense image from h to a 
product of locally compact groups h is such that one factor gi is compact proof for simplicity we give the proof for h the general case follows the same lines of course we may assume that env is since otherwise there is nothing to prove according to proposition we have in particular that is monolithic and mon env env for simplicity in all the proof we write e env and m mon e e assume that g is continuous with dense image and denote by pi the projection gi i we will show that one factor must be compact upon modding out by the maximal compact normal subgroup of the identity component which intersects trivially since has no finite normal subgroup proposition we may also assume that has no compact normal subgroup this implies in particular that is a connected lie group by the assumption that e is in we can apply theorem which says that one factor say must be amenable we then apply corollary which tells us that the map is not injective in restriction to by definition of m we deduce that m assume now that c holds then m being of finite index in is a lattice in g and is contained in the closed normal subgroup therefore we deduce that is cocompact in g and that g is a compact subgroup of since g is also dense in we have that is compact we now have to deal with a b in which case is the identity and g without loss of generality we may assume that the projections pi are dense the proofs of the two cases will share a common mechanism given by the following easy fact lemma if there exists a subgroup l g whose centralizer cg l contains cg l is open in g and cg l is finite then is compact indeed since must intersect an open subgroup o g along a lattice of o it follows that cg l is compact and a fortiori so is we start with case a consider e which is normal in by density of note that is compactly generated in view of the assumption that e is finitely generated since m e e the group is abelian and therefore of the form zn rm c for some compact group it follows that the group admits a discrete cocompact normal subgroup which is an extension of m by a free abelian group being characteristically simple and the group m has trivial elliptic radical so the group also has trivial elliptic radical now since is compactly generated there is a compact open normal subgroup k of such that k has finite index in see lem so we deduce that has a compact open elliptic radical since any connected group has compact elliptic radical we deduce that has a compact elliptic radical r and is the compact group r is also normal in g and therefore we can mod out by r and assume that r is trivial so that is open in since centralizes m any e such that belongs to adrien le boudec centralizes m and therefore is trivial by proposition therefore is open in and intersects the dense subgroup e trivially so it follows that is trivial and e is a discrete subgroup of observe that e is centralized by and normalized by and hence is normal in being a discrete normal subgroup of g e therefore lies in the qz g since e is finitely generated the centralizer of e in g is actually open in moreover the subgroup cg e is normal in since cg e is normal in g but clearly does not contain m and hence is trivial by proposition therefore we can apply lemma with l e and we obtain the conclusion we now deal with b let z be a minimal compact on which the is faithful and extremely proximal and such that z by proposition actually an easy case of it the action of e on z is also minimal and it is extremely proximal by lemma moreover the associated stabilizer urs remains 
equal to and is also the furstenberg urs of e by proposition so e satisfies all the assumptions of case b of the theorem so it is enough to prove the result under the additional assumption in this case we have m thanks to proposition so it follows that is abelian by density of the projection the group is also abelian and hence lies in the center of therefore is normalized by the dense subgroup and it follows that is normal in in particular qz g and the conclusion follows by applying lemma with l a subgroup of such that l is finite groups acting on trees amenable urs s and groups acting on trees in this paragraph t is a locally finite tree and h acts continuously on t by isometries the assumption that t is locally finite is not essential here and the results admit appropriate generalizations for finite trees using the compactification from prop recall that the on t is minimal if there is no proper invariant subtree and of general type if h has no finite orbit in t the following is wellknown and essentially goes back to tits see also and proposition for details proposition if the action of h aut t is minimal and of general type then the action of h on is minimal and extremely proximal theorem therefore implies the following result corollary let h aut t be a locally compact group whose action on t is continuous minimal and of general type assume that end stabilizers are amenable and the envelope of sh is in assume h embeds as a subgroup of finite covolume in then whenever g maps continuously and with dense image to a product one factor gi must be amenable the conclusion of corollary implies in particular that whenever h embeds in g with finite covolume then g can not be a product of two groups the following example which is largely inspired from ex shows that the group g can nonetheless be a product of two groups amenable urs s and lattice embeddings example let k fp t be the field of laurent series over the finite field fp and let aut k be a automorphism of the group l sl k acts on a p tree and with amenable stabilizers on the boundary this action extends to a continuous action of h l z so that h satisfies all the assumptions of corollary nevertheless h embeds diagonally in the product g l aut k z as a closed subgroup of finite covolume since is compact and h and g are unimodular we will need the following fact if a is a subtree of t by the fixator of a we mean the subgroup fixing pointwise a proposition let aut t be a countable group whose action on t is minimal and of general type and such that in are amenable then and env is the subgroup generated by fixators of proof since the on is extremely proximal it is also strongly proximal by theorem so is a with amenable stabilizers and we deduce that by proposition now according to lemma the subgroup env env is generated by the elements whose fixed point set in has interior since form a basis of the topology in the statement follows before going to the proof of corollary we make the following observation remark for acting on t action minimal and general type such that the action on is not topologically free virtual simplicity of is equivalent to being of finite index in and having finite abelianization where is the subgroup generated by fixators of see statement e of proposition proof of corollary in view of proposition the assumptions on imply that the furstenberg urs of comes from a faithful extremely proximal action the fact that are all means that the action of on is not topologically free and by the above observation virtual simplicity of 
is equivalent to env being of finite index in and with finite abelianization the first statement of the corollary therefore follows from theorem case c and the second statement from theorem groups with prescribed local action in the next paragraphs we will illustrate the results of the previous sections on a family of groups acting on trees which contains instances of discrete and groups the purpose of this paragraph is to recall the definition and give a brief description of known properties of these groups we will denote by a set of cardinality d and by td a tree the vertex set and edge set of td will be denoted respectively vd and ed we fix a coloring c ed such that neighbouring edges have different colors for every g aut td and every v vd the action of g on the star around v gives rise to a permutation of denoted g v and called the local permutation of g at these permutations satisfy the identity gh v g hv h v for every g h aut td and v vd adrien le boudec given a permutation group f sym the group u f introduced by burger and mozes in is the group of automorphisms g aut td such that g v f for all it is a closed cocompact subgroup of aut td definition given f f sym we denote by g f f the group of automorphisms g aut td such that g v f for all v and g v f for all but finitely many that g f f is indeed a subgroup of aut td follows from and we note that we have u f g f f u f we make the following observation for future reference remark as it follows from the definition any element g f f fixing an edge e can be uniquely written as where each belongs to g f f and fixes one of the two defined by in the sequel we always assume that f preserves the f in see lem for the relevance of this property in this context these groups satisfy the following properties see the group g f f is dense in the locally compact group u f in particular g f sym is a dense subgroup of aut td g f f admits a locally compact group topology defined by requiring that the inclusion of u f is continuous and open and the action of g f f on td is continuous but not proper as soon as f f endowed with this topology the group g f f is compactly generated stabilizers of vertices and stabilizers of ends in g f f are respectively locally elliptic and locally elliptic in particular they are amenable g f f is a discrete group if and only if f acts freely on when this is so the group g f f is therefore a finitely generated group and stabilizers of vertices and stabilizers of ends in g f f are respectively locally finite and locally finite when f acts freely on and f f the groups g f f are instances of groups obtained from a more general construction described in sec more precisely a variation of it which provides discrete groups with a continuous furstenberg urs the later being the stabilizer urs associated to the action on the boundary of the tree on which these groups act in the particular case of the groups g f f the furstenberg urs can be explicitly described see proposition and corollary in in the sequel whenever we use letters f and f we will always mean that f f are permutation groups on a set that f contains f and preserves the f in following we will denote by g f f the subgroup of g f f generated by fixators of edges and by g f f the subgroup of index two in g f f preserving the bipartition of td the following result also obtained in prop supplements simplicity results obtained in where the index of the simple subgroup was found explicitly under appropriate assumptions on the permutation groups proposition the group g f f has 
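In display form, the local-permutation formalism underlying the preceding definitions is the following (c is the edge coloring and σ(g, v) the local permutation of g at v; a hedged restatement, not a verbatim quotation):
\[
\sigma(gh, v) \;=\; \sigma(g, hv)\,\sigma(h, v) \qquad \text{for all } g, h \in \mathrm{Aut}(T_d),\ v \in V_d,
\]
\[
U(F) \;=\; \{\, g \in \mathrm{Aut}(T_d) \;:\; \sigma(g, v) \in F \ \text{for every } v \,\},
\]
\[
G(F, F') \;=\; \{\, g \in \mathrm{Aut}(T_d) \;:\; \sigma(g, v) \in F' \ \text{for every } v, \ \sigma(g, v) \in F \ \text{for all but finitely many } v \,\},
\]
so that $U(F) \le G(F, F') \le U(F')$.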
a simple subgroup of finite index if and only if f is transitive and f is generated by its point stabilizers amenable urs s and lattice embeddings proof these conditions are necessary by prop conversely assume f transitive and f generated by its point stabilizers by prop again g f f has index two in g f f so in particular it is compactly generated if m is the monolith of g f f which is simple and open in g f f by cor we have to show m has finite index according to remark g f f is also the subgroup generated by fixators of and therefore by proposition m is the commutator subgroup of g f f the abelianization of g f f is therefore a finitely generated abelian group which is generated by torsion elements since g f f is generated by locally elliptic subgroups fixators of edges therefore this abelianization is finite and it follows that m has finite index in g f f boundaries of g f f in this paragraph we use results from the previous sections in order to study the boundaries of the discrete groups g f f the following result shows that several properties of the set of boundaries are governed by the permutation groups and that rigidity phenomena occur under mild conditions of the permutation groups theorem assume f acts freely on f f and write g f f the following are equivalent i the subgroup of f generated by its point stabilizers has at most two orbits in ii is isomorphic to one of or iii env is in iv x for every x v is boundary indivisible we will need preliminary results before proving theorem lemma assume that f acts freely on the envelope of the furstenberg urs of g f f is equal to g f f proof write g f f and g f f according to proposition env is the subgroup generated by fixators of in therefore the inclusion env is clear the converse inclusion also holds true by remark so equality follows in view of lemma and proposition we are led to consider the quotient g f f f f and in particular study when it is amenable to this end we will denote by f the subgroup of f generated by its point stabilizers and write d f since f is normal in f we have an action of f on the set of orbits of f which factors through a free action of proposition the group q g f f f f is isomorphic to the group u d where d f is viewed as a permutation group acting freely on the set of orbits of f in if moreover f is transitive one has q d proof we let or be the orbits of f and we will freely identify the set of orbits with the integers r for every a there is a unique i r such that a oi and we denote i ia we view the tree td as the cayley graph of the free coxeter group of rank d namely the group defined by generators xd and relators for all j adrien le boudec when adding relations of the form xa xb whenever ia ib a and b are in the same f we obtain a free coxeter group of rank it has a cayley graph that is a regular tree of degree r and we have a surjective map p td tr two elements v w xd i have the same image in if and q only if one can write w v wj xaj xbj for some words wj and colors aj bj such that iaj ibj since the inverse of wj is equal to the word obtained from wj by reversing the order we have lemma two vertices v w of td have the same projection in tr if and only if the distance between v and w is even say d v w and if is the sequence of colors from v to w then the word is a concatenation of palindromes of even length lemma for every g g f f and every vertex v on td the image of g v in d f does not depend on we denote by d the corresponding element which is trivial when g g f f proof if v w are adjacent vertices and 
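The collapsing map p : T_d → T_r described above can be made concrete. The following Python sketch is purely illustrative: the partition of the colors (in the text, the orbits of the relevant subgroup generated by point stabilizers) is passed in by hand rather than computed from the permutation groups, and vertices of T_d are modelled as reduced words in the free Coxeter group on d generators.

def reduce_word(word):
    # Normal form in the free Coxeter group <x_1, ..., x_d | x_a^2 = 1>:
    # scan left to right and cancel any two equal consecutive letters.
    out = []
    for a in word:
        if out and out[-1] == a:
            out.pop()          # x_a x_a = 1
        else:
            out.append(a)
    return tuple(out)

def project(vertex, orbit_index):
    # The map p : T_d -> T_r induced by x_a -> y_{i(a)}.
    # vertex      : a reduced tuple of colors (a vertex of T_d; () is the base vertex)
    # orbit_index : dict sending each color a to the index i(a) of its class
    #               (hypothetical input standing for the orbit partition of the text)
    return reduce_word(tuple(orbit_index[a] for a in vertex))

# Example with d = 4 and two (hypothetical) classes {0, 1} and {2, 3}:
orbit_index = {0: 0, 1: 0, 2: 1, 3: 1}
print(project((0, 1, 0, 1), orbit_index))   # () : the word dies, every letter maps to y_0
print(project((0, 2, 1, 3), orbit_index))   # (0, 1, 0, 1) : a vertex of T_2 at distance 4

The first example is consistent with the even-palindrome criterion of the lemma above: the word of classes attached to (0, 1, 0, 1) is (0, 0, 0, 0).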
a is the color of the edge between them then g v a g w a so g v g w f the first statement follows by connectedness the fact that is trivial on g f f is then clear because g is a morphism according to the first statement which vanishes on fixators of edges note that the set of edges of tr inherits a natural coloring by the integers lemma there is a natural morphism g f f aut tr such that ker g f f and im u d proof we shall first define an action of g f f on the set of vertices of tr let g g f f let v w be two vertices of td and an be the sequence of colors from v to if v w are the vertices between v and w then the sequence of colors from g v to g w is g g vn an if is the element defined in lemma then one has g vj aj iaj for all j this shows in particular that if v w satisfy the condition of lemma then the same holds for g v and g w this means that for every vertex x of tr the formula g x p where is any vertex of td such that p x is a action of g f f on tr the fact that the tree structure is preserved is clear note that for every g g f f all the local permutations of g are equal to for every vertex x of tr one has g x in particular the image of lies inside u d we shall prove that ker g f f let g g f f fixing an edge in td then g also fixes an edge of tr by moreover one has lemma and it follows from the previous paragraph that all local permutations of g are trivial this implies g ker conversely we let g be an element of ker and we prove that g g f f note that since g is trivial one has all local permutations of g are in f now let v be any vertex by lemma the sequence of colors from v to g v gives rise to a sequence that is a concatenation of palindromes for simplicity we treat the case where is a palindrome the general case consists in repeating the amenable urs s and lattice embeddings argument for this case let v g v be the vertices between v and g v note that vn is the midpoint between v and g v since is a palindrome one easily checks that there are elements gn such that gj belongs to the stabilizer of vj in g f f and gn g fixes the vertex v this is obtained by successively folding the geodesic v g v onto itself starting from its midpoint in order to bring back g v to v with gn we now invoke the following easy fact whose verification is left to the reader lemma let g f f fixing a vertex w and such that w f then g f f we apply lemma to and all gj s and deduce that g belongs to g f f as desired the last thing that remains to be proved in the statement of lemma is that the image of is equal to u d the fact that g always belongs to u d has already been observed for the converse inclusion observe that since g f f acts transitively on the vertices of tr as it is already the case on td it is enough to check that the image of contains u d x for some vertex x of tr now since d acts freely on r the map u d x d x is an isomorphism therefore it is enough to see that any action on the star around x on tr can realized by an element of g f f and this is indeed the case see lemma in to finish the proof of the proposition remark that the image of g f f by is precisely u d when f is transitive then d is also transitive so that u d has two orbits of vertices and one orbit of edges and therefore splits as the free product d remark the case f f is allowed in proposition so that the conclusion also holds for the groups u f from proposition naturally leads us to isolate the following three situations we keep the previous notation so that r is the number of orbits of f in d f and q g f f f f r in this case tr is 
a segment of length one and d and q are trivial r tr is a line and this case splits into two disjoint a if f is intransitive then d is trivial and q u z generated by a translation of tr of length b if f is transitive then we have d sym and q d d r then q u d is a virtually free group since it acts vertex transitively and with trivial edge stabilizers of tr theorem says that all properties stated there hold true if and only if r a sufficient condition for having r is for instance that f acts transitively on f f and f acts primitively or on recall that a permutation group is if every normal subgroup acts transitively but theorem also applies beyond the case of permutation groups for example a situation giving rise to case a is when f has a fixed point and acts transitively on the complement examples giving rise to case b are for instance obtained by taking f sym n sym n i acting naturally adrien le boudec on letters and f the subgroup generated by cn cn and where cn is a cycle of order proof of theorem i ii follows from lemma and proposition and the discussion following its proof ii iii is clear iii iv is proposition iv v is guaranteed by proposition and the fact that is not a point finally assume that i does not hold f has at least three orbits in and write g f f by proposition the group q has a subgroup of finite index that is free of rank at least so there exist and a fortiori these are if x is such a boundary then acts trivially on x since also acts minimally on it follows that x and are disjoint contradicting v therefore property v implies property i and the proof is complete weakly subgroups in this paragraph we show that subgroups of the groups g f f satisfy the following dichotomy proposition assume that f acts freely transitively and f acts primitively on then any subgroup of g f f is either locally finite and hence amenable or weakly we will need the following lemma lemma assume that f acts primitively on and take two subgroups in the furstenberg urs of g f f then i g f f proof write g f f recall from prop that the furstenberg urs of consists of subgroups where is the set of elements acting trivially on a neighbourhood of given we show that the subgroup generated by and must be equal to g f f take a vertex v on the geodesic from to let be the edges containing v and pointing towards and and a b the colors of denote by k v the subgroup of consisting of elements fixing v and such that w f for every w since f is primitive f is generated by the point stabilizers and this implies that every element of k v may be written as a product of elements fixing either the defined by containing or the defined by containing so that k v since v was arbitrary we also have k v for v a neighbour of v on the geodesic the conclusion now follows since for two neighbouring vertices v v the subgroups k v k v always generate g f f cor proof of proposition write g f f and let be a subgroup of that is equivalently whose action of td is of general type by proposition we have to show that fixes no probability measure on any argue by contradiction and assume that x is a on which fixes a probability measure according to theorem we have x therefore by proposition there exist an almost extension x and a factor map let q prob be the set of such that and write r q which is a closed subset of prob since the action of on is strongly proximal and since is a factor of prop amenable urs s and lattice embeddings we deduce that r contains some dirac measures let h such that there is prob with and such a measure must be 
supported in the set of x h and it follows that is supported in the set of points in x because x h implies that h gx by upper of the stabilizer map but since does not fix any point in we may find another h such that r so that the same argument shows that h also acts trivially on the support of by lemma the subgroups h h generate g f f which is of index two in therefore any point in the support of has a of cardinality at most two which is absurd since acts minimally on x and x is by assumption remark assume f acts freely transitively and f acts primitively and write g f f let such that the on td is of general type but with a proper subtree for instance one could take for the subgroup generated by two hyperbolic elements with sufficiently far apart axis then is not in see the argument in the proof of theorem in but is weakly in by proposition remark we mention that when f is primitive following the proof of corollary from with minor modifications one could prove that every g f f factors onto this would provide an alternative proof of proposition lattice embeddings of the groups g f f in this section we study how the discrete groups g f f can embed as lattices in some locally compact groups the purpose of this paragraph is twofold first we apply previous results of the article to the family of groups g f f and deduce some properties of general locally compact groups containing a group g f f as a lattice corollary second we explain how the groups g f f embed as lattices in some locally compact wreath products this will be the content of below remark maybe it is worth pointing out that instances of lattice embeddings of the groups g f f already appeared in indeed under appropriate assumptions on permutation groups f f h h the inclusion of g f f in g h h has discrete and cocompact image cor corollary assume that f acts freely transitively on and that f is generated by its point stabilizers let g be a locally compact group containing g f f as a lattice then the conclusions of corollary hold proof the assumptions on f f imply that g f f is virtually simple by proposition so corollary applies remark in the setting of corollary although g f f can not be a lattice in a product it happens that there exist groups such that g f f embeds as a discrete subgroup of with injective and dense projection to each factor for instance if are permutation groups such that f fi f and we set gi g fi f then the diagonal embedding of g f f in has this property as soon as f see lem and adrien le boudec locally compact wreath products in this paragraph we introduce some terminology that will be used in the sequel let be a set b a group and a a subgroup of b we will denote by b a the set of functions f b such that f x a for all but finitely many x note that b a is a group definition if h is a group acting on the a h permutational wreath product b h is the product b x a where h h acts on f b by hf x f h x the extreme situations when a and when a b correspond respectively to the restricted and the unrestricted wreath product when a we shall write b h for the restricted wreath product also for simplicity we will sometimes say wreath product b h instead of permutational wreath product when a is a compact group and h a locally compact group acting continuously on the group h is a locally compact group for the product topology if moreover b is locally compact and a is compact open in b there is a natural locally compact group topology on b h defined by requiring that the inclusion a of a h in b h is continuous and open see sec 
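In symbols, the construction just described reads:
\[
B^{(A)} \;=\; \{\, f \colon X \to B \;:\; f(x) \in A \ \text{for all but finitely many } x \,\},
\qquad
B \wr_X^A H \;=\; B^{(A)} \rtimes H,
\qquad
(h \cdot f)(x) \;=\; f(h^{-1}x).
\]
When $A$ is compact and the $H$-action is continuous, $A^X \rtimes H$ is locally compact for the product topology; when moreover $B$ is locally compact and $A \le B$ is compact open, $B \wr_X^A H$ carries the group topology for which the inclusion of $A^X \rtimes H$ is continuous and open.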
in the remaining of the article we shall be interested in the study of certain lattices in some locally compact groups b a few remarks are in order lemma let a b and h as above a assume is a lattice in b a and is a lattice in h that normalizes then is a lattice in b a b for b h to contain a lattice it is necessary that h contains a lattice proof for the first statement see lem for the second statement observe that if b h is a lattice the intersection a h is a lattice in since is open in b the subgroup a being compact the projection of to h is discrete and hence is a lattice in recall that there are various notions of irreducibility for a lattice in a direct product of groups in general whether all these notions coincide depends on the context we refer to and for detailed discussions in the setting of wreath products we will use the following terminology definition a lattice in b h is an irreducible lattice if has a projection to the group this definition implies that neither nor its finite index subgroups can be of the form as in lemma lemma if the group b a does not contain any lattice then any lattice in b h is irreducible proof if h is a lattice with a discrete projection to h then the subgroup contains as a lattice and it follows that intersects the subgroup b a b a which is open in b along a lattice of b amenable urs s and lattice embeddings remark of course if b admits no lattice then the same holds for b a more interestingly there are finite groups a b for which b a fails to admit any lattice provided is infinite this is for instance the case when a b and any element of b has a power in consequently all lattices in b h are irreducible by lemma for arbitrary h proof of remark we claim that the above condition on a b actually implies that b a has no infinite discrete subgroup for every finite we write b a for the subgroup vanishing on and assume is a discrete subgroup of b a so that there is a finite such that the assumption on a b is easily seen to imply that any subgroup of intersects therefore and being of finite index in b a is finite it should be noted that the existence of an irreducible lattice in b h forces h to be and b to be however this does not force b a to be and as we will see below interesting examples already arise when b is finite and a is trivial the proof of theorem let n and d we denote by the v set of integers n and by d the set of functions f vd with finite support where the support of f is the set of v such that f v we will also write fv for the image of v by f and sometimes use the notation fv for the function f we consider the graph xn d whose set of vertices is the set of pairs f e where v f belongs to d and e ed and edges emanating from a vertex f e are of two types type f is connected to f e if ed is a neighbour of e if e and share exactly one vertex type f e is connected to f e if the function f is obtained from f by changing the value at exactly one vertex of note that since any e ed has d neighbours and has cardinality n every vertex of xn d has d neighbours of type and n neighbours of type the graph xn d is almost the wreath product of the complete graph on n vertices with the tree td see below let sn be the group of permutations of for sn and i we will write i the action of on i the stabilizer of in sn is obviously isomorphic to and by abuse of notation we will denote it in particular when viewing as a subgroup of sn we will always implicitly mean that is the subgroup of sn acting only on n s definition we will denote by gn d the wreath product sn aut 
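To make the graph X_{n,d} just defined concrete, here is a small illustrative Python sketch (not taken from the text): it reuses the reduced-word model of T_d from the earlier snippet (the word-reduction helper is repeated so the snippet runs on its own), stores a finitely supported function f as a dictionary of its values different from a chosen default value (here 1), and lists the neighbours of a vertex (f, e). In this model a vertex has 2(d-1) neighbours of type (1) and 2(n-1) neighbours of type (2), which the final line checks for n = 3, d = 4.

def reduce_word(word):
    out = []
    for a in word:
        if out and out[-1] == a:
            out.pop()
        else:
            out.append(a)
    return tuple(out)

def edge(vertex, color):
    # The edge of the given color at a vertex of T_d, as a frozenset of its two endpoints.
    return frozenset({vertex, reduce_word(vertex + (color,))})

def neighbours(f, e, n, d):
    # Neighbours of the vertex (f, e) of X_{n,d}.
    # f : dict {vertex of T_d: value in 1..n}, finitely supported (missing keys mean 1)
    # e : an edge of T_d, as returned by edge()
    result = []
    # type (1): keep f, move to an edge sharing exactly one endpoint with e
    for v in e:
        for b in range(d):
            e2 = edge(v, b)
            if e2 != e:
                result.append((dict(f), e2))
    # type (2): keep e, change the value of f at exactly one endpoint of e
    for v in e:
        for k in range(1, n + 1):
            if k != f.get(v, 1):
                f2 = dict(f)
                if k == 1:
                    f2.pop(v, None)
                else:
                    f2[v] = k
                result.append((f2, e))
    return result

# Degree check at the base vertex, with n = 3 and d = 4:
f0, e0 = {}, edge((), 0)
print(len(neighbours(f0, e0, n=3, d=4)))    # 10 == 2*(4-1) + 2*(3-1)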
td groups of the form gn d were considered in ex we will denote v s un d snd so that gn d un d aut td we endow gn d with the topology such that sets of the form form a basis of neighbourhoods of d and in where and belong to a basis of the identity respectively in aut td this defines a totally disconnected locally compact group topology on gn d see prop we note that the case n is somehow particular adrien le boudec as d is a discrete subgroup of d and d is just the restricted wreath product d aut td aut td proposition the group gn d acts by automorphisms on the graph xn d by preserving the types of edges moreover the action is faithful continuous proper and transitive on the set of vertices proof the group gn d is a subgroup of the unrestricted permutational wreath product of sn and aut td the latter group has a faithful action on the set of functions vd given by fv v v the group gn d preserves d because fixes almost surely if belongs to un d now the projection from gn d onto aut td induces an action of gn d on the v set ed and we will consider the diagonal action of gn d on d ed in other words if g gn d and fv e xn d g fv e v fix x fv e xn d and g gn d and let be a neighbour of x if is of type then we have fv where e and share a vertex w in td then and have the vertex in common so that by the formula g is a neighbour of type of g x in xn d now if is of type then we may write e with fv if and only if v w where w is one of the two vertices of it follows that v v if and only if v so that by g is a neighbour of type of g x this shows that the action is by graph automorphisms and preserves the types of edges lemma let e ed and let k be the stabilizer of e in aut tq d then the stabilizer of the vertex e in gn d is the compact open subgroup proof that g fixes e exactly means by that fixes for all v and that fixes so the fact that the action is continuous and proper follows from the lemma and the transitivity on the set of vertices is an easy verification consider now the free product cd cd of two cyclic groups of order d acting on its tree td with one orbit of edges and two orbits of vertices denote by cn the cyclic subgroup of sn generated by the cycle n and set d cn cd cd gn d remark that cd cd has a split morphism onto cd whose kernel acts on td with two orbits of vertices and is free of rank d therefore d splits as d cd lemma d gn d acts freely transitively on the vertices of xn d proof this is clear the image of the vertex e by an element is so both transitivity and freeness follow from the fact that the actions of cn on and of cd cd on ed have these properties amenable urs s and lattice embeddings we now explain how the groups g f f act on the graphs xn d in the sequel f f denote two permutation groups on such that f f and f preserves the orbits of f and we denote by n the index of f in f fix a bijection between n and f such that is sent to the class f the action of f on the coset space f induces a group homomorphism f sn such that f lies inside for g f f and v vd write v v sn note that v if and only if v f we also denote by v proposition let f f sym and n the index of f in f the map g f f gn d is a group morphism that is injective continuous and with a closed and cocompact image proof the map is because v for all but finitely many v s v so that we indeed have snd the fact that is a group morphism follows from the cocycle identity satisfied by local permutations indeed for g f f we have with v v v v v v so and injectivity of is clear since the composition with the projection to aut td d aut 
td is is injective the preimage in g f f of the open subgroup the subgroup u f which is open in g f f by definition of the topology so it follows that the map is continuous also the intersection between im and the d aut td is u f and it is easy to check that the latter open subgroup is indeed a closed subgroup of gn d so it follows that im is closed in gn d the fact that im is cocompact will follow from proposition and proposition below in the sequel for simplicity we will also write g f f for the image of g f f gn d in particular when speaking about an action of g f f on the graph xn d we will always refer to the action defined in proposition restricted to g f f this means that g f f acts on f e xn d by f e f where f v v v v v this action should not be confused with the standard action fv e v coming from the inclusion of g f f in aut td proposition let f f sym and n the index of f in f adrien le boudec a the group g f f acts cocompactly on xn d when f is transitive on the group g f f acts transitively on vertices of xn d b the stabilizer of a vertex e xn d in g f f is the stabilizer of e in u f in particular the action of g f f on xn d is proper therefore when f acts freely transitively on the group g f f acts freely transitively on the vertices of xn d proof we show that for every vertex x fv e of xn d there is g g f f such that g x e since u f preserves the vertices of this form and since the number of orbits of u f g f f on ed is finite and is equal to one when f is transitive statement a will follow we argue by induction on the cardinality n of the support of fv there is nothing to show if n assume n and let vd with and such that maximizes the distance from e among vertices v such that fv let be the edge emanating from toward e if belongs to e then e and let a be the color of we also denote by t and t the two defined by where t contains for every b b a we denote by b the edge containing and having color c b b and by t b the defined by b not containing by assumption the permutation group f preserves the f in so we have f f the subgroup f sn being transitive it follows from the previous decomposition that there exists such that for every b a we choose f such that b b and we consider the unique element h aut td whose local permutations are h v if v t h and h v for every v t b and every b a it is an easy verification to check that h is a automorphism of td and h g f f because all but possibly one local permutations of h are in f note that h fixes e by construction write h x e we claim that the support of has cardinality n since h fixes by we have h moreover we also have fv for every v in t because h acts trivially on t finally by the choice of we had fv for every v in t and since h v f for all these v and f fixes we still have for every v in t v this proves the claim and the conclusion follows by induction statement b follows from lemma and the last statement follows from a and b and the fact that u f acts freely on ed when f acts freely on propositions and lemma imply theorem from the introduction note that when f acts freely transitively on we have an explicit description of a generating subset of the group g f f whose associated cayley graph is xn d for fix an edge ed whose color is a and whose vertices are and denote xn d for i let si be the set of g f f fixing vi and such that v f for every v vi and vi is and belongs to f then s generates g f f and cay g f f s xn d is a graph isomorphism moreover neighbours of type resp type of a vertex of xn d are labeled by elements s si such that 
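The morphism argument above appeals to "the cocycle identity satisfied by local permutations" without displaying it. The display below is a reconstruction of that standard identity as a reading aid, under the assumption that sigma(g, v) denotes the local permutation of the tree automorphism g at the vertex v (the usual convention in the prescribed-local-action setting); it is not a verbatim restoration of the paper's own formula.

```latex
% Cocycle identity for local permutations (reconstruction; assumed notation
% \sigma(g,v) for the local permutation of g at the vertex v):
\[
  \sigma(gh, v) \;=\; \sigma\!\big(g,\, h\cdot v\big)\,\sigma(h, v)
  \qquad \text{for all } g,h \in \operatorname{Aut}(T_d),\ v \in V(T_d),
\]
\[
  \text{and consequently}\qquad
  \sigma\!\big(g^{-1}, v\big) \;=\; \sigma\!\big(g,\, g^{-1}\cdot v\big)^{-1}.
\]
```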
vi f resp vi amenable urs s and lattice embeddings we end the article by observing that there are possible variations in the definition of the graph xn d if kn is the complete graph on n vertices let kn td be the wreath product of the graphs kn and td sometimes also called the lamplighter v graph over td the vertex set is d vd and there is an edge between f v and f v if and only if either f f and v v are adjacent in td or v v and f w f w if and only if w again if n is the index of f in f the group g f f acts on kn td by f v f where f is given by the previous arguments for the graph xn d carry over to this graph so that we have proposition let f f sym and n the index of f in f then g f f acts properly and cocompactly on the graph kn td the reason why we considered the graph xn d instead of kn td is to obtain under the assumption that f acts freely on a free action of g f f on the set of vertices in the case of kn td the stabilizer of a vertex in g f f is finite but we note that it might be interesting to investigate whether the generalized wreath products of graphs from could provide other kind of interesting groups of automorphisms yet another possibility is to take the same vertex set as xn d but declaring that there is an edge between f e and f if e share a vertex w and fv for every v this graph zn d has larger degree namely d again all the results proved above for xn d remain true in the case d one may check that is the graph dl n n so that zn d may be thought of as higher dimensional versions of these graphs references bader caprace gelander and mozes lattices in amenable groups bader and furman boundaries rigidity of representations and lyapunov exponents proceedings of icm bader furman and sauer on the structure and arithmeticity of lattice envelopes math acad sci paris no breuillard kalantar kennedy and ozawa c and the unique trace property for discrete groups burger and sh mozes finitely presented simple groups and products of trees acad sci paris i math no groups acting on trees from local to global structure inst hautes sci publ math no lattices in product of trees inst hautes sci publ math no burger and monod continuous bounded cohomology and applications to rigidity theory geom funct anal no bartholdi neuhauser and woess horocyclic products of trees eur math soc jems no benoist and quint lattices in lie groups j lie theory no bader and shalom factor and normal subgroup theorems for lattices in products of groups invent math no caprace simple locally compact groups to appear in the proceedings of the european congress of mathematics cornulier fisher and kashyap lamplighter groups new york j math adrien le boudec caprace and monod isometry groups of curved spaces discrete subgroups topol no decomposing locally compact groups into simple pieces math proc cambridge philos soc no a lattice in more than two groups is arithmetic israel j math relative amenability groups geom dyn no cornulier locally compact wreath products caprace and simplicity and superrigidity of twin building lattices invent math no caprace reid and ph wesolek approximating simple locally compact groups by their dense locally compact subgroups dahmani guirardel and osin hyperbolically embedded subgroups and rotating families in groups acting on hyperbolic spaces duchesne and monod group actions on dendrites and curves dymarz envelopes of certain solvable groups comment math helv no dyubina instability of the virtual solvability and the property of being virtually for groups internat math res notices no eskin 
fisher and whyte and rigidity of solvable groups pure appl math q no special issue in honor of grigory margulis part coarse differentiation of i spaces not to cayley graphs ann of math no coarse differentiation of ii rigidity for sol and lamplighter groups ann of math no elek on uniformly recurrent subgroups of finitely generated groups erschler generalized wreath products int math res not art id eymard moyennes invariantes et unitaires lecture notes in mathematics vol york frisch schlank and tamuz normal amenable subgroups of the automorphism group of the full shift furstenberg disjointness in ergodic theory minimal sets and a problem in diophantine approximation math systems theory poisson boundaries and envelopes of discrete groups bull amer math soc boundary theory and stochastic processes on homogeneous spaces harmonic analysis on homogeneous spaces proc sympos pure vol xxvi williams williamstown amer math providence pp rigidity and cocycles for ergodic actions of semisimple lie groups after margulis and zimmer bourbaki seminar vol lecture notes in vol springer york pp furman rigidity with locally compact targets geom funct anal no on minimal strongly proximal actions of locally compact groups israel j math ghys groups acting on the circle enseign math no glasner topological dynamics and group theory trans amer math soc compressibility properties in topological dynamics amer j math amenable urs s and lattice embeddings proximal flows lecture notes in mathematics vol york gruenberg residual properties of infinite soluble groups proc london math soc guivarc h croissance polynomiale et des fonctions harmoniques bull soc math france glasner and weiss uniformly recurrent subgroups recent trends in ergodic theory and dynamical systems contemp vol amer math providence ri pp de la harpe on simplicity of reduced c of groups bull lond math soc no hewitt and ross abstract harmonic analysis vol i second grundlehren der mathematischen wissenschaften fundamental principles of mathematical sciences vol york structure of topological groups integration theory group representations jenkins growth of connected locally compact groups functional analysis jolissaint and robertson simple purely infinite c and actions funct anal no kawabe uniformly recurrent subgroups and the ideal structure of reduced crossed products kennedy characterizations of c kalantar and kennedy boundaries of reduced c of discrete groups reine angew math a le boudec groups acting on trees with almost prescribed local action comment math helv no c and the amenable radical invent math no a le boudec and matte bon subgroup dynamics and of groups of homeomorphisms ann sci ecole norm sup to appear a le boudec and ph wesolek commensurated subgroups in tree almost automorphism groups groups geom dyn to appear losert on the structure of groups with polynomial growth math z no laca and spielberg purely infinite c from boundary actions of discrete groups reine angew math malcev on isomorphic matrix representations of infinite groups rec math mat sbornik margulis discrete subgroups of semisimple lie groups ergebnisse der mathematik und ihrer grenzgebiete vol berlin margulis free subgroups of the homeomorphism group of the circle acad sci paris i math no matte bon rigidity of graphs of germs and homomorphisms between full groups matte bon and tsankov realizing uniformly recurrent subgroups monod and popa on for groups and von neumann algebras math acad sci soc can no monod and shalom cocycle superrigidity and bounded cohomology for negatively 
curved spaces j differential geom no montgomery and zippin topological transformation groups interscience publishers new nekrashevych finitely presented groups associated with expanding maps i pays and valette libres dans les groupes d automorphismes d arbres enseign math no adrien le boudec radu new simple lattices in products of trees and their projections with an appendix by caprace raghunathan discrete subgroups of lie groups new yorkheidelberg ergebnisse der mathematik und ihrer grenzgebiete band rattaggi computations in groups acting on a product of trees normal subgroup structures and quaternion lattices proquest llc ann arbor mi thesis technische hochschule zuerich switzerland construction de en de acad sci paris i math no shalom rigidity of commensurators and irreducible lattices invent math no tits sur le groupe des automorphismes d un arbre essays on topology and related topics georges de rham springer new york pp vorobets notes on the schreier graphs of the grigorchuk group dynamical systems and group actions contemp vol amer math providence ri pp wise curved squared complexes aperiodic tilings and finite groups proquest llc ann arbor mi thesis university uclouvain irmp chemin du cyclotron belgium cnrs de pures et france address
multisensor poisson filtering with uncertain sensor states dec markus christopher lindberg karl henk wymeersch a typical multitarget tracking mtt scenario the sensor state is either assumed known or tracking is performed based on the sensor s relative coordinate frame this assumption becomes violated when the mtt sensor such as a vehicular radar is mounted on a vehicle and the target state should be represented in a global absolute coordinate frame then it is important to consider the uncertain sensor location for mtt furthermore in a multisensor scenario where multiple sensors observe a common set of targets state information from one sensor can be utilized to improve the state of another sensor in this paper we present a poisson mtt filter which models the uncertain sensor state the multisensor case is addressed in an asynchronous way where measurements are incorporated sequentially based on the arrival of new sensor measurements in doing so targets observed from a well localized sensor reduce the state uncertainty at another poorly localized sensor provided that a common subset of features is observed the proposed mtt filter has low computational demands due to its parametric implementation numerical results demonstrate the performance benefits of modeling the uncertain sensor state in feature tracking as well as the reduction of sensor state uncertainty in a multisensor scenario compared to a per sensor kalman filter scalability results display the linear increase of computation time with number of sensors or features present i ntroduction intelligent transportation systems in general and autonomous driving ad in particular require accurate position information measurements provided by various sensors allow to infer the vehicle state position and velocity as well as information about the surrounding environment for instance a global navigation satellite system gnss receiver provides absolute position whereas a radar sensor provides relative position with respect to the sensor origin furthermore vehicles have access to a local dynamic map ldm containing static features such as landmarks dynamic features such as pedestrians cyclists etc are not part of the map for an ad system to be fully aware of the surrounding environment dynamic features need to be estimated and tracked over time using the vehicles sensors thus allowing to enrich the vehicle s ldm in order to incorporate mobile features into the ldm which contains map features described in a global coordinate frame location uncertainty of sensors used to track dynamic features needs to and wymeersch are with the department of electrical engineering chalmers university of technology gothenburg sweden frohle henkw lindberg is with zenuity ab gothenburg sweden be considered the vehicle s state uncertainty such as its location and pose in an its vehicles communicate through the wireless channel with other vehicles communication or with road infrastructure such as a road side unit rsu through communication using ieee or cellular communication through information exchange vehicles can make local ldm information available to their neighbors allowing to enrich their ldms and their situational awareness not only can ldm information be shared it can also be fused to improve every ldm for the special case they observe an overlapping set of dynamic features information from one vehicle can be utilized to increase location accuracy of other vehicles and vice versa note in this context no measurements are performed different to a traditional 
cooperative localization approach the problem of vehicular localization using locally observed features with unknown observation to feature correspondence aggregated at an rsu can be interpreted as an mtt problem in mtt a varying number of mobile features targets are tracked using sensors such as for example radars lidars or cameras thereby it is typically assumed that the state of the observing sensor is known although not true in general this assumption can be motivated by the fact that sensor state uncertainty is negligible in comparison to the sensors measurement accuracy if sensor state uncertainty is significant it needs to be modeled in the mtt in order not to have negative impact on feature tracking performance in this paper we consider the case of mtt with uncertain sensor state where there are potentially multiple sensors with varying sensor state uncertainty to enable accurate feature tracking we model the sensor state uncertainty in the mtt filter the main contributions are an asynchronous parametric tracking filter with uncertain sensor state information fusion of tracking information with local sensor tracking information and numerical simulation results demonstrating the performance of the filter in a multisensor vehicular scenario in the application example we demonstrate how the proposed filter can be used to transfer location information from a well localized vehicle to a poorly localized vehicle through mtt hence positioning accuracy of the poorly localized vehicle is greatly improved compared to using a local kalman filter with gnss measurements alone legend cooperative vehicle mobile feature rsu communication link fig urban its scenario with two vehicles cooperating through the rsu and six mobile features a motivation in this paper we consider an urban intelligent transportation system its scenario consisting of cooperating vehicles illustrated in fig each vehicle is equipped with an sensor allowing them to determine their absolute position a gnss receiver and an sensor to retrieve relative positions of mobile features present in the environment a radar absolute position measurements are denoted gnss measurements and measurements taken features are denoted measurements due to the sensor used to obtain measurements it is in general not known which feature gave rise to which measurement a gnss and a set of measurements from every vehicle are transmitted in a synchronized manner to the rsu where a centralized filter is run to track the feature as well as the vehicle states this information can be utilized by the rsu an sent back to the vehicles to increase their situation awareness b related work mtt with known sensor state many mtt filters have been proposed to track mobile features using sensors when the sensor state is known the tracking mht filter builds a growing hypothesis tree with data association da and needs to be pruned to limit computation complexity the joint probability data association jpda filter finds the most likely da where feature state information is reduced after each update step to a single gaussian per feature in the last years mtt filters based on random finite set rfs and statistics fisst which avoid the inherent da originally developed by have gained much attention the probability hypothesis density phd filter propagates the first moment of the rfs density over time the poisson pmb filter approximates the global joint da by the product of local marginal da similar to the jpda filter in a derivation of this pmb filter based on standard single 
target measurement models without using probability generating functional and functional derivatives is presented furthermore a connection to the labelled filter is shown where the density is a special case of mb density for labeled targets and can therefore be seen as a special case of the pmb filter in a gaussian mixture gm bernoulli tracker with known sensor locations is developed and compared to a particle filter pf implementation for multistatic sonobuoy fields target state update from multiple sensors was achieved through sequential sensor updates in a factor graph fg based approach not using fisst was proposed for a variant of the jpda filter a multiscan scenario was considered and the filter was realized by running loopy belief propagation on the fg containing cycles in a pf based implementation of the pmb filter was presented this implementation has been used in as a performance comparison of the fg based mtt there it was found that the pf based pmb filter implementation scales exponentially with an increasing number of features in vehicles perform local feature tracking and send track information over the wireless channel to a central fusion center then fusion is performed taking care of measurements which can arise by utilizing a shared communication media there the da problem arises on the decision of which local tracks to fuse the mahalanobis distance is employed as da metric mtt with uncertain sensor state in contrast to mtt simultaneous localization and mapping slam based methods determine the sensor state while mapping features of the environment most of the proposed slam methods assume static features such as walls and street signs slam methods excel through maintaining the correlation between features a reduction of complexity was achieved in fastslam through rb where the feature state conditioned on the sensor state is tracked through a kalman filter and the sensor state through a pf in a rfs based approach to the slam problem was proposed there the target state is conditioned on the sensor location and then tracked through a phd filter following a rb in an mtt with uncertain single sensor location is derived for the slam problem using fisst and point process theory simulation results are shown with a rb pf implementation in the problem of sensor uncertainty for bernoulli filtering of at most one target using a single sensor is addressed since the scenario is restricted to a single sensor a suboptimal approach is presented where the sensor state is updated only by measurements independent of the target state target tracking information is not used to update the sensor location similar to a fg based approach was considered in for an urban its scenario where the number of features is assumed a priori known and in a variant of slam was considered for indoor environments using radio signal measurements there features are the static source locations of and non signal propagation paths of the transmitted radio wave notation and paper organization scalars are described by letters r vectors by bold letters x matrices and sets by bold letters x the cardinality of set x is denoted the set operator denotes the disjoint set union f u f d f means f u f d f and f u f d the vehicle state is reserved by letter x the feature state by letter f and measurements by letter z the identity matrix of size n n is denoted i n the of vector x is the remainder of this paper is organized as follows section ii gives some background knowledge on rfs and section iii introduces the problem formulation 
and system models section iv details the proposed mtt filter with uncertain sensor state numerical results are given in section v and conclusions are drawn in section vi ii background on rfs in this section we describe some useful properties of an rfs if not stated otherwise the source of all these is a random finite set formulation according to rfs based methods have been developed in to conduct statistical inference in problems in which the variables of interest observations form finite sets in tracking they address two major challenges of interest i the number of targets present in the scene is unknown ii measurements are invariant to ordering correspondence is unknown an rfs x is a valued random variable which can be described by a discrete probability distribution p n n and a family of joint probability densities fn n yielding x f xn p n fn n where the sum spans over the n permutation functions such that its rfs density f x is permutation invariant the set integral of a function g x of a variable x is defined as z z x g xn dxn g x g n a bernoulli process x with probability of existence r and probability density function pdf f x has rfs density r f x r f x x x otherwise the rfs density of a mb process for the rfs x xn is n n y x y rik f x ri fi xi rik k a poisson point process ppp with intensity function y has rfs density y f y exp y r with inner product hi y h y dy remark if x and y are independent rfss such that z x y then x fz z fx x fy y x y note for rfs x a mb and rfs y a ppp is called a pmb density b state estimation from rfs density a common way to estimate the set states from a bernoulli process with rfs density f x is by comparing the probability of existence r against an existence threshold rth for r rth the target is said to exist and has pdf f x rits state can then be estimated by the mean of f x xf x dx iii p roblem f ormulation and s ystem m odels here we first present the problem formulation and the vehicle and feature dynamics this is followed by the gnss and measurement models and the communication model a problem formulation the goal of the filter which runs on the rsu is to track the features and the states of all vehicles in every discrete time step t through incorporation of all sensor measurements gnss and measurements up until time step we are therefore interested in the joint posterior distribution of the feature and vehicle states at every time step b vehicle and feature dynamics vehicle state motion follows independent markovian processes where the vehicle state xs t rnx of each vehicle s s at time step t is statistically modeled as p xs t with linear model xs t as t xs ws t where as t denotes the matrix and ws t n w s t with error covariance matrix w s t a single feature k k with state f k rnf survives to the next time step t following an independent identically distributed iid markovian process with survival probability ps f k t the feature state motion follows iid markovian processes and is statistically modeled as p f k t k with linear model f k t b t f k v k t where b t denotes the model and process noise v k t n v t with error covariance matrix v t the statetransition matrices as well as the error covariance matrices are assumed equal among the features note that vehicle and feature state motion is independent of each in the following we will drop the subscript indexing on states and measurements whenever the context allows measurement models at time step t vehicle s s obtains two different kind of measurements i measurements of the vehicle state x the 
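The Bernoulli, multi-Bernoulli and Poisson point process set densities quoted in the background section above have lost their typesetting. For readability, the following display restates the standard forms from the RFS literature with generic symbols r, f and lambda; it is a reconstruction of the textbook formulas, not a verbatim restoration of the paper's numbered equations.

```latex
% Bernoulli RFS with existence probability r and spatial pdf f(x):
\[
  f(X) =
  \begin{cases}
    1 - r,      & X = \emptyset,\\
    r\, f(x),   & X = \{x\},\\
    0,          & |X| \ge 2.
  \end{cases}
\]
% A multi-Bernoulli RFS is the union of n independent Bernoulli components
% (r_i, f_i); its density sums over assignments of set elements to components.
% Poisson point process (PPP) with intensity function \lambda(\cdot):
\[
  f(Y) = e^{-\langle \lambda, 1\rangle} \prod_{y \in Y} \lambda(y),
  \qquad
  \langle \lambda, h\rangle = \int \lambda(y)\,h(y)\,\mathrm{d}y .
\]
```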
reference frame measurements and ii measurements to features from a onboard sensor without loss of generality we assume that the sensors state is equal to the vehicle state thus an uncertain vehicle location implies an uncertain location for the sensor gnss measurement the gnss measurement z g rmg of vehicle s s at time t is statistically modeled through the likelihood function p z g with linear observation model z g h g x r where h g is the linear observation matrix and r n r with error covariance matrix measurement let z f be a set of measurements from a tracking sensor that is susceptible to measurement noise missed detections and false detections examples of such sensors include camera radar and lidar consequently z f z fa z d z h x h f q where h and h denote observation matrices and q n q with error covariance matrix q note that the state correspondence denoted da is in general not known and needs to be inferred from the measurements let the measurement likelihood for a single feature rfs f be if f zf if f z f z pd x f if f f z f z f f f pd x f f if f f z z if or for where is a constant such that it is a valid pdf depending on the specific sensor at hand the fov may be different and needs to be adapted communication model we assume that every vehicle is able to communicate all obtained measurements and gnss with the rsu instantaneously and without errors this implies that at any time t the number of vehicles communicating with the rsu can vary the incorporation of a realistic channel model and its performance impact is a point for future work iv p oisson m ulti ernoulli filtering with uncertain sensor state where z fa denotes the set of false alarm measurements due to clutter modeled by a ppp with intensity z and z d denotes the set of detected features let a measurement z z d with state dimension be obtained through the sensor at vehicle s s feature k at time it is modeled through the likelihood function f with linear observation model where f follows the measurement model note that due to the probability of detection pd x f in depends on the vehicle state x as well as on the feature state f for instance a limited sensor fov affects the probability of feature detection based on the distance between vehicle and feature remark for the case the sensor is able to detect features within a radius rmax the probability of detection is defined if kh x h f rmax pd x f otherwise the sake of brevity we present linear system and measurement models in case the true system dynamics measurement model are mild nonlinear linearization steps can be performed similar to the steps taken in an extended kalman filter ekf or the unscented kalman filter ukf in doing so the proposed filter remains valid unaltered note that the proposed filter only predicts over small in the order of a few tens of milliseconds then the assumption on vehicle and feature states evolving independently is reasonable because there is very little interaction among them within the prediction horizon in this section we formulate the proposed pmb filter with uncertain sensor state we consider a tracking scenario subject to section where there may be multiple features in section we proceed with the asynchronous multisensor case allowing to track multiple features and vehicles with uncertain vehicle state in section a tractable gaussian density approximation of the proposed filter is given the vehicle state pdf at time step t is indicated by subscript x the pdf predicted to the current time step t before updating by a measurement is 
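The two measurement types just introduced (an absolute GNSS observation of the vehicle state and cluttered relative observations of features with a field-of-view-limited detection probability) admit a very small simulation sketch. The snippet below is an illustration only: the observation matrices H_g, H1, H2, the sign convention in the relative model, and the constant pd_max inside the sensing radius are assumptions, since the exact values are not recoverable from the text.

```python
import numpy as np

def gnss_measurement(x, H_g, R, rng):
    """Sample a GNSS observation z_g = H_g x + r with r ~ N(0, R).  (Sketch.)"""
    return H_g @ x + rng.multivariate_normal(np.zeros(R.shape[0]), R)

def feature_measurement(x, f, H1, H2, Q, rng):
    """Sample a feature-originated observation z = H1 x + H2 f + q, q ~ N(0, Q).
    The split into a vehicle part and a feature part follows the linear model above;
    the sign convention is an assumption."""
    return H1 @ x + H2 @ f + rng.multivariate_normal(np.zeros(Q.shape[0]), Q)

def detection_probability(x, f, H1, H2, r_max, pd_max=0.9):
    """Field-of-view limited detection probability as in the remark above:
    a constant inside a disc of radius r_max around the sensor, zero outside.
    pd_max = 0.9 is an illustrative placeholder, not the paper's value."""
    return pd_max if np.linalg.norm(H1 @ x - H2 @ f) <= r_max else 0.0

# usage sketch: rng = np.random.default_rng(); z = gnss_measurement(x, H_g, R, rng)
```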
indicated by subscript x and the posterior pdf is stated without subscript similar definitions hold the feature rfs density the proposed filter is developed within a bayesian framework with alternating prediction and update steps operating on an rfs x by ch z x f x and f x where x is the prior rfs density f is the rfs transition density x is the predicted density and is the rfs measurement likelihood for measurement set z from the problem definition stated in section we are interested in the joint posterior density of the vehicle and features considering all measurements up to the current time step we now proceed with the development of the proposed filter within this framework with a single vehicle the prior joint density is of form x f x f where x is the prior pdf on the vehicle state and f is the prior pmb density the latter density can be written in u terms of an ppp intensity of undetected features f u features which are hypothesized to exist but have never been detected def i and the prior mb rfs density of detected d features f d as x u d f u f d f f u f d in the ppp density of undetected features is y u u u f u f f u u where f is the intensity of undetected features we are interested in a low computational complexity method to compute the posterior joint density f x f in every discrete time step t through incorporation of all sensor measurements in doing so the posterior density should remain in the same form as the prior joint density a prediction step with the vehicle state x and existing feature rfs f the predicted joint density is x f x f here the predicted vehicle state pdf is given by the equation z x p where p is the state transition pdf described by and is the prior pdf similarly the predicted feature state pmb density is calculated by z f f f f where f f is the transition rfs density and f is the prior pmb density the predicted intensity of undetected u u features f of the predicted ppp density f u is given by z u u f db f p f ps f f df here the birth intensity is denoted db f feature transition pdf p f is described by feature survival probability u is denoted ps f and f denotes the prior intensity the mb rfs density of detected features of is x d i f i f d f i d where denotes the set of existing features before measurement update and i fi i i i i f r p f f i f i otherwise here the predicted pdf of feature i is z f i p f i f df where f is the prior pdf of feature i the probability of existence of feature i is eqn z i i ps f f df i where denotes the prior probability of existence the prior pdf and ps the probability of feature survival b measurement update step updating the joint density by any of the two types of different measurements gnss and measurements involves the application of bayes theorem in the following we describe the update calculations using the different type of measurements update with vehicle state measurement let z g be a measurement related to the vehicle state x and unrelated to the set of features f p z g f p z g for example z g could be a gnss an inertial measurment unit imu measurement given a predicted vehiclefeature density and by bayes theorem the updated density is f x f g p g f in other words the vehicle state density is updated with the measurement z g the feature set density is unaffected by the update and the independent form is retained note this update step can be omitted in the absence of measurements in a pure slam application update with cluttered set of feature measurements let the set of measurements z f subject to the measurement 
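In the parametric (linear-Gaussian) implementation, the prediction step described above reduces to Kalman-style predictions of the vehicle pdf, of each detected-feature Bernoulli component (whose existence probability is scaled by the survival probability), and of the Gaussian-mixture PPP intensity of undetected features with appended birth components. The sketch below illustrates this under those assumptions; function names and the (weight, mean, covariance) triple format for birth components are illustrative, not the paper's code.

```python
import numpy as np

def predict_gaussian(mean, cov, A, W):
    """Kalman prediction of a Gaussian under x_t = A x_{t-1} + w, w ~ N(0, W)."""
    return A @ mean, A @ cov @ A.T + W

def predict_bernoulli(r, mean, cov, B, V, p_s):
    """Predict one detected-feature Bernoulli component: the existence probability
    is multiplied by the survival probability p_s, the pdf is Kalman-predicted."""
    mean_p, cov_p = predict_gaussian(mean, cov, B, V)
    return p_s * r, mean_p, cov_p

def predict_ppp(components, B, V, p_s, birth):
    """Predict a Gaussian-mixture PPP intensity of undetected features and append
    birth components; `components` and `birth` are lists of (w, mean, cov) triples
    (an assumed representation)."""
    out = []
    for w, m, P in components:
        m_p, P_p = predict_gaussian(m, P, B, V)
        out.append((p_s * w, m_p, P_p))
    out.extend(birth)
    return out
```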
model of section be indexed by m and let a be the space of all das a for the predicted mb a da a a is an assignment of each measurement in z f to a source either to the background clutter or new feature or to one of the existing features indexed by i it is therefore a partition of m i into disjoint subsets c a called index remark due to the standard mtt assumption that the features generate measurements independent of each other for example let m m m m and i i i three measurements and two features one valid partition of m i one of the possible associations is the meaning of this is that measurement is associated to feature feature is not detected and measurements and are not associated to any previously detected feature measurements and are either clutter or from new features an index cell contains at most one feature index for all c a any association in which there is at least one cell with at least two feature indices will have zero likelihood because this violates the independence assumption further due to the point feature assumption any feature generates at most one measurement in each time step for all c a any association in which there is at least one cell with at least two measurement indices will have zero likelihood because this violates the point feature assumption if the index cell c contains a feature index then let ic denote the corresponding feature index further if the index cell c contains a measurement index then let mc denote the corresponding measurement index measurements in c not assigned to any feature are associated to the background with the help of bayes rule the updated joint density is x f u f u f x f f f u f d x a a w p f f d a f d z f p where wa denotes the weight of da a a with wa f and pa denotes the vehicle state posterior stated in appendix a for now let us assume the da weights are given the undetected feature density f u f u and the detected feature density f d a f d z f are stated in appendix b equation does not factorize as f x f f p f f f f where f f f is a density such that it remains in the same form as the prior on the contrary there are many dependencies between the feature state rfs and the vehicle state this means that existing tracking frameworks can not be applied directly or introduce a significant increase in computational complexity to overcome this we approximate f u f u f u f d a f d z f a f d f where the functions on the right hand side need to be found towards this end we make the following approximations for the vehicle and feature dependent probability of detection u f f u pd x f f f d with f u f d f and where zz u pd x f x f dxdf zz pd x f x f dxdf for under approximation the updated density becomes x x f f f u f u f d x a a w p f a f d z f with f u and a f d z f given in appendix we observe in that the undetected feature density f u depends only on the undetected feature rfs f u and is independent of the other stochastic variables what remains are dependencies between the detected features and the vehicle to remove the dependency on the vehicle state in the detected feature density a f d z f in we map the vehicle state uncertainty onto the measurement uncertainty this is done by averaging the measurement likelihood by the vehicle state uncertainty in doing so the detected feature density under association a becomes independent of the vehicle state x leading to the approximation where the approximated updated feature set density a f d f is given in appendix the updated joint density is approximated as x f x f f f u f u f d x a a w p 
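The approximation introduced above replaces the state-dependent probability of detection by its average under the predicted vehicle and feature densities. Since the double integrals lost their typesetting, the display below is a reconstruction of their likely form; in particular, normalising the predicted undetected-feature intensity to obtain a density is an assumption made here for readability.

```latex
% Reconstruction of the averaged detection probabilities (assumed normalisation):
\[
  \bar p_D^{\,u} = \iint p_D(x,f)\, p_{+}(x)\, \hat f^{\,u}_{+}(f)\,
                   \mathrm{d}x\,\mathrm{d}f ,
  \qquad
  \bar p_D^{\,d} = \iint p_D(x,f)\, p_{+}(x)\, f_{+}(f)\,
                   \mathrm{d}x\,\mathrm{d}f ,
\]
% where p_{+} is the predicted vehicle pdf, f_{+} a predicted detected-feature pdf,
% and \hat f^{\,u}_{+} a density obtained from the predicted intensity of
% undetected features; p_D(x,f) can then be pulled out of the update as a constant.
```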
f a f d f in this form the vehicle state pdf is now independent on the feature rfs and so is the feature density on the vehicle state this allows to state the weights wa which are given in appendix note is a poisson mixture pmbm density where each da a a denotes a hypothesis on the posterior vehicle state x and the detected feature state f d weighted by wa it can be reduced to a pmb density using the variational approximation presented in or based on the marginal da probabilities we apply the latter approach where for the reduction of to the form of the marginal algorithm is used this results in a single hypothesis per detected feature described by a bernoulli process and per vehicle described by its pdf as well as the intensity of undetected feature described by a ppp this means the summation over the da space a has vanished in retaining the form of scenario using multiple vehicles with uncertain state here is the expected probability of detection for an undetected feature under the predictive distributions for x and f and is the expected probability of detection for a detected feature an alternative and stronger r approximation for would be pd with x dx and the estimated feature state section and similarly up to this point we discussed pmb filtering with a single vehicle and uncertain state where gnss and measurements are used to achieve feature tracking as described in section where sensors are mounted on several vehicles we have to consider the multisensor case furthermore depending on the infrastructure sensors are time synchronized take measurements at the same time step t or are not synchronized measurements from a sensor arrives timestamped but the time the sensor acquires the measurement is independent of other sensors let there be a single rfs f modeling the features state s vehicles with uncertain vehicle state xs with s the set of vehicles taking a measurement at time step t is given by c s where each vehicle c c provides a vector with a gnss measurement z c and measurements z f c furthermore t t t t t t t let xt g s f f t t t z g t z g z g and z c in the multisensor case the pmb filter with uncertain sensor state follows the unisensor case proposed in section iv the joint density is predicted and updated by the gnss and the measurements thereby the predicted density becomes f f updating the joint density by the gnss measurement results in in the multisensor case it becomes f f g f p g f u f d x f f wa pa a f d where f f pa a in order to obtain a low complexity implementation we describe the vehicle state pdf by a gaussian with pdf p x n with mean parameter and covariance matrix similarly the rfs density of a mb process of feature i is described by a bernoulli random variable ri and a gaussian pdf p f i n with parameters and under this description and the system models of section iii we can express the prediction and update steps of the proposed filter in closed form with low computational complexity whose steps are described next prediction step the predicted vehicle state pdf of is x n where with the use of after incorporating the gnss measurements we proceed with incorporating the measurements in the unisensor case this resulted in the approximated joint density in the multisensor case this density becomes x f f f f u gaussian density approximation a w the notation above means that if at time step t the vehicle state pdf x n then at time step t the predicted state pdf before updating by a measurement is x n the intensity of undetected features is modeled by gm consisting of 
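As stated in the abstract and above, the multisensor case is handled asynchronously: each arriving sensor report is incorporated in sequence, so later sensors condition on a state already refined by earlier ones. The skeleton below illustrates that processing order only; the packet fields and the three routine names are placeholders standing in for the filter's prediction, GNSS update and feature-set (PMB) update, not an implementation of the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SensorPacket:
    """One vehicle's report: timestamp, GNSS measurement and feature-measurement set.
    Field names are illustrative, not taken from the paper."""
    t: float
    vehicle_id: int
    z_gnss: object
    z_features: List[object] = field(default_factory=list)

def process_packets(state, packets, predict, update_gnss, update_features):
    """Sequential (asynchronous) multisensor processing: packets are handled in
    arrival order, so each sensor update benefits from vehicle and feature
    information contributed by the sensors processed before it."""
    for pkt in sorted(packets, key=lambda p: p.t):
        state = predict(state, pkt.t)                    # predict joint density to packet time
        state = update_gnss(state, pkt.vehicle_id, pkt.z_gnss)
        state = update_features(state, pkt.vehicle_id, pkt.z_features)
    return state
```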
newborn features with weight f and pdf p f n where are the birth mean and covariance matrix and undetected features survived to the current time step with prior parameters and predicted parameters f and a f d involves the marginalization over the vehicle prior pdf containing only vehicles which provide a measurement according to note that here is used as prior in the joint density furthermore the space a of all da increases with an increase in the number of communicating sensors for the predicted mb in terms of complexity this increase can be significant because of the increase of possible feature associations remark several different approaches exist to tackle this da problem in a tractable manner for instance by employing sequential measurement updates on or by performing variational inference or by solving the da in parallel on a basis here we employ the sequential measurement update strategy to limit the size of the da space a in doing so subsequent sensors will benefit from updated vehicle and feature information of preceding sensors in our application example section this means that an update of the joint density with measurements from a well localized vehicle certain vehicle state results in an improvement of feature tracking performance when prior information on the features is low an update of the joint density with measurements from a poorly localized vehicle uncertain vehicle state allows to reduce the uncertainty of its own vehicle state when prior information on the features is high t hps i b t v the predicted mb density of detected features stated in i predicted using has single feature bernoulli parameters i and single feature pdf f calculated similarly to and update step the joint state density is computed by updating the predicted density with the gnss measurement z g through the kalman update step where the vehicle state pdf is given by p g n here k z g h g k h g t k h g s t s h g h g the matrices h g and r are defined in the updated joint density is computed by updating the predicted density with the measurement z f note that depending on the time difference between the gnss and measurements may be used instead of as prior on the joint density in order to calculate the vehicle measurementstate likelihood used in and and the feature likelihood used in are needed the state likelihood for z given can be written in terms of f in closed form by n with t t h h q h h h z h here x n and denotes the moorepenrose pseudo inverse proof see appendix the vehicle likelihood for a given z and marginalized over a detected feature and in marginalized over an undetected feature can be written in terms of x in closed form by n described by a bernoulli component with gaussian pdf has a memory footprint of a bytes needed to store r f cov f where f rnf storing the vehicle state requires b bytes where the state of each vehicle is described by a gaussian pdf with parameters x cov x with x rnx without pruning of bernoulli components in f the memory footprint of the proposed filter is b bytes in the multisensor approach of section the rsu receives gnss measurement z g and measurement z f from a vehicle and then performs the filter update computation the rsu broadcasts the vehicle state estimates either whenever a new measurement has been processed or based on a fixed schedule should information of detected and tracked features be required at the vehicles then the pdfs need to be transmitted as well section n umerical r esults we consider a scenario similar to the one outlined in where we apply the 
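Under the Gaussian parametrisation described above, the GNSS update is a standard Kalman update, and the feature-measurement likelihood marginalised over the vehicle state is again Gaussian, with the vehicle covariance folded into the effective innovation covariance. The sketch below illustrates both steps under those assumptions; the exact matrix bookkeeping in the paper (e.g. the Moore-Penrose pseudo-inverse mentioned in the proof) may differ.

```python
import numpy as np
from scipy.stats import multivariate_normal

def kalman_update(mean, cov, z, H, R):
    """Standard Kalman/Gaussian update, as used for the GNSS measurement z_g."""
    S = H @ cov @ H.T + R                      # innovation covariance
    K = cov @ H.T @ np.linalg.inv(S)           # Kalman gain
    mean_u = mean + K @ (z - H @ mean)
    cov_u = cov - K @ S @ K.T
    return mean_u, cov_u

def feature_meas_likelihood(z, mean_x, cov_x, mean_f, cov_f, H1, H2, Q):
    """Likelihood of a feature measurement z = H1 x + H2 f + q, with the vehicle-state
    uncertainty mapped onto the measurement uncertainty (a sketch of the closed-form
    marginalisation, assuming independent Gaussian vehicle and feature densities)."""
    S = H1 @ cov_x @ H1.T + H2 @ cov_f @ H2.T + Q
    return multivariate_normal.pdf(z, mean=H1 @ mean_x + H2 @ mean_f, cov=S)
```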
proposed state tracking filter presented in section iv with t t h q h h h f h z h here equals for feature pdf p f f u n and in for feature pdf p f f n proof the proof is analogous to the proof of the feature likelihood with the only difference that the unknown is x instead of f computational complexity memory footprint and communication demand computational complexity is dominated by the matrix inversion needed to update the feature and vehicle densities and the state da updating the joint vehiclefeature density f x f g using gnss measurement z g requires a matrix inversion which scales as o the update of an mb component in a f z f by measurement z mc z f scales as o and consequently as o f for the whole measurement set computational complexity of da is o f hence the update of the joint vehiclefeature density by measurement set z f scales as o f f f where the last term comes from vehicle state update of z f in each time step t the size of undetected features rfs f u increases by f new born targets the number of existing increases by f a new per measurement using a bernoulli component per track for each existing f hypotheses plus one for a missed detection are computed the algorithm reduces each to a single hypothesis track pruning of with low probability of existence r allows to keep the number of tractable each hypothesis a setup the state of a vehicle at time step t is x pt v t t with position p and velocity v vehicle dynamics follow a linear constant velocity cv model described by with ts i where ts s and w ts with r and denoting the kronecker product the state of a feature at time t denoted f is comprised of cartesian position and velocity similar to the vehicle state x there are maximal five features present if not noted otherwise furthermore feature dynamics follow the cv model with the same parameters used for the vehicles to generate a challenging scenario for da we initialize the feature states f n at t for all features and run the cv model forward and backward in time similar to sec vi the first feature enters the scene after t the second after t and so on once present features stay alive for the remaining simulation time vehicle and feature trajectories are shown in fig the observation matrix of the gnss measurement model is h g i where r i for vehicle we assume it has low location uncertainty with and for vehicle high location uncertainty with corresponding to a vehicle with high quality gnss receiver and one with low quality gnss receiver in the single sensor case only vehicle is present and in the multisensor case both vehicles are present if not noted otherwise the measurement model follows where h h g h g and y dimension in m vehicle feature x dimension in m fig vehicle and feature trajectories fig measurements in x top panel and y dimension bottom panel for each time step q i in fig measurements are shown for each time step t including clutter measurements following u we set the initial undetected feature intensity to f t p where p diag to cover the ranges of interest of the feature state the feature birth intensity is set to db f p the average number of false alarms per scan to with uniform spatial distribution on rmax with parameter rmax furthermore the probability of survival is ps and the probability of detection is pd x f pd to asses feature tracking performance we use the optimal assignment ospa metric with parameter c m and order p the vehicle tracking performance is assessed in terms of the root mean square error rmse b discussion first we discuss the impact of an 
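Feature-tracking performance is assessed with the OSPA metric with cut-off c and order p. Since the exact parameter values were lost in extraction, the self-contained sketch below uses placeholder defaults; it implements the standard OSPA definition (optimal assignment of the smaller set plus a cardinality penalty) and is an illustration, not the paper's evaluation code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=20.0, p=1):
    """OSPA distance between two finite sets of state vectors, given as (m, dim) and
    (n, dim) arrays (use shape (0, dim) for an empty set).  Defaults for c and p are
    illustrative placeholders, not the paper's settings."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m == 0 or n == 0:
        return float(c)
    if m > n:                                    # convention: |X| <= |Y|
        X, Y, m, n = Y, X, n, m
    D = np.linalg.norm(np.asarray(X)[:, None, :] - np.asarray(Y)[None, :, :], axis=-1)
    D = np.minimum(D, c) ** p                    # cut-off base distances
    rows, cols = linear_sum_assignment(D)        # optimal sub-pattern assignment
    cost = D[rows, cols].sum() + (c ** p) * (n - m)   # cardinality penalty
    return float((cost / n) ** (1.0 / p))
```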
uncertain vehicle state on feature tracking performance using a single vehicle and after that we consider the multisensor case from section and show scaling results in terms of numbers of features and vehicles tracked impact of uncertain vehicle state on feature tracking performance the features and vehicle trajectories are outlined in fig with measurements in fig in fig the feature state ospa is plot for each time step we observe that there are peaks with a high ospa value when a new feature enters the scene these peaks are due to a cardinality mismatch between the feature rfs estimate and the true feature set furthermore there is a high ospa value around time step t at this point in time features are closely spaced together the measurement variance resulting in a challenging scenario for da in fig the cardinality of the feature rfs is plotted over time around time step t the filter overestimates the feature rfs cardinality which may be caused by clutter measurements note different mc realizations produce a slightly different outcome and feature ospa value but with the same tendency this behavior feature appearance and the effect when they are spatially close agrees with the findings in for a known sensor vehicle state furthermore we observe in fig that the ospa is low for time steps where features are spatially separated and already present in the scene then the mtt filter is able to produce feature estimates with low error in fig the feature state ospa averaged over all time steps is plot for different values of gnss measurement the increase of gnss measurement variance variance leads to an increased vehicle state uncertainty with the effect of an increase of the average feature ospa this ospa increase consists of two components first an increased feature state estimation error due to a higher value of second this results in features staying spatially close together the together and feature state measurement uncertainty for a longer period of time around time step t hence da is more challenging with the effect of an increased feature ospa in this regime in the same figure the average feature ospa without modeling the present vehicle state uncertainty is plotted using the conventional pmb filter we observe that not modeling the present vehicle state uncertainty has a negative effect on feature tracking performance in fig the average feature state ospa is plotted for different values of measurement noise variance we observe that a higher noise variance leads to an increased ospa value this is because the single feature state estimation error increases and da becomes more challenging note the results of fig and fig are averages over mc realizations tracking performance the rmse of the vehicle state is plotted for each time step t for small in fig vehicle with low location uncertainty and in fig for vehicle with high location uncertainty high as a benchmark results from a centralized kalman filter kf are plot as well where da is known and where the augmented state vector contains all vehicle and all feature states furthermore the tracking performance using a local kf is plot the local kf performs filtering only on the individual vehicle state separately using rmse feature ospa t and fig feature ospa with m cardinality t t cdf proposed conventional vehicle vehicle vehicle vehicle vehicle vehicle avg feature ospa estimate true and fig feature cardinality with m proposed local kf central kf known da proposed local kf central kf known da rmse in m fig cdf plot of vehicle state rmse g in fig 
average feature ospa for different values of gnss measurement the noise variance is set to variance m avg feature ospa in fig average feature ospa for different values of measurement variance the gnss noise variance is set to local kf central kf known da proposed rmse fig vehicle state rmse of vehicle local kf central kf known da proposed t fig vehicle state rmse of vehicle only gnss measurements and does not estimate feature states note the performance of the local kf can be considered as the performance on vehicle state estimation since measurements are not considered at all we observe from fig that for vehicle which has low gnss measurement noise all three filter methods deliver a similar performance the reason for this is that due to the high accuracy of gnss measurements not a lot of information to improve the vehicle state is provided from feature tracking feature tracking error is high the vehicle state tracking error of vehicle after updating by the gnss measurement in fig the cumulative distribution function cdf of the rmse is plot here the low rmse of vehicle using the three different filters can be observed as well moving the focus to vehicle we observe that the rmse of the local kf is much higher compared to the central kf which is caused by the high noise in the gnss measurements due to the low rmse of vehicle s state there is relevant position information in the system which can be transfered from vehicle to vehicle via the features utilizing the measurements in of all cases the rmse of vehicle is below m with the proposed filter compared to m with the local kf despite this great improvement of the proposed filter over the local kf it does not achieve the performance of the central kf where the rmse is below the reason for the difference is that the central kf has knowledge of the correct da knows the true number of features present and ignores clutter measurements furthermore it tracks any present correlations between features and vehicles not modeled time in s no of features fig average computation time per time step for different number of present features the number of vehicles is set to two time in s no of vehicles be incorporated either in a manner where sensor measurements are aggregated in a state or in a non manner through asynchronous update steps executed whenever sensor measurements arrive at the central node simulation results showed for a unisensormultitarget tracking scenario with known sensor state that tracking performance assessed by the ospa distance metric is equivalent to william s pmb filter in a scenario with present vehicle state uncertainty the proposed filter showed superior feature tracking performance over the conventional pmb filter due to the modeling of this type of additional uncertainty in a tracking scenario feature information from a well localized vehicle sensor allows to significantly reduce the vehicle state uncertainty of previously poor localized vehicles this improvement is possible through joint observation of a subset of the present features and is supported by simulation results a ppendix a vehicle state posterior fig average computation time per time step for different number of vehicles the number of features is set to five by the proposed filter the proposed filter needs to infer the da estimate the number of features currently present and needs to appropriately handle clutter in the measurement set z f scaling results in fig the average computation time per time step t is plot for a simulation with two vehicles and different 
number of present features we observe that computation time increases linearly as the number of present features increases this scaling result is different to the pf based implementation of the pmb filter in with a known vehicle state there the authors reported an exponential increase of computation time in fig the average computation time per time step t is plot for a simulation with five features and different number of vehicles here computation time increases linearly as the number of vehicles increases furthermore we investigated the average computation time per time step t for different values of the gnss measurement variance and of the measurement variance in the simulation scenario with five features and two vehicles the average computation time remained constant around s for for a measurement variance m the average computation time linearly increased from s to s as increased vi c onclusions this paper presented a poisson filter for tracking with uncertain sensor states two different kind of measurements observations of the sensor state and observations of the features were used to obtain accurate feature and sensor state tracking the proposed parametric filter implementation scales linearly with the number of features or sensors information from multiple sensors can the vehicle state posterior of and is proportional to the vehicle s prior pdf x times the measurement likelihood a z f as pa f x a z f where a y z f z z mc f f df f d y z u z mc f f df f c u y ic z mc f d y u z mc f u with z ic u z f f df u f f df here and map the feature uncertainty on the measurement likelihood b updated undetected and detected feature density the updated joint density has undetected feature density y u f u f u pd x f f f u where we average over the predicted vehicle state and marginalize over all subsets f c not equal to f c eqn has bernoulli parameters eqns to i i c i c m ic ic f ic if c i c m u i d h z mc e d c m m u u and detected feature density f d a d f f z x f c y d y ic f c f c ic f c z mc f c z d zmc y c u f c z mc f c f f i f zmc f f here the first product considers the cases where no measurement is associated to any of the existing features the second product considers the cases where a measurement is associated to an existing feature and the last line considers the case where a measurement is associated to the background clutter or undetected feature e i zmc u zmc f f e d u zmc d if c i c i c m c m where z z f f x dx approximated updated feature density weights updated undetected and detected feature density approximation the joint density approximation has undetected feature density y f u f the weight of a global association hypothesis a a stated in is eqn y y ic ic ic ic zmc wa f u f c y d y ic c f c here we proof with and x n known define ic z mc f c f c y proof of state likelihood u z mc zmc and detected feature density a f d z f x y y z h x q p y n z h h h t and consequently u z mc f c f c c where c for f and zero otherwise here the three products consider similar cases to now we have y h f and solve for f with the help of rule in table which results in eqs to r eferences approximated updated feature set density the approximated updated feature set density of and is a mb density x y a f d f f c f f c d with f c f z z y a f d z f x c dx c c leonard j how teller berger campbell fiore fletcher frazzoli huang karaman et a perceptiondriven autonomous urban vehicle journal of field robotics vol no pp cadena carlone carrillo latif scaramuzza neira reid and leonard past present and future of 
simultaneous localization and mapping toward the age ieee transactions on robotics vol no pp kim chong qin shen cheng liu and ang cooperative perception for autonomous vehicle control on the road motivation and experimental results in intelligent robots and systems iros international conference on ieee pp kim qin chong shen liu ang frazzoli and rus multivehicle cooperative driving using cooperative perception design and experimental validation ieee transactions on intelligent transportation systems vol no pp meyer hlinka wymeersch riegler and hlawatsch distributed localization and tracking of mobile networks including noncooperative objects ieee transactions on signal and information processing over networks vol no pp march soatti nicoli garcia denis raulefs and wymeersch enhanced vehicle positioning in cooperative its by joint sensing of passive features in ieee international conference on intelligent transportation systems itsc oct wymeersch lien and z win cooperative localization in wireless networks proceedings of the ieee vol no pp hoang denis and slock breaking the gridlock of spatial correlations in ieee cooperative positioning ieee transactions on vehicular technology vol no pp willett and tian tracking and data fusion a handbook of algorithms storrs ct ybs publishing streit and luginbuhl probabilistic tracking naval underwater systems center newport ri tech mahler statistical information fusion artech house vo and ma the gaussian mixture probability hypothesis density filter ieee transactions on signal processing vol no pp vo singh and doucet sequential monte carlo methods for multitarget filtering with random finite sets ieee transactions on aerospace and electronic systems vol no pp oct vo vo and cantoni analytic implementations of the cardinalized probability hypothesis density filter ieee transactions on signal processing vol no pp williams marginal filters rfs derivation of mht jipda and member ieee transactions on aerospace and electronic systems vol no pp williams and svensson poisson mixture filter direct derivation and implementation arxiv preprint vo vo and phung labeled random finite sets and the bayes tracking filter ieee transactions on signal processing vol no pp reuter vo vo and dietmayer the labeled multibernoulli filter ieee transactions on signal processing vol no pp ristic angley suvorova moran fletcher gaetjens and simakov gaussian mixture bernoulli tracker for multistatic sonobuoy fields iet radar sonar navigation meyer braca willett and hlawatsch scalable multitarget tracking using multiple sensors a belief propagation approach in international conference on information fusion pp a scalable algorithm for tracking an unknown number of targets using multiple sensors ieee transactions on signal processing vol no pp kropfreiter meyer and hlawatsch sequential monte carlo implementation of the marginal filter in information fusion fusion international conference on ieee pp berg and fusion for tracking using asynchronous and delayed data master s thesis department of signals and systems chalmers university of technology gothenburg sweden chong mori barker and chang architectures and algorithms for track association and fusion ieee aerospace and electronic systems magazine vol no pp liggins chong kadar alford vannicola and thomopoulos distributed fusion architectures and algorithms for target tracking proceedings of the ieee vol no pp and bailey simultaneous localization and mapping part i ieee robotics automation magazine vol no pp mullane vo adams and vo a approach 
to bayesian slam ieee transactions on robotics vol no pp brekke kalyan and chitre a novel formulation of the bayes recursion for filtering in aerospace conference ieee ieee pp julier and gning bernoulli filtering on a moving platform in information fusion fusion international conference on ieee pp ristic vo vo and farina a tutorial on bernoulli filters theory implementation and applications ieee transactions on signal processing vol no pp lindberg and wymeersch cooperative localization of vehicles without measurements in ieee wireless communications and networking conference april leitinger meyer tufvesson and witrisal factor graph based simultaneous localization and mapping using multipath channel information in communications workshops icc workshops ieee international conference on ieee pp simon optimal state estimation kalman h infinity and nonlinear approaches john wiley sons wan and van der merwe the unscented kalman filter for nonlinear estimation in adaptive systems for signal processing communications and control symposium the ieee ieee pp arulampalam maskell gordon and clapp a tutorial on particle filters for online bayesian tracking ieee transactions on signal processing vol no pp williams an efficient variational approximation of the best fitting filter ieee transactions on signal processing vol no pp williams and lau multiple scan data association by convex variational inference arxiv preprint williams and lau approximate evaluation of marginal association probabilities with belief propagation ieee transactions on aerospace and electronic systems vol no pp schuhmacher vo and vo a consistent metric for performance evaluation of filters ieee transactions on signal processing vol no pp loeliger an introduction to factor graphs ieee signal processing magazine vol no pp
implementation of tetris as a model atri rudra university at buffalo suny atri jan jimmy dobler university at buffalo suny jdobler abstract solving sat problems is an important area of work in this paper we discuss implementing tetris an algorithm originally designed for handling natural joins as an exact model counter for the sat problem tetris uses a simple geometric framework yet manages to achieve the fractional bound its design allows it to handle complex problems involving extremely large numbers of clauses on which other model counters do not perform well yet still performs strongly on standard sat benchmarks we have achieved the following objectives first we have found a natural set of model counting benchmarks on which tetris outperforms other model counters second we have constructed a data structure capable of efficiently handling and caching all of the data tetris needs to work on over the course of the algorithm third we have modified tetris in order to move from a theoretical environment to one that performs well in practice in particular we have managed to produce results keeping us within a single order of magnitude as compared to other solvers on most benchmarks and outperform those solvers by multiple orders of magnitude on others this research was supported in part by grant nsf introduction sat is the prototypical problem sat as well as its cousin sat are not only of great interest in computational complexity but by their completeness turn out to be a great tool to model a wide host of practical problems this has led to an explosion of sat solvers that try to solve practical instances of sat or sat by exploiting structure in these instances for this paper we will assume the importance of designing sat and sat solvers as a given we refer the reader to the book chapters by gomes sabharwal and selman on sat solvers also known as model counters and by gomes et al on sat solvers for more details a common technique is the dpll procedure a search procedure where the algorithm makes guesses on the assignments one variable at a time determines at each stage whether or not this produces a conflict and uses that information to learn new clauses and get closer to finding the satisfying assignment recently in the database literature the work of abo khamis et al connected the dpll procedure to computing natural joins in particular they presented the tetris algorithm which computes the natural join with beyond theoretical guarantees as a special case tetris also recovers some of the recent optimal join results abo khamis et al then showed that tetris is an dpll procedure and pointed out how one of the main step in their algorithm is exactly the resolution step that is ubiquitous in sat solvers given the close ties of sat solvers to dpll they left open the following intriguing possibility can tetris be implemented as a sat solver or model counter that can compete with solvers our contributions our main result in this paper is to show that tetris can indeed be implemented as a model counter that is competitive with model counters on actual datasets while presented a nice geometric framework to reason about algorithms to compute the natural join query some of its simplicity arose from inefficiencies that matter when implementing tetris as a model counter before we present the issues we tackle we give a quick overview of tetris the fundamental idea is that rather than working to create the output of a join directly it instead attempts to rule out large sections of the cross product of the 
joined tables initially tetris is given a set of sets whose union is the set of all incorrect solutions to the problem tetris is solving in other words any solution to the problem must not be a member of this union by efficiently querying this set of sets and by adding to it intelligently at various times such as by adding a new exclusion whenever an output point is found tetris is able to rule out increasingly large sets of potential solutions once it has ruled out all possible solutions it terminates and outputs the list of solutions we tackle the following three issues with the theoretical presentation of tetris in at any point tetris needs to keep track of the union of all the potential solutions it has ruled out to do this used a simple trie data structure to keep track of the union however this loses some factors and proves detrimental to practical performance to deal with this we design a new data structure that essentially compresses consecutive layers in the traditional trie into one mega inspired by the used of simd instructions by emptyheaded to speed up implementations of optimal join algorithms we set up the compression in a manner that lends itself to speedup via simd instructions the analysis of tetris in was for data complexity this implies they could afford to use exponential in the size of the join query time algorithm to find an appropriate ordering in which to explore different variables in sat instances we can no longer assume that the number of variables is a constant and hence we can not obtain an optimal ordering using a brute force algorithm we deal with this by designing heuristics that take the structure of tetris into account as mentioned earlier tetris like a dpll procedure performs a sequence of resolutions and theoretically it can store the outcomes of all the resolutions it performs however for practical efficiency we use a heuristic to decide which resolution results to cache and which ones to discard our experimental results are promising on some natural sat benchmarks based on counting number of occurrences of small subgraphs in a large graph which we created our implementation of tetris is at least two orders of magnitude and in most cases more than three orders of magnitude faster than the standard model counters sharpsat cachet and dsharp we also compared tetris with these model counters on standard sat benchmarks where tetris was either comparable or at most slower theoretical implications while this paper deals with an experimental validation of the theoretical result from we believe that it highlights certain theoretical questions that are worth investigating by the database community we highlight some of our favorite ones that correspond to each of our three main contributions extending tetris beyond join queries as our work has shown tetris can be used to solve problem beyond the original natural join computation recently the optimal join algorithms were shown to be powerful enough to solve problems in host of other areas such as csps of which maxsat is a prominent example probabilistic graphical models and logic also see the followup work the beyond results in have so far seemed more of a theoretical novelty however given that this paper demonstrates the viability of tetris in practice this work opens up the tantalizing possibility of extending the theoretical results of tetris to problems captured by such a result even for maxsat would be of interest in practice computing orderings efficiently as mentioned earlier the theoretical results for 
tetris assumes that the required ordering among variables can be computed in exponential time however for applications in sat as well as other areas such as probabilistic graphical models assuming the question in the item above can be answered we need to compute orderings that are approximately good in polynomial time thus a further avenue of theoretical investigation is to come up with a polynomial time algorithm to compute the ordering and to prove some guarantees on the loss of performance from the case where tetris has access to the optimal ordering some of the heuristics developed in our paper might prove to be good starting points for this investigation we would like to point that that the importance of efficiently computing variable orderings has been studied a lot in ai and database literature some of the very recent work on generalized hypertree decompositions which are well known to be equivalent to variable elimination orderings could potentially be useful towards this goal tradeoff recent results on optimal algorithms to compute natural joins and to compute joins with functional dependencies all focus exclusively on time complexity however as highlighted by our work being more prudent with space usage in fact benefits actual performance this point was also indirectly highlighted in where it was shown that resolution schemes that did not cache their intermediate results are strictly less powerful than those that do in the context of computing the natural join however we believe that a systematic theoretical study of the tradeoff between time and space needed to compute the natural join is an attractive route to pursue we will begin in section by introducing the fundamental concepts necessary to understand both sat problems and details on tetris itself all while giving a example of how tetris would handle a toy example from there we will move into section an analysis of our major contributions afterwards we will continue with our experimental results in section then we will discuss related work in the field in section background in this section we will introduce the concepts necessary to understand how tetris functions introduce the concept of resolutions and walk through how tetris would handle a simple input sat and boxes we begin by defining several key terms and ideas recall that a sat problem consists of a series of boolean variables x x x n joined together in a series of and and or clauses problems are generally presented in the conjunctive normal form cnf a simplification wherein the entire formula is written as a series of ands over a set of disjunctive clauses one such example would be x x x a solution or satisfying assignment to a sat problem is an assignment of true or false to each of the variables such that the boolean formula is satisfied that is that all clauses are satisfied next we will consider the idea of boxes which is how our algorithm will interpret sat problems each box is an structure in n where n is the number of variables in the original sat problem we will define this set as the output space that is all potential outputs will be elements of this set each of our boxes exists within this hypercube and along each dimension has the value the value or extends along the full length of the edge the reason for this is simple corresponds to false to true and the length of the edge to both henceforth we will use to refer to edges with length we thus form the following definition definition box notation a box takes the form b b n where each b i t f observe that 
from these definitions we can consider every assignment to be a box this will be important later then our goal will be to find the set of points within the output space that are not contained see definition by any boxes any such point will be termed an output point and the goal of an algorithm working on these boxes is to find all such output points definition containment a box b is said to contain another box c if for all points p n such that p c it is true that p equivalently the box b b n contains the box c c n if for all i b i c i or b i however there is one key difference between these two representations each clause in a cnf formula is essentially a subproblem wherein at least one variable s assignment must match its value in the clause for the assignment to possibly be satisfying but with boxes the exact opposite is true if an assignment matches the value for the boxes on all dimensions we reject the assignment in other words if we consider a geometric visualization of these boxes any and all assignments that fall within a box are rejected hence our next step is to devise a means by which to convert any given sat problem in cnf form to the boxes format that tetris can understand as follows from our above observation the most important step is simply the negation of the cnf formula the rest is all bookkeeping for the exact algorithm see algorithm algorithm conversion from cnf to boxes for each cnf clause do negate the clause set all to f and all x j to t set all variables not present in the clause to insert into the database see definition let us consider the following toy example cnf problem example x x x x x our first step is to negate each clause which will give us a disjunctive normal form dnf x next we will convert to boxes by replacing a variable with t and its negation with f and add all missing variables as after conversion our three clauses become f t and f f x x f x t x x f f figure our starting boxes and the corresponding sat clauses while boxes are technically speaking strictly the corners of what we depict as the boxes we depict them with the edges and surfaces drawn for the purpose of visual clarity at this point it is time to insert the boxes into our data structure let us list the fundamental operations the data structure must be able to perform definition tetris data structure the tetris data structure shall be able to perform the following operations insert input is the box to be inserted no output contains input is the box we are seeing if the structure contains output is the containing box see definition getallcontainingboxes input is the box we are seeing if the structure contains output is the set of all containing boxes we will return to the details of the data structure implementation in section resolution we then come to the concept of resolution a key aspect of tetris and most sat solvers resolution can be defined over both cnf clauses and boxes let us begin with the former let us consider two clauses in our cnf example once again specifically x and x we see that these are two very similar clauses they differ only in that the x term is negated in one and not the other therefore we can resolve these two clauses by removing the x term and then taking the or of all remaining variables in this case this gives us x we then remove the original two clauses from the cnf problem and insert this new clause in its place this is a significant simplification similarly we can resolve any two clauses such that there is exactly one pivot point by which we mean a variable 
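whose role will be made precise in a moment. before going further, the conversion routine above and its toy example are small enough to sketch in code; this is a minimal sketch, assuming clauses arrive DIMACS-style as lists of signed variable indices, and the helper names are ours rather than the paper's.

```python
# A minimal sketch of the CNF-to-boxes conversion (the algorithm above), assuming
# clauses are given DIMACS-style as lists of signed variable indices
# (positive = x_j, negative = not x_j). A positive literal becomes 'f' in the box,
# a negated literal becomes 't', and variables absent from the clause become '*'.

def clause_to_box(clause, num_vars):
    box = ['*'] * num_vars
    for lit in clause:
        box[abs(lit) - 1] = 'f' if lit > 0 else 't'
    return tuple(box)

def cnf_to_boxes(clauses, num_vars):
    return [clause_to_box(c, num_vars) for c in clauses]

# e.g. (x1 v x2) and (x2 v -x3) over three variables; the exact clauses of example 1
# are not fully legible above, so these are illustrative:
print(cnf_to_boxes([[1, 2], [2, -3]], 3))   # [('f', 'f', '*'), ('*', 'f', 't')]
```

with the conversion in hand, back to resolution: the pivot point is the variable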
that appears in both clauses but is negated in one and not the other for instance looking back at our example we can also resolve x with x x to form x x in this case we would not be able to remove the original two clauses but we would have gained information let us now formally define this process definition resolution on clauses two clauses x i x i x i m v and y j y j y j l i i j j i j n can be resolved if and only if there exists exactly one variable v the pivot point such that v x and y the resolution of the two clauses is x i x i x i m y j y j y j m since boxes are simply another representation of the same problem it follows that resolution can be performed on boxes as well first we will require that there must exist exactly one variable on which one box is true and the other box is call this the pivot variable in the output set this variable to then for each other variable if it is t in one or both boxes set it to t in the output box if it is f in one or both boxes set it to f in the output box and if both variables are then the resolution of the two is also we see two possible resolutions in our example the resolution of f and t two coplanar and parallel edges is the square as depicted in figure and the resolution of the askew edges t and f f is the edge f as depicted in figure for a formal definition of the resolution operator henceforth see definition figure the resolution of f and t on the vertex x is the square this is equivalent to x x resolved with x being the clause x this is exactly analogous to the requirement that we resolve on a pivot point in the clause version figure the resolution of t and f f on the vertex x is the edge f this is equivalent to x resolved with x x being the clause x x definition resolution on boxes two boxes b b n and c c n can be resolved if and only if there exists exactly one i such that b i is true and c i is false or in the resolved box a a i each a j j i is equal to b j c j where is defined as follows t t f f t f t t f f t f is undefined observe that resolution on boxes and resolution on clauses are identical lemma resolution on boxes with the additional restriction that exactly one variable must be true in one box and false in the other is exactly equivalent to resolution on sat clauses proof let x i i m and y j y j l i i j j w her e i and j n be the clauses we are resolving assume wlog that i j is the pivot point then the resolved clause is x i x i m y j y j l the boxes equivalent to our starting to clauses are b n and c n where b k f if x k x b k t if x and otherwise and c k is defined similarly with respect to y the resolution of these boxes a is then defined as a n where a k if k i j and a k b k c k otherwise now let us calculate the box equivalent of the output of the resolution t it can be shown that t k if k i j t k x k y k if k i k j and k i t k x k if k i and k j t k y k if k j and k i and t k if k i and k j inspection with the above definition of reveals that t is exactly equivalent to a since the same problem with arbitrary equivalent inputs produced equivalent outputs the two operations must be equivalent tetris introduces one additional restriction on resolution definition resolution on boxes in tetris two boxes b and c can be resolved if and only if there is exactly one spot i such that b i t and c i f or and for all j i b j c j in other words we will demand that that the pivot variable be the final variable therefore while tetris will perform the resolution of f and t figure it will not perform the resolution of t and f f figure we 
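can make both the general operator and this restriction concrete with a short sketch; it follows our reading of the two definitions above, with boxes held as tuples over {'t', 'f', '*'} as before, and is not the paper's own code.

```python
# Resolution on boxes: exactly one position must have 't' in one box and 'f' in the
# other (the pivot); that position becomes '*', and every other position is joined
# with x + '*' = x and x + x = x. Tetris additionally requires the pivot to be the
# last non-'*' variable of both boxes, with all other positions equal.

def join(a, b):
    if a == '*':
        return b
    if b == '*' or a == b:
        return a
    return None                                 # 't' joined with 'f' is undefined

def resolve(b, c):
    pivots = [i for i, (x, y) in enumerate(zip(b, c)) if {x, y} == {'t', 'f'}]
    if len(pivots) != 1:
        return None                             # not resolvable
    return tuple('*' if i == pivots[0] else join(x, y)
                 for i, (x, y) in enumerate(zip(b, c)))

def last_index(box):
    """Location of the last non-'*' variable, or -1 for the all-'*' box."""
    return max((i for i, v in enumerate(box) if v != '*'), default=-1)

def tetris_resolve(b, c):
    k = last_index(b)
    if k >= 0 and k == last_index(c) and {b[k], c[k]} == {'t', 'f'} and b[:k] == c[:k]:
        return resolve(b, c)
    return None                                 # the restricted operator refuses

print(resolve(('f', 't', '*'), ('f', 'f', '*')))        # ('f', '*', '*')
print(resolve(('t', 'f', '*'), ('f', 'f', '*')))        # ('*', 'f', '*')
print(tetris_resolve(('t', 'f', '*'), ('f', 'f', '*'))) # None: pivot is not the last variable
```

comparing the two calls at the end of the sketch, we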
see then that the ordering of the variables determines whether or not a resolution is even possible this makes determining the global ordering of the variables a key issue as mentioned earlier as theoretical implication which we will address later in section in general tetris performs resolution on pairs of recently found boxes let k be the location of the last variable in a box b then b k must be either true or false if it is false we will store the box for future use if it is true then we will take this box b and resolve it with the stored box with the same value for k whose last variable was false by doing so we will guarantee the production of a box where the last n variables have the value for more details on how this works along with the reasoning for why such pairs can always be found see section tetris for now let us return to our example when we last left off we were just inserting the three clauses into our data structure which was loosely defined in definition for a formal definition of the database and details for how it allows the set of boxes tetris knows about to be quickly and efficiently queried see section for now one can simply assume it to be a structure additionally we can resolve the first two boxes while leaving the third untouched as in figure therefore the database will contain exactly the boxes and f f see figure furthermore we will prepare an empty array of boxes l of size n which will be used later the purpose of this array is to store and retrieve boxes that we wish to resolve with other boxes now that we have our database established it is time to perform tetris proper the basic idea here is very simple we will pick a point p in the output space which we will call the probe point recall that this point is itself a box we then determine whether or not any box in the database contains this point if one does we will store this box in an additional data structure referred to as the cache which functions identically to the main database and probe a new point if no box contains p we will list the point as a solution and furthermore add this point into the cache along the way we will perform resolution in order to create new and larger boxes this process continues until the entirety of the output space is covered by a single box at which point we must have found every output point and are done algorithm has the details it should be noted that this algorithm was originally presented recursively in here we present it iteratively both for the purposes of speed and because this allows for backtracking in other words we can backtrack more than one layer at a time algorithm advance box b probe point p note p is a global variable while b contains p do if the last variable of p is f then set that variable to t return to the previous branching point and take the right or true branch else while the last variable of p is t do set that variable to return to the most recent level where we branched left set the last variable of p to t branch right here replace all after this variable with f repeatedly branch left now let us consider how this algorithm behaves with regards to our earlier example we pick as our first probe point f f giving us the situation illustrated in figure we first scan our local cache c for any boxes that contain this point however since this is the first probe point the cache is trivially empty next we scan the database d contains all the boxes corresponding to the clauses in the original sat algorithm general tetris for sat establish variable ordering build 
the database d using algorithm c l an empty array of size n this array is implicit in p f f while c do if b c p is nonempty then advance b p advance see algorithm the probe point past b else if a p is nonempty then for all boxes b a do c b advance b p advance the probe point past b else add p to the output there is no containing box so p is an output point c p advance p p advance the probe point past itself k the location of the last variable in b the variable ordering if b k f then l k b store the most recent box for a given depth else r b l k resolve this box with the corresponding box c r problem the database just so happens to contain two containing boxes for reasons that will become clear shortly the operation will choose to output we insert the box into c our next task is to advance the probe point until it lies beyond our box to do this we proceed according to algorithm the idea is to think of the set of all possible probe points as a tree that we are performing a search on with f representing paths and t representing rightbranching paths we will continue along this search until we find a point not covered by the most recently discovered box this takes us to f f note that if the database had fetched f we would not have been able to advance the probe point as far finally we insert our containing box into the array l at location since only the first variable is for future use we know to do insertion here rather than trying to resolve with a box because the value of that first variable is false this takes us to the situation depicted in figure again we scan c this time for the probe point f f and again we find no containing box in c so we scan d once again and find the containing box f f we then insert this box into the cache and advance the probe point to f t this time although our containing box features a at the first location we determine the location in l into which we will insert based on the location of the last variable so we insert it into l this time we find no containing boxes in either the cache or the database therefore we have found an output point see figure for an illustration we add f t to our output set then add the box f t to our cache which marks the point as found at this juncture we find that the last variable is at location but this time it is true therefore probe point p cache c database d probe point map figure the initial state of the database d cache c and the location of the first probe point p which will search the output space in a manner that can be tracked using the map on its right d is simply the union of all the boxes we created from the initial sat problem while p is set to an initial value of f f l is currently empty probe point p cache c database d probe point map figure the state of the database cache and probe point after the first round of the algorithm our probe point found the box so it added this box to c then p was advanced until it reached a point not contained by this box which turned out to be f f l is the box which is the box corresponding to the orange vertex in the map the array is empty elsewhere we will extract the box we stored previously at l and resolve it with this box we know that this will be a legal resolution because we are scanning the output space in a fashion this means that when retreating from a right branch the box containing the corresponding left branch must be able to contain the right branch if the final variable were set to instead it follows that this final variable must be the one and only pivot point between the 
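a resolution is due, exactly as the pseudocode prescribes. before following that step, here is a sketch of the advance routine that has been moving the probe point throughout this walkthrough; it is a minimal reading of the algorithm above, with the probe point held as a fixed-length list over {'t', 'f', '*'} and contains_point spelling out the containment definition.

```python
# A sketch of the probe-point advance (algorithm "advance"), assuming 'f' is the left
# branch and 't' the right branch of the depth-first scan of the output space.

def contains_point(box, point):
    """Containment: the box covers the point iff every entry is '*' or equal."""
    return all(b == '*' or b == p for b, p in zip(box, point))

def last_set(p):
    """Index of the last non-'*' entry, or -1 if there is none."""
    return max((i for i, v in enumerate(p) if v != '*'), default=-1)

def advance(box, p):
    """Move probe point p just past the region covered by box; None if space is exhausted."""
    n = len(p)
    while contains_point(box, p):
        k = last_set(p)
        if k < 0:
            return None
        if p[k] == 'f':
            p[k] = 't'                          # take the right (true) branch
        else:
            while k >= 0 and p[k] == 't':       # retreat past exhausted right branches
                p[k] = '*'
                k -= 1
            if k < 0:
                return None                     # the whole output space is covered
            p[k] = 't'                          # branch right at the backtrack point
        for i in range(last_set(p) + 1, n):     # then repeatedly branch left
            p[i] = 'f'
    return p

p = ['f', 'f', 'f']
print(advance(('f', '*', '*'), p))              # ['t', 'f', 'f']: skips all points with x1 = f
```

returning to the walkthrough: since the newly found box ends in true,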
two therefore we can and do perform this resolution in this example it is f t resolved with f f this outputs the box f we furthermore store this box in l at location since this box ends with false at that index we continue forth with probe points t f and t t neither will be found in either d or c and are therefore output points both again have their final variables at index with t f being inserted into l at that index and then t t recovering that box so it can resolve with it to form the box t this time the output of our resolution ends with true so we recover the box at index in l f and take the resolution of these two boxes giving us once again this ends in t so we can resolve it with the box we found back at the beginning that has been waiting in slot to form the box this box completely covers the output space therefore the algorithm knows that it has found all possible output points and terminates see figure for an illustration probe point p cache c database d probe point map figure the state of the database cache and probe point after the first output point is found at f t in getting here we first found the box f f and then probed the output point after finding the output point that box was resolved with the aforementioned box to produce f which was also added to c note that while the box f could be produced at this juncture tetris will not do so l contains the orange dot l contains f the purple dot l is empty output points cache c database d probe point map figure the state of the database cache and probe point after the entire output space has been covered with each further output point discovered boxes were added to the cache which produced a chain of resolutions that eventually resulted in the production of the box note that there is no longer a probe point as there is nothing left to probe our improvements here we will discuss the major additions introduced into tetris in order to handle cnf inputs and increase practical efficiency these include a new data structure work on heuristically determining a global variable ordering and selectively caching only certain boxes data structure and compression for its data structure the original tetris paper simply states that a trie will suffice to achieve asymptotic runtime guarantees while this is true a simple trie still leaves much to be desired attempts to implement tetris in such a simple matter produced a system significantly slower than other model counters our contribution is to design a novel system of tries that takes advantage of the nature of the problem space to improve both runtime and memory usage data structure description as described above the database must allow each variable to store three values false true and therefore the immediate approach is to use as the base data structure however when sat instances routinely have hundreds of variables this results in an extremely deep problem space that requires a lot of time to probe our next step then is to compress multiple layers into a single node that can be queried in a single instruction to this end we will first come up with a means to enumerate all possible boxes definition let be a bijective function from the set of boxes onto the integers example one such way is as follows let be assigned false true then for each box b b n its numerical value is b b b n this gives us a bijective ternary numeration from there we observe that there is exactly one box for which n is namely the empty box three boxes with n equal to nine boxes with n equal to boxes with n equal to and 
boxes with n equal to we will require that a single node within the trie be able to record a box of any of these lengths additionally it must be able to store the children of all possible boxes with n equal to therefore by compacting four logical layers into a single layer within the database the result is a trie that can store possible boxes the sum of the five aforementioned values and can have possible children we will refer to this collection of variables as a cluster definition cluster a cluster is the set of variables that the database handles in a single operation by default each cluster contains variables of course this raises a new issue when checking if the database contains a given input string x there can exist up to sixteen children of x that must be checked since each t or f can be replaced by a and an even greater number of boxes that could be contained in this cluster may contain the input string this creates the need for a way to quickly and efficiently determine if a containing box exists in this cluster and to create the list of children to be searched trie here we take inspiration from emptyheaded a relational database engine and utilize simd as an example let us consider the simplified version of the data structure that contains only two layers and with n suppose that was known to be a box and that t f and t f are boxes that to be found necessitate traversing into child clusters we can see this depicted in figure now we will determine whether or not this data structure contains the box t f since each cluster contains two layers clusters with depth will look at only the first two variables in the input box to determine input therefore let us consider the t using we can find in a lookup table the two bitstrings corresponding to this input the first lists the set of boxes that if they exist within the cluster would contain t and is the second line of figure the second bitstring does much the same for the set of children that if truncated to two variables contain t additionally a cluster stores two more bitstrings boxes which marks the boxes contained by the cluster and children which marks the child nodes of the cluster the boxes bitstring is specifically the top line of figure boxes is marked for the box which is equivalent to while children has bits marked corresponding to the box prefixes t and t this is exactly the mapping used in our implementation it follows that the intersection of these two pairs of bitstrings is the set of boxes and child prefixes present in the data structure that contain the input a single and operation suffices to calculate it cluster cluster layer layer layer t t layer t f t f layer figure the simd operation shown as the associated clusters with each box marked by a blue box each layer corresponds to a variable with layer being empty because all boxes have only three variables and checkmarks mark the boxes that are found to be containing boxes the central branch is created from the box in the left branch created by t f a child is created corresponding to the t and then the box is inserted in the child cluster at t f in the right branch created by t f we similarly create a child corresponding to the t and then inserting t f in the child cluster we have drawn the database here as a to see it as a single flattened cluster see figure let us inspect our output in this particular example we find that we have matched the box and the child with prefix t at this point we are faced with an interesting choice suppose for the sake of argument that the 
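bit-level bookkeeping deserves to be spelled out before we settle that choice. in the sketch below plain python integers stand in for the SIMD registers, and the digit assignment '*' -> 0, 'f' -> 1, 't' -> 2 is one admissible instance of the bijection, not necessarily the one used in the implementation; in the real data structure the candidate masks come from a precomputed lookup table rather than being rebuilt per query.

```python
# A sketch of the cluster-level numbering and of the single-AND containment check,
# using Python integers as bit masks. A cluster covers four variables, so it can hold
# prefixes of length 0..4 (1 + 3 + 9 + 27 + 81 = 121 possible boxes) and has 3^4 = 81
# possible children.

from itertools import product

DIGIT = {'*': 0, 'f': 1, 't': 2}

def prefix_index(prefix):
    """Bit position of a box prefix (length 0..4) in a cluster's 121-bit boxes mask."""
    offset = sum(3 ** k for k in range(len(prefix)))   # all shorter prefixes come first
    value = 0
    for v in prefix:
        value = 3 * value + DIGIT[v]                   # ternary numeration
    return offset + value

def child_index(prefix4):
    """Position of a length-4 prefix in a cluster's 81-bit children mask."""
    value = 0
    for v in prefix4:
        value = 3 * value + DIGIT[v]
    return value

def candidate_masks(inp4):
    """Masks of every stored box / child prefix that could contain the 4-variable input."""
    box_mask, child_mask = 0, 0
    for length in range(5):
        for prefix in product(*[('*', v) for v in inp4[:length]]):
            box_mask |= 1 << prefix_index(prefix)
            if length == 4:
                child_mask |= 1 << child_index(prefix)  # up to sixteen children to chase
    return box_mask, child_mask

def containing_in_cluster(boxes_mask, children_mask, inp4):
    box_cand, child_cand = candidate_masks(inp4)
    return boxes_mask & box_cand, children_mask & child_cand   # the single AND step

# a cluster that stores only the box ('f', '*', '*', '*'):
boxes_mask = 1 << prefix_index(('f', '*', '*', '*'))
hits, child_hits = containing_in_cluster(boxes_mask, 0, ('f', 't', 'f', 't'))
print(bin(hits), bin(child_hits))   # hits has the bit for ('f','*','*','*') set; no children
```

now for the deferred choice: suppose some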
child eventually leads to some containing box for the input now if this is a check for containment the algorithm can only return the one box that it considers the best containing box which one then should the algorithm choose in general it will choose the box in this example it will choose the box the reason for this is simple the more in a box the more space it covers and the more space it covers the more output space it can cover additionally when those are at the tail end of a box we know that we can immediately advance the probe point considerably on the other hand when the are in the middle the algorithm must these boxes every time it scans a point that is contained within this hypothetical box this repetition is costly we would rather avoid it now let us consider how our algorithm would handle this input if we were simply using traditional tries in other words consider what would happen if our cluster size were as in a standard this algorithm would query the input box for its first value find that it is f and know that it could check both the branch and the f branch since are generally to be preferred it would take the branch then it would match the t value to the t branch and proceed to the third layer where it would match the f value to the f branch and find a containing box which it will return in this way it has taken us three comparisons to find a containing box furthermore the box found in the version is of lower quality we would have preferred to find the box instead on the other hand let us consider what occurs when using full clusters figure in this case a single operation immediately finds all three containing boxes therefore we have database s boxes input t containing boxes figure the algorithm takes the logical and of the stored bitstrings top with the bitstring corresponding to the input row in order to produce the list of containing boxes and children bottom the boxes bitstring contains a single to mark the box the second line contains all the potential boxes that would contain t in addition to the boxes that do exist these include t and many more the output of the operation has bits set corresponding to the box formed the traditional variation both in terms of number of comparisons and in terms of the quality of the output in summary each box is stored as a single bit in a vector with the last bits unused as is a record of whether or not a given child exists a lookup table is used to find the vectors corresponding to the possible outputs then the former two are concatenated as are the latter two and all are compared using a single and operation it can be seen that the output of this operation must be the intersection of the potential containing boxes and children found from the lookup table and the ones that actually exist hence by calculating b for some box b we can quickly find a box containing b if it exists and if it does not we can quickly generate the exact list of children to examine additionally in practice it turns out that there is a certain value that shows up far more often than any other the sequence notably this is the very sequence that accepts every single possible input string therefore in the event that a layer contains only this child and no words it is in fact possible to skip the entire layer in practice this produces significant savings on both computational costs and memory usage which contributes towards theoretical implication let us now formally define the data structures and its algorithms used in our implementation definition data structure 
the data structure is a trie where each node of the trie is called a cluster the top level consists of a pointer to the root cluster which is the cluster covering the first four variables in the ordering and can perform the following operations insert algorithm contains algorithm and getallcontainingboxes algorithm definition cluster each cluster in the database contains two bitstrings boxes and children bitstrings identifying the sets of boxes and children respectively along with an integer depth that informs the cluster of its depth the operation i on a bitstring calculates the intersection of that bitstring with a bitstring retrieved from a lookup table that lists the set of boxes or children that contain or potentially contain respectively the box with value i setting i to sets the specific bit referring exactly to that box each cluster corresponds to four layers of a standard trie definition index of a box the index of a box b i nd ex b is the location of the last variable in that box cluster t f t f figure the clusters in figure flattened out into a single node with four variables to a cluster this is how the node would be stored in the actual database note that it is far more compressed than when drawn out as a trie as in figure this cluster would have a depth of have three bits set in boxes and no bits set in children example the index of the box b t f i nd ex b is algorithm insert given box b to insert b c c n where each c i here is a cluster consisting of b i b i k i nd ex b call insertcluster b k algorithm called on the root cluster algorithm insertcluster input a cluster t with depth depthto insert on box b to insert b c c n where each c i here is a cluster consisting of b i b i and the location of the final cluster k if depth k then if c depth is nonempty then return if a containing box of b is already in the data structure stop else set c depth to else if c depth then create a child cluster at location c depth with depth depth set c depth to call insertcluster on the cluster indexed as c depth return in the insert algorithms algorithms and we recursively traverse clusters until we find the appropriate location in the data structure and then set the appropriate bit to along the way we will check for containing boxes and immediately cease operation if one is found furthermore if a child cluster that contains the box we are inserting or that contains a cluster along the path to that cluster does not exist we create it algorithm contains given box b to find a containing box of b c c n where each c i here is a cluster consisting of b i b i out put containscluster b algorithm called on the root cluster return out put algorithm containscluster input cluster t with depth depththat is being checked for a containing box box b to check containment of b c c n where each c i here is a cluster consisting of b i b i if f c depth is nonempty there is at least one box in the intersection then o mi n f i nd ex x return o else if f c depth is nonempty there is at least one child in the intersection then for all children k f do a b scan these in order of increasing index if a is nonempty then return a else return in the contains algorithms algorithms and we check the database to see if it contains any box that contains some box b therefore we traverse along clusters in our path first checking to see if any containing boxes exist if we find one we return that box immediately and cease checking further if none exists we will perform a search of the children that could potentially contain a 
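match, descending cluster by cluster. a sketch of that recursion follows, reusing prefix_index, child_index and containing_in_cluster from the earlier sketch; the Cluster layout and the lowest-bit tie-break are our simplifications, not the implementation itself.

```python
class Cluster:
    """One node of the trie: two bit masks plus children keyed by child_index."""
    def __init__(self, depth=0):
        self.depth = depth
        self.boxes_mask = 0
        self.children_mask = 0
        self.children = {}

def lowest_set_bit(mask):
    return (mask & -mask).bit_length() - 1

def cluster_slice(box, depth):
    """The four variables this cluster is responsible for, padded with '*'."""
    chunk = tuple(box[depth * 4: depth * 4 + 4])
    return chunk + ('*',) * (4 - len(chunk))

def contains(cluster, box):
    """Return a containing box as (cluster depth, bit position), or None."""
    inp4 = cluster_slice(box, cluster.depth)
    box_hits, child_hits = containing_in_cluster(
        cluster.boxes_mask, cluster.children_mask, inp4)
    if box_hits:
        # lower bit positions belong to shorter prefixes, i.e. boxes ending in '*'s,
        # which is the kind of containing box the text says should be preferred
        return (cluster.depth, lowest_set_bit(box_hits))
    while child_hits:
        k = lowest_set_bit(child_hits)
        child_hits &= child_hits - 1             # drop that child and keep scanning
        found = contains(cluster.children[k], box)
        if found is not None:
            return found
    return None
```

in the worst case every candidate child must be visited before the query finds a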
containing box this process continues until either a containing box is found or else the search space is exhausted and it is determined that no containing box exists algorithm getallcontainingboxes given box b to find all containing boxes of b c c n where each c i here is a cluster consisting of b i b i out put getallcontainingboxescluster b algorithm called on the root cluster return out put algorithm getallcontainingboxescluster input cluster t with depth depth that is being checked box b to find all containing boxes of b c c n where each c i here is a cluster consisting of b i b i o if f c depth is nonempty there is at least one box in the intersection then o else if f c depth is nonempty there is at least one child in the intersection then for all children k f do a b o else return o the getallcontainingboxes algorithms algorithms and are similar to the contains algorithms there are however two key differences first while contains terminates as soon as it found a single containing box getallcontainingboxes will continue secondly it returns the set of all containing boxes rather than just one hence the name in all other regards it behaves exactly as contains does global variable ordering up until this point we have simply been assuming that all boxes must order their variables in exactly the same order as they appear in the original sat formulas in other words in each box b b must correspond to x b must correspond to x and so on however this does not need to be the case we can reorder the variables and by doing so can greatly improve the runtime of our system the original tetris paper cites the importance of the variable ordering however it assumes that there exists an exponential in n time algorithm to compute the optimal variable ordering while this is justifiable in the context of join problems where n is small compared to the size of the database in sat problems it is unacceptable furthermore as computing the optimal ordering is we can not hope to improve upon this result indeed even approximating the ordering is intractable nevertheless initial experimentation with tetris made clear how impactful this choice can be a slight variation in ordering can result in a large difference in runtime thus we turned to various heuristics and intuitions in order to find a quick and effective means to generate an ordering that works well in practice and thereby contribute to theoretical implication first let us define a few terms that we will use in our discussion of various ordering strategies definition degree the degree of a variable is the number of clauses of which the variable is a part example in example x x x x has degree x has degree and x has degree definition closeness variables x i and x j are said to be close if there exists a clause that includes both x i and x j the fewer terms in the clause the closer the two variables are said to be specifically the closeness of two variables x x is equal to divided by the size of the smallest clause containing both variables minus example in the clause x x x and x would have a closeness of and in the clause x x x x and x would have a closeness of if both clauses were part of the same sat problem x and x would still have a closeness of because the first clause has a smaller size than the second definition interconnectedness the interconnectedness of a cluster c ic c is the sum of the ness values for each pairs of variables in the cluster example using example if x x and x compose a cluster its interconnectedness would be since x x x x and x x in 
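code these quantities are straightforward to compute from the clause list; one plausible reading of the closeness definition, whose normalizing constant is garbled above, is one divided by the size of the smallest shared clause minus one, and that is what the sketch assumes.

```python
# Degree, closeness and interconnectedness, assuming clauses are lists of signed
# variable indices. The 1 / (min shared clause size - 1) normalization is our reading
# of the garbled definition above.

from itertools import combinations

def degree(clauses, x):
    return sum(1 for c in clauses if x in map(abs, c))

def closeness(clauses, x, y):
    sizes = [len(c) for c in clauses if x in map(abs, c) and y in map(abs, c)]
    return 1.0 / (min(sizes) - 1) if sizes else 0.0

def interconnectedness(clauses, cluster_vars):
    return sum(closeness(clauses, x, y) for x, y in combinations(cluster_vars, 2))

clauses = [[1, 2, 3], [2, 3, 4, 5]]
print(degree(clauses, 2),                        # 2
      closeness(clauses, 2, 3),                  # 0.5 (smallest shared clause has 3 literals)
      interconnectedness(clauses, [2, 3, 4]))    # roughly 1.17
```

moving from these local quantities to a whole ordering, in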
general we note two strategies that we have found improve the performance of a given global variable ordering a first in tetris handling variables early tends to improve performance to see the reason for this we must consider the nature of the algorithm scattering variables throughout the ordering forces the algorithm to branch frequently which means that when testing for inclusion or for containing boxes the algorithm must scan all possible branches this is highly inefficient instead we focus the branches as much as possible to the beginning with the hope being that as the algorithm progresses down from there most layers will have few if any divergent choices if this is true then the inclusion check can be handled quickly let us proceed to see why this is so first we will introduce an example of why placing variables early in the ordering proves effective let us consider the following sample problem example consider the sat formula x x the equivalent box problem using the ordering x x x would begin with database d containing the boxes f f f x has degree x has degree and x has degree now let us consider how the algorithm would attack this problem if we used this naive first ordering which would be x x x we immediately note two things first there are no boxes that have for the final variable this means that the algorithm will never find a box that allows it to skip multiple probe points unless it can use resolution to create a new box that happens to have that property in fact that will not occur additionally we can consider all possible probe points and track how many comparisons the algorithm would need to make on each point assuming that each cluster only covers a single variable instead of four we can see that the algorithm will always have to calculate the set intersection for the cluster of depth will have to perform the set intersection for the cluster with depth on of probe points and will have to perform the set intersection for the cluster with depth on of probe points let us contrast this with the ordering which moves x to the front and x to the back to give us the ordering x x x now our database contains the boxes f and f this time we do have a box with a at the back specifically f therefore after the probe point f f finds this box the algorithm will advance past f t entirely additionally while we still have to perform a set intersection for the cluster with depth of the time and of the time for the cluster with depth we only have to perform the set intersection at depth of the time and all of these numbers ignore how we skipped one of the probe points entirely while this is of course a very simple example this illustrates the principles that cause the strategy to be effective in larger datasets b local interconnectedness as a direct result of the system described in section if a box has multiple variables within the same cluster they can all be recovered with a single operation therefore maximizing the interconnectedness of these blocks provides an advantage to illustrate let us consider the following example example x x x x with each cluster containing variables rather than if we use the naive strategy of keeping the variables ordered the first cluster contains x and x while the second cluster contains x and x therefore both clusters have interconnectedness and we find that all boxes transcend a cluster boundary in other words it will always take at least two comparisons to find either of these boxes however if we had gone with the ordering x x x x instead the box 
corresponding to x x would be entirely contained within the first cluster and the box corresponding to x x would be entirely contained within the second cluster therefore each box would be entirely contained within a cluster and each cluster would have had an interconnectedness of and each box could have been recovered with only a single comparison this saves a large number of comparisons over the long term ordering algorithms two major methods were employed in order to achieve these aims the first was a descending degree sort while this only directly achieved goal a in practice it did an acceptable job with goal b additionally we constructed three variations on this method the first naive degree descent is where we simply order the variables according to their degree using algorithm algorithm naive degree descent ordering given a set of variables v and for each variable v its degree v d o sort v on v d descending return o the second optimally grouped degree descent forms all possible groups of four variables finds the greatest possible interconnectivity among these groups and then selects from all groups with the greatest interconnectivity on the basis of the combined degree of the group of four using algorithm while this proved effective it is a slow algorithm with a runtime of n algorithm optimally grouped degree descent ordering given a set of variables v and for each variable v its degree v d o g all possible sets of four variables v i v j v k v l from v while g is nonempty do max ic max g ic g determine the maximum possible interconnectedness of all remaining groups p x max g ic g v d of the groups with max interconnectedness select the group for which the sum of the degrees of all variables is the greatest o o x append this group to the ordering for v x do for y g do if v y then g g y remove each grouping that contains one of the variables in the group that was selected return o this necessitated the creation of the third subtype the heuristically grouped degree descent ordering this ordering works in groups of four when creating a group the first node chosen is the highestdegree remaining variable then for each of the remaining three variables the algorithm picks the variable with the highest interconnectedness to the nodes already chosen for this group of four breaking inevitable ties based on degree the result algorithm is an algorithm that can compute its ordering significantly faster than the optimal ordering while tetris run on this ordering runs is competitive with the optimal ordering algorithm heuristically grouped degree descent ordering given a set of variables v and for each variable v its degree d v i x o while v is nonempty do if i then x y where de g r ee y max d v else p max ic max v x calculate which variable has the best interconnectedness with the already chosen variables x y where d y max d v break ties based on degree o o x x x i i if i then i x return o additionally we employ the treewidth tree decomposition which was introduced in in essence the idea here is to minimize the width of the search tree in our domain this corresponds to increasing the locality and local interconnectedness of variables this naturally did a very good job with interconnectivity while a decent job with placing variables early we also experimented with the minfill ordering described in this ordering sets the elimination order such that the node to be eliminated is the node whose removal makes the smallest impact on the overall graph while this ordering has proved effective in similar 
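elimination-based settings. of the grouped orderings above, the heuristically grouped degree descent is the one worth sketching; the code below reuses degree and closeness from the earlier sketch, is our reading of the algorithm, and resolves ties arbitrarily where the text leaves them open.

```python
# A sketch of the heuristically grouped degree descent ordering. Groups of four are
# built greedily: the first member is the highest-degree unused variable, and each
# further member maximizes total closeness to the members already chosen, breaking
# ties by degree.

def heuristic_grouped_order(clauses, variables):
    remaining = set(variables)
    order = []
    while remaining:
        group = [max(remaining, key=lambda v: degree(clauses, v))]
        remaining.remove(group[0])
        while remaining and len(group) < 4:
            best = max(remaining, key=lambda v: (
                sum(closeness(clauses, v, g) for g in group),   # interconnectedness first
                degree(clauses, v)))                            # then degree as tie-break
            group.append(best)
            remaining.remove(best)
        order.extend(group)
    return order

clauses = [[1, 2], [2, 3, 4], [4, 5], [5, 6, 1]]
print(heuristic_grouped_order(clauses, [1, 2, 3, 4, 5, 6]))
```

as for minfill, despite its track record in such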
applications we found it to perform poorly with tetris in table we can see how these various orderings performed in practice on representative graphbased and benchmarks for instance while the treewidth sort outperformed all others on the dataset on the wikivotes dataset created using a snap dataset see section the ordering caused tetris to timeout notably we see that the heuristically grouped degree descent takes only slightly longer to process the input compared to the naive degree descent but significantly less time than the optimally grouped degree descent however the runtime does not suffer significantly when going from the optimal ordering to the heuristic one selective insertion while the original tetris paper calls for the insertion of every box created through the resolution process to be inserted into the database this proved to be inefficient in practice very frequently this will result in a huge increase in the number of branches that the algorithm must scan while trying to find the output point without notably improving the quality of the containing boxes found therefore we only insert those boxes that contain a suitably high percentage of the best results generally come from requiring slightly less than percent of the layers to be composed of entirely in table we have posted the runtime for the dataset showing the relationship between the number of we require in storing a box and the runtime of tetris performance suffers at extreme dataset wikivotes runtime with d ifferent o rderings ordering load time seconds naive degree descent heuristically grouped degree descent treewidth minfill optimally grouped degree descent naive degree descent heuristically grouped degree descent treewidth minfill optimally grouped degree descent runtime seconds timeout table the performance of various ordering schemes on two datasets as one can see no ordering does best on both datasets indeed the best ordering on one is the worst on the other insertion ratio see section was set to for these tests settings with optimal performance resulting from an insertion ratio of close to hence with regard to theoretical implication we find that by decreasing space complexity we furthermore improve runtime our experimental results here we compare cnftetris that is tetris designed to solve cnf problems with other model counters in order to compare and contrast their ability to tackle model counting problems a model counting problem simply put is given a cnf formula output the number of satisfying solutions to that formula since tetris was originally designed to handle database joins these are more natural problems for the algorithm to solve than the corresponding sat problems which are to simply determine whether or not any solution exists while the model counting problem allows for a solver to simply find the number of solutions without finding the solutions themselves cnftetris does in fact output all of the solutions while this admittedly poses a disadvantage compared to the solvers we are comparing against for some datasets cnftetris runs faster in spite of this we compare our results with those of the sharpsat dsharp and cachet model counters due to their recognition as model counters all tests were performed using a single thread on an processor with of ram additionally we include two types of datasets the first is derived from join problems on graphs these are the sort of problems that tetris was originally designed to solve as such cnftetris does a very good job on them the second set is a selection of 
standard model counting benchmarks from various competitions held over the past several years most model counters have been trained to solve such problems so they serve as an apt second set of benchmarks for cnftetris to compete against i nsertion r atio vs runtime ratio time seconds table a comparison of insertion ratios with the time to solve the dataset for these tests we used the treewidth ordering similar behavior was observed from all ordering strategies graph results here we will compare and contrast how various solvers performed on model counting problems created from graphs dataset generation the cnf graph datasets were created using the publicly available snap datasets these are graph datasets that is each consists of a set of vertices and a set of edges connecting those vertices each of these datasets is a natural problem some arose from social networks while others are anonymized data from other corners on the internet we can then use this data to run various queries for instance we can determine how many triangles exist in the graph our goal then is to convert these problems into an equivalent cnf problem so that we can use cnftetris and the other model counters to solve them to do this each vertex is first assigned a unique binary encoding using log n bits we furthermore increase the number of bits such that there are log n bits times the size of the data structure being looked for in the graph for instance if we are performing the triangle query on the dataset there will be log n bits used in the encoding henceforth let k represent the size of this query each of these bits will correspond to a variable in the cnf encoding of the problem in essence each of these repetitions runtime query base graph wikivotes facebook wikivotes facebook firstr cnftetris on various d atasets sharpsat dsharp cachet loadtime runtime runtime timeout timeout timeout timeout speedup runtime timeout timeout timeout timeout speedup runtime timeout timeout timeout timeout speedup table this table shows the comparative results of various solvers on cnf datasets created using various snap graphical datasets and sat datasets all runtimes are in seconds timeout was set at seconds for these tests we used a insertion ratio of and the heuristic degree descent ordering for cnftetris wikivote contains variables and clauses facebook contains variables and clauses and contains variables and clauses all clause data is for has approximately that number of clauses for the sat datasets has variables and clauses has variables and clauses has variables and clauses has variables and clauses has variables and clauses and has variables and clauses in cnftetris loadtime refers to the time to determine the variable ordering and insert the boxes into the database while runtime is the time to find all satisfying solutions represents a vertex in the triangle next we will encode each absent edge a pair of vertices v and v such that the edge v v e where e is the edge set of the graph as total boolean formulas for the query and k formulas for the query each of these formulas corresponds to one of the three edges of a triangle observe that any possible satisfying solution to the sat problem can not select an edge that does not exist therefore any assignment that matches one of these formulas on all variables must be rejected equivalently any accepting assignment must match at least one variable in the inverse this naturally leads to a cnf definition which is what we create we will repeat this encoding over each possible set of 
vertex pairings such that the vertex is always written before the vertex while adding additional clauses to reject all edges that would be from the vertex to the vertex simplifying resolutions are also performed where possible therefore we have created a cnf problem where for each output to the query in the original problem there exists a solution we can and do use this problem instance as input for both tetris and other model counters let us examine an example instance of this consider the very simple example graph depicted in figure below using the triangle query in order to encode the v v we must first calculate the binary encodings of each of these vertices these are and respectively we then flip all of the bits giving us and then we construct three cnf clauses each of which corresponds to being the first the second or the third edge of the triangle the first will be x with x corresponding to and corresponding to similarly the second will be x and the third will be x additionally we insert clauses forbidding bad orderings of the points in other words we are making sure we do not count v v v and v v v as separate triangles then when tetris run on this cnf input attempts to recover the number of triangles when it uses the probe point assuming naive ordering t t f f t that is the probe point that corresponds to the inverted binary representations of v v and v the three vertices in the triangle let us consider what happens since each of the selected edges does not correspond to the missing edge we know that all three of those clauses must be satisfied and since each edge is in the index order we know that the additional clauses that we added will also accept our input therefore tetris will add this probe point to the output list this continues for all other probe points until tetris has found all triangles figure a sample graph on four vertices used in the above example we will encode the v v as our sat formula along with additional clauses that ensure we only count triangles once which will allow us to run the query as a sat problem results analysis as can be seen in table while all of the other solvers find these problems to be difficult cnftetris solves them quickly queries that take seconds on cnftetris wind up taking hours on the competition with cnftetris running nearly a thousand times faster on some problems this is largely due to the extremely high number of clauses relative to the number of variables along with the fact that these clauses contain a large number of variables these factors are not present in many of the standard sat benchmarks for instance while the average clause in many sat benchmarks contains two or three variables here the average clause has thirty or more and while sat benchmarks rarely have over ten times as many clauses as variables here the system is forced to tackle an environment where the number of clauses is exponentially larger than the number of variables note that all of the solvers we are comparing against use unit propagation techniques in order to count models see section for details because of this the increased number of clauses directly corresponds to increased work for these solvers nongraph results in this section we will discuss how cnftetris performed as compared to other solvers on standard model counting benchmarks about the datasets these datasets are a combination of datasets from the satlib datasets and the samplecount benchmarks for model counting taken from international joint conference on artificial intelligence we chose to use the 
ais datasets for several reasons first each of the datasets terminates in a reasonable amount of time on all solvers allowing us to find interesting comparisons secondly due to the existence of versions of this dataset we can use this as insight into whether or not tetris is scaling efficiently with the size of the dataset additionally we featured the and datasets which gave us insights into our implementation s strengths and weaknesses results analysis as table shows tetris is competitive with dsharp and cachet on many of the datasets indeed a factor of separates us from either solver on all of the ais datasets a difference that engineering work alone can easily overcome while there is significantly more space between it and sharpsat a factor of on average we believe the distance is not insurmountable the largest gaps exist on the and datasets on these cnftetris is roughly a factor of off of the worst of the competition and a factor of over as compared to sharpsat the reason for this is simple both of these datasets contain pure variables in other words there exist variables x such that in all clauses never appears or this is an important piece of information one that can and must be utilized but cnftetris in its current state does not know how to do so however since we know what the problem is we expect to be able to quickly and efficiently attack this issue related work our work builds on tetris as developed by abo khamis et al in in that work the authors introduced tetris as a algorithm for geometrically solving the database join problem this in turn built on work on the minesweeper nprr and leapfrog algorithms of which tetris is a generalization furthermore tetris itself is can be considered a version of the dpll algorithm with clause learning in dpll which is itself an evolution of the earlier dp algorithm a variable is chosen at every stage and assigned to be either true or false the algorithm then uses unit propagation in order to simplify clauses under these assumptions in this techniques after the solver assigns a value to a variable every other clause is inspected to see if this assignment creates a unit clause a clause with only one variable in it and to see if resolutions can be performed this process continues until a conflicting clause that is a clause that is violated by the assignments is found at which point the algorithm is forced to backtrack in the clause learning versions introduced in the solver takes this as an opportunity it determines where it went astray adds a new clause to its cache that is the negation of this errant assignment backtracks to where this took place and then proceeds in the opposite direction the reasoning why cnftetris is a form of this algorithm follows from the aforementioned method of converting from sat clauses to boxes and see algorithm since these two representations are exactly equivalent any operation performed on one representation can be translated into an operation on the other hence every single operation tetris performs on the boxes over its execution must correspond exactly to a set of operations on the original clauses for instance the contains operation matches up with the idea of a conflicting clause when a containing box is found we can consider this as finding a box that rejects the current probe point as a tential output point meanwhile a conflicting clause rejects a potential satisfying assignment in much the same way furthermore over the course of tetris the algorithm tentatively assigns a variable to either true or false 
and then proceeds along with this assumption until a contradiction is found all while learning additional clauses where possible through the resolution process when a containing box is found or synthesized through resolution and we advance the probe point accordingly we are in essence backtracking to the earliest decision point and choosing to go in the opposite direction just as dpll with clause learning does therefore this is exactly the dpll algorithm with clause learning with the added restriction of a fixed global variable ordering tetris additionally utilizes a logic system while similar systems have been utilized in database schemes such as by zaniolo in in these systems the three values are true false and unknown here however the three values we are considering can be summarized as true false and both this causes a number of key differences for instance true unknown is equivalent to unknown while true both is equivalent to true similarly true unknown is equivalent to true while true both is equivalent to both much work has been done in creating sat solvers let us briefly discuss those solvers we are comparing our work against first let us consider cachet this solver was originally released in with minor compatibility updates continuing through the most recent version which came out in next there is sharpsat first released in sharpsat significantly eclipsed contemporary solvers sharpsat has been maintained over time with the most recent release in finally we come to dsharp the most recently released of our three competitors dsharpwas introduced in in order to efficiently compile cnf problems into the decomposable negation normal form language further work allowed it to function as a model counter which is how we utilize it the version we use was released in what all of these solvers have in common including cnftetris is that they have at their core a form of the dpll algorithm with clause learning indeed almost all modern sat and sat solvers do so the differences then come in terms of efficiency each solver uses a different array of techniques in order to effectively cache and recover learned clauses to determine the variable ordering and to identify clause conflicts with cachet the authors focused on adding component caching capabilities on top of an existing sat solver zchaff the theoretical grounds for which were themselves introduced in this caching involved the storing of subproblems in a local cache so that these clauses would not have to be by cachet at a later juncture thereby reducing redundant calculations over the course of the algorithm this can be viewed as analogous to how cnftetris stores learned boxes in a local cache which it checks for containing boxes before examining the original database a subproblem meanwhile could be thought of as a box with a high percentage of variables set to however one key difference here is the nature of the cached components in cachet due to how the algorithm functions it must regularly prune the cache of siblings that would otherwise cause it to undercount the number of models cnftetris in contrast needs to perform no such pruning it will naturally determine the exact number of models without any additional work sharpsat built on the work in cachet while adding new ideas of its own boolean constraint propagation also known as the failed literal rule and unit propagation heuristics are used by sharpsat to identify failed literals with greater efficiency than was done in cachet however by fixing the variable order cnftetris simplifies 
this process ultimately this means that it finds its conflicting boxes in a fundamentally different manner than sharpsat does which provides room for cnftetris to outperform sharpsat dsharp much like how sharpsat built on cachet uses sharpsat as a core component the authors perform a dnnf translation and then use properties of decomposability and determinism to perform model counting though these differences do allow it to outperform more pure dpllbased solvers on some benchmarks since this system still uses sharpsat as a core component it still shares many of the same advantages and disadvantages in comparison to cnftetris as we have seen all of the competing solvers can be viewed as evolutions along a single line while cnftetris does not throw the baby out with the bathwater that is while cnftetris still continues to implement the classic dpll algorithm it does represent a distinct deviation from that line challenging assumptions such as the necessity of allowing a global variable ordering and the much more complex data storage scheme necessary in order to accommodate this while this has necessitated much work in order to implement it has also shown vast promise acknowledgments we would like to thank mahmoud abo khamis hung ngo christopher and ce zhang for very helpful discussions references cachet http cessed international joint conference on artificial intelligence dataset collection http cessed series problems http accessed sharpsat marc thurley https accessed christopher aberger susan tu kunle olukotun and christopher emptyheaded a relational engine for graph processing in proceedings of the international conference on management of data sigmod conference san francisco ca usa june july pages mahmoud abo khamis hung ngo christopher and atri rudra joins via geometric resolutions and beyond in proceedings of the acm symposium on principles of database systems pods pages new york ny usa acm mahmoud abo khamis hung ngo and atri rudra faq questions asked frequently in proceedings of the acm symposium on principles of database systems pods san francisco ca usa june july pages mahmoud abo khamis hung ngo and dan suciu computing join queries with functional dependencies in proceedings of the acm symposium on principles of database systems pods san francisco ca usa june july pages armin biere marijn heule and hans van maaren clause learning sat solvers pages martin davis george logemann and donald loveland a machine program for commun acm july martin davis and hilary putnam a computing procedure for quantification theory journal of the acm jacm rina dechter constraint processing morgan kaufmann publishers san francisco ca usa fischl gottlob and pichler general and fractional hypertree decompositions hard and easy cases arxiv november carla gomes henry kautz ashish sabharwal and bart selman satisfiability solvers in handbook of knowledge representation pages carla gomes ashish sabharwal and bart selman model counting in handbook of satisfiability pages rudolf halin for graphs journal of geometry federico heras javier larrosa and albert oliveras minimaxsat an efficient weighted solver artif intell res jair manas joglekar rohan puttagunta and christopher ajar aggregations and joins over annotated relations in proceedings of the acm symposium on principles of database systems pods san francisco ca usa june july pages jure leskovec and andrej krevl snap datasets stanford large network dataset collection http june marx approximating fractional hypertree width acm trans algorithms april matthew w 
moskewicz conor f madigan ying zhao lintao zhang and sharad malik chaff engineering an efficient sat solver in proceedings of the annual design automation conference pages acm christian muise sheila a mcilraith j christopher beck and eric i hsu dsharp fast compilation with sharpsat in canadian conference on artificial intelligence pages springer hung ngo dung nguyen christopher and atri rudra towards instance optimal join algorithms for data in indexes corr hung q ngo ely porat christopher and atri rudra optimal join algorithms extended abstract in proceedings of the acm symposium on principles of database systems pages acm hung ngo christopher and atri rudra skew strikes back new developments in the theory of join algorithms sigmod record tian sang fahiem bacchus paul beame henry a kautz and toniann pitassi combining component caching and clause learning for effective model counting p marques silva and karem a sakallah new search algorithm for satisfiability in proceedings of the international conference on design pages ieee computer society marc thurley models with advanced component caching and implicit bcp in international conference on theory and applications of satisfiability testing pages springer todd l veldhuizen leapfrog triejoin a simple optimal join algorithm arxiv preprint todd veldhuizen triejoin a simple optimal join algorithm in proc international conference on database theory icdt athens greece march pages carlo zaniolo database relations with null values in proceedings of the acm symposium on principles of database systems pods pages new york ny usa acm
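The ordering-algorithms section above describes the heuristically grouped degree-descent ordering only in prose, so a short sketch may help. The code below is illustrative rather than the authors' implementation: it assumes variables and clauses are given as plain Python collections, and it takes the interconnectedness of a candidate variable with a partially built group to be the number of clauses the candidate shares with the group's members, since the exact measure is not spelled out in the surviving text. Ties on interconnectedness are broken by degree, as described above.

```python
from collections import defaultdict

def heuristic_grouped_degree_descent(variables, clauses, group_size=4):
    """Order variables in groups of `group_size` (the paper works in groups of four).

    variables: iterable of variable ids.
    clauses:   iterable of clauses, each a set of variable ids.
    The degree of a variable is the number of clauses it appears in; the
    interconnectedness of a candidate with a partially built group is taken
    here to be the number of clauses it shares with group members (an
    assumption, since the text does not pin the measure down).
    """
    degree = defaultdict(int)
    clauses_of = defaultdict(set)
    for idx, clause in enumerate(clauses):
        for v in clause:
            degree[v] += 1
            clauses_of[v].add(idx)

    remaining = set(variables)
    ordering = []
    while remaining:
        # first member of each group: the highest-degree remaining variable
        group = [max(remaining, key=lambda v: degree[v])]
        remaining.remove(group[0])
        while remaining and len(group) < group_size:
            group_clauses = set().union(*(clauses_of[g] for g in group))
            # best interconnectedness with the group, ties broken by degree
            best = max(remaining,
                       key=lambda v: (len(clauses_of[v] & group_clauses),
                                      degree[v]))
            group.append(best)
            remaining.remove(best)
        ordering.extend(group)
    return ordering
```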
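The graph-to-CNF construction used for the triangle benchmarks can likewise be made concrete. The variable numbering and DIMACS-style signed literals below are our own illustrative choices; the text above only fixes the principle that each of the k pattern vertices gets ceil(log2 n) Boolean variables and that every absent edge contributes one clause per pair of pattern slots, whose literals are the bitwise negation of the forbidden assignment. The extra clauses that enforce a canonical vertex ordering (so each triangle is counted once) are omitted here.

```python
from itertools import combinations
import math

def absent_edge_clauses(u, v, n_vertices, k=3):
    """Clauses forbidding a *missing* edge (u, v) from being used as any of
    the slot pairs of a size-k pattern (k=3 for the triangle query).

    Each pattern slot is encoded with b = ceil(log2 n) Boolean variables;
    variable (slot, bit) is flattened as slot*b + bit + 1.  This layout is an
    illustrative assumption, not necessarily the one used by cnftetris.
    """
    b = max(1, math.ceil(math.log2(n_vertices)))

    def bits(x):
        # most-significant bit first
        return [(x >> (b - 1 - i)) & 1 for i in range(b)]

    clauses = []
    for s1, s2 in combinations(range(k), 2):   # which two pattern slots
        clause = []
        for slot, vertex in ((s1, u), (s2, v)):
            for i, bit in enumerate(bits(vertex)):
                var = slot * b + i + 1
                # literal is the negation of the forbidden bit assignment
                clause.append(-var if bit == 1 else var)
        clauses.append(clause)
    return clauses

# example: the absent edge (1, 3) of a 4-vertex graph under the triangle query
# yields one clause per pair of triangle slots, three clauses in total
print(absent_edge_clauses(1, 3, n_vertices=4))
```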
8
neural affine grayscale image denoising sep sungmin cha taesup moon college of information and communication engineering sungkyunkwan university suwon korea tsmoon abstract we propose a new grayscale image denoiser dubbed as neural affine image denoiser neural aide which utilizes neural network in a novel way unlike other neural network based image denoising methods which typically apply simple supervised learning to learn a mapping from a noisy patch to a clean patch we formulate to train a neural network to learn an affine mapping that gets applied to a noisy pixel based on its context our formulation enables both supervised training of the network from the labeled training dataset and adaptive of the network parameters using the given noisy image subject to denoising the key tool for devising neural aide is to devise an estimated loss function of the mse of the affine mapping solely based on the noisy data as a result our algorithm can outperform most of the recent methods in the standard benchmark datasets moreover our method can nicely overcome one of the drawbacks of the supervised learning methods in image denoising namely a supervised trained model with a mismatched noise variance can be mostly corrected as long as we have the matched noise variance during the step introduction image denoising is one of the oldest problems in image processing and various denoising methods have been proposed over the past several decades wavelet shrinkage field of experts based approach wnnm epll and csf etc in this paper we propose a new image denoiser dubbed as neural affine image denoiser neural aide which utilizes neural network in a novel way the method is inspired by the recent work in discrete denoising in which a novel were devised to train a denoiser solely based on the noisy data we extend the approach to the data case and devise a novel estimated loss function based on the noisy data that is an unbiased estimate of the true mse by investigating the devised estimated loss function we formulate to train a neural network to learn an affine mapping that gets applied to a noisy pixel based on its context such formulation enables both supervised training of the network from the labeled training dataset and adaptive of the network parameters using the given noisy image subject to denoising our experimental results extensively show how we made subtle design choices in developing our algorithm furthermore we show that neural aide significantly outperforms strong baselines in the standard benchmark test datasets notations and problem setting we denote as the clean grascale image and each pixel xi is corrupted by an independent additive noise to result in a noisy pixel zi zi xi ni i where the continuous noise variables ni s are independent not necessarily identically distributed nor gaussian over i and e ni e for all i moreover as in the standard processing in grayscale image denoising we normalize both xi s and zi s with and treat them as real numbers importantly following the universal setting in discrete denoising we treat the clean image as an individual image without any probabilistic model and only treat z as random generally a denoiser can be denoted as z denoting that each reconstruction at location i is a function of the noisy image z the standard loss function used for the grayscale image denoising to measure the denoising quality is the error mse denoted as x z n x xi z n where x x is the conventionally the mse is compared in the using the peak psnr defined as z estimated loss function 
for the affine denoiser in this paper we consider the denoiser of the form z a z zi b z for each i in which z stands for the entire noisy image except for zi namely the reconstruction at location i has the affine function form of the noisy symbol zi but the slope and the intercept parameters a z and b z of the affine function can be functions of the surrounding pixels hence separete parameters can be learned from data for each location before presenting more concrete form of our denoiser we first consider the following lemma lemma consider a case z x n with e n and e n and suppose a denoiser has the form of z az b then l z a b z az b is an unbiased estimate of ex x z in which x x and ex notation stands for the expectation over z given that the clean symbol is x remark note while the true mse x z can be evaluated only when the clean symbol x is known the estimated loss l z a b can be evaluated soley with the noisy symbol z the affine mapping a b and the noisy variance thus l z a b plays a key role in adaptively learning the neural affine denoiser as shown in the next section proof by simple algebra we have the following equalities ex x z ex az b az b ex az b ex z az b z ex z az b ex l z a b in which follows from ex z x follows from ex z and replacing with ex z and follows from simply rearranging the terms thus we have the lemma from lemma we can also show that for the denoisers of the form z a z z exi xi z z exi l zi a z b z holds since a z and b z become constant given z and the noise is independent over i the exi in stands for the conditional expectation of zi given the clean symbol xi and the noisy symbols z note the estimated loss function similar to has been also used to the filtering problem neural aide neural affine image denoiser neural affine denoiser our proposing neural affine image denoiser neural aide considers the denoiser of the form z a zi b i n n in which stands for the noisy image patch or the context of size k k surrounding zi that does not include zi thus the patch has a hole in the center then we define a neural network g w k that takes the context as input and outputs the slope and intercept parameters a and b for each location i we denote w as the weight parameters of the neural network which will be learned by the process described in the later sections as it will get clear in our arguments below the specific form of our denoiser in enables learning the parameters by both supervised learning with labelled training data and adaptive with the given noisy image note in we put a constraint that the slope and intercept of the affine function the output of the network should be nonnegative while such constraint would appear apparent in our experimental results it also makes an intuitive sense the denoiser tries to estimate xi from zi which are both in the interval hence the nonnegative slope and intercept parameters should suffice the nonnegativity constraint is realized in the neural network by applying f x log ex as the activation function for the final output layer of the neural network the rest of the network architecture is the ordinary neural network with relu activation functions as depicted in figure there are two sharp differences with our neural aide and other neural network based denoisers first the other schemes take the full noisy image patch including the center location as input to the network and the network is trained to directly infer the corresponding clean image patches in contrast neural aide is trained to first learn an affine mapping based on figure 
the arthe noisy image patch with a hole the context of zi then the learned chitecture of neural mapping is applied to zi to obtain the recostruction such difference aide enables the development of the estimated loss function in lemma and the adaptive training process described in the next section the principle of learning a mapping first and applying the mapping to the noisy symbol for denoising or filtering has been utilized in second unlike the other schemes in which the reconstructions should somehow be aggregated to generate the final denoised image neural aide simply generates the final reconstructions thus there is no need for a step to aggregate multiple number of reconstructed patches which simplifies the denoising step furthermore since the neural network of neural aide only has to estimate the two parameters of the affine mapping from each context neural aide can make much more efficient usage of the data with a simpler model compared to the networks in other schemes that need to estimate the full k adaptive training with noisy image we first describe how the network parameters w can be adaptively learned from the given noisy image z without any additional labelled training data that is by denoting each output element of the neural network g w for the context as g w a and g w b we can define an objective function for the neural network to minimize as ladaptive w z x l zi g w g w n by using the estimated loss function l z a b defined in lemma the training process using is identical to the ordinary neural network learning start with randomly initiallized w then use backprogagation and variants of sgd for updating the parameters the formulation may seem similar to training a neural network for a regression problem namely zi which are solely obtained from the noisy image z can be analogously thought of as the label pairs for the supervised regression but unlike regression which tries to directly learn a mapping from input to the target label our network learns the affine mapping for each context and apply it to zi to estimate the unobserved clean symbol xi the fact that only depends on the given noisy image z and the assumed makes the learning adaptive the rationale behind using l z a b in is the following as shown in the estimated loss is an unbiased estimate of the true expected given the context therefore minimizing may result in the network that produces the slope and intercept parameters that minimize the true mse for the reconstrunctions of the corresponding affine mappings this formulation of training neural network parameters solely based on the noisy data is inspired by the recent work in discrete denoising once the training is done we can then denoise the very noisy image z used for training by applying the affine mapping at each location as that is by denoting as the learned parameter by minimizing the reconstruction at location i by neural aide becomes neural aide z g zi g supervised training and adaptive while the formulation in gives an effective way of adaptively training a denoiser based on the given noisy image z the specific form of the denoiser in makes it possible to carry out the supervised of w before the adaptive training step that is we can collect abundant clean images from the various image sources world wide web and corrupt them with the assumed additive noise with variance in to generate the correspoding noisy images and the labelled training data of size n d n in stands for the noisy image patch of size k k at location i that includes the noisy symbol and 
is the clean symbol that correspond to now the subtle point is that unlike the usual supervised learning that may directly learn a mapping from to we remain in using the neural network defined in and learn w by minimizing n x g w g w lsupervised w d n note x x as before the training process of minimizing is again done by the usual backpropagation and the variants of sgd once the objective function converges after sufficient iteration of weight updates we denote the converged parameter as then for a given noisy image to denoise z we can further update adaptively for z by minimizing ladaptive w z in starting from that is we adaptively until ladaptive w z converges then denoise z with the converged parameter as this capability of adaptively the supervised trained weight parameter is the unique characteristic of neural aide that differentiates it from other neural denoisers experimental results we compared the denoising performance of the proposed neural aide with several denoising methods including mlp epll wnnm and csf data and experimental setup for the supervised training we generated the labelled training set using images available in public datasets out of images images are taken from set in the berkeley segmentation dataset and the remaining images are taken from pascal voc dataset for the pascal voc images we resized them to match the resolution of the berkeley segmentation dataset we corrupted the images with additive gaussian noise and tested with multiple noise levels namely that is we built separate training set of size for each noise level the total number of training data points n in in each dataset was thus about million we evaluated the performance of the denoisers with standard test images barbara boat couple hill house lena man montage and peppers and standard berkeley images our network had fully connected layers with nodes in each layer which showed the best result among a few tried models relu was used as activation functions and we used adam as the optimizer to train the network for the supervised training we trained the network up to epochs and halved the learning rate every epochs starting from for the adaptive we also trained up to epochs and halved the learning rate every epochs starting from we did not use any regularization methods while training moreover for the context data we subtracted from the values to make the input to the network get centered around note zi that the affine mappping gets applied to in still is in the original scale for all our experiments we used keras version with tensorflow version backend and nvidia s gpu geforce with cuda library version training neural aide in this section we systematically show the reasoning behind choosing the context size k the empirical justification of the nonnegative contraint on the outputs of g w and the validity of the combination of the supervised with adaptive adaptive training with noisy image we first carried out the adaptive training solely with the given noisy image as described in section that is for each given noisy image we randomly initialized the weight parameters of the neural network and trained with the objective function after training the image was denoised as figure a shows the psnr results on the standard test images with varying k values and output activation functions linear f x x positive f x log ex in and sigmoid f x the noise level was from the figure we can see that the adaptive training alone can still result in a decent denoiser although some psnr gap exists compared to the as shown in 
table we see that k tend to be the best context size for adaptive training moreover the choice of the output activation functions turns out to be important and more discussion is given on the activation function in the next section supervised training and adaptive since the limitation of the adaptive training alone was apparent we then carried out the supervised training in section that is we took the images from the berkeley segmentation dataset and trained the network with varying k values as shown in figure b denoising of the noisy image was done identically as before by applying the learned affine mapping to each noisy pixel note in this case we only carried out the experiments with the linear activation function we can see that the supervised training can result in a much higher psnr values than the adaptive training already very close to the also the performance seems to get saturated around k so in all our experiments below we used k encouraged by this result we moved on to adaptively the weight parameters by minimizing the objective function for each image initialized with the parameters learned by supervised learning this is when the subtle issue regarding the activation function we describe below comes the difference among the models were not huge a adaptive training random initialization b supervised training training images figure adaptive and supervised training results on the standard test images up in figure we trained supervised learning models with linear and positive output activation functions using images for then adaptively the parameters for given noisy image and montage image figure a d show the distributions of the slope a and intercept b paramters that each model outputs for the given image and i shows the change of psnr value in the process of adaptive from figure a and e we can see that when trained with supervised learning with linear output activation function the values of a and b all lie in the interval however when for each image figure b and f show that many negative a values are produced for the linear activation this can be readily seen by examining the form of l z a b in which does not hinder a from having negative values when there is no constraint as shown in figure i such negative a values for the affine mapping sometime does not have big effect on the process and the final denoising performance as in the case of in which the psnr increases significantly from the supervised model by however as in the case of montage in figure i we suspect such negative a values sometimes hurt the denoising performance greatly in contrast when we put the nonnegativitiy contstraint on a and b in the neural network we observe a stable process as is observed in figure d h and i thus the results of neural aide from now on all uses the positive activation function figure shows the adaptive process of the standard images for the supervised model was trained with the full training set of images from the figures we can see that the learning is done appropriately and the psnr does improve with we also tested with the sigmoid activation and the result was more or less the same a s b ft e montage s f montage ft c s d ft g montage s h montage ft i psnr values during adaptive figure distribution of a and b values for and montage after supervised training s and ft for linear lin and positive pos activation functions the distributions obtained for are from the models at epoch i psnr values during a psnr b objective function figure psnr and objective function value during for the 
standard images quantitative evaluation standard images table summarizes our denoising results compared to the recent on the standard images for various noise levels we show both mean and standard deviation of psnr values for the baseline methods we downloaded the codes from the authors webpages and ran the code on the noisy images thus the numbers can be compared fairly mlp and could run only on selected noise levels stands for the neural aide that is only supervised trained with images and are models after supervised learning is the best model in terms of epoch chosen based on psnr thus not practical and is the model that is chosen with a heuristic rule stop when the training loss becomes smaller than otherwise until epochs from the table we can see that significantly outperforms all other baselines on average except for wnnm the difference of mean psnr between wnnm and is almost negligible and tend to have smaller variance in terms of psnr than wnnm by comparing and we can definitely see that adaptive is effective also when the noise level is low the improvement gets larger furthermore by comparing with mlp which is another neural network based denoiser and uses much more data points million exmample and larger model we can confirm that our model more efficiently uses the data psnr mean std mean std mean std mean std mean std mlp epll wnnm table psnr comparsions on the standard benchmark images for figure a shows the competitive comparison between and the baselines that is the figure plots the number of images of which the psnr of is better than the baseline methods we can see that our method mostly outperforms all baselines competitively including wnnm one of the main drawbacks of mlp is that the neural networks have to be trained separately for all noise levels and the mismatch of significantly hurts the denoising performance while the supervised training of neural aide is also done in the similar way figure b c show that the adaptive can be very effective in overcoming such limitation figure b shows the psnr results of the mismatched models before each row is normalized with the psnr of the matched case the diagonal element and the psnr values are we clearly see the sensitivity of psnr in the mismatch of as the values show significant gaps compared to the diagonal values in each row on the other hand figure c shows the psnr values of s that have mismatched supervised models but are adaptively with the correct s we can clearly see that the psnr gaps of the mismatched supervised models can be significantly closed by adaptive which gives a significant edge over mlp in a competitive comparison b psnr of c psnr of figure a competitive comparison of with baselines b psnr of mismatched c psnr of with mismatched but with correct standard berkeley images table shows the psnr results on the standard berkeley images from we can clear see that again outperforms the baseline methods including wnnm with significant margins mlp epll wnnm table psnr comparisons on the standard berkeley images concluding remarks we devised a novel neural network based image denoiser neural aide the algorithm is devised with a different principle from the other methods as a result we show that a very simple adaptive affine model which neural aide learns differently for each pixel can significantly outperform many strong baselines also the adaptive of neural aide can successfully overcome the mismatch problem which is a serious drawback of other neural network based methods as a future work we would like to more 
thoroughly carry out the experiments in even noisier regime also since our algorithm does not require the noise to be gaussian only the additivity of the noise and are assumed we would try to other types of noise laplacian noise furthermore extending our framework to noise such as multiplicative noise would be another interesting direction finally theoretical anayses of our method based on information theory and learning theory would be another direction worth pursuing references dabov foi katkovnik and egiazarian image denoising by sparse transformdomain collaborative filtering ieee trans image processing simoncelli and adelson noise removal via bayesian wavelet coring in icip roth and black field of experts ijcv mairal bach ponce sapiro and zisserman sparse models for image restoration in iccv gu zhang zuo and feng weighted nuclear norm minimization with applicaitons to image denoising in cvpr zoran and weiss from learning models of natural image patches to whole image restoration in iccv schmidt and roth shrinkage fields for effective image restoration in cvpr moon min lee and yoon neural universal discrete denosier in nips weissman ordentlich seroussi verdu and weinberger universal discrete denoising known channel ieee trans inform theory moon and weissman universal fir mmse filtering ieee transactions on signal processing burger schuler and harmeling image denoising can plain neural networks compete with in cvpr xie xu and chen image denoising and inpainting with deep neural networks in nips weissman ordentlich weinberger and merhav universal filtering via prediction ieee trans inform theory martin fowlkes tal and malik a database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics in iccv kingma and ba adam a method for stochastic optimization in iclr
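The central device of the paper is the estimated loss of the lemma, which the extraction above has garbled; reconstructed from the surrounding definitions it reads L(Z, a, b) = (Z - (aZ + b))^2 + sigma^2 (2a - 1), an unbiased estimate of the true squared error (x - (aZ + b))^2 whenever Z = x + N with E[N] = 0 and E[N^2] = sigma^2. A minimal NumPy check of this unbiasedness, with arbitrary illustrative values for x, sigma^2, a and b:

```python
import numpy as np

def estimated_loss(z, a, b, sigma2):
    """Unbiased estimate of the true MSE E[(x - (a*z + b))**2] that can be
    evaluated from the noisy pixel z alone (the lemma as we reconstruct it
    from the surrounding text):
        L(z, a, b) = (z - (a*z + b))**2 + sigma2 * (2*a - 1)
    """
    return (z - (a * z + b)) ** 2 + sigma2 * (2.0 * a - 1.0)

# quick Monte-Carlo check of unbiasedness for a single clean pixel
rng = np.random.default_rng(0)
x, sigma2 = 0.37, 0.01        # clean value in [0, 1], noise variance
a, b = 0.6, 0.15              # an arbitrary nonnegative affine mapping
z = x + rng.normal(0.0, np.sqrt(sigma2), size=1_000_000)

true_mse = np.mean((x - (a * z + b)) ** 2)
est_mse = np.mean(estimated_loss(z, a, b, sigma2))
print(true_mse, est_mse)      # the two averages agree up to Monte-Carlo error
```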
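Building on that loss, the adaptive fine-tuning objective sums L(z_i, a(C_i), b(C_i), sigma^2) over all pixels of the given noisy image, where the network maps the k x k context with a hole to a nonnegative slope and intercept through a softplus output layer, f(x) = log(1 + e^x). The Keras sketch below is a rough reconstruction under stated assumptions: the context size, hidden width, depth and learning rate are placeholders (the concrete numbers were lost in extraction), and `contexts` and `noisy_centres` are hypothetical arrays holding the flattened contexts and the corresponding noisy centre pixels.

```python
import tensorflow as tf
from tensorflow import keras

K_CTX = 7 * 7 - 1                 # placeholder context: k x k patch, centre removed
SIGMA2 = (25.0 / 255.0) ** 2      # example noise variance after [0, 1] scaling

def make_affine_net(hidden=512, depth=9):
    """Fully connected network g_w(C_i) -> (a_i, b_i), softplus keeping both
    outputs nonnegative; the widths/depth here are placeholders."""
    inp = keras.Input(shape=(K_CTX,))
    h = inp
    for _ in range(depth):
        h = keras.layers.Dense(hidden, activation="relu")(h)
    out = keras.layers.Dense(2, activation="softplus")(h)   # (a, b) >= 0
    return keras.Model(inp, out)

def adaptive_loss(z_noisy, ab):
    """Mean of the estimated per-pixel loss; the 'target' is the noisy centre
    pixel z_i itself, so no clean data is required."""
    a, b = ab[:, 0], ab[:, 1]
    z = z_noisy[:, 0]
    return tf.reduce_mean(tf.square(z - (a * z + b)) + SIGMA2 * (2.0 * a - 1.0))

model = make_affine_net()
model.compile(optimizer=keras.optimizers.Adam(1e-4), loss=adaptive_loss)
# hypothetical usage, starting either from random weights or supervised weights:
# model.fit(contexts, noisy_centres[:, None], epochs=..., batch_size=...)
# ab = model.predict(contexts)
# x_hat = ab[:, 0] * noisy_centres + ab[:, 1]   # per-pixel affine denoising
```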
1
convolutional neural networks for histopathology image classification training vs using networks brady morteza shivam and kimia lab university of waterloo on canada mathematics and computer science department amirkabir university of technology tehran iran oct bwkieffe tizhoosh we explore the problem of classification within a medical image based on a feature vector extracted from the deepest layer of convolution neural networks we have used feature vectors from several structures including networks transfer learning to evaluate the performance of deep features versus cnns which have been trained by that specific dataset as well as the impact of transfer learning with a small number of samples all experiments are done on kimia dataset which consists of histopathology training patches in tissue texture classes along with test patches for evaluation the result shows that networks are quite competitive against training from scratch as well does not seem to add any tangible improvement for to justify additional training while we observed considerable improvement in retrieval and classification accuracy when we the inception structure image retrieval medical imaging deep learning cnns digital pathology image classification deep features vgg inception i ntroduction we are amid a transition from traditional pathology to digital pathology where scanners are replacing microscopes rapidly capturing the tissue characteristics in digital formats opens new horizons for diagnosis in medicine on on hand we will need to store thousands and thousands of specimens in large physical archives of glass samples this will be a relief for many hospitals with limited space on the other hand acquiring an image from the specimen enables more systematic analysis collaborations possibilities and last but not least the diagnosis for pathology arguable the final frontier of disease diagnosis however like any other technology digital pathology comes with its own challenges imaging generally generates gigapixel files that also require digital storage and are not easy to analyze via computer algorithms detection segmentation and identification of tissue types in huge digital images pixels appears to be a quite daunting task for computer vision algorithms looking at the computer vision community the emergence of deep learning and its vast possibilities for recognition and classification seems to be a lucky coincidence when we intend to address the obstacles of digital pathology diverse deep architectures have been trained with large set of images imagenet project or faces in the wild database to perform difficult tasks like object classification and face recognition the results have been more than impressive one may objectively speak of a computational revolution accuracy numbers in mid and high have become quite common when deep networks trained with millions of images are tested to recognize unseen samples in spite of all progress one can observe that the applications of deep learning in digital pathology hast not fully started yet the major obstacle appears to be the lack of large labelled datasets of histopathology scans to properly train some type of neural networks a requirement that may still be missing for some years to come hence we have to start designing and training deep nets with the available datasets training from scratch when we artificially increase the number of images data augmentation is certainly the most obvious action but we can also use nets that have been trained with millions of images to extract 
deep features as a last possibility we could slightly train finetune the nets to adjust them to the nature of or data before we use them as feature extractors or classifiers in this paper we investigate the usage of deep networks for kimia via training from scratch feature extraction and the results show that employing a network trained with images may be the most viable option ii background over recent years researchers have shown interest in leveraging techniques for digital pathology images these images pose unique issues due to their high variation rich structures and large dimensionality this has lead researchers to investigate various image analysis techniques and their application to digital pathology for dealing with the large rich structures within a scan researchers have attempted segmentation on both local and global scales for example researchers have conducted works on the segmentation of various structures in breast histopathology images using methods such as thresholding fuzzy clustering and adaptive thresholding with varying levels of success when applying these methods to histopathological images it is often desired that a computer aided diagnosis cad method be adopted for use in a image retrieval cbir system work has been done to propose various cbir systems for cad by multiple groups recently hashing to appear in proceedings of the intern conf on image processing theory tools and applications ipta nov montreal canada methods have been employed for image retrieval among the hashing methods kernelized and supervised hashing are considered the most effective more recently radon barcodes have been investigated as a potential method for creating a cbir yi et al utilized cnns on a relatively small mammography dataset to achieve a classification accuracy of and an roc auc of whereas handcrafted features were only able to obtain an accuracy of currently there is interest in using networks to accomplish a variety of tasks outside of the original domain this is of great interest for medical tasks where there is often a lack of comprehensive labeled data to train a deep network thus other groups have leveraged networks trained on the imagenet database which consists of more than million categorized images of classes these groups have reported a general success when attempting to utilize networks for medical imaging tasks in this study we explore and evaluate the performance of a cnn when on imaging data specifically when used as feature extractors with and without fine tuning for a digital pathology task for the process we do not use all of them to emulate cases where no large dataset is available besides more extensive training may destroy what a network has already learned the values of each patch were subsequently normalized into the patches were finally downsized to a to be fed into the cnn architecture following the above steps we first obtained patches from each scan based purely on the homogeneity threshold then randomly sampled patches from each class leading to the much smaller training set of patches a selection of patches from the training set can be viewed within fig as fig shows the testing samples are relatively balanced in kimia dataset whereas the training set is rather imbalanced different size and frequency of specimens are the main reasons for the imbalance b accuracy calculation the accuracy measures used for the experiments are adopted from these were chosen so that results between the papers could be compared there are ntot testing patches psj that belong to 
sets psi s i with s looking at the set of retrieved images for an experiment r the accuracy can be defined as iii data s et the data used to train and test the cnns was the kimia consisting of whole scan images wsis manually selected from more than scans depicting diverse body parts with distinct texture patterns the images were captured by tissuescope le bright field using a na lens for each image one can determine the resolution by checking the description tag in the header of the file for instance if the resolution is then the magnification is and if the resolution is then the magnification is the dataset offers training patches and manually selected test patches of size the locations of the test patches in the scans have been removed whitened such that they can not be mistakenly used for training the color staining is neglected in kimia dataset all patches are saved as grayscale images the kimia dataset is publicly x ntot the accuracy can be defined as x with the total accuracy is defined as by incorporating both the accuracy measurements the resulting problem becomes much more difficult when attempting to obtain acceptable results iv m ethods each experiment was run using the architecture for both the and networks as provided in the keras python package utilizing a network we then analyze the effectiveness of the network when using it just as a feature extractor and when transferring the network some of its weights to the medical imaging domain patch selection to create the kimia dataset each scan is divided into patches that are pixels in size with no overlap between patches background pixels very bright pixels are set to white and ignored using a homogeneity measure for each patch the homogeneity for selection criterion is that every patch with a homogeneity of less than is ignored the high threshold ascertains that no patch with significant texture pattern is ignored from the set of patches each scan had randomly sampled patches are selected to be used a protocols when a deep network the optimal setup varies between applications however using a network and applying it to other domains has yielded better performing models it was decided that only the final convolutional block block within and the final two inception blocks within would be as in a single fully connected layer http http to appear in proceedings of the intern conf on image processing theory tools and applications ipta nov montreal canada fig a selection of patches from each training scan within the kimia dataset the patches are pixels in size or from top left to bottom right to to appear in proceedings of the intern conf on image processing theory tools and applications ipta nov montreal canada fig instance distribution for training set left and testing set right of kimia of size followed by an output layer of size was chosen to replace the default fully connected layers when this was found to give better results the optimizer we used follows the logic from where the learning rate chosen was very small and the momentum used was large both of which were selected to ensure no drastic changes within the weights of the network during training which would destroy what had been already learned the keras data augmentation api was used to generate extra training samples and the network was trained for a total of epochs after which the accuracy was no longer changing with a batch size of softmax classification layer the fully connected layers were pretrained on bottleneck features and then attached to the convolutional layers 
and training on the final two inception blocks was then performed the resulting networks transfer learned or and were then used to classify the test patches the class activation mappings cams for the network on randomly selected test patches can be viewed in fig r esults the results of our experiments are summarized in table it can be stated the results for and are quite similar training from scratch using a network as feature extractor and a network are all delivering comparable results for kimia whereas the results for are similar with the model outperforming the feature extractor as produced the best results and minimally updating the weights of a network is not a time consuming task one may prefer to utilize it however one may prefer using to training from scratch and a net as it requires no extra effort and produces similar results with a linear svm cnn as a feature extractor by using the provided implementation of the specified architectures within keras the network was first used as a feature extractor without any feature extractor or and the last fully connected layer of the network prior to classification was used extracted to be used a feature vector as networks are trained in other domains very different image categories and hence can not be used as classifier we used the deep features to train a linear support vector machine svm for classification the python package as well as libsvm were used to train svm classifiers with a linear kernel both numpy and scipy were leveraged to manipulate and store data during these experiments vi d iscussions it was surprising to find out that simply using features from a network trained on images see fig can deliver results comparable with a network that with considerable effort and resources has been trained from scratch for the domain in focus here histopathology as well such simpler approach was even able to achieve a noticeable accuracy increase of in overall performance for kimia dataset another surprising effect was that transfer learning via for was not able to provide any improvement compared to extracting deep features from a network without any change in the learned of its weights whereas with the improvement was immediate perhaps the most obvious reaction to this finding is that if we had enough samples millions of histopathological images and if we would use proper computational devices for efficient training then cnn would perhaps deliver the cnn as a classifier the proposed network was then to the kimia dataset using the keras library the convolutional layers were first separated from the top fully connected layers the training patches were fed through the model to create a set of bottleneck features to initially the new fullyconnected layers these features were used to initialize the weights of a fully connected mlp consisting of one dense relu layer and a softmax classification layer next the fully connected model was attached to the convolutional layers and training on each convolutional block except the last block was performed to adjust classification weights similarily for the network the fully connected layers were replaced with one dense relu layer and a to appear in proceedings of the intern conf on image processing theory tools and applications ipta nov montreal canada table comparing the results training form scratch reported in using deep features via a network with no change and classification after a network the best scores are highlighted in bold scheme train from scratch features the net features the net approach 
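As a concrete companion to the feature-extractor protocol described in the methods section above, the sketch below shows one way to pull deep features from a pre-trained VGG16 and classify them with a linear SVM in scikit-learn. The layer name "fc2", the 224 x 224 input size and the grayscale-to-RGB replication are standard VGG16 conventions rather than details confirmed by the surviving text, and `train_patches`, `train_labels`, `test_patches` are hypothetical arrays standing in for the KimiaPath24 patches described earlier.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.models import Model
from sklearn.svm import LinearSVC

# pre-trained VGG16; "fc2" is the last fully connected layer before the
# ImageNet classifier and gives a 4096-dimensional deep feature per patch
base = VGG16(weights="imagenet", include_top=True)
extractor = Model(inputs=base.input, outputs=base.get_layer("fc2").output)

def deep_features(patches_gray):
    """patches_gray: (n, 224, 224) grayscale patches scaled to [0, 255].
    VGG16 expects 3-channel input, so the grayscale plane is replicated."""
    x = np.repeat(patches_gray[..., None], 3, axis=-1).astype("float32")
    return extractor.predict(preprocess_input(x), verbose=0)

# hypothetical usage on the KimiaPath24 patches:
# feats_train = deep_features(train_patches)
# clf = LinearSVC().fit(feats_train, train_labels)
# predictions = clf.predict(deep_features(test_patches))
```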
fig sample images from imagenet project one may object to using features that have been learned from such images in order to classify highly sensitive images of histopathology for medical diagnosis however experiments with kimia dataset shows that features extracted from these images are expressive enough to compete against networks trained by histopathology images from scratch source http a deep network architecture not well suited to the problem or an overly simplistic fully connected network however as previously discussed in the problem given by the kimia dataset is indeed a hard problem most likely due to the high variance between the different patches within a given scan variability this is further validated when looking at the results in fig the two columns contain patches that have distinct patterns with their own unique features the cam from the first column shows that the network responds strongly to the unique structures within the label very strongly for the final patch whereas when presented with completely different patterns in the second column the network responds strongly to other areas typically ones that embody inner edges within the sample this shows evidence that the model has at the very least begun to learn higher level fig activation maps using randomly selected patches from the kimia testing data the patches within each column are the same class and the labels per column are and respectively the activation maps are created using the keras visualization toolkit and the algorithm red areas had more influence on the label prediction best results clearly better than transfer learning although this statement is supported by comparable empirical evidence it remains speculation for a sensitive field like medical imaging but why is so difficult to train a cnn for this case it is most likely due to a number of factors such as a relative lack of image data the effect of scaling down a patch for use within to appear in proceedings of the intern conf on image processing theory tools and applications ipta nov montreal canada structures within individual patches further investigation with different architectures would likely improve upon these results as would more aggressive augmentation yi sawyer iii dunnmon lam xiao and rubin optimizing and visualizing deep learning for classification in breast tumors corr vol online available http pan and yang a survey on transfer learning ieee transactions on knowledge and data engineering vol no pp oct shin roth gao lu xu nogues yao mollura and summers deep convolutional neural networks for detection cnn architectures dataset characteristics and transfer learning ieee transactions on medical imaging vol no pp may deng dong socher li li and imagenet a hierarchical image database in computer vision and pattern recognition cvpr ieee conference on ieee pp girshick donahue darrell and malik convolutional networks for accurate object detection and segmentation ieee transactions on pattern analysis and machine intelligence vol no pp jan y bar diamant wolf and greenspan deep learning with training used for chest pathology identification in proc spie vol lecun bottou bengio and haffner learning applied to document recognition proceedings of the ieee vol no pp nov babaie kalra sriram mitcheltree zhu khatami rahnamayan and tizhoosh classification and retrieval of digital pathology scans a new online available http simonyan and zisserman very deep convolutional networks for image recognition corr vol szegedy vanhoucke ioffe shlens and wojna rethinking 
the inception architecture for computer vision in proceedings of the ieee conference on computer vision and pattern recognition pp chollet et keras https tajbakhsh shin gurudu hurst kendall gotway and liang convolutional neural networks for medical image analysis full training or fine tuning ieee transactions on medical imaging vol no pp may pedregosa varoquaux gramfort michel thirion grisel blondel prettenhofer weiss dubourg vanderplas passos cournapeau brucher perrot and duchesnay machine learning in python journal of machine learning research vol pp chang and lin libsvm a library for support vector machines acm transactions on intelligent systems and technology vol pp software available at http van der walt colbert and varoquaux the numpy array a structure for efficient numerical computation computing in science engineering vol no pp online available http jones oliphant peterson et scipy open source scientific tools for python online accessed online available http yu and seltzer improved bottleneck features using pretrained deep neural networks in twelfth annual conference of the international speech communication association kotikalapudi and contributors https selvaraju cogswell das vedantam parikh and batra visual explanations from deep networks via localization see https zeiler and fergus visualizing and understanding convolutional networks cham springer international publishing pp online available http vii c onclusions retrieval and classification of histopathological images are useful but challenging tasks in analysis for diagnostic pathology whole scan imaging wsi generates gigapixel images that are immensely rich in details and exhibit tremendous interand variance both a feature extractor and transferlearned network were able to offer increases in classification accuracy on the kimia dataset when compared to a cnn trained from scratch comparatively low performance of the latter could be due to the architecture not being well suited for the problem lack of sufficient number of training images the inherent difficulty of the classification task for and highly variable histopathology images further work would warrant using different architectures for comparison more aggressive data augmentation and potentially increasing the size of training samples used from the kimia dataset however both and feature extractor models were able to compete with the methods reported in literature and therefore show potential for further improvements acknowledgements the authors would like to thank huron digital pathology waterloo on canada for its continuing support r eferences gurcan boucheron a can madabhushi rajpoot and yener histopathological image analysis a review ieee reviews in biomedical engineering vol pp naik doyle feldman tomaszewski and madabhushi gland segmentation and computerized gleason grading of prostate histology by integrating and domain specific information in miaab workshop pp karvelis fotiadis georgiou and syrrou a watershed based segmentation method for multispectral chromosome images classification in engineering in medicine and biology society embs annual international conference of the ieee ieee pp petushi garcia haber katsinis and tozeren computations on histology images reveal gradedifferentiating parameters for breast cancer bmc medical imaging vol no zhang liu dundar badve and zhang towards largescale histopathological image analysis image retrieval ieee transactions on medical imaging vol no pp feb liu wang ji jiang and chang supervised hashing with kernels in ieee 
conference on computer vision and pattern recognition june pp tizhoosh barcode annotations for medical image retrieval a preliminary investigation in image processing icip ieee international conference on ieee pp tizhoosh zhu lo chaudhari and mehdi minmax radon barcodes for medical image retrieval in international symposium on visual computing springer pp khatami babaie khosravi tizhoosh salaken and nahavandi a medical image classification for a image retrieval in ieee canadian conference on electrical and computer engineering ccece april pp
1
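A minimal sketch of the transfer-learning pipeline discussed in the histopathology paper above: an ImageNet-pretrained network used as a fixed feature extractor, with a linear SVM trained on the extracted patch features. The choice of VGG16, the 224 x 224 patch size, LinearSVC and the path-list helpers are illustrative assumptions, not the authors' exact configuration.

# Fixed feature extractor (ImageNet weights) plus a linear SVM on histopathology patches.
# VGG16, the 224x224 input size and LinearSVC are assumptions of this sketch.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Pretrained network without its classification head; global average pooling
# turns each patch into a single 512-dimensional feature vector.
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(paths):
    feats = []
    for p in paths:
        img = image.load_img(p, target_size=(224, 224))  # rescale patch to the ImageNet input size
        arr = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        feats.append(extractor.predict(arr, verbose=0)[0])
    return np.vstack(feats)

def train_and_score(train_paths, y_train, test_paths, y_test):
    # Train the SVM on bottleneck features and report patch-level accuracy.
    clf = LinearSVC(C=1.0)
    clf.fit(extract_features(train_paths), y_train)
    return accuracy_score(y_test, clf.predict(extract_features(test_paths)))

Swapping VGG16 for InceptionV3 (with its own preprocess_input) gives the other pretrained backbone referenced in the paper; unfreezing and retraining the top convolutional blocks instead of keeping them fixed corresponds to the transfer-learned variant.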
dec triangulated equivalences and reconstruction of classifying spaces hiroki matsui abstract in algebra such as algebraic geometry modular representation theory and commutative ring theory we study algebraic objects through associated triangulated categories and topological spaces in this paper we consider the relationship between such triangulated categories and topological spaces to be precise we explore necessary conditions for derived equivalence of noetherian schemes stable equivalence of finite groups and singular equivalence of commutative noetherian rings by using associated topological spaces introduction as is a common approach in many branches of algebra including algebraic geometry modular representation theory and commutative ring theory we assign to an algebraic object a a scheme x a finite group g a commutative noetherianring r a triangulated category t the perfect derived category dperf x the stable module category mod kg the singularity category dsg r and a topological space s the underlying topological spaces x proj g k sing r by studying such a triangulated category and a topological space we aim to grasp the structure of the original algebraic object from this motivation it is natural to ask what kind of relationship there exists between t and algebraic objects a x g r w triangulated categories t dperf x mod kg dsg r o topological spaces s x proj g k sing r in this paper we consider this question more precisely the following question let a be algebraic objects t t corresponding triangulated categories and s s corresponding topological spaces respectively does the implication t t s hold mathematics subject classification key words and phrases triangulated category triangulated equivalence classifying space classifying support data scheme finite complete intersection the author is partly supported by for jsps fellows hiroki matsui we introduce the notion of a classifying space of a triangulated category see definition and prove the following result which gives a machinery to answer the above question theorem theorem let t t be essentially small triangulated categories and s s classifying spaces for t and t respectively then the implication t t s holds the key role to prove this theorem is played by the support theory for triangulated categories for tensor triangulated categories the support theory has been developed by balmer and is a powerful tool to show such a reconstruction theorem since we focus on triangulated categories without tensor structure we need to invent the support theory without tensor structure algebraic geometry let x be a scheme the derived category of perfect complexes on x is called the perfect derived category and denoted by dperf x the case where x spec r is affine it is well known that the original scheme is reconstructed from dperf r dperf x indeed for two commutative rings r and s if the perfect derived categories of r and s are equivalent then r is isomorphic to s see ric proposition and hence dperf r dperf s spec r spec s as topological spaces however such a result no longer holds for schemes in fact there exist a lot of schemes x and y such that dperf x dperf y see muk when perf perf there is a triangulated equivalence d x d y x and y are said to be derived equivalent in section we shall prove that the underlying topological spaces of a certain class of schemes can be reconstructed from their perfect derived categories theorem theorem let x and y be noetherian schemes open subschemes of affine schemes then the implication dperf x dperf y x y 
as topological spaces holds this theorem recovers for noetherian rings as any affine scheme is a typical example of a scheme is the punctured spectrum of a local ring as an application of this theorem we obtain that a derived equivalence of x and y yields the equality of the dimensions of x and y modular representation theory in modular representation theory finite groups are studied in various contexts from an algebraic viewpoint a finite group g has been studied through its group algebra kg and stable module category mod kg where k is a field whose characteristic divides the order of here mod kg is a triangulated category consisting of finitely generated modulo projectives on the other hand the cohomology ring g k gives an approach to study a finite group g from the topological aspect because it is isomorphic to the cohomology ring of a classifying space bg of g see ben chapter for instance the second main result in section is the following triangulated equivalences and reconstruction of classifying spaces theorem theorem let k resp l be a field of characteristic p resp q and let g resp h be a finite resp then the implication mod kg mod lh proj g k proj h l as topological spaces holds if there exists a triangulated equivalence mod kg mod lh we say that kg and lh are stably equivalent as an application of this theorem we have that a stable equivalence of kg and lh yields that the of g and the of h are equal commutative ring theory let r be a left noetherian ring the singularity category of r is by definition the verdier quotient dsg r db modr r which has been introduced by buchweitz buc in here mod r stands for the category of finitely generated left and db modr its bounded derived category the singularity categories have been deeply investigated from and motivations che iw ste tak and connected to the homological mirror symmetry conjecture by orlov one of the important subjects in representation theory of rings is to classify rings up to certain category equivalence for example left noetherian rings r and s are said to be morita equivalent if mod r mod s as abelian categories derived equivalent if db mod r db mod s as triangulated categories singularly equivalent if dsg r dsg s as triangulated categories it is well known that these equivalences have the following relations morita equivalence derived equivalence singular equivalence complete characterizations of morita and derived equivalence have already been obtained in mor ric while singular equivalence is quite difficult to characterize even in the case of commutative rings indeed only a few examples of singular equivalences of commutative noetherian rings are known furthermore for all of such known examples the singular loci of rings are homeomorphic thus it is natural to ask the following question question let r and s be commutative noetherian rings are their singular loci homeomorphic if r and s are singularly equivalent in section we show that this question is affirmative for certain classes of commutative noetherian rings to be precise we shall prove the following theorem theorem theorem let r and s be commutative noetherian local rings that are locally hypersurfaces on the punctured spectra assume that r and s are either a complete intersection rings or b rings with maximal ideal then the implication dsg r dsg s sing r sing s as topological spaces holds hiroki matsui here we say that an ideal i of a commutative ring r is if there is an sequence x in i such that x is decomposable as an moreover we prove that singular equivalence 
localizes by using such a homeomorphism the organization of this paper is as follows in section we introduce the notions of a support data and a classifying support data for a given triangulated category and develop the support theory without tensor structure and finally prove theorem in section we connect the results obtained in section with the support theory for tensor triangulated categories and study reconstructing the topologies of the balmer spectra without tensor structure using this method we prove theorem and in section we prove theorem and give examples of commutative rings which are not singularly equivalent throughout this paper all categories are assumed to be essentially small for two triangulated category t t resp topological spaces x x the notation t t resp x means that t and t are equivalent as triangulated categories resp x and x are homeomorphic unless otherwise specified the support theory without tensor structure in this section we discuss the support theory for triangulated categories without tensor structure throughout this section t denotes a triangulated category with shift functor first of all let us recall some basic definitions which are used in this section definition let x be a topological space and t a triangulated category we say that x is sober if every irreducible closed subset of x is the closure of exactly one point we say that x is noetherian if every descending chain of closed subspaces stabilizes we say that a subset w of x is if it is closed under specialization namely if an element x of x belongs to w then the closure x is contained in w note that w is if and only if it is a union of closed subspaces of x we say that a additive full subcategory x of t is thick if it satisfies the following conditions i closed under taking shifts x ii closed under taking extensions for a triangle l m n in t if l and n belong to x then so does iii closed under taking direct summands for two objects l m of t if the direct sum l m belongs to x then so do l and for a subcategory x of t denote by thickt x the smallest thick subcategory of t containing x we introduce the notion of a support data for a triangulated category definition let t be a triangulated category a support data for t is a pair x where x is a topological space and is an assignment which assigns to an object m of t a closed subset m of x satisfying the following conditions m m for any m t and n z m n m n for any m n t triangulated equivalences and reconstruction of classifying spaces m l n for any triangle l m n in t support data naturally appear in various areas of algebras example let r be a commutative noetherian ring for m dsg r we define the singular support of m by ssupp m p sing r mp in dsg rp r then sing r ssuppr is a support data for dsg r indeed it follows from ail theorem and bm lemma that ssuppr m is a closed subset of sing r and that ssuppr satisfies the condition in definition the remained conditions are clear because the localization functor dsg r dsg rp is exact assume that r is gorenstein denote by cm r the category of maximal cohenmacaulay modules m satisfying extir m r for all integers i recall that the stable category cm r of cm r is the category whose objects are the same as cm r and the set of morphisms from m to n is given by homr m n homr m n m n where pr m n consists of all maps from m to n factoring through some free then the stable category cm r has the structure of a triangulated category see hap moreover the natural inclusion induces a triangle equivalence dsg r by buc thus 
we obtain the support data sing r suppr for f cm r cm r by using this equivalence here supp m ssupp f m p sing r mp in cm rp r r for m cm r let x be a noetherian scheme for f dperf x we define the cohomological support of f by suppx f x x fx in dperf ox x s then suppx f suppx hn f is a finite union of supports of coherent ox modules and hence is a closed subspace of x moreover x suppx is a support data for dperf x because the localization is exact for details please see tho let k be a field of characteristic p and g a finite group such that p divides the order of then as in the case of gorenstein rings we can define the stable category mod kg of mod kg and it is also a triangulated category we denote by hi g k p g k hi g k p odd the direct sum of cohomologies of g with coefficient then g k has the structure of a noetherian ring by using the cup product and we can consider its homogeneous prime spectrum proj g k denote by vg m the support variety for a finitely generated m which is a closed space of proj g k then the pair proj g k vg becomes a support data for mod kg for details please refer to ben chapter remark actually the above examples of support data satisfy the following stronger condition m if and only if m hiroki matsui definition let u be a full subcategory of t we say that u is a if it satisfies m u n t m n u remark u t is a if and only if t u is closed under taking direct summands example the full subcategory t is a the full subcategory t t of test objects see definition below of t is a let us fix the following notations notation let t be a triangulated category u t a and x a topological space then we set th t thick subcategories of t thu t thick subcategories of t containing an object of u spcl x specialization closed subsets of x nesc x subsets of x nec x closed subsets of x irr x irreducible closed subsets of x let x be a support data for t x a thick subcategory of t and w s a specializationclosed subset of x then one can easily check that x x m m is a subset of x and w w m t m w is a thick subcategory of t therefore we obtain two maps with respect to the inclusion relations definition let x be a support data for t and u t a then we say that x is a classifying support data for t with respect to u if i x is a noetherian sober space and ii the above maps and restrict to mutually inverse bijections thu t o nesc x when this is the case we say that x is a classifying space of t with respect to u we say simply a classifying support data for t resp a classifying space of t we mean a classifying support data for t resp a classifying space of t with respect to t remark a classifying support data x for t classifies all thick subcategories of t containing indeed the map nesc x thu t is injective with image x th t x thus we obtain a correspondence x th t x o spcl x in particular if x satisfies the condition in remark we obtain a correspondence th t o spcl x every classifying support data automatically satisfies the following realization property triangulated equivalences and reconstruction of classifying spaces lemma let x be a classifying support data for t with respect to u then for any closed subset z of x there is an object m of u such that z m proof since x is a noetherian sober space and m n m n we may assume that z x for some x x from the assumption one has z z s m z m hence there is an element x of m for some m z then we obtain x m z x and this implies that m x z by definition of a classifying support data with respect to u z n t n m contains a object t of u we conclude that t t 
m m z for t m u let me give two more notations definition let u be a of t we say that a thick subcategory x of t is if there is an object m of u such that x thickt denote by pthu t the set of all thick subcategories of t we say that a thick subcategory x of t is if x thickt pthu t implies that x or x denote by irru t the set of all thick subcategories of t the following lemma shows that by using classifying support data with respect to u we can also classify thick subcategories and thick subcategories lemma let x be a classifying support data for t with respect to u then the correspondence thu t o nesc x restricts to correspondences pthu t o irru t o nec x irr x proof note that thickt m m for any m t therefore the injective map thu t nesc x induces a well defined injective map pthu t nec x the surjectivity has already been shown in lemma next we show the second correspondence for thu t one has thickt m m m m m m m m hiroki matsui on the other hand for nesc x one has thickt applying to this equality we get thickt let w be an irreducible closed subset of x assume w thickt for some pthu t then from the above equality we obtain an equality w w thickt since w is irreducible w or w and hence w or w this shows that w is conversely take a thick subcategory x of t and assume x for some closed subsets of x from the above equality we get x x thickt since x is x or x and therefore x or x thus x is irreducible these observations show the second correspondence from this lemma we can show the following uniqueness result for classifying support data with respect to u proposition let x and y be classifying support data for t with respect to a u then x and y are homeomorphic proof first note that for a topological space x the natural map x irr x x x is bijective if and only if x is sober define maps x y and y x to be the composites y x y irr x irru t irr y x y x y irr y irru t irr x x then and are well defined and mutually inverse bijections by lemma fix x x for x one has x and hence x x x in particular belongs to x therefore x x conversely for y x the above argument shows y x x x applying to this inclusion we obtain y x and therefore x x thus we conclude that x x since x is noetherian this equation means that is a closed map similarly is also a closed map the following theorem is the main result of this section theorem consider the following setting t and t are triangulated categories triangulated equivalences and reconstruction of classifying spaces u and u are of t and t respectively x and y are classifying support data for t and t with respect to u and u respectively suppose that there is a triangle equivalence f t t with f u u then x and y are homeomorphic proof from the assumption f induces a correspondence thu t x x thu t where x n t x such that n f m for an object m of t set f m f m then we can easily verify that the pair y f is a support data for t furthermore it becomes a classifying support data for t with respect to u indeed for x thu t and w nesc y we obtain f x f m f m n x m m n x f w m t f m w n t n w w from these equalities we get equalities f and f and thus f and f give mutually inverse bijections between thu t and nesc y consequently we obtain two classifying support data x and y f for t with respect to u and hence x and y are homeomorphic by proposition comparison with tensor triangulated structure in this section we discuss relation between the support theory we discussed in section and the support theory for tensor triangulated categories recall that a tensor triangulated category t 
consists of a triangulated category t together with a symmetric monoidal tensor product with unit object which is compatible with the triangulated structure of t for the precise definition please refer to hps appendix a example let x be a noetherian scheme then dperf x ox is a tensor triangulated category here denotes the derived tensor product let k be a field and g a finite group then mod kg k is a tensor triangulated category throughout this section fix a tensor triangulated category t we begin with recalling some basic definitions which are used in the support theory of tensor triangulated categories definition a full subcategory x of t is called a thick tensor ideal if it is a thick subcategory of t and is closed under the action of t by m n x for any m x and n t for a subcategory x of t denote by hx i the smallest thick tensor ideal of t containing x hiroki matsui for a thick subcategory x of t define its radical by x m t such that m x here m denotes the tensor product of by lemma the radical of a thick subcategory is always a thick tensor ideal a thick tensor ideal x of t is called radical if it satisfies x x a thick tensor ideal x of t is called prime if it satisfies m n x m x or n x denote by spc t the set of all prime thick tensor ideals of t for m t the balmer support of m is defined as sppm p spc t m p the set spc t is a topological space with closed basis sppm m t and call it the balmer spectrum of t let x be a topological space we say that a subset w of x is a thomason subset if it is a union of closed subsets whose complements are denote by thom x the set of all thomason subsets of x note that thom x spcl x we say that a support data x for t is tensorial if it satisfies m n m n for any m n t in tensorial support data are called simply support data then w is a radical thick tensor ideal of t for every subset w of x we say that a tensorial support data x is classifying if x is a noetherian sober space and there is a correspondence radical thick tensor ideals of t o spcl x balmer showed the following celebrated result theorem lemma theorem the pair spc t spp is a tensorial support data for t there is a correspondence radical thick tensor ideals of t o fspp gspp thom spc t remark if a topological space x is noetherian then every subset of x is thomason therefore the above theorem shows that spc t spp is a classifying tensorial support data for t provided spc t is noetherian recall that a tensor triangulated category t is rigid if the functor m t t has a right adjoint f m t t for each m t and every object m is strongly dualizable the natural map f m n f m n is an isomorphism for each n t if t is rigid then spc t spp satisfies the stronger condition lemma assume that t is rigid then the support data spc t spp satisfies the condition in remark triangulated equivalences and reconstruction of classifying spaces proof take an object m t with spp m by corollary there is a positive integer n such that m on the other hand by hps lemma a m i belongs to thickt m for any positive integer since every object is strongly dualizable therefore by using induction we conclude that m note that a tensorial classifying support data for t is a classifying tensorial support data for t indeed for a tensorial classifying support data x for t and x th t we obtain an equalities p p x x x x the following lemma gives a criterion for the converse implication of this fact lemma let x be a classifying tensorial support data for t suppose that t is rigid then the following are equivalent there is a 
correspondence th t o spcl x x is a classifying support data for t every thick subcategory of t is a thick t thickt proof by lemma and theorem theorem x satisfies the condition in remark therefore and means the same conditions from remark from the assumption every thick subcategory x of t is of the form x w for some subset w of x on the other hand w is a radical thick as x is a tensorial support data by assumption the thick subcategory thickt is a thick tensor ideal thus for any m t m m belongs to thickt note that is strongly dualizable and the family of all strongly dualizable objects forms a thick subcategory of t by hps theorem a therefore every object of t thickt is strongly dualizable thus for any object m t m belongs to t m m by hps lemma then proposition shows that every thick tensor ideal of t is radical on the other hand for any thick subcategory x of y one can easily verify that the subcategory y m t m x x is a thick of t containing thus we obtain y thickt t and hence x is a thick from these discussion we conclude that every thick subcategory of t is a radical thick and this shows the implication the following corollaries are direct consequences of this lemma proposition and theorem corollary let t be a rigid tensor triangulated category assume that the balmer spectrum spc t of t is noetherian and that t thickt then for any classifying support data x for t x is homeomorphic to spc t corollary let t and t be rigid tensor triangulated categories such that hiroki matsui spc t and spc t are noetherian and t and t are generated by their unit objects if t and t are equivalent as triangulated categories then spc t and spc t are homeomorphic next we consider applications of these corollaries to tensor triangulated categories appeared in example thomason showed the following classification theorem of thick tensor ideas of dperf x theorem tho theorem let x be a noetherian scheme then x suppx is a classifying tensorial support data for dperf x as an application of corollary we can reconstruct underlying topological spaces of a certain class of schemes from their perfect derived categories without tensor structure theorem let x and y be noetherian schemes open subschemes of affine schemes if x and y are derived equivalent then x and y are homeomorphic in particular topologically determined properties such as the dimensions and the numbers of irreducible components of noetherian schemes are preserved by derived equivalences proof first let me remark that the functor f dperf x dperf x has a right adjoint rhomox f dperf x dperf x for each f dperf x and moreover dperf x is rigid note that a scheme x is if and only if its structure sheaf ox is ample thus every thick subcategory of dperf x is thick tensor ideal by tho proposition applying corollary we obtain the result remark let x and y be noetherian schemes as we have already remarked in the introduction if x and y are affine then a derived equivalence dperf x dperf y implies that x and y are isomorphic as schemes by theorem if dperf x and dperf y are equivalent as tensor triangulated categories then x and y are isomorphic as schemes next consider stable module categories over group rings of finite groups in this case the following classification theorem is given by for algebraically closed field k and by for general theorem bcr bik let k be a field of characteristic p and g a finite group such that p divides the order of then the support data proj g k vg is a classifying tensorial support data for mod kg applying corollary to this classifying 
tensorial support data we obtain the following result theorem let k resp l be field of characteristic p resp q g resp h be a finite resp if kg and lh are stably equivalent then proj g k and proj h l are homeomorphic proof for each m mod kg the functor m mod kg mod kg has a right adjoint homk m mod kg mod kg and in addition mod kg is rigid moreover for a pgroup g kg has only one simple module therefore we have mod kg thickmod kg applying corollary we are done triangulated equivalences and reconstruction of classifying spaces recall that the of a finite group g is by definition rp g sup r r g quillen qui showed that the dimension of the cohomology ring h g k is equal to the of thus the is an invariant of stable equivalences corollary let k l g h be as in theorem assume that there is a stable equivalence between kg and lh then rp g rq h remark let g and h be a and k a field of characteristic by lin corollary if there exists a stable equivalence between kg and kh then by lin corollary if there exists a stable equivalence of morita type between kg and kh then g a necessary condition for singular equivalences recall that commutative noetherian rings r and s are said to be singularly equivalent if their singularity categories are equivalent as triangulated categories the only known examples of singular equivalences are the following example if r dsg s s then dsg r if r and s are regular then dsg r dsg s s periodicity yos chapter let k be an algebraically closed field of characteristic set r k xd f and s k xd u v f then dsg r dsg s remark all of these singular equivalences the singular loci sing r and sing s are homeomorphic in fact the cases and are clear consider the case of r k xd f and s k xd u v f uv then sing s v u v spec u v spec k xd u v f uv u v spec k xd f v sing here the first and the last equalities are known as the jacobian criterion let me give some definitions appearing in the statement of the main theorem of this section definition let r m k be a commutative noetherian local ring we say that an ideal i of r is if there is an sequence x of i such that x is decomposable as an a local ring r is said to be complete intersection if there is a regular local ring s and an sequence x such that the completion of r is isomorphic to x we say that r is a hypersurface if we can take x to be an sequence of length a local ring r is said to be locally a hypersurface on the punctured spectrum if rp is a hypersurface for every prime ideal hiroki matsui the following theorem is the main result of this section theorem let r and s be commutative noetherian local rings that are locally hypersurfaces on the punctured spectra assume that r and s are either a complete intersection rings or b rings with maximal ideal if r and s are singularly equivalent then sing r and sing s are homeomorphic for a ring r satisfying the condition b in theorem nt theorem b shows that sing r ssuppr is a classifying support data for dsg r therefore the statement of theorem follows from theorem therefore the problem is the case of a for a ring r satisfying the condition a in theorem takahashi tak classified thick subcategories of dsg r containing the residue field k of r by using the singular locus sing r and the singular support ssuppr we would like to apply theorem also for this case the problem is that whether the condition containing the residue field k is preserved by stable equivalences as we will show later this condition is actually preserved by singular equivalences for local complete intersection rings to do this we 
discuss replacing the residue field k with some categorically defined object first of all let us recall the notion of a test module definition let r be a noetherian ring we say that a finitely generated t is a test module if for any finitely generated m torr n t m for n pdr m example for a noetherian local ring r m k the syzygy k of its residue field is a test module for each for commutative noetherian rings admitting dualizing complexes gorenstein rings there is another characterization for test modules theorem cdt theorem let r be a commutative noetherian ring admitting a dualizing complex then test modules are nothing but finitely generated t satisfying the following condition for any finitely generated m extnr t m for n idr m motivated by this theorem we introduce the following notion definition let t be a triangulated category we say that t t is a test object if for any object m of t homt t m for n m denote by t t the full subcategory of t consisting of test objects the following lemma shows that we can consider the notion of a test object is a generalization of the notion of a test module lemma let r be a gorenstein ring then one has t cm r t cm r t is a test module triangulated equivalences and reconstruction of classifying spaces proof by theorem we have only to show t cm r t cm r all n mod r with extr m n satisfy idr n fix a maximal t and a finitely generated since r is gorenstein and t is maximal one has t r therefore we get isomorphisms exti t m t m t m r r r r for any positive integer i therefore we get isomorphisms hom t m extn t m t m r r r r r for n here d denotes the dimension of thus we are done since m is free if and only if m has finite injective dimension let us recall several classes of subcategories of modules definition an additive subcategory x of mod r is called resolving if it satisfies the following conditions i x is closed under extensions for an exact sequence l m n in mod r if l and n belong to x then so does ii x is closed under kernels of epimorphisms for an exact sequence l m n in mod r if m and n belong to x then so does iii x contains all projective for a finitely generated m denote by resr m the smallest resolving subcategory of mod r containing a additive subcategory x of mod r is called thick if x satisfies property for an exact sequence l m n in mod r if l m n belong to x then so does the third for a finitely generated m denote by thickr m the smallest thick subcategory of mod r containing lemma let t be a triangulated category and t an object of t if thickt t contains a test object of t then t is also a test object proof take an object m t with homt t m for n set x n t homt n m for n then one can easily verify that x is a thick subcategory of t by assumption x contains a test object as x contains t thus m must be zero and hence t is a test object the next proposition plays a key role to prove our main theorem proposition let r m k be a local complete intersection ring and t a finitely generated then the following are equivalent t is a test module k resr t k thickr t r k thickdb mod r t r k thickdsg r t k thickcm r t hiroki matsui proof notice resr t resr t thickr t r thickr t r thickdb mod r t r thickdb mod r t r thickdsg r t thickdsg r t and t is a test module if and only if so is t hence we may assume that t is maximal then we have resr t thickr t r thickdb mod r t r mod here the first inclusion directly follows from the definition and the second equality is given by ks theorem moreover the composition functor db mod r dsg r cm r sends k to k d and 
the inverse image of thickcm r t is thickdb mod r t r therefore the implications hold true furthermore by using lemma and lemma the implication follows thus it remains to show the implication assume that t is a test module recall that the complexity cxr m of a finitely generated m is the dimension of the support variety vr m associated to m see ab for details by cdt proposition t has maximal complexity namely cxr t codim r thanks to the prime avoidance lemma we can take an sequence x of length d from m set r x and t t x then r is an artinian complete intersection ring and cxr t cxr t c codimr codim r moreover one has vr t acka vr k where k a denotes the algebraic closure of this follows from the fact that vr t and vr k are closed subvarieties of the affine space acka hence by ci theorem k belongs to thickdb mod r t as a result we get k thickdb mod r t mod r thickdb mod r t r mod r thickr t r again the second equality uses ks theorem since thickr t r resr t by dt corollary we deduce k resr t by using tak lemma gathering tak theorem nt theorem b lemma and proposition we obtain the following proposition proposition let r be a noetherian local ring if r satisfies the condition a in theorem then sing r ssuppr is a classifying support data for dsg r with respect to t dsg r if r satisfies the condition b in theorem then sing r ssuppr is a classifying support data for dsg r now the proof of theorem has almost been done proof of theorem use proposition and theorem here let me remark that test objects are preserved by singular equivalences remark for a hypersurface ring r the triangulated category dsg r becomes a pseudo tensor triangulated category tensor triangulated category without unit it is shown by yu implicitly in the paper yu that for two hypersurfaces r and s if a singular equivalence between r and s preserves tensor products then sing r and sing s are homeomorphic indeed sing r is reconstructed from dsg r by using its pseudo tensor triangulated structure triangulated equivalences and reconstruction of classifying spaces since theorem gives a necessary condition for singular equivalences we can generate many pairs of rings which are not singularly equivalent let us start with the following lemma lemma let r be a local complete intersection ring with only an isolated singularity and r an integer then the ring r u ur is a local complete intersection ring which is locally a hypersurface on the punctured spectrum and sing r u ur is homeomorphic to spec proof of course t r u ur is a local complete intersection ring the natural inclusion r t induces a homeomorphism f spec t spec then one can easily check that p f p u t for any p spec t and tp rf p u ur therefore t is locally a hypersurface on the punctured spectrum and sing t spec t corollary let r and s be local complete intersection rings which have only isolated singularities assume that spec r and spec s are not homeomorphic then for any integers r s one has dsg r u ur dsg s v v s in particular dsg r r dsg s s here r r denotes the trivial extension ring of a commutative ring proof from the above lemma we obtain r u ur and s v v s satisfies the condition a in theorem sing r u ur spec s are not homeomorphic spec r and sing s u v r r thus we conclude dsg r u u dsg s v v s by theorem the second statement follows from an isomorphism r r r u the following corollary says that a equivalence fails over a ring corollary let s be a regular local ring assume that f has an isolated singularity then one has dsg s u f dsg s u v w f vw proof sing s u f spec 
s v w f spec f and sing s u v w f vw have different dimensions and hence are not homeomorphic for the last of this paper we will show that singular equivalence localizes lemma let r be a gorenstein local ring and p a prime ideal of then a full subcategory xp m dsg r mp in dsg rp is thick and there is a triangle equivalence dsg r dsg rp proof by using the triangle equivalence dsg r cm r we may show the triangle equivalence cm r cm rp where xp m cm r mp in cm rp note that the localization functor lp cm r cm rp m mp is triangulated since xp ker lp xp is a thick subcategory of cm r and lp induces a triangulated hiroki matsui functor lp cm r cm rp thus we have only to verify that lp is dense and fully faithful i lp is dense let u be an rp take a finite free presentation rpn rpm u of u then can be viewed as an m with entries in rp write aij for some aij r and s r then the cokernel m coker aij rn rm is a finitely generated and mp u since mp is a maximal rp we obtain isomorphisms m p mp mp r r rp rp in cm rp this shows that the functor lp is dense ii lp is faithful let m n be a morphism in cm r then is given by a fraction f of morphisms f m z and s n z in cm r such that the mapping cone c s of s belongs to xp assume lp lp s lp f sp fp then fp in homrp mp zp from the isomorphism homr m z p homrp mp zp there is a such that af in homr m z since a zp zp is isomorphism the mapping cone of the morphism a z z in cm r belongs to xp thus f af as in cm r this shows that lp is faithful iii lp is full let g mp np be a morphism in cm rp where m n cm r by the isomorphism homr m n p homrp mp np there is a morphism f m n in cm r and a such that g fp since the mapping cone of a n n is in xp we obtain a morphism f m n in cm r and lp f fp this shows that lp is full corollary let r and s be complete intersection rings which are locally hypersurfaces on the punctured spectra if r and s are singularly equivalent then there is a homeomorphism sing r sing s such that rp and p are singularly equivalent for any p sing proof as in lemma we may consider the category cm r let f cm r cm s be a triangle equivalence take a homeomorphism sing r sing s given in proposition and theorem then by construction it satisfies supps f m p m r suppr m p for each p sing moreover the following diagram is commutative tht cm r cm r tht cm s cm s fsupp y y s r nesc sing r nesc sing s where the map and are defined by x n t x such that n f m and w w respectively let p be an element of sing set wp q sing r q p which is a specializationclosed subset of sing we establish two claims triangulated equivalences and reconstruction of classifying spaces claim gsuppr wp xp proof of claim let m xp since mp in cm rp one has p suppr m thus suppr m wp and hence m gsuppr wp next take m gsuppr wp then suppr m wp means that p does not belong to suppr m therefore mp in cm rp and hence m xp claim wp p q sing s q p proof of claim one can easily check that is order isomorphism with respect to the inclusion relations since sing r wp has a unique maximal element p sing r wp sing s wp also has a unique maximal element p this shows wp p from the above two claims we obtain xp gsuppr wp gsupps wp gsupps p p where the second equality comes from the above commutative diagram and the last equality is shown by the same proof as claim consequently the triangle equivalence f induces triangle equivalences cm rp cm r cm s p cm p acknowledgments the author is grateful to his supervisor ryo takahashi for many supports and his helpful comments references ab avramov buchweitz support 
varieties and cohomology over complete intersections invent math no ail avramov iyengar lipman reflexivity and rigidity for complexes commutative rings algebra number theory no balmer presheaves of triangulated categories and reconstruction of schemes math ann no balmer the spectrum of prime ideals in tensor triangulated categories reine angew math bm bass murthy grothendieck groups and picard groups of abelian group rings ann of math ben benson representations and cohomology ii cohomology of groups and modules cambridge stud adv math cambridge university press bik benson iyengar krause stratifying modular representations of finite groups ann of math bcr benson carlson rickard thick subcategories of the stable module category fund math no buc buchweitz maximal modules and over gorenstein rings unpublished manuscript http ci carlson iyengar thick subcategories of the bounded derived category of a finite group trans amer math soc no cdt celikbas dao takahashi modules that detect finite homological dimensions kyoto j math no che chen the singularity category of an algebra with radical square zero doc math dt dao takahashi the radius of a subcategory of modules algebra number theory no hiroki matsui hap happel triangulated categories in the representation theory of finite dimensional algebras london math soc lecture note series cambridge university press hps hovey palmieri strickland axiomatic stable homotopy theory mem amer math soc no iw iyama wemyss singular derived categories of terminalizations and maximal modification algebras adv math ks krause stevenson a note on thick subcategories of stable derived categories nagoya math j lin linckelmann stable equivalences of morita type for selfinjective algebras and math zeit mor morita duality of modules and its applications to the theory of rings with minimum condition sci tokyo kyoiku daigaku sect a muk mukai duality between d x and d with its application to picard sheaves nagoya math j nt nasseh takahashi local rings with maximal ideal preprint orlov equivalences of derived categories and surfaces j math sci olrov triangulated categories of singularities and in model proc steklov inst math no qui quillen the spectrum of an equivariant cohomology ring i ann math ric rickard morita theory for derived categories london math soc ste stevenson subcategories of singularity categories via tensor actions compos math no tak takahashi classifying thick subcategories of the stable category of modules adv math no tho thomason the classification of triangulated subcategories compos math no yos yoshino modules over rings london mathematical society lecture note series cambridge university press cambridge yu yu the triangular spectrum of matrix factorizations is the singular locus proc amer math soc no graduate school of mathematics nagoya university furocho chikusaku nagoya aichi japan address
0
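As a compact reference for the support-theoretic machinery used in the paper above, the following LaTeX block restates the axioms of a support data (X, σ) and the two maps underlying the classifying correspondence; it is a reconstruction in standard notation from the surrounding definitions, not the author's original typesetting.

% Axioms of a support data (X, \sigma) for an essentially small triangulated category T:
\[
\sigma(0) = \emptyset, \qquad \sigma(M[n]) = \sigma(M) \quad (M \in T,\ n \in \mathbb{Z}),
\]
\[
\sigma(M \oplus N) = \sigma(M) \cup \sigma(N), \qquad
\sigma(M) \subseteq \sigma(L) \cup \sigma(N) \ \text{ for every triangle } L \to M \to N \to L[1].
\]
% The maps exchanged by the classifying correspondence:
\[
f_{\sigma}(\mathcal{X}) = \bigcup_{M \in \mathcal{X}} \sigma(M), \qquad
g_{\sigma}(W) = \{\, M \in T \mid \sigma(M) \subseteq W \,\}.
\]
% (X, \sigma) is classifying with respect to a distinguished class U of objects when X is a
% noetherian sober space and f_\sigma, g_\sigma restrict to mutually inverse bijections between
% thick subcategories of T containing an object of U and specialization-closed subsets of X.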
milp and based heuristics for the eternity ii puzzle oct fabio salassaa wim vancroonenburgb tony wautersb federico della crocea greet vanden bergheb a politecnico di torino digep corso duca degli abruzzi torino italy b ku leuven department of computer science codes gebroeders de smetstraat gent belgium abstract the present paper considers a hybrid local search approach to the eternity ii puzzle and to unsigned rectangular edge matching puzzles in general both an original linear programming milp formulation and a novel formulation are presented for this problem although the presented formulations remain computationally intractable for medium and large sized instances they can serve as the basis for developing heuristic decompositions and very large scale neighbourhoods as a side product of the formulation new instances are published for the academic research community two reasonably well performing constructive methods are presented and used for determining the initial solution of a local search approach experimental results confirm that this local search can further improve the results obtained by the constructive heuristics and is quite competitive with the state of the art procedures keywords edge matching puzzle hybrid approach local search introduction the eternity ii puzzle eii is a commercial edge matching puzzle in which square tiles with four coloured edges must be arranged on a grid corresponding author email address tony wauters preprint submitted to october such that all tile edges are matched in addition a complete solution requires that the grey patterns which appear only on a subset of the tiles should be matched to the outer edges of the grid an illustration of a complete solution for a small size puzzle is provided in figure figure solution to an eternity edge matching puzzle of size image generated with the eternity ii editor http accessed on january the eii puzzle was created by christopher monckton and released by the toy distributor tomy uk in july along with the puzzle release a large cash prize of million usd was announced to be awarded to the first person who could solve the puzzle as can be expected this competition attracted considerable attention many efforts were made to tackle this challenging problem yielding interesting approaches and results however no complete solution has ever been generated meanwhile the final scrutiny date for the cash price december has passed leaving the large money prize unclaimed the eii puzzle belongs to the more general class of edge matching puzzles which have been shown to be many approaches to edge matching puzzles are now available in the literature constraint programming approaches have been developed in addition to metaheuristics backtracking and evolutionary methods other methods translate the problem into a sat formulation and then solve it with sat solvers an extensive literature overview on the topic is provided in while provides a survey on the complexity of other puzzles the present paper introduces a novel linear programming milp model and a novel based formulation for puzzles of size n both formulations serve as components of heuristic decompositions used in a local search approach the remainder of the paper is structured as follows section presents the milp and maxclique formulations in section several hybrid heuristic approaches are introduced computational results are presented in section final conclusions are drawn in section problem formulations mixed integer linear programming formulation a novel linear 
programming model was developed for the EII puzzle problem. The following notation will be used. The puzzle consists of an n x n square onto which n^2 tiles need to be placed. The index t refers to tiles; the indices r = 1, ..., n and c = 1, ..., n denote the rows (resp. columns) of the puzzle board. The index θ refers to the rotation of the tile: θ = 0 means not rotated, θ = 1 means rotated clockwise over 90 degrees, etc. The coefficient ct_{t,l,θ} (resp. cb_{t,l,θ}, cl_{t,l,θ}, cr_{t,l,θ}) is equal to 1 if tile t has colour l at its top (resp. bottom, left, right) position when rotated by θ. The decision variables of the MILP model are defined as follows: x_{t,r,c,θ} = 1 if tile t is placed in row r, column c with rotation θ, and 0 otherwise; h_{r,c} = 1 if the right edge of position (r, c) is unmatched, and 0 otherwise; v_{r,c} = 1 if the bottom edge of position (r, c) is unmatched, and 0 otherwise. The model is then defined as follows:
\[ \min\ \sum_{r=1}^{n}\sum_{c=1}^{n-1} h_{r,c} + \sum_{r=1}^{n-1}\sum_{c=1}^{n} v_{r,c} \qquad (1) \]
\[ \sum_{r=1}^{n}\sum_{c=1}^{n}\sum_{\theta=0}^{3} x_{t,r,c,\theta} = 1 \qquad \forall\, t \qquad (2) \]
\[ \sum_{t}\sum_{\theta=0}^{3} x_{t,r,c,\theta} = 1 \qquad \forall\, r, c \qquad (3) \]
\[ \sum_{t}\sum_{\theta} cr_{t,l,\theta}\, x_{t,r,c,\theta} - \sum_{t}\sum_{\theta} cl_{t,l,\theta}\, x_{t,r,c+1,\theta} \le h_{r,c} \qquad \forall\, r,\ c \le n-1,\ l \qquad (4) \]
\[ \sum_{t}\sum_{\theta} cl_{t,l,\theta}\, x_{t,r,c+1,\theta} - \sum_{t}\sum_{\theta} cr_{t,l,\theta}\, x_{t,r,c,\theta} \le h_{r,c} \qquad \forall\, r,\ c \le n-1,\ l \qquad (5) \]
\[ \sum_{t}\sum_{\theta} cb_{t,l,\theta}\, x_{t,r,c,\theta} - \sum_{t}\sum_{\theta} ct_{t,l,\theta}\, x_{t,r+1,c,\theta} \le v_{r,c} \qquad \forall\, r \le n-1,\ c,\ l \qquad (6) \]
\[ \sum_{t}\sum_{\theta} ct_{t,l,\theta}\, x_{t,r+1,c,\theta} - \sum_{t}\sum_{\theta} cb_{t,l,\theta}\, x_{t,r,c,\theta} \le v_{r,c} \qquad \forall\, r \le n-1,\ c,\ l \qquad (7) \]
\[ \sum_{t}\sum_{\theta} ct_{t,\bar{l},\theta}\, x_{t,1,c,\theta} = 1, \qquad \sum_{t}\sum_{\theta} cb_{t,\bar{l},\theta}\, x_{t,n,c,\theta} = 1 \qquad \forall\, c \qquad (8) \]
\[ \sum_{t}\sum_{\theta} cl_{t,\bar{l},\theta}\, x_{t,r,1,\theta} = 1, \qquad \sum_{t}\sum_{\theta} cr_{t,\bar{l},\theta}\, x_{t,r,n,\theta} = 1 \qquad \forall\, r \qquad (9) \]
\[ x_{t,r,c,\theta} \in \{0,1\}\ \forall\, t, r, c, \theta; \qquad h_{r,c} \in \{0,1\}\ \forall\, r,\ c \le n-1; \qquad v_{r,c} \in \{0,1\}\ \forall\, r \le n-1,\ c \qquad (10) \]
where \bar{l} denotes the grey frame colour. The objective function expression (1) minimises the number of unmatched edges in the inner region of the puzzle. Constraints (2) indicate that each tile must be assigned to exactly one position with one rotation. Constraints (3) require that exactly one tile must be assigned to a position. The edge constraints (4) and (5) force the h_{r,c} variables to take on the value 1 if the tiles on positions (r, c) and (r, c+1) are unmatched; similarly, constraints (6) and (7) do the same for the vertical edge variables v_{r,c}. Finally, constraints (8) and (9) ensure that the border edges are matched to the grey frame colour \bar{l}. We point out that constraining the objective function to zero (no unmatched edges allowed) turns the model into a feasibility problem where every feasible solution is also optimal. However, preliminary testing showed that the latter model is only relevant for very small size problem instances: if the MILP solver needs to be stopped prematurely on the feasibility model, no solution is returned. Clique formulation. The EII puzzle itself is a decision problem and can be modelled as such, since it reduces to the well-known decision version of the clique problem as follows. Given a parameter k and an undirected graph G = (V, E), the clique problem calls for finding a subset of pairwise adjacent nodes (called a clique) with a cardinality greater than or equal to k. Let the nodes of the graph correspond to the variables x_{t,r,c,θ} from the MILP formulation introduced above; each node thus represents a tile in a given position on the puzzle and with a given rotation. Two nodes are connected iff there is no conflict between them in the puzzle. Possible causes of conflicts are: unmatching colours for adjacent positions; the same tile assigned to different positions; the same tile assigned to the same position with different rotations; different tiles assigned to the same position. The objective is to find a clique of size n^2, where n is the size of the puzzle. [Table: results obtained with the maximum clique formulation, solved with a state-of-the-art max-clique heuristic, and the MILP formulation, solved with CPLEX, on small size edge matching puzzle instances; the table reports, per puzzle size, the optimal solution, the clique graph's number of nodes, edges and density, the matching-edge counts e and computing times t for three values of the parameter q, and the MILP's number of variables and constraints with single-thread and multi-thread times. The numerical entries did not survive extraction.] Comparison of the MILP model and the clique model. The applicability of the MILP model and the clique model is investigated in what
follows initial testing was performed on a set of small puzzle instances ranging from up to refer to section for more information on these instances the milp model was implemented using cplex a state of the art heuristic was used for solving the maximum clique problem kindly provided by its authors the heuristic has only one parameter the number of selections q the computing time of the algorithm is linear with respect to q we tested the heuristic with q both the milp model and the max clique heuristic were tested on a modern desktop table shows the results obtained with the max clique formulation and the milp formulation for each instance we report for the clique formulation the number of nodes the number of edges the optimal solution namely the max number of matching edges the density the best number of matching edges e and the average computing time t in seconds for runs the number of variables and constraints is reported for the milp formulation the solutions depicted in bold are optimal the results show that instances up to size can be easily solved using a state of the art maximum clique intel core cpu algorithm instances of size could not be solved completely even when the algorithm was executed with higher values of q and more runs the milp is also able to solve up to size however the clique formulation is significantly faster from size upwards note that the edge matching puzzles correspond with large difficult clique instances for which current max clique solvers are not able to find the optimal solution we provide the corresponding max clique instances of the to instances to the academic larger size instances are hard to manage the graph file for example is larger than gb solution approaches both the milp model and the clique formulation presented in the previous section proved to be computationally intractable for medium sized instances the size appears to be restricted to and when the execution time is limited to one day the true eii puzzle instance is still far beyond the grasp of these models however these models can serve as the basis for some well performing heuristics presented in the following paragraphs greedy heuristic a greedy constructive heuristic has been developed for the problem studied here the heuristic is based on subproblem optimisation the puzzle is divided in regions by considering individual or rectangular regions regions are then consecutively constructed by employing a variant of the milp model presented in section first we introduce the notion of a partial solution s t in which a subset t of the tiles t have been assigned to a subset of the positions r r c n c n given a partial solution s model can be modified such that it only considers the positions in a region and it only aims to assign tiles t t tiles that have not been assigned elsewhere in addition we restrict to a rectangular region denoted by rmin rmax the positions of the region the instances can be downloaded from https a generator for these instances is available upon request from the authors figure illustrates how model can be modified to solve a region given a partial solution on in this example it is required to select of the available tiles tiles are already assigned to region in such a way that the unmatched edges are minimised hence in order to consider region the objective function is modified as follows min x x hr c x x vr c only of the remaining tiles must be selected and assigned to the region and therefore constraints are modified as follows note that the inequality indicates that 
not all tiles will be selected x x x xt r c t similarly constraints are also suitably modified in order to take into account the specific region to be considered note that the edge constraints forcing the values of the hr c and vr c variables also hold for rows and columns matching the boundaries of previously solved regions this enables building a solution with only a few unmatched edges between region boundaries this partial optimization model can be applied to solve all regions sequentially thus constructing the final complete solution initially is optimised after which region disjoint from region is optimised and so on the variables corresponding with the region are then optimally assigned by the milp solver algorithm presents pseudocode of this approach algorithm greedy heuristic require r rk a decomposition of r in k regions ri the initial partial solution s has no tiles assigned for i k do apply milp model to region ri given and get a new partial solution si with tiles assigned to ri end for return sk for each puzzle s size differently sized subsets of tiles have been tested to assess the quality of the approach preliminary tests of regions varying partial solution hrc vrc empty region figure illustration of how model can be modified such that it only considers the positions in a region given a partial solution s on region from by tiles size to by tiles size have been performed on the eii puzzle instance this preliminary analysis revealed that the cpu time required at each iteration of the greedy heuristic limits the subset size to tiles this roughly corresponds to milp variables for the first region of the real eii puzzle clearly an increased number of tiles leads to better results however more cpu time is needed to compute the optimal solution limiting the use in any hybrid framework backtracking constructive heuristic a backtracking version of the greedy heuristic has also been developed the main idea namely building a complete solution by constructing optimal regions is the same as for the greedy heuristic the backtracking version however restricts the optimal value of each subproblem to zero all tiles in the region should match both internally and with respect to the tiles outside the region whenever a subproblem is determined to be infeasible no completely edge matching region can be constructed the procedure backtracks to the previous region in order to find a new assignment in that region this may afterwards enable constructing a feasible assignment in the next region if not then the process is repeated until the backtracks are sufficient to find a complete solution model suitably modified is again used to build partial solutions let ri be the current region considered by the procedure and the related partial solution once the corresponding milp model is solved whenever the lower bound of the milp model related to region ri is detected to be greater than zero optimisation of region rk is stopped instead the previous region again with is reconsidered in order to obtain a new partial solution value let be the set of variables xt r c having value in solution the previous partial solution must be cut off when searching for solution the following new constraint is added to the model x xt r c t r c xt r c the rationale is to force at least one of the variables of set to be equal to zero if no solution of the previous region can lead to a zero lower bound in the current region ri the procedure backtracks further and searches for a new solution for region and so on due to the 
enumerative nature this procedure can lead to incomplete solutions despite long computation times we decided to limit the backtracking procedure to a fixed time limit after which the greedy heuristic continues until a complete solution is generated this backtracking heuristic is sketched in algorithm in which a recursive method backt rackin g heu rist ic obtained attempts to solve the current region ri given partial solution in the previous region if the lower bound lb of the current region is greater than the method backtracks to the previous level however if the lower bound is still and a perfectly matched assignment is found the heuristic attempts to solve the next region this will continue calling recursively until the puzzle is solved or shown infeasible given the current assignments in in the latter case the current partial solution will be excluded and a new partial solution will be constructed different and any other previously excluded partial solution from if a timeout is reached the method will continue with the best partial solution and solve the remaining regions with the greedy heuristic discussed in the previous section a local search approach a local search approach has been developed to improve the solutions generated by the constructive heuristics or any random solution the key idea is to test after an initial complete solution is generated by the heuristics whether a neighbourhood can still improve the current solution this local search method is a steepest descent search that tries to improve a solution with the following neighbourhoods border optimisation region optimisation tile assignment and tiles swap and rotation we refer to figure for an illustration of the regions considered by these neighbourhoods the border optimisation bo neighbourhood only considers placing tiles in the border while all the tiles in the inner part are fixed the decomposition tries to find the optimal border in terms of matching edges also considering the fixed tiles on the adjacent inner part correspondingly model is modified in such a way that the inner variables are fixed to their current value only the border variables can change value this subproblem corresponds to a edgematching problem preliminary computational tests indicated that the related milp model could always be solved solutions for the largest instances algorithm backt rackin g heu rist ic require r rk a decomposition of r in k regions ri require i current recursive level i k require partial solution of the previous level is the initial empty partial solution excludedsolutions partial solutions not leading to feasible solutions while not timeout do r excludedsolutions op t im ize region i if lb then return no feasible solution in current level backtrack else backt rackin g heu rist ic r i if then does not lead to feasible solution excludedsolutions excludedsolutions else s is the complete solution return s end if end if end while s greedy return s such as the original eii puzzle can be generated within little computation time when the bo neighborhood is considered the corresponding milp model is solved returning a solution at least as good as the current solution and consisting of an optimal border with respect to the n n inner region the region optimisation ro neighbourhood relates to the optimisation of a smaller region inside the puzzle and only considers the tiles of this region in the puzzle correspondingly given the current solution model is suitably modified in such a way that the variables outside the region are fixed 
to their current value only the region s variables can change value the ro neighborhood can also be tackled by means of the formulation by generating a graph only containing nodes corresponding to assignments in the specified region however only feasible assignments should be considered and nodes conflicting with assignments adjacent to but outside the region should not be added to the graph we recall that the purpose of the model is to find complete assignments that is without any unmatched edges however given the tiles in the considered region it may not be feasible to find such a solution in this case holes are left in the region to which the remaining unassigned tiles should be assigned the related milp region model is solved where all assigned variables are fixed to the value determined by the maxclique solver when the ro neighborhood is considered the local search procedure samples regions of fixed size in the current solution under consideration for small sizes the model heuristically is solved faster than the milp model therefore the ro neighborhood is always addressed by means of the formulation where the milp formulation is only used for completing the solution whenever holes are left in the region in the tile assignment ta neighbourhood k tiles are removed from positions diagonally adjacent is allowed and optimally reinserted thereby minimising the number of unmatched edges the related subproblem corresponds to a pure bipartite weighted matching problem which is optimally solvable by the hungarian algorithm the ta neighbourhood was first introduced by schaus and deville who called it a very large neighbourhood wauters et al developed a probabilistic version of the ta neighbourhood that sets a higher probability to selecting tiles with many unmatched edges the latter ta variant was applied in the present paper the ta separates the inner and the border moves it is prohibited to reassign border pieces to the inner region and vice versa an extention to the ta neighbourhood is also tested in particular a checkers configuration of selected tiles is studied all tiles on the board that are diagonally adjacent we denote this extension black and white bw the local search procedure iterates in this neighbourhood iteratively changing between black and white positions and solving the related bipartite weighted matching problem until no more improvements are found finally the tiles swap and rotation tsr neighbourhood is a standard local search swap operator in this case swapping the assignment of two tiles trying all possible rotations as well the local search procedure exhaustively searches the neighbourhood until a local optimum is reached computational results this section provides computational results obtained by the local search approach on the eternity ii puzzle as well as the and instances that were used in the meta eii the latter instances serve as an interesting test set for comparison due to the availability of some results from the contest in addition to the best of our knowledge the complete solutions of these instances are not publicly available the to instances used in section also originate from this set all tests were performed on a cores intel nehalem cluster with gb ram with each core running ghz with mb cache computational resources provided by dauin s hpc this cluster was used to solve different in parallel in order to reduce the total time required to run all tests each individual test was run on a single processing core thus no parallelism was employed in the algorithms 
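The tile assignment (TA) move described above reduces to a bipartite weighted matching between the removed tiles and the freed positions, which the Hungarian algorithm solves exactly. The Python sketch below is only an illustration of that reduction, not the authors' implementation; mismatch_cost is a hypothetical helper that counts the unmatched edges produced by one placement given the tiles that remain fixed, and because the freed positions are chosen so that no two are edge-adjacent (only diagonal adjacency is allowed), the placement costs are independent and the matching is exact.

import numpy as np
from scipy.optimize import linear_sum_assignment

def reassign_tiles(tiles, positions, fixed, mismatch_cost):
    # cost[i, j] = best cost over the four rotations of placing tiles[i] at positions[j]
    cost = np.zeros((len(tiles), len(positions)))
    best_rot = np.zeros((len(tiles), len(positions)), dtype=int)
    for i, t in enumerate(tiles):
        for j, p in enumerate(positions):
            rot_costs = [mismatch_cost(t, rot, p, fixed) for rot in range(4)]
            best_rot[i, j] = int(np.argmin(rot_costs))
            cost[i, j] = min(rot_costs)
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    placements = [(tiles[i], positions[j], int(best_rot[i, j])) for i, j in zip(rows, cols)]
    return placements, cost[rows, cols].sum()

The probabilistic variant used in the paper only changes how the removed positions are sampled (tiles with many unmatched edges are selected with higher probability); the matching step itself is unchanged.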
all milp models are solved using cplex rd international conference on metaheuristics and nature inspired computing djerba island tunisia october eternity contest http for more details see http a b d c e figure illustration of the regions involved in the neighbourhood operations a optimizing the border bo b optimizing a rectangular region ro c optimizing nonadjacent tile assignments ta d optimizing diagonally adjacent tile assignments in a checkers fashion bw e swapping two tiles and possibly rotating them tsr parameter name value t tiles t it cols rows it table parameter settings table summarizes the parameter settings of the local search approach the local search procedure starts either from a random solution or from a solution obtained by the constructive heuristics the algorithm cycles through the proposed neighbourhoods in the following sequence ta for t iterations with sample size t bo for one iteration bw till local optimum tsr till local optimum and finally ro by means of the formulation for iterations with rectangular sample size this sequence was determined experimentally though the difference in performance between sequences was very limited at the end of the step the final solution is a local minimum with respect to all considered neighbourhoods table shows the results for twenty runs of the greedy heuristic and the backtracking heuristic for different region sizes on all the problem instances a timeout of seconds was set for the backtracking heuristic for the instance seconds for the instance seconds for the instance and seconds for the and eii instances the greedy heuristic executed by itself or after the backtracking heuristic is executed until all regions are solved the table also shows the results of both constructive methods after subsequent optimisation by the local search heuristic in general both constructive heuristics generate better results when larger regions are used this clearly affects the cpu time needed to compute optimal solutions for each region by comparing the results of the two heuristics without the local search phase it seems that the backtracking procedure does not strongly dominate on maximum average and minimum values the greedy one while consuming all the available time this dominance tends to be more evident for small puzzles and region sizes while for larger instances with solutions generated by larger regions the gap becomes smaller in almost all cases the local search procedure manages to improve the results of the constructive heuristics by several units indicating that these initial solutions are not local optima with respect to the considered neighbourhoods we conclude that many neighbourhoods in a complex structure are effective for improving these greedy constructive solutions table shows the performance of the local search procedure starting from a random initial solution of poor quality the procedure can achieve good quality results for the instance but not for larger instances this can easily be related to the size of the ro neighborhood as it is quite large with respect to the puzzle size in the instance it is able to optimize a large part of the puzzle however this ratio becomes smaller and is thus less effective for larger instances finally table compares the best published results with the results obtained by the hybrid local search procedure the cpu times refer to the considered time limits table also reports a large test of the procedure where the best performing configuration was run times within a doubled execution time limit 
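To make the neighbourhood schedule described above concrete, the following rough sketch shows one way the steepest-descent driver could be organised. The callables ta, bo, bw, tsr, ro and the scoring function unmatched_edges are assumptions standing in for the components discussed in the text, each returning a solution at least as good as the one it receives; the iteration counts are placeholders, not the tuned parameter values.

def local_search(solution, ta, bo, bw, tsr, ro, unmatched_edges,
                 ta_iters=100, ro_iters=10):
    # cycle through the neighbourhoods in the order given in the text:
    # ta for several iterations, bo once, bw and tsr until their own local
    # optimum (handled inside the callables), and finally ro; repeat the
    # whole cycle as long as the number of unmatched edges keeps dropping
    best = unmatched_edges(solution)
    improved = True
    while improved:
        improved = False
        for move in [ta] * ta_iters + [bo, bw, tsr] + [ro] * ro_iters:
            candidate = move(solution)
            score = unmatched_edges(candidate)
            if score < best:
                solution, best = candidate, score
                improved = True
    return solution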
larger execution times using cplex as ilp solver do not induce further improvements of the results we note that some of the entries of table are missing many of the approaches only deal with a subset of the considered instances only three studies report results for the to instances that were tested in this paper some approaches were only applied to the real eii game puzzle the algorithm reported in was executed on a cpu intel xeon tm with computation time hours the best score over runs equals obtained a best score of no indication was provided on the computer and the required cpu time the algorithm of was run on a pc pentium core quad ghz with gb of ram it considered eii style problems but not the real puzzle with sizes and the corresponding time limits were and seconds respectively and the entries of table report the best solution obtained over runs the algorithm of addressed the same instances with the same time limits and number of runs as it was tested on a personal computer with cpu and ram time limits and number of runs were the same as in the tests were performed on an intel core duo with of ram the algorithm of was tested on all the instances from and and also on the eii real game puzzle the entry reports the best result obtained over runs with a time limit of seconds for eii finally the algorithm of was tested on the eii real game puzzle only running on a grid computing system over a period of several not explicitly indicated by the authors the results show that the algorithm is competitive with the state of the art obtaining top results for the instance in a similar time frame as the other algorithms most interesting the initial solution constructed by the greedy and backtracking heuristics is already of high quality leaving only a limited gap from the optimal solution therefore we expect that these methods may serve as the basis for reaching new top results the best result for the official eii puzzle instance obtained using a slipping tile scanrow backtracking algorithm is still out of our current grasp however that algorithm was highly tailored to the eii puzzle instances used precomputed sequences and was run over the course of several see http a direct comparison with the approach presented is partially misleading among the other existing approaches only shows to be slightly superior to our approach however our approach should become more competitive along with the expected performance improvement of milp solvers over the years clearly solving larger subregions in both the constructive heuristics greedy and backtracking will lead to better initial solutions in addition the effectiveness of the local search neighbourhoods is expected to improve when larger regions can be solved if performance improvements allow ilp solvers to address instances of size or even in a reasonable amount of time it may safely be assumed that the proposed approach will lead to improved results competitive with the other state of the art approaches conclusions the present work introduced a hybrid approach to the eternity ii puzzle a milp formulation is related to the puzzle s optimisation version where the total number of unmatched edges should be minimised it is shown that the eternity ii puzzle can be modelled as a clique problem providing as a byproduct of this work new hard instances of the maximum clique problem to the community preliminary testing revealed it was clear that these models can not handle large size instances such as the original eii puzzle as they quickly become computationally 
intractable therefore these models were used as the basis for heuristic decompositions which could then be used in a hybrid approach a greedy and a backtracking constructive heuristic have been designed which strongly rely on the capability of optimally solving a specific region of the puzzle within a reasonable time limit high quality solutions can be generated using these heuristics a local search approach has also been proposed by applying a set of different neighbourhoods the local search procedure manages to improve upon the initial solutions generated by the constructive heuristics and reaches solutions competitive to the best available results these results confirm that a novel and clever use of mathematical models and is effective for large size problems which can not be solved all in once by the same milp solver we believe that hybridizing local search approaches and mathematical programming techniques in a matheuristic context is the key to break up the intractability of hard problems such as the eii puzzle references c r c mateu edge matching puzzles as hard benchmarks in cp proceedings of the international conference on principles and practice of constraint programming benoist t bourreau fast global filtering for eternity ii constraint programming letters bomze im budinich m pardalos pm pelillom the maximum clique problem in du dz pardalos pm eds handbook of combinatorial optimization kluwer academic dordrecht coelho i coelho b coelho v haddad m souza m ochi a general variable neighborhood search approach for the resolution of the eternity ii puzzle in proceedings of the international conference on metaheuristics and nature inspired computing meta http submission demaine ed demaine ml jigsaw puzzles edge matching and polyomino packing connections and complexity graphs and combinatorics grosso a locatelli m pullan simple ingredients leading to very efficient heuristics for the maximum clique problem journal of heuristics heule mjh solving problems with satisfiability solvers in proceedings of the second international workshop on logic and search lash kendall g parkes a spoerer a survey of puzzles international computer games association journal kuhn hw the hungarian method for the assignment problem naval research logistics quarterly j gutierrez g sanchis a evolutionary genetic algorithms in a constraint satisfaction problem puzzle eternity ii in cabestany j sandoval f prieto a corchado j editors systems computational and ambient intelligence lncs schaus p deville hybridization of cp and vlns for eternity ii in jfpc francophones de programmation par contraintes http eternity verhaard http wang ws chiang tc solving puzzles with a tabu search algorithm in proceedings of the international conference on metaheuristics and nature inspired computing http submission wauters t vancroonenburg w vanden berghe a approach to the eternity ii puzzle journal of mathematical modelling and algorithms instance start greedy greedy backtracking backtracking greedy greedy backtracking backtracking greedy greedy backtracking backtracking greedy greedy backtracking backtracking greedy greedy backtracking backtracking region size x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x max avg min time avg s table results for the greedy and backtracking heuristics with and without local search using different region sizes objmax objavg objmin optimum eii instance table results for the local search procedure with optimal neighbourhoods starting from a random solution present paper 
table comparison of the best results to other approaches available in the literature one column per instance including the eii instance with rows for the present paper in two configurations differing in the number of runs the approaches of wang and chiang coelho et al schaus and deville wauters et al and verhaard among others and the known optimum execution times in minutes presented within parenthesis
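For concreteness, the optimisation version referred to throughout this paper (minimise the number of unmatched edges subject to every tile being placed exactly once and every cell holding exactly one tile) can be sketched as a small mixed-integer program. The model below, written with the PuLP modelling library, is one possible way to express it and is not claimed to be the exact formulation used above; edge(tile, rotation, side) is a hypothetical helper returning the colour shown on a given side of a tile after rotation.

import pulp

def build_model(n, tiles, edge):
    # x[t][r][c][o] = 1 iff tile t sits at cell (r, c) with rotation o
    # h[r][c] = 1 iff the edge between (r, c) and (r, c + 1) is unmatched
    # v[r][c] = 1 iff the edge between (r, c) and (r + 1, c) is unmatched
    T, R, C, O = range(len(tiles)), range(n), range(n), range(4)
    prob = pulp.LpProblem("edge_matching", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (T, R, C, O), cat="Binary")
    h = pulp.LpVariable.dicts("h", (R, range(n - 1)), cat="Binary")
    v = pulp.LpVariable.dicts("v", (range(n - 1), C), cat="Binary")
    prob += (pulp.lpSum(h[r][c] for r in R for c in range(n - 1))
             + pulp.lpSum(v[r][c] for r in range(n - 1) for c in C))
    for t in T:  # every tile is placed exactly once
        prob += pulp.lpSum(x[t][r][c][o] for r in R for c in C for o in O) == 1
    for r in R:  # every cell holds exactly one tile
        for c in C:
            prob += pulp.lpSum(x[t][r][c][o] for t in T for o in O) == 1
    colours = {edge(t, o, s) for t in T for o in O for s in ("left", "right")}
    for r in R:  # horizontal linking: a colour shown on the right side of (r, c)
        for c in range(n - 1):  # but not on the left side of (r, c + 1) forces h[r][c] = 1
            for col in colours:
                left = pulp.lpSum(x[t][r][c][o] for t in T for o in O
                                  if edge(t, o, "right") == col)
                right = pulp.lpSum(x[t][r][c + 1][o] for t in T for o in O
                                   if edge(t, o, "left") == col)
                prob += left - right <= h[r][c]
    # the vertical linking constraints on v[r][c] are analogous, using the
    # bottom side of (r, c) and the top side of (r + 1, c)
    return prob, x, h, v

Solving prob with prob.solve() yields an assignment whose objective value counts the unmatched interior edges; the region subproblems used by the greedy and backtracking heuristics are obtained from such a model by fixing part of the x variables and restricting the sums to the cells and unassigned tiles of one region.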
8
mapping images to scene graphs with structured prediction roei herzig moshiko raboh gal chechik jonathan berant amir globerson feb abstract structured prediction is concerned with predicting multiple labels simultaneously classical methods like crf achieve this by maximizing a score function over the set of possible label assignments recent extensions use neural networks to either implement the score function or in maximization the current paper takes an alternative approach using a neural network to generate the structured output directly without going through a score function we take an axiomatic perspective to derive the desired properties and invariances of a such network to certain input permutations presenting a structural characterization that is provably both necessary and sufficient we then discuss invariant gpi architectures that satisfy this characterization and explain how they can be used for deep structured prediction we evaluate our approach on the challenging problem of inferring a scene graph from an image namely predicting entities and their relations in the image we obtain results on the challenging visual genome benchmark outperforming all recent approaches introduction structured prediction addresses the problem of classification when the label space contains multiple labels for example in semantic segmentation of an image each pixel is assigned a label while considering the labels of nearby pixels a similar problem is the task of recognizing multiple entities and their relations in an image where recognizing one entity affects recognition of the others structured prediction has attracted considerable attention because it applies to many learning problems and poses unique theoretical and applied challenges see taskar et chen et belanger et equal contribution university israel google brain ca gonda brain research institute university israel correspondence to roei herzig roeiherzig moshiko raboh shikorab gal chechick jonathan berant joberant amir globerson typically structured prediction models define a score function s x y that quantifies how well a label assignment y is compatible or consistent with an input x in this setup the inference task amounts to finding the label that maximizes the compatibility score y arg maxy s x y this approach separates a scoring component implemented by a parametric model from an optimization component aimed at finding a label that maximizes that score unfortunately for a general scoring function s the space of possible label assignments grows exponentially with input size for instance the set of possible pixel label assignments is too large even for small images thus inferring the label assignment that maximizes a scoring function is computationally hard in the general case an alternative approach to methods is to map an input x to a structured output y with a black box neural network without explicitly defining a score function this raises a natural question what properties and invariances must be satisfied by such a network we take this axiomatic approach and argue that one important property is invariance to a particular type of input permutation we then prove that this invariance is equivalent to imposing certain structural constraints on the architecture of the network and describe architectures that satisfy these constraints significantly extending the expressive power of current structured prediction approaches we argue that respecting permutation invariance is important as otherwise the model would have to spend capacity on 
learning this invariance at training time conceptually our approach is motivated by recent work on d eep s ets zaheer et which asked a similar question for functions on sets to evaluate our approach we tackle the challenging task of mapping an image to a scene graph which describes the entities in the image and their relations we describe a model that satisfies the permutation invariance property and show that it achieves results on the competitive visual genome benchmark krishna et demonstrating the power of our new design principle in summary the novel contributions of this paper are first we derive sufficient and necessary conditions for a deep structured prediction architecture second we improve the with this approach in a challenging problem on a large dataset of complex visual scenes mapping images to scene graphs with structured prediction structured prediction methods in structured prediction define a score function s x y that reflects the degree to which y is compatible with x and infer a label by solving y arg maxy s x y see lafferty et taskar et meshi et chen et belanger et most score functions previously used decompose as a sum over simpler functions s x y p i fi x y where solving maxy fi x y can be performed this local maximization forms the basic building block of algorithms for approximately maximizing s x y one way to achieve this is to restrict fi x y to depend only on a small subset of the y variables the renewed interest in deep learning led to efforts to integrate deep networks with structured prediction including modeling the fi functions as deep networks in this context the most score functions are singleton f yi x and pairwise fij yi yj x initial work used a architecture learning local scores independently of the structured prediction goal chen et farabet et later works considered architectures where the inference algorithm is part of the computation graph chen et pei et schwing urtasun zheng et these studies used standard inference algorithms such as loopy belief propagation mean field methods and gradient descent belanger et methods provide several advantages first they allow intuitive specification of local dependencies between labels like pairwise dependencies and how these translate to global dependencies second when the score function is linear in its parameters s x y w is linear in w the learning problem has natural convex surrogates logloss in crf making learning efficient third inference in large label spaces is often possible via exact combinatorial algorithms or empirically accurate approximations however with the advent of deep scoring functions s x y w learning is no longer convex thus it is worthwhile to rethink the architecture of structured prediction models and consider models that map inputs x to outputs y directly without an explicit score function we want these models to enjoy the expressivity and predictive power of neural networks while maintaining the ability to specify local dependencies between labels in a flexible manner in the next section we present such an approach and consider a natural question what should be the properties of such a deep neural network used for structured prediction the term energy function is also used more precisely many p message passing algorithms require that functions fi x y k yk can be maximized efficiently permutation invariant structured prediction we begin with some notation focusing on structures that consists of pairwise interactions as these are simpler in terms of notation and sufficient for describing 
the structure in many problems we denote a structured label with n entries by y yn in a approach the score is defined via a set of singleton scores fi yi x and pairwise scores fij yi yj x where the overall score s x y is the sum of these singleton and pair scores for brevity we also denote fij fij yi yj x and fi fi yi x an inference algorithm takes as input the set of local scores fi fij and outputs the assignment maximizing s x y we can therefore abstractly view an inference algorithm as a blackbox that takes as input a set of and inputs the local scores fi fij and returns a label y even without an explicit score function s x y while numerous inference algorithms exist for this setup including belief propagation bp and mean field here we aim to develop a framework for a deep learning labeling algorithm we avoid the term inference since the algorithm does not explicitly maximize a score function such an algorithm will be a with the f functions as input and the labels yn as output we next ask what architecture such an algorithm should have we follow with several definitions a graph labeling function f v e y is a function whose input is an ordered set of node features v z z n and ordered set of edge features e z z i j z n for example the z i s can be the array of values fi yi x and the z i j s can be the table of values fi j yi yj x for simplicity assume z i rd and z i j re the output of f is a set of labels y yn which can be thought of as labeling the nodes thus inference algorithms like bp are graph labeling functions since they take f as input and output a set of labels however graph labeling functions need not correspond to any inference algorithm an algorithm that maximizes a score function a natural requirement is that the algorithm produces the same result when given the same score function for example consider a label space containing three variables and assume that the inference algorithm takes as input z z z z z z z and outputs a label y when the same algorithm is given an input that is permuted in a consistent way z this defines exactly the same score function as the first scenario hence we would expect it to output the same label only permuted namely it should output y most inference algorithms including bp and mean field satisfy this symmetry requirement by mapping images to scene graphs with structured prediction characterizing permutation invariance figure graph permutation invariance and structured prediction a graph labeling function f is graph permutation invariant gpi if permuting the names of nodes maintains the output sign here we design a deep learning hence need to guarantee invariance to input permutations a that does not satisfy this invariance has to waste capacity on learning it at training time in what follows we use z to denote the joint set of node and edge features z can be thought of as a container with n n n elements we next consider what happens to a graph labeling function when graph variables are permuted by a permutation importantly the edges in this case are also permuted in a way that is consistent with the node permutation definition let z be a set of node and edge features given a permutation of n denote z to be a new set of node and edge features that are given by z i z i z i j z i j z has the same elements as in z but the node elements are permuted according to and the edge elements are permuted accordingly in what follows we use the notation yn n namely applied to a set of labels yields the same labels only permuted by next comes our key definition 
of a function f whose output is invariant to permutations of the input graph definition a graph labeling function f is said to be invariant gpi if for all permutations of n and for all z it satisfies f z f z figure illustrates the desired invariance the above property says that as long as the input to f describes the same node and edge properties the same labeling will be output this is indeed a property we would like any such f to have and we thus turn to characterizing a necessary and sufficient structure for achieving it motivated by the above discussion we ask what structure is necessary and sufficient to guarantee that f is graphpermutation invariant note that a function f takes as input an ordered set z therefore its output on z could certainly differ from its output on z to achieve permutation invariance f should intuitively contain certain symmetries for example one permutation invariant architecture is to define yi g z i for any function g but this characterization is too restrictive to cover all permutation invariant functions the next theorem provides a complete characterization while figure shows the corresponding architecture theorem let f be a graph labeling function then f is invariant if and only if there exist functions such that for all k n f z k z k n x z i x z i z i j z j where rl and rw and rw proof first we show that any f satisfying the conditions of theorem is gpi namely for any permutation f z k f z k to see this write f z k using eq and definition as x x z k z i z i z i j z j i the second argument of above is clearly invariant under because the sum considers an index i and all other indices j hence the same elements are covered under permutation the expression therefore equals to x x z k z i z i z i j z j f z k i where the equality follows from eq we thus proved that eq implies graph permutation invariance next we prove that any invariant function can be expressed as in eq namely we show how to define and that can implement any permutation invariant function the key idea is to construct such that the second argument of in eq contains all the information about the graph features z including the edges they originated from then the function consists of an application of the black box f to this representation followed by extracting the label yk to simplify notation assume that edge features are scalar e the extension to the vector case is simple but involves more indexing we also assume that z k uniquely identifies the node no two nodes share the same node mapping images to scene graphs with structured prediction figure a schematic representation of the gpi architecture in theorem singleton features z i are omitted for simplicity first the features z i j are processed by next they are summed to create a vector si which is concatenated with z i third a representation of the entire graph is created by applying n times and summing the created vector the graph representation is then finally processed by together with z k feature which can be achieved by adding the index as another feature of z k finally we assume that f is a function only of the pairwise features z i j this can be achieved by adding singleton features into the pairwise ones let h be a hash function with l buckets mapping node features z i to an index bucket assume that h is perfect this can be achieved for a large enough l define to map the pairwise features to a vector of size let j be a vector of dimension rl with one in the j th coordinate recall that we consider scalar z i j so that is indeed in rl 
and define as z i z i j z j h z j zi j stores zi j in the unique bucket for node p let si z i zi j z j be the second argument of in eq si rl then since all z j are distinct si stores all the pairwise features for neighbors of i in unique positions within its l coordinates since si h z k contains the feature zi k whereas sj h z k contains the feature z j k we can not simply sum the si since we would lose the information of which edges the features originated from instead we define to map si to such that each feature is mapped to a distinct location formally z i si h z i sti outputs a matrix that is all zeros except for the features correspondingp to node i that are stored in row h z i the matrix m i z i si namely the second argument of in eq is a matrix with all the edge features in the graph including the graph structure figure illustration of the proof construction for theorem here h is a hash function of size l such that h h h g is a input graph and z i j r are the pairwise features in purple of a is applied to each z i j each application yields a vector in the three dark yellow columns correspond to z z and z then all vectors z i j are summed over j to obtain three si vectors b s blue matrices are an outer product between h z i and si see eq resulting in a matrix of zeros except one row the dark blue matrix corresponds for z c all s are summed to a matrix isomorphic to the original z i j matrix to complete the construction we set to have the same outcome as we first discard rows and columns in m that do not correspond to original nodes reducing m to dimension n n then we use the reduced matrix as the input z to the given assume for simplicity that m does not need to be let the output of f on m be y yn then we set z k m y h zk since f is invariant to permutations this indeed returns the output of f on the original input general graphs so far we discussed complete graphs where all edges correspond to valid feature pairs many graphs however may be sparse and have certain structures for example an chain graph in sequence labeling has only n edges for such sparse graphs the input to f would not be all z i j pairs but rather only features corresponding to valid edges of the graph and we are only interested in invariances that preserve the graph structure namely the automorphisms of the graph thus the desired invariance is that f z f z only for automorphisms of the graph it is easy to see that p theorem holds in this case if one replaces the sum with i where n i are the neighbors of node i in the graph this merely introduces another indexing step mapping images to scene graphs with structured prediction deep graph prediction theorem provides the general requirements for designing an architecture for structured prediction for a given problem one has to choose a specific architecture and parameterization for for instance it is interesting to consider how an algorithm like belief propagation bp can be implemented in our framework following the proof of theorem one would use to aggregate features and then would apply bp to these features our architecture is of course more general by construction for example it could use and to sketch the input graph such that labeling can be performed on a reduced representation we now survey certain architectures consistent with theorem and discuss their expressive power introducing attention attention is a powerful architectural component in deep learning bahdanau et but most inference algorithms do not use attention we now show how attention can be introduced 
in our framework intuitively attention means that instead of aggregating features of neighbors a node i weighs neighbors based on their relevance for example the label of an entity in an image may depend more strongly on entities that are spatially closer we now implement attention for the architecture of eq formally we learn attention weights for the neighbors j of a node i which scale the features z i j of that neighbor we can also learn different attention weights for individual features of each neighbor in a similar way let wi j r be an attention mask specifying the weight that node i gives to node j x wi j z i z i j z j zi zi j zj zi zi t zt t where can be any function of its arguments a dot product of z i and z j as in standard attention models to introduce attention we wish re to have the form of weighting p wi j over neighboring feature vectors z i j namely wi j z i j figure an image top and its scene graph bottom from the visual genome dataset krishna et the scene graph captures the entities in the image nodes blue circles and their pairwise relations edges red circles example relationships in this graph include hat on dog and dog on motorcycle using rnns as components theorem allows arbitrary functions for and except for their input dimensionality specifically these functions can involve highly expressive recursive computation simulate existing message passing algorithms and new algorithms that are learned from data this can of course be extended to more elaborate structures like lstms hochreiter schmidhuber and neural turing machines graves et which we leave for future work theorem suggests that any function in the form of f is graph permutation invariant it is easy to show that composing two functions that are gpi is also gpi therefore we can run f iteratively by providing the output of one step of f as part of the input to the next step and maintain invariance this results in a recurrent architecture which we will employ in the next section to obtain performance on scene graph prediction application scene graph classification x e x zi zi j zj z i j p wi j z i j z z z i i j j si e we demonstrate the benefits of our axiomatic approach in the task of inferring scene graphs from images in this problem the input is an image annotated with a set of rectangles that bound entities in the image known as bounding boxes the goal is to label each bounding box with the correct entity category and every pair of entities with their relation such that they form a coherent graph known as a scene graph in a scene graph nodes correspond to bounding boxes labeled with the entity category and edges correspond to relations among entities which could be spatial on or functional wearing thus in an image with bounding boxes there are output variables a similar approach can be applied over and to model attention over the outputs of as well graph nodes this concept is illustrated in figure showing an image of a dog on a motorcycle top and the corresponding scene to achieve this form we extend by a single entry defining namely we set l as e z i z i j z j zi zi j zj z i j here e are the first e elements of and z ip z i j z j zi zi j zj we keep the definition e of si z i zi j z j next we define si and substitute si and to obtain the desired form as attention weights wi j over neighboring feature vectors z i j z i si mapping images to scene graphs with structured prediction graph below the pink box in the image is labeled motorcycle and the white box is labeled dog these two boxes correspond to two nodes 
light blue circles in figure bottom and their relation on corresponds to an edge red circle labeled on while scene graphs are typically very sparse one can view a scene graph as complete if each pair of unrelated entities is connected by a null edge a scene graph can be represented as a collection of triplets each with a relation and two entities like dog on motorcycle model our model has two components a label predictor lp that takes as input an image with bounding boxes and outputs a distribution over labels for each entity and relation then a scene graph predictor sgp that takes all label distributions and predicts more consistent label distributions jointly for all entities and relations label prediction the lp module figure receives an input image x and a set of bounding boxes bbn corresponding to image entities as in figure lp outputs a set of entity label probabilities p yient bbi for each box i from a set of candidate entity labels p and a set rel of relation probabilities p yi j bbi bbj from another predefined set of relation labels these unary and pairwise potentials are later fed to the sgp module to predict entity labels we used r es n et he et b taking as input a patch cropped from the full image according to the ith bounding box figure we used a second r es n et to predict relations using a tensor input three channels for the rgb image and two channels for binary masks for the subject entity and object entity bounding boxes figure the image patch provided to this network was cropped such that it covers both the subject entity and the object entity providing the two binary masks breaks the symmetry of the subject entity and object entity and allow the network to discriminate between triplets like man wearing shirt and shirt on man scene graph prediction while the lp module described above is trivially gpi because the output variables rel yient yi j are predicted independently constructing a gpi architecture for a scene graph predictor is harder we now outline this construction entity classification in this module is gpi following theorem where z i are features for every bounding box and z i j are features for box pairs to classify relations we added a function that reuses the gpi representation created during entity classification because the input to is a gpi representation it is easy to show that our entire network is gpi let z i be the concatenation of z features and z spatial where i i features zi is the current label probability for entity i logits figure the label predictor a entity recognition network a network takes an image patch cropped based on a bounding box and outputs classification probabilities per label b relation recognition network a network takes an input tensor containing the rgb image in the first channels and two binary masks for the subject and object entities in the remaining two channels before the final softmax layer and z spatial is i s bounding i box given as a x y width height in addition for z i j we used the confidences for relation i j logits before the final softmax layer in each step of sgp we apply the function f which receives all entity features z i and all relation features z i j and output updated confidences for entities and relations because composing gpi functions is gpi our sgp module is gpi we now describe our implementation of the three components of f and is a network with two it receives a subject features z i b relations features z i j c entity features z j and outputs a vector of size next for each entity i we aggregate z i z i 
j z j into si using the attention mechanism described in section to calculate the weights wi j we implement eq with a fc layer that receives the same input as and outputs a scalar is a two network receiving entity features z i and context features si the outputs of are aggregated with a similar attention mechanism over entities resulting in a vector g representing the entire graph consists of which classifies entities and which classifies relations is a three network of size it receives z i si and g as input and outputs a vector qi with one scalar per entity class unlike theorem we allow direct access to si which maintains the gpi property and improved learning in practice the final output confidence is a linear interpolation of the current confidence z fi eatures and the new confidence qi controlled by a learned forget gate the output is qi forget z fi eatures the relation classifier is analogous to the entity classifier receiving as input z i z j the relation features z i j and the graph representation mapping images to scene graphs with structured prediction we also explored concatenating word embeddings of the most probable entity class to z i word vectors were learned with g lov e pennington et from the captions of visual genome krishna et experimental setup dataset we evaluated our approach on the visual genome vg dataset krishna et vg consists of images annotated with bounding boxes entities and relations distribution over entity classes and relations is with a total of unique entity classes and unique relations to allow comparison with previous studies of this dataset xu et newell deng zellers et we used the same preprocessed data including the train and test splits as provided by xu et this dataset had on average entities and relations per image for evaluation we used the same entity categories and relations as in xu et newell deng zellers et to tune we also split the training data into two by randomly selecting examples resulting in a final split for sets training procedure we trained all networks using adam kingma ba input images were resized to to conform with the r es n et architecture we first trained the lp module and then trained the sgp module using the best lp model in what follows all the particular chosen values were tuned on the validation set for lp we trained the relation network with loss and a ratio of where positive refers to a labeled relation and negative to unlabeled and performed after epochs we chose a batch size of and also used data augmentation techniques such as translation and rotation to further improve the results the loss function for the sgp was the sum of cross entropy losses over all entities and relations in the image in the loss we penalized entities times more strongly than relations and penalized negative relations times more weakly than positive relations we used batch size and after epochs the recurrent application of f was performed for steps evaluation xu et al defined three different subtasks when inferring scene graphs and we focus on two sgcls given bounding boxes for entities predict all entity categories and relations categories predcls given bounding boxes annotated with entity labels predict all relations following lu et we used recall k as the evaluation metric it measures the fraction of correct triplets that appear within the k most confident triplets proposed by the model two evaluation protocols are used in the literature which differ by whether they enforce graph constraints over model predictions the first protocol requires 
that the triplets assign one consistent class per entity and relation this rules out putting more than one triplet for a pair of bounding boxes it also rules out inconsistent assignment like a bounding box that is labeled as one entity in one triplet and as another entity in another triplet the second evaluation protocol does not enforce any such constraints models and baselines we compare four variants of our gpi approach with the reported results of four baselines that are currently the on various scene graph subtasks all models use the same data split and as xu et l u et al this work leverages word embeddings to the likelihood of predicted relations x u et al this model passes messages between entities and relations and iteratively refines the feature map used for prediction n ewell d eng the p ixel raph model uses associative embeddings newell et to produce a full graph from the image z ellers et al the n eural m otif method encodes global context for capturing highorder motifs in scene graphs gpi n o attention our gpi model but with no attention mechanism instead following theorem we simply sum the features gpi n eighbor attention our gpi model using attention over neighbors as described in section gpi m ulti attention our gpi model except that we learn different attention weights per feature gpi l inguistic same as gpi m ulti attention but also concatenating the word embedding vector for the most probable entity label see sec results table lists recall and recall for four variants of our approach compared with three baselines evaluating with graph constraints the gpi approach performs well and l inguistic outperforms all baselines for both predcls and sgcls table provides a similar comparison when evaluating without graph constraints again l inguistic performs best more details in the supplemental material figure illustrates the model behavior predicting isolated labels with lp column c mislabels several entities but these are corrected after joint prediction column d column e shows that the system learned to attend more to nearby entities the window and building are closer to the tree and column f shows that stronger attention is learned for the classes bird presumably because it is usually more informative than common classes like tree mapping images to scene graphs with structured prediction figure a an input image with bounding boxes from vg b the scene graph c the lp fails to recognize some entities building and tree and relations in front of instead of looking at d gpi l inguistic fixes most incorrect lp predictions e window is the most significant neighbor of tree f the entity bird receives substantial attention while tree and building are less informative table test set results for evaluation sgc ls r r l u et al x u et al n eural m otifs n o attention n eighbor atten m ulti attention l inguistic p red c ls r r table test set results for unconstrained evaluation sgc ls r r p ixel raph n o attention n eighbor atten m ulti attention l inguistic p red c ls r r related work there has been significant recent interest in extending deep learning to structured prediction much of this work has been on semantic segmentation where convolutional networks shelhamer et became a standard approach for obtaining singleton scores and various approaches were proposed for adding structure on top most of these approaches used variants of message passing algorithms unrolled into a computation graph xu et some studies parameterized parts of the message passing algorithm and learned its parameters 
lin et recently gradient descent has also been used for maximizing score functions belanger et gygli et an alternative approach for deep structured prediction is via greedy decoding where one label is inferred at a time based on previous labels this has been popular in applications like dependency parsing chen manning these works rely on the sequential structure of the table recall of predcls for the relations ranked by their frequency as in xu et r elation on has in of wearing near with above holding behind under sitting on in front of attached to at hanging from over for riding l u et al x u et al l inguis tic input where bilstms can be effectively applied the concept of architectural invariance was recently proposed in d eep s ets zaheer et the invariance we consider is much less restrictive we do not need to be invariant to all permutations of singleton and pairwise features just those consistent with a graph and hence results in a substantially different set of architectures extracting scene graphs from images provides a semantic representation that can later be used for reasoning question answering and image retrieval johnson et lu et raposo et it is at the forefront of machine vision research integrating challenges like object detection action recognition and detection of interactions liao et plummer et mapping images to scene graphs with structured prediction conclusion we presented a deep learning approach to structured prediction which constrains the architecture to be invariant to structurally identical inputs as in methods our approach relies on pairwise features capable of describing correlations and thus inheriting the intuitive aspect of approaches however instead of maximizing a score function which leads to computationallyhard inference we directly produce an output that is invariant to equivalent representations of the pairwise terms the axiomatic approach can be extended in many ways for image labeling geometric invariances shift or rotation may be desired in other cases invariance to feature permutations may be desirable we leave the derivation of the corresponding architectures to future work finally there may be cases where the invariant structure is unknown and should be discovered from data which is related to work on lifting graphical models bui et it would be interesting to explore algorithms that discover and use such symmetries for deep structured prediction references bahdanau cho and bengio neural machine translation by jointly learning to align and translate in international conference on learning representations iclr belanger david yang bishan and mccallum andrew learning for structured prediction energy networks in precup doina and teh yee whye eds proceedings of the international conference on machine learning volume pp pmlr bui hung hai huynh tuyen and riedel sebastian automorphism groups of graphical models and lifted variational inference in proceedings of the twentyninth conference on uncertainty in artificial intelligence uai pp arlington virginia united states auai press url http chen danqi and manning christopher a fast and accurate dependency parser using neural networks in proceedings of the conference on empirical methods in natural language processing emnlp pp chen liang chieh papandreou george kokkinos iasonas murphy kevin and yuille alan semantic image segmentation with deep convolutional nets and fully connected crfs in proceedings of the second international conference on learning representations chen liang chieh schwing alexander g yuille 
alan l and urtasun raquel learning deep structured models in proc icml farabet clement couprie camille najman laurent and lecun yann learning hierarchical features for scene labeling ieee transactions on pattern analysis and machine intelligence graves alex wayne greg and danihelka ivo neural turing machines arxiv preprint gygli michael norouzi mohammad and angelova anelia deep value networks learn to evaluate and iteratively refine structured outputs in precup doina and teh yee whye eds proceedings of the international conference on machine learning volume of proceedings of machine learning research pp international convention centre sydney australia pmlr he kaiming zhang xiangyu ren shaoqing and sun jian deep residual learning for image recognition in ieee conference on computer vision and pattern recognition cvpr las vegas nv usa june pp he kaiming zhang xiangyu ren shaoqing and sun jian identity mappings in deep residual networks in eccv volume of lecture notes in computer science pp springer hochreiter and schmidhuber j long memory neural computation johnson justin krishna ranjay stark michael li lijia shamma david bernstein michael and li image retrieval using scene graphs in ieee conference on computer vision and pattern recognition cvpr pp kingma diederik and ba jimmy adam a method for stochastic optimization arxiv preprint arxiv url http krishna ranjay zhu yuke groth oliver johnson justin hata kenji kravitz joshua chen stephanie kalantidis yannis li shamma david a et al visual genome connecting language and vision using crowdsourced dense image annotations international journal of computer vision lafferty mccallum and pereira conditional random fields probabilistic models for segmenting and labeling sequence data in proceedings of the international conference on machine learning pp mapping images to scene graphs with structured prediction liao wentong yang michael ying ackermann hanno and rosenhahn bodo on support relations and semantic scene graphs arxiv preprint lin guosheng shen chunhua reid ian and van den hengel anton deeply learning the messages in message passing inference in advances in neural information processing systems pp lu cewu krishna ranjay bernstein michael and li visual relationship detection with language priors in european conference on computer vision pp meshi sontag jaakkola and globerson a learning efficiently with approximate inference via dual losses in proceedings of the international conference on machine learning pp new york ny usa acm newell alejandro and deng jia pixels to graphs by associative embedding in advances in neural information processing systems to appear pp curran associates newell alejandro huang zhiao and deng jia associative embedding learning for joint detection and grouping in advances in neural information processing systems pp curran associates pei wenzhe ge tao and chang baobao an effective neural network model for dependency parsing in proceedings of the annual meeting of the association for computationa linguistics pp pennington jeffrey socher richard and manning christopher glove global vectors for word representation in empirical methods in natural language processing emnlp pp url http plummer bryan mallya arun cervantes christopher hockenmaier julia and lazebnik svetlana phrase localization and visual relationship detection with comprehensive cues in iccv raposo david santoro adam barrett david pascanu razvan lillicrap timothy and battaglia peter discovering objects and their relations from entangled scene representations 
arxiv preprint schwing alexander g and urtasun raquel fully connected deep structured networks arxiv shelhamer evan long jonathan and darrell trevor fully convolutional networks for semantic segmentation ieee conference on computer vision and pattern recognition cvpr taskar guestrin and koller max margin markov networks in thrun saul and b eds advances in neural information processing systems pp mit press cambridge ma xu zhu choy and scene graph generation by iterative message passing in the ieee conference on computer vision and pattern recognition zaheer manzil kottur satwik ravanbakhsh siamak poczos barnabas salakhutdinov ruslan r and smola alexander j deep sets in guyon luxburg bengio wallach fergus vishwanathan and garnett r eds advances in neural information processing systems pp curran associates zellers rowan yatskar mark thomson sam and choi yejin neural motifs scene graph parsing with global context arxiv preprint url http zheng shuai jayasumana sadeep bernardino vineet vibhav su zhizhong du dalong huang chang and torr philip hs conditional random fields as recurrent neural networks in proceedings of the ieee international conference on computer vision pp
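To make the graph-permutation invariance discussed in the conclusion above concrete, the following is a minimal sketch, not the authors' architecture: a single layer that combines singleton and pairwise features with a symmetric (sum) aggregation, so that relabelling the nodes (and permuting the pairwise tensor consistently) permutes the output in the same way. The names invariant_layer, W_self and W_pair, and the random toy inputs, are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact architecture): one graph-permutation-
# equivariant update over singleton features z[i] and pairwise features z[i, j].
import numpy as np

def invariant_layer(node_feats, pair_feats, W_self, W_pair):
    """node_feats: (n, d), pair_feats: (n, n, d), weights: (d, d)."""
    # Symmetric aggregation (sum) of the pairwise features incident to each
    # node; sum/max keep the layer equivariant to node relabelling.
    agg = pair_feats.sum(axis=1)                         # (n, d)
    return np.maximum(0.0, node_feats @ W_self + agg @ W_pair)  # ReLU

# Numerical check of equivariance under a random relabelling of the nodes.
rng = np.random.default_rng(0)
n, d = 5, 4
z, zz = rng.normal(size=(n, d)), rng.normal(size=(n, n, d))
Ws, Wp = rng.normal(size=(d, d)), rng.normal(size=(d, d))
perm = rng.permutation(n)
out = invariant_layer(z, zz, Ws, Wp)
out_perm = invariant_layer(z[perm], zz[np.ix_(perm, perm)], Ws, Wp)
assert np.allclose(out[perm], out_perm)
```

The final assertion verifies on random inputs that permuting the singleton and pairwise inputs consistently permutes the layer's output identically, which is the weaker, graph-consistent notion of invariance contrasted with Deep Sets above.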
1
improved linear time algorithms for some classical graph problems sankardeep seungbum and srinivasa rao dec the institute of mathematical sciences hbni chennai india sankardeep university of siegen siegen germany seoul national university seoul south korea ssrao we provide linear time algorithms for computing bridges topological sorting and strongly connected components improving on several recent results of elmasry et al stacs banerjee et al cocoon and chakraborty et al isaac en route we also provide another dfs implementation with weaker input graph representation assumption without compromising on the time and space bounds of the earlier results of banerjee et al cocoon and kammer et al mfcs introduction since the early days of designing graph algorithms researchers have developed several approaches for testing whether a given undirected or directed graph g v e with n vertices and m edges is strongly connected biconnected connected and finding cut vertices bridges of all of these methods use search dfs as the backbone to design the main algorithm the classical linear time algorithms due to tarjan computes the values which are defined in terms of a of g for every vertex v and checks some conditions using that to determine whether g has the desired property there are other linear time algorithms as well for these problems see and all the references therein all of these classical algorithms take o m n time and o n words our model of computation is the standard word ram model with word size w lg n bits of space our aim is to improve the space bounds of these algorithms without increasing the running time motivation and related work motivated mainly by the big data phenomenon among others recently there has been a surge of interest in improving the space complexity of the fundamental linear time graph algorithms by paying little or no penalty in the running time reducing the working space of the classical graph algorithms which generally take o n lg n bits to o n lg n bits without compromising on time towards this elmasry et al gave among others an implementation for dfs taking o m n time and o n lg lg n bits of space for sparse graphs when m o n the time space in bits dfs o n m o n m o n m o n m o n lg n o n m o n lg o n lg lg n testing biconnectivity reporting cut vertices testing connectivity reporting bridges this paper topological sort this paper this paper testing strong connectivity this paper this paper table summary of our results space bound was improved further to o n bits keeping the same linear time in banerjee et al gave among others a space efficient implementation for performing bfs using just o n bits of space and linear time improving upon the result of such algorithms for a few other graph problems also have been considered recently our results we assume that the input graph g which is represented using adjacency array g is represented by an array of length where the entry stores a pointer to an array that stores all the neighbors of the vertex is given in a memory with a limited working memory and output we count space in terms of the number of bits in workspace used by the algorithms our main goal here is to improve the space bounds of some of the classical and fundamental graph algorithms we summarize all our main results in table in this paper basically we complete the full spectrum of results regarding the space bounds for these problems keeping the running time linear by the algorithms in the recent space efficient graph algorithm literature due to lack of space we 
provide only sketches of our proofs testing connectivity and finding bridges in an undirected graph g a bridge is an edge that when removed without removing the vertices from a graph creates more components than previously in the graph a connected graph with at least two vertices is if and only if it has no bridge let t denote the dfs tree of following kammer et al we call a tree edge u v of t with u being the parent of v full marked if there is a back edge from a descendant of v to a strict ancestor of u half marked if it is not full marked and there exists a back edge from a descendant of v to u and unmarked otherwise they use this definition to prove the following i every vertex u except the root r is a cut vertex exactly if at least one of the edges from u to one of its children is either an unmarked edge or a half marked edge and ii root r is a cut vertex exactly if it has at least two children in t based on the above characterization they gave o m n time and o n lg lg n bits algorithm to if g has any cut vertex our main observation is that we can give a similar characterization for bridges in g and essentially using a similar implementation we can also obtain o m n time and o n lg lg n bits algorithms for testing connectivity and reporting bridges of we start with the following lemma lemma a tree edge e u v in t is a bridge of g if and only if it is unmarked proof sketch if e is unmarked then no descendants of v reaches u or any strict ancestor of u so deleting e would result in disconnected graph thus e has to be a bridge on the other direction it is easy to see that if e is a bridge it has to be an unmarked edge now we state our theorem below theorem given an undirected graph g in o m n time and o n lg lg n bits of space we can determine whether g is connected if g is not connected then in the same amount of time and space we can compute and output all the bridges of proof sketch using lemma and the similar implementation of using stack compression and other tools of the algorithm provided in section of kammer et al with few modifications we can prove the theorem note that the space bound of theorem improves the results of and for sufficiently dense graphs when m n lg lg n and m n lgo n respectively while keeping the same linear runtime see table dfs without cross pointers banerjee et al and subsequently kammer et al gave o m n bits and o m n time implementations of dfs improving on the bounds of for sparse graphs but both of these dfs implementations assume that the input graph is represented using the adjacency array along with cross pointers for undirected graphs every neighbour v in the adjacency array of a vertex u stores a pointer to the position of vertex u in the adjacency array of see for detailed definitions for directed graphs we emphasize that this input assumption can double the space usage compared to the raw adjacency array in worst case in what follows we provide the proof sketch of a dfs implementation taking the same time and space bounds as that of but without using the cross pointers our main theorem is as follows theorem given a directed or undirected graph g represented as adjacency array we can perform dfs traversal of g using o m n bits and o m n time proof sketch we essentially modify the proof of which uses a bitvector a of length o m n having one to one mapping with the unary encoding of the degree sequence to mark the tree edges and subsequently uses cross pointers to find the parent of any vertex during backtracking as well as starting with next 
unvisited vertex after backtracking we note that we can represent the parents of all the vertices in another bitvector p of length o m n parallel to a now to perform backtracking efficiently we could use the constant time append only structure also with constant time of grossi et al along with the p array with these modifications we could get rid of cross pointers without compromising on the running time and space bound of the earlier algorithms testing strong connectivity and topological sorting towards giving improved space efficient algorithms for strong connectivity sc and topological sorting ts we first improve lemma of which says the following if dfs of a directed graph g takes t n m time and s n m space then we can output the vertices of g in reverse postorder of the dfs tree t of g taking o t n m time and o s n m n lg lg n space combining this lemma with the classical algorithms for sc and ts they obtained o n lg lg n bits and o m n time algorithms for both these problems we improve these by showing the following theorem if dfs of a directed graph g takes t n m time and s n m space then the vertices of g can be output in reverse postorder with respect to a dfs forest of g taking o t n m time and o s n m m n space as a result we can also solve sc and ts in o m n time using o n m bits of space proof sketch we use the dfs algorithm of theorem to first mark all the tree edges in the array a now we start with the rightmost leaf vertex of the dfs tree and use operations on a and p as defined in the proof of theorem carefully to traverse the tree in reverse direction along with standard dfs backtracking etc to generate reverse postorder sequence now using this as the back bone of the classical algorithms we obtain o m n bit and o n m time algorithms for sc and ts theorem improves the result of for sparse when m o n graphs now if we use the dfs algorithm of chakraborty et al and modify it suitably to perform the traversal of the dfs tree in reverse we obtain the following result theorem if dfs of a directed graph g takes t n m time and s n m space then the vertices of g can be output in reverse postorder with respect to a dfs forest of g taking o t n m time and o s n m n lg space as a result we can also solve sc and ts using o m n time and o n lg bits references banerjee chakraborty and raman improved space efficient algorithms for bfs dfs and applications in cocoon volume pages springer lncs banerjee chakraborty raman roy and saurabh tradeoffs for dynamic programming in trees and bounded treewidth graphs in cocoon volume pages springer lncs chakraborty raman and satti biconnectivity chain decomposition and stnumbering using o n bits in isaac volume of lipics pages schloss dagstuhl fuer informatik chakraborty and satti algorithms for maximum cardinality search stack bfs queue bfs and applications in cocoon hong kong china august pages cormen leiserson rivest and stein introduction to algorithms chakraborty raman and satti biconnectivity and other applications of dfs using o n bits comput syst elmasry hagerup and kammer basic graph algorithms in stacs pages grossi and ottaviano the wavelet trie maintaining an indexed sequence of strings in compressed space in pods pages kammer kratsch and laudahn biconnected components and recognition of outerplanar graphs in mfcs schmidt a simple test on and inf process tarjan search and linear graph algorithms sicomp tarjan a note on finding the bridges of a graph inf pro
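As an illustration of the bridge characterisation above (a tree edge (u, v) is a bridge exactly when no back edge from the subtree below v reaches u or a strict ancestor of u, i.e. it is unmarked), here is a hedged sketch using the classical low-point formulation low[v] > disc[u]. It runs in linear time but uses the usual Theta(n log n) bits of workspace, not the O(n lg lg n)-bit structures developed in the paper; the function name bridges and the adjacency-dict input format are illustrative choices.

```python
# Illustrative sketch only: classical iterative DFS bridge finding via
# low-link values. A tree edge (parent, u) is reported as a bridge exactly
# when low[u] > disc[parent], i.e. the edge is "unmarked" in the sense above.
def bridges(adj):
    """adj: dict mapping vertex -> iterable of neighbours (undirected, simple)."""
    disc, low, out = {}, {}, []
    timer = [0]

    def dfs(root):
        disc[root] = low[root] = timer[0]; timer[0] += 1
        stack = [(root, None, iter(adj[root]))]   # (vertex, parent, neighbour iterator)
        while stack:
            u, parent, it = stack[-1]
            advanced = False
            for w in it:
                if w not in disc:                 # tree edge: descend
                    disc[w] = low[w] = timer[0]; timer[0] += 1
                    stack.append((w, u, iter(adj[w])))
                    advanced = True
                    break
                elif w != parent:                 # back edge: update low point
                    low[u] = min(low[u], disc[w])
            if not advanced:                      # all neighbours processed
                stack.pop()
                if parent is not None:
                    low[parent] = min(low[parent], low[u])
                    if low[u] > disc[parent]:     # no back edge escapes the subtree
                        out.append((parent, u))

    for v in adj:
        if v not in disc:
            dfs(v)
    return out
```

For example, bridges({0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}) returns [(2, 3)], the single bridge of a triangle with one pendant vertex attached.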
8
apr and moments properties of a markov process fabio gobbi sabrina mulinacci university of bologna department of statistics april abstract this paper provides conditions under which a markov process is we introduce as a particular case a gaussian markov process which generalizes the standard random walk allowing the increments to be dependent jel classification mathematics subject classification keywords markov process copula gaussian process introduction in this paper we analyze the temporal dependence properties satisfied by a discrete times nonstationary markov process temporal dependence is relevant since it permits to verify of how well theoretical models explain temporal persistency observed in financial data moreover it is also a useful tool to establish large sample properties of estimators for dynamic models in particular in this paper we analyze the property and we give sufficient conditions that ensure this property be satisfied in the copula approach to univariate time series modelling the finite dimensional distributions are generate by copulas darsow et al provide necessary and sufficient conditions for a time series to be a markov process recent literature on this topic has mainly focused on the stationary case chen and fan introduce a strictly stationary first order markov process generated from c where is the invariant distribution of yt and c is the parametric copula for yt the authors show that the temporal dependence measure is purely determined by the properties of copulas and present sufficient conditions to ensure that the process yt t based on gaussian and efgm copulas are geometric beare shows that all markov models generated via symmetric copulas with positive and square integrable densities are geometric many commonly used bivariate copulas without tail dependence such as gaussian efgm and frank copulas satisfy this condition chen et al show that clayton gumbel and student s t copula based markov models are geometrically ergodic which is a stronger condition than the geometric in this paper we focus on markov processes where some dependence between each state variable and increment is allowed and modeled through a copula in particular we introduce a gaussian markov process which is and generalizes the classical gaussian random walk and we study related moments properties and provide conditions under wich the process is the paper is organized as follows section presents a general result on the properties satisfied by markov processes section restricts the study to the gaussian case section concludes markov processes and properties throughout the paper y yt is a discrete time markov process thanks to the seminal paper of darsow et al the markovianity of a stochastic process can be characterized through a specific requirement that the copulas representing the dependence structure of the finite dimensional distributions induced by the stochastic process for a detailed discussion on copulas see nelsen joe cherubini et al and durante and sempi must satisfy in particular in darsow et al it is proved that the equations for transition probabilities are equivalent to the requirement that if ci j is the copula associated to the vector yi yj then cs t u v cs r cr t u v z cs r u w cr t w v dw r as a consequence since y is a discrete times markov process if we assume that the set of bivariate copulas ct representing the dependence structure of the stochastic process at two adjacent times is given for t z and k then necessarily we remind that the is associative ct u v ct u v ct 
u v notice that in the stationary case considered in beare ct c for all t z therefore all bivariate copulas ct are functions of the copula c and of the lag k and not of the time in this paper we extend the study to the more general case in particular we analyze the temporal dependence problem with a special attention to mixing properties the notion of was introduced by volkonskii and rozanov and and was attribute there to kolmogorov given a not necessarily stationary sequence of random variables y yt let ftl be the ftl yt t t l with t l and let j i t xx ai bj p ai p bj ai bj sup where the second supremum is taken over all finite partitions ai and bj of such t that ai for each i and bj for each j define the following dependence coefficient t sup we say that the sequence yt is or absolutely regular if as k in next theorem we give conditions on the set of copulas ct t z in order to guarantee that the markov resulting process is these conditions are based on specific requirements on the maximal correlation coefficients of the copulas ct we remind that the maximal correlation of a copula c is given by z z sup f g f x g x c dx dy where f g f x dx g x dx and f x dx g x dx and we refer to beare and for more details theorem let y yt be a markov process let ct be the copula associated to the vector yt for t z that we assume to be absolutely continuous with symmetric and density ct so that ct is uniformly bounded in if the maximal correlation coefficients of ct satisfy sup then y is proof the proof follows that of theorem in beare who proves a similar result for stationary markov processes first of all since the stochastic process is markovian can be rewritten in terms of the cumulative distribution functions of yt yt and ft ft and respectively and the total variation norm k kt v see bradley and then applying sklar s theorem we can write k ft x y ft x y kt v k ct ft x y ft x y kt v k ct u v uv kt v t ft from it follows that all bivariate copulas of type ct for t z and k are absolutely continuous let us denote their density as ct then t k ct u v k ct u v and sup k ct u v since ct is a symmetric joint density with uniform margins it admits the following series expansion in terms of a complete orthonormal sequence in x ct u v t u v where the eigenvalues t i form a sequence of nonnegative real numbers notice that as proved in lancaster max t applying we get ct u v x then using and we get k ct u v x y y u v u v y x x y x y t t x t k ct u v therefore sup k ct u v which since ct is uniformly bounded in tends to zero as k a gaussian markov process from now on we assume that the markov process y is obtained through yt where is a sequence of identically distributed random variables such that is dependent on for each the dependence structure is modelled by a copula function the process defined in is not stationary however we can determine the distribution of c yt for each t thanks to the operator denoted by introduced in cherubini et al as a tool to recover the distribution of the sum of two dependent random variables as shown in cherubini et al and the technique may be used in the construction of dependent increments stochastic processes like more precisely if is the cumulative distribution function of and ht that of we may recover the cumulative distribution function of yt iterating the for all t z c ft yt ht yt c w ht yt w dw t while the copula associated to yt is z u c w ht v t u v w dw c u v where c u v equations and provide the ingredients to construct discrete times markov processes according to darsow 
et al our model is a sort of a modified version of a random walk process where the independence assumption for the innovations is no longer required however its weakness is that in most cases the distribution function can not be expressed in closed form and it may be evaluated only numerically from now on we assume that innovations are gaussian identically distributed with zero mean and standard deviation and that the copula between and is a stationary gaussian copula with constant parameter for all this way the distribution of yt is gaussian for all t and more specifically in section of cherubini et al it is shown that yt n where v ar yt t x t where since by assumption moreover the copula between yt and is gaussian with parameters vt since e yt the limiting behavior of the standard deviation vt has also been analyzed in cherubini et al where it is proved that if lim vt otherwise notice that only in case of negative correlation with the increments the standard deviation of the levels does not explode in the following we will restrict the analysis to the case moments and autocorrelation function in this subsection we study the behavior of moments and autocorrelation functions of the process yt when t it is just the case to recall that in the standard random walk model the order autocorrelation function of yt tends to as t for each lag in our more general setting this is no longer true the limit of the order autocorrelation function of yt is a function of k and as the following proposition shows proposition let the order autocorrelation function of yt tends to k for any k as t proof as proved in section in cherubini et al using the fact that the of two gaussian copulas has a parameter given by the product of the parameters of the copulas involved in the we have that the copula between yt and is gaussian with parameter y therefore since as t and for any s we easily get the result on the other hand the innovations are no longer serially independent as in the random walk case and the order autocorrelation function approaches to a limit which again depends on and proposition let the order autocorrelation function of tends to for any k as t proof we compute first the autocovariance of order k with k e we have e e yt e yt e yt e e vt vt since for any fixed k k and vt as t we get e k k as t moreover it is immediate to find the statement of the proposition since as t corr properties in our gaussian framework ct is the density of a gaussian copula for which it is well known that the maximal correlation coefficient is equal to the absolute value of the simple correlation coefficient see lancaster therefore according to the notation of theorem for each t the following results which is an application of theorem holds corollary the markov process defined by with n and is proof firstly notice that in fact this is equivalent to vt which is always verified since by assumption thanks to since we have that is bounded by a constant smaller than that is is satisfied furthermore it is not hard to prove that for any t k ct u v thus theorem applies concluding remarks in this paper we provide conditions under which a markov process is our results represent a generalization of those in beare where the author considers the stationary case our analysis is focused on the particular case of a gaussian markov process with dependent increments that represents a generalization of the standard gaussian random walk in this particular setting it is proved that the order autocorrelation function of the process does not 
converge to 1 as in the random walk case but to a quantity that depends on the lag k and the correlation rho between the state variable and the innovation which is assumed to be negative additionally it is proved that the process satisfies the conditions required to be beta-mixing references beare b copulas and temporal dependence econometrica bradley introduction to strong mixing conditions vols kendrick press heber city chen fan y estimation of semiparametric time series models journal of econometrics chen wu yi y efficient estimation of semiparametric markov models the annals of statistics cherubini gobbi mulinacci convolution copula econometrics springerbriefs in statistics cherubini gobbi mulinacci romagnoli dynamic copula methods in finance john wiley sons cherubini mulinacci romagnoli a model of speculative price dynamics in discrete time journal of multivariate analysis darsow nguyen b olsen copulas and markov processes illinois journal of mathematics durante sempi principles of copula theory boca raton chapman and joe multivariate models and dependence concepts chapman hall london lancaster some properties of the bivariate normal distribution considered in the form of a contingency table biometrika lancaster the structure of bivariate distributions annals of mathematical statistics nelsen an introduction to copulas springer renyi a on measures of dependence acta mathematica academiae scientiarum hungaricae volkonskii rozanov some limit theorems for random functions i theor probab volkonskii rozanov some limit theorems for random functions ii theor probab
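The construction above can be checked numerically with a small simulation sketch (not code from the paper). It uses the fact that, with Gaussian marginals and a Gaussian copula with parameter rho, the pair (Y_{t-1}, eps_t) is jointly Gaussian with correlation rho, so the increment can be drawn from its conditional normal distribution; the variance recursion v_t^2 = v_{t-1}^2 + 2*rho*sigma*v_{t-1} + sigma^2 used below follows from Cov(Y_{t-1}, eps_t) = rho*sigma*v_{t-1}. The function name simulate_paths and all parameter values are illustrative assumptions.

```python
# Sketch: simulate Y_t = Y_{t-1} + eps_t with a Gaussian copula (parameter rho)
# between the state and the innovation, and compare the Monte Carlo standard
# deviation of Y_T with the variance recursion. For rho < 0 the level's
# standard deviation stays bounded (fixed point -sigma/(2*rho)); rho = 0
# recovers the ordinary random walk whose standard deviation grows like sqrt(T).
import numpy as np

def simulate_paths(T, n_paths, rho, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    y = rng.normal(0.0, sigma, size=n_paths)       # Y_1 ~ N(0, sigma^2)
    v = sigma                                      # current std dev of Y_t
    for _ in range(T - 1):
        cond_mean = rho * sigma * y / v            # E[eps_t | Y_{t-1}]
        cond_std = sigma * np.sqrt(1.0 - rho**2)   # Var[eps_t | Y_{t-1}]^(1/2)
        y = y + rng.normal(cond_mean, cond_std)
        v = np.sqrt(v**2 + 2.0 * rho * sigma * v + sigma**2)
    return y, v

for rho in (-0.5, 0.0):
    y, v = simulate_paths(T=200, n_paths=50_000, rho=rho)
    print(rho, round(float(y.std()), 3), round(float(v), 3))
# Expected: for rho = -0.5 both numbers stay near sigma/(2*|rho|) = 1,
# while for rho = 0 both grow to roughly sqrt(200) ~ 14.1.
```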
10
value alignment feb jaime fisac monica a gates jessica hamrick chang liu dylan malayandi palaniappan dhruv malik shankar sastry thomas griffiths and anca dragan abstract as intelligent systems gain autonomy and capability it becomes vital to ensure that their objectives match those of their human users this is known as the problem in robotics value alignment is key to the design of collaborative robots that can integrate into human workflows successfully inferring and adapting to their users objectives as they go we argue that a meaningful solution to value alignment must combine decision theory with rich mathematical models of human cognition enabling robots to tap into people s natural collaborative capabilities we present a solution to the cooperative inverse reinforcement learning cirl dynamic game based on cognitive models of decision making and theory of mind the solution captures a key reciprocity relation the human will not plan her actions in isolation but rather reason pedagogically about how the robot might learn from them the robot in turn can anticipate this and interpret the human s actions pragmatically to our knowledge this work constitutes the first formal analysis of value alignment grounded in empirically validated cognitive models key words value alignment interaction dynamic game theory introduction the accelerating progress in artificial intelligence ai and robotics is bound to have a substantial impact in society simultaneously unlocking new potential in augmenting and transcending human capabilities while also posing significant challenges to safe and effective interaction in the short term integrating robotic systems into environments will require them to assess the intentions all authors are with the university of california berkeley jfisac mgates jhamrick changliu dhm malayandi dhruvmalik anca fisac a gates j hamrick liu et al and preferences of their users in order to assist them effectively while avoiding failures due to poor coordination in the long term ensuring that advanced and highly autonomous ai systems will be beneficial to individuals and society will hinge on their ability to correctly assimilate human values and objectives we envision the and challenges as being inherently coupled and predict that improving the ability of robots to understand and coordinate with their human users will inform solutions to the general ai problem successful value alignment requires moving from typical ai formulations to robots that account for a second determines what the objective is in other words value alignment is fundamentally a problem cooperative inverse reinforcement learning cirl formulates value alignment as a game in which a human and a robot share a common reward function but only the human has knowledge of this reward in practice solving a cirl game requires more than decision theory we are not dealing with any system but with a system this poses a unique challenge in that humans do not behave like idealized rational agents however humans do excel at social interaction and are extremely perceptive of the mental states of others they will naturally project mental states such as beliefs and intentions onto their robotic collaborators becoming invaluable allies in our robots quest for value alignment in the coming decades tackling the problem will be crucial to building collaborative robots that know what their human users want in this paper we show that value alignment is possible not just in theory but also in practice we introduce a solution for cirl based 
on a model of the human agent that is grounded in cognitive science findings regarding human decision making and pedagogical reasoning our solution leverages two closely related insights to facilitate value alignment first to the extent that improving their collaborator s understanding of their goals may be conducive to success people will tend to behave pedagogically deliberately choosing their actions to be informative about these goals second the robot should anticipate this pedagogical reasoning in interpreting the actions of its human users akin to how a pragmatic listener interprets a speaker s utterance in natural language jointly pedagogical actions and pragmatic interpretations enable stronger and faster inferences among people our result suggests that it is possible for robots to partake in this equilibrium ultimately becoming more perceptive and competent collaborators solving value alignment using cognitive models cooperative inverse reinforcement learning cirl cooperative inverse reinforcement learning cirl formalizes value alignment as a game which we briefly present here consider two agents a human value alignment h and a robot r engaged in a dynamic collaborative task involving a possibly infinite sequence of steps the goal of both agents is to achieve the best possible outcome according to some objective however this objective is only known to in order to contribute to the objective r will need to make inferences about from the actions of h an inverse reinforcement learning irl problem and h will have an incentive to behave informatively so that r becomes more helpful hence the term cooperative irl formally a cirl game is a dynamic markov game of two players h and r described by a tuple hs ah ar t r where s is the set of possible states of the world ah ar are the sets of actions available to h and r respectively t s s ah ar a discrete transition over the next state conditioned on the previous state and the actions of h and r t ah ar is the set of possible objectives r s ah ar r is a cumulative reward function assigning a real value to every tuple of state and actions for a given objective r s ah ar s is a probability measure on the initial state and the objective is a geometric time discount factor making future rewards gradually less valuable pragmatic robots for pedagogic humans asymmetric information structures in games even static ones generally induce an infinite hierarchy of beliefs our robot will need to maintain a bayesian belief over the human s objectives to decide on its actions to reason about the robot s decisions the human would in principle need to maintain a belief on the robot s belief which will in turn inform her decisions thereby requiring the robot to maintain a belief on the human s belief about its own belief and so on in it was shown that an optimal pair of strategies can be found for any cirl game by solving a partially observed markov decision process pomdp this avoids this bottomless recursion as long as both agents are rational and can coordinate perfectly before the start of the game unfortunately when dealing with human agents rationality and prior coordination are nontrivial assumptions finding an equivalent tractability result for more realistic human models is therefore crucial in using the cirl formulation to solve problems we discover the key insight in cognitive studies of human pedagogical reasoning in which a teacher chooses actions or utterances to influence the beliefs of a learner who is aware of the teacher s intention the teacher 
can then exploit the fact that the learner can interpret utterances pragmatically infinite recursion is averted by finding a relation between the teacher s best utterance and the learner s best interpretation exploiting a common modeling assumption in bayesian theory of mind the learner models the teacher as a noisily rational decision maker who will be likelier to choose utterances note that the theoretical formulation is easily extended to arbitrary measurable sets we limit our analysis to finite state and objective sets for computational tractability and clarity of exposition fisac a gates j hamrick liu et al causing the learner to place a high posterior belief on the correct hypothesis given the learner s current belief while in reality the teacher can not exactly compute the learner s belief the model supposes that she estimates it from the learner s previous responses to her utterances then introduces noise in her decisions to capture estimation inaccuracies this framework can predict complex behaviors observed in human interactions in which pedagogical utterances and pragmatic interpretations permit efficient communication we adopt an analogous modeling framework to that in for value alignment with a critical difference the ultimate objective of the human is not to explicitly improve the robot s understanding of the true objective but to optimize the team s expected performance towards this objective pedagogic behavior thus emerges implicitly to the extent that a robot becomes a better collaborator equilibrium solution to cirl the robot does not have access to the true objective but rather estimates a belief br over we assume that this belief on can be expressed parametrically this is always true if is a finite set and define to be the corresponding finitedimensional parameter space denoting r s belief by br while in reality the human can not directly observe br we assume as in that she can compute it or infer it from the robot s behavior and model estimation inaccuracies as noise in her policy we can then let q s ah ar r represent the value function of the cirl game for a given objective which we are seeking to compute if is the true objective known to h then q s br ah ar represents the best performance the team can expect to achieve if h chooses ah and r chooses ar from state s with r s current belief being br in order to solve for q we seek to establish an appropriate dynamic programming relation for the game given a information structure and a model of the human s decision making since it is typically possible for people to predict a robot s next action if they see its beginning we assume that h can observe ar at each turn before committing to ah a model of human decision making in psychology and econometrics is the luce choice rule which models people s decisions probabilistically making choices more likely than those with lower utility in particular we employ a common case of the luce choice rule the boltzmann or noisy rationality model in which the probability of a choice decays exponentially as its utility decreases in comparison to competing options the relevant utility metric in our case is the sought q which captures h s best expected outcome for each of her available actions ah therefore the probability that h will choose action ah has the form ah br ar exp q s br ah ar value alignment where is termed the rationality coefficient of h and quantifies the concentration of h s choices around the optimum as h becomes a perfect rational agent while as h becomes indifferent to 
q the above expression can be interpreted by r as the likelihood of action ah given a particular the evolution of r s belief br is then given deterministically by the bayesian update br ar ah ah br ar br jointly and define a equation analogous to the one in which states how r should pragmatically update br based on a noisily rational pedagogic ah this amounts to a deterministic transition function for r s belief fb s br ah ar crucially however the relation derived here involves q itself which we have yet to compute unlike h r is modeled as a rational agent however not knowing the true the best r can do is to the expectation of q based on its current br s br arg max ar q s br ah ar ah br ar br ah combining with the state transition measure t ah ar we can define the bellman equation for h under the noisily rational policy for any given h q s br ah ar r s ah ar where t ah ar fb s br ah ar ah note that h s next action implicitly depends on r s action at the next turn substituting into we obtain the sought dynamic programming relation for the cirl problem under a noisily human and a pragmatic robot the human is pedagogic because she takes actions according to which takes into account how her actions will influence the robot s belief about the objective the robot is pragmatic because it assumes the human is actively aware of how her actions convey the objective and interprets them accordingly the resulting problem is similar to a pomdp in this case formulated in beliefstate mdp form with the important difference that the belief transition depends on the value function itself in spite of this complication the problem can be solved in backward time through dynamic programming each bellman update will be based on a fixed point that encodes an equilibrium between the q function and therefore h s policy for choosing her action and the belief transition that is r s rule for interpreting h s actions evidence in suggests that people are proficient at finding such equilibria even though uniqueness is not guaranteed in general study of disambiguation is an open research direction we assume for simplicity that the optimum is unique or a disambiguation rule exists note that this does not imply certainty equivalence nor do we assume separation of estimation and control r is fully reasoning about how its actions and those of h may affect its future beliefs fisac a gates j hamrick liu et al a we introduce the benchmark domain chefworld a household collaboration setting in which a human h seeks to prepare a meal with the help of an intelligent robotic manipulator there are multiple possible meals that h may want to prepare using the available ingredients and r does not know beforehand which one she has chosen we assume h can not or will not tell r explicitly the team obtains a reward only if h s intended recipe is successfully cooked if h is aware of r s uncertainty she should take actions that give r actionable information particularly the information that she expects will allow r to be as helpful as possible as the task progresses our problem has ingredients each with or states spinach absent chopped tomatoes absent chopped and bread absent sliced toasted recipes correspond to joint target states for the food soup requires the tomatoes to be chopped then the bread to be sliced then toasted and no spinach salad requires the spinach and tomatoes to be chopped and the bread to be sliced then toasted h and r can slice or chop any of the foods while only r can tomatoes or toast bread a simple scenario with the 
above two recipes is solved using discretized beliefstate value iteration and presented as an illustrative example in fig r has a wrong initial belief about h s intended recipe under standard irl h fails to communicate her recipe but if r is pragmatic and h is pedagogic h is able to change r s belief and they successfully collaborate to make the meal in addition we computed the solution to games with recipes through a modification of pomdp value iteration table in the cirl equilibrium with h and r successfully cook the correct recipe of the time whereas under the standard irl framework with h acting as an expert disregarding r s inferences they only succeed of the than half as often fig simple collaborative scenario with possible objectives the human h wants soup but the robot r initially believes her goal is salad even under a full pomdp formulation if r reasons literally about h s actions using standard irl assuming h behaves as if r knew the true objective it fails to infer the correct objective conversely under the cirl equilibrium r views h as incentivized to choose pedagogic actions that will fix r s belief when needed under the pragmatic interpretation h s wait action in turn instead of adding spinach which would be preferred by a pedagogic h wanting salad indicates h wants soup while h s actions are the same under both solutions only the pragmatic r achieves value alignment and completes the recipe value alignment irl cirl boltzmann boltzmann boltzmann rational table a comparison of the expected value or equivalently here the probability of success achieved by cirl and irl on the chefworld domain with four recipes when the robot begins with a uniform belief over the set of recipes we ran each algorithm across different models of the human s behavior namely a rational model and a model with various values of a higher corresponds to a more rational human when the human is highly irrational both cirl and irl unsurprisingly perform rather poorly however as the human becomes less noisy cirl outperforms irl by a significant margin in fact the pragmaticpedagogic cirl strategy with a human performs comparably or even substantially outperforms the irl result when the human is perfectly rational discussion we have presented here an analysis of the ai value alignment problem that incorporates a model of human decision making and theory of mind into the framework of cooperative inverse reinforcement learning cirl using this analysis we derive a bellman backup that allows solving the dynamic game through dynamic programming at every instant the backup rule is based on a equilibrium between the robot and the human the robot is uncertain about the objective and therefore incentivized to learn it from the human whereas the human has an incentive to help the robot infer the objective so that it can become more helpful we note that this type of equilibrium recently studied in the cognitive science literature for human teaching and learning may not be unique in general there may exist two actions for h and two corresponding interpretations for r leading to different fixed points for example h could press a blue or a red button which r could then interpret as asking it to pick up a blue or a red object although we might feel that is a more intuitive pairing is valid as well that is if h thinks that r will interpret pressing the blue button as asking for the red object then she will certainly be incentivized to press blue when she wants red and in this case r s policy should consistently be to pick up 
the red object upon h s press of the blue button when multiple conventions are possible human beings tend to naturally disambiguate between them converging on salient equilibria or focal points accounting for this phenomenon is likely to be instrumental for developing competent robots on the other hand it is important to point out that although they are computationally simpler than more general planning problems pomdps are still so reducing equilibrium computation to solving a modified pomdp falls short of rendering the problem tractable in general however finding a bellman backup does open the door to efficient cirl solution methods that leverage and benefit from the extensive research on practical algorithms for approximate planning in large pomdps references we find the results in this work promising for two reasons first they provide insight into how cirl games can be not only theoretically formulated but also practically solved second they demonstrate for the first time formal solutions to value alignment that depart from the ideal assumption of a rational human agent and instead benefit from modern studies of human cognition we predict that developing efficient solution approaches and incorporating more realistic human models will constitute important and fruitful research directions for value alignment acknowledgements this work is supported by onr under the embedded humans muri by afosr under implicit communication and by the center for ai references amodei steinhardt man and christiano concrete problems in ai safety arxiv preprint dragan abbeel and russell cooperative inverse reinforcement learning nips tversky and kahneman judgment under uncertainty heuristics and biases science heider and simmel an experimental study of apparent behavior the american journal of psychology meltzoff understanding the intentions of others of intended acts by dev psych baker and j tenenbaum modeling human plan recognition using bayesian theory of mind plan activity and intent recognition shafto goodman and griffiths a rational account of pedagogical reasoning teaching by and learning from examples cog psych zamir bayesian games games with incomplete information computational complexity theory techniques and applications luce individual choice behavior a theoretical analysis john wiley and sons dragan and srinivasa integrating human observer inferences into robot motion planning autonomous robots schelling the strategy of conflict harvard university press mundhenk goldsmith lusena and allender complexity of markov decision process problems acm silver and veness planning in large pomdps nips
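To make the noisily rational human model and the robot's pragmatic belief update concrete, here is a small illustrative sketch (not the authors' implementation; the array Q, the functions human_policy and pragmatic_update, and the toy numbers are assumptions). It shows only the two local ingredients, the Boltzmann likelihood of a human action given an objective and the Bayesian inversion the robot performs, and deliberately omits the fixed-point coupling with the value function that the full Bellman backup requires.

```python
# Sketch: Boltzmann (noisily rational) human action distribution and the
# robot's Bayesian posterior over objectives after observing one human action.
# Q is assumed given as Q[theta, a_H] for the current state, robot belief
# and robot action.
import numpy as np

def human_policy(Q, theta, beta):
    """P(a_H | theta): softmax of Q[theta, :] with rationality coefficient beta."""
    logits = beta * Q[theta]
    logits -= logits.max()                 # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def pragmatic_update(belief, Q, a_H, beta):
    """Posterior over objectives after observing the human action a_H."""
    likelihood = np.array([human_policy(Q, th, beta)[a_H]
                           for th in range(len(belief))])
    post = belief * likelihood
    return post / post.sum()

# Toy example: two objectives, three human actions. Action 2 is only
# worthwhile under objective 1, so observing it shifts the belief there.
Q = np.array([[1.0, 0.5, -1.0],
              [0.2, 0.5,  1.5]])
b0 = np.array([0.5, 0.5])
print(pragmatic_update(b0, Q, a_H=2, beta=3.0))   # most mass on objective 1
```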
2
jun generating massive complex networks with hyperbolic geometry faster in practice moritz von looz mustafa safa karlsruhe institute of technology kit germany email istanbul technical university turkey email ozdayi laue henning meyerhenke friedrich schiller university jena germany email karlsruhe institute of technology kit germany email meyerhenke network models play an important role in algorithm development scaling studies network analysis and realistic system benchmarks for graph data sets the commonly used benchmark model has some drawbacks concerning realism and the scaling behavior of network properties a complex network model gaining considerable popularity builds random hyperbolic graphs generated by distributing points within a disk in the hyperbolic plane and then adding edges between points whose hyperbolic distance is below a threshold we present in this paper a fast generation algorithm for such graphs our experiments show that our new generator achieves speedup factors of over the best previous implementation one billion edges can now be generated in under one minute on a workstation furthermore we present a dynamic extension to model gradual network change while preserving at each step the point position probabilities introduction relational data of complex relationships often take the form of complex networks graphs with heterogeneous and often hierarchical structure low diameter high clustering and a degree distribution examples include social networks the graph of hyperlinks between websites protein interaction networks and infrastructure routing networks on the autonomous system level frequently found properties in generative models for complex network are clustering ratio of triangles to triads a community structure and a heavytailed degree distribution such as a benchmarks developed to evaluate a system with respect to floating point operations do not represent the requirements of graph algorithms especially with heterogeneous datasets such as complex networks the benchmark addresses this gap it is the most graph benchmark in computing it uses the recursive matrix model to generate synthetic networks as benchmark instances graphs from this model are efficiently computable but suffer from drawbacks in terms of realism for example even with fixed parameters the clustering coefficient shrinks with graph size while the number of connected components increases which is problematic for scaling studies an interesting model without this problem are random hyperbolic graphs rhg a family of geometric graphs in the hyperbolic plane krioukov et al introduced this graph model and showed how the structure of complex networks naturally develops from the properties of hyperbolic geometry to generate a rhg one randomly samples node positions in a hyperbolic disk then connects two nodes with an edge with a probability depending on their hyperbolic distance in a special case of this model an edge between two nodes is added exactly if their distance is below a threshold this subset of rhg sometimes called threshold random hyperbolic graphs is theoretically and could be considered as unitdisk graphs in hyperbolic space the resulting graphs show a degree distribution with adjustable exponent provably high clustering and small diameter motivation outline and contribution a fast generator implementation that scales to large graph sizes and provides sufficient realism is necessary to create meaningful graph benchmark instances in acceptable time while our previous work was able to improve the 
quadratic time complexity of the pairwise probing approach for threshold rhgs it still has superlinear time complexity we therefore provide a faster generation algorithm in this paper for threshold random hyperbolic graphs section using a new spatial data structure the key idea is to divide the relevant part of the hyperbolic plane into slabs and use these to bound the coordinates of possible neighbors in each slab as our experiments section show a network with million vertices and edges can be generated with our parallel implementation in under one minute yielding a speedup factor of up to over the best previous implementation for a graph with n nodes and m edges the measurements suggest an o n log time on the hyperbolic plane the distance between them is given by the hyperbolic law of cosines complexity but we do not have a proof for this while an algorithm with optimal expected linear time complexity has been suggested in a theoretical paper our present work provides the fastest implementation to date the generator code is publicly available in our network analysis toolkit networkit cosh dist p q cosh rp cosh rq rp sinh rq cos as mentioned briefly in section an important special case is t where an edge is added to a node pair exactly if the hyperbolic distance between the points is below a threshold this graph family is sometimes called threshold random hyperbolic graphs hyperbolic graphs or slightly confusingly just random hyperbolic graphs while we consider hyperbolic graphs to be more precise we stick with threshold random hyperbolic graphs to avoid name proliferation many theoretical results are for this special case related work generative models due to the growing interest in complex networks numerous generators for them exist for a comprehensive overview which would be outside the scope of this paper we refer the interested reader to goldenberg s survey none of the models is suitable for all use cases as mentioned above the recursive matrix model has received particular attention in the hpc community due to its use in the benchmark rhg generation algorithms previous generators for random hyperbolic graphs exist both for the general and special case aldecoa et al present a generator for the general case with quadratic time complexity calculating distances and sampling edges for all node pairs von looz et al use polar quadtrees to generate threshold rhgs with a time complexity of o m log n with high probability recently von looz and meyerhenke have extended this approach to generate general rhgs with the same time complexity bringmann et al propose geometric inhomogeneous random graphs as a generalization of rhgs and describe a generation algorithm with expected linear time complexity to our knowledge no implementation of this algorithm is available hyperbolic geometry hyperbolic space is one of the three isotropic spaces the other two being the more common euclidean space and spherical space in contrast to the flat euclidean geometry and the positively curved spherical geometry hyperbolic geometry has negative curvature among other interesting properties hyperbolic geometry shows an exponential expansion of space while the area of a euclidean circle grows quadratically with the circle radius the area of a circle on the hyperbolic plane grows exponentially with its radius in balanced trees the number of nodes at a certain distance from the root also grows exponentially with said distance leading to the suggestion that hierarchical complex networks with structures might be easily 
embeddable in hyperbolic space indeed et al demonstrate the connection between hyperbolic geometry and complex networks by embedding the autonomous system internet graph in the hyperbolic plane and enabling locally greedy routing as a generative model krioukov et al introduced random hyperbolic graphs in to generate a graph points are first distributed randomly within in a disk dr of radius r in the hyperbolic plane the probability density functions for the point distributions are given in polar coordinates the angular coordinate is distributed uniformly over the radial coordinate r is given by eq f r sinh cosh algorithm our main idea is to partition the hyperbolic plane into concentric slabs section and use them to limit the number of necessary distance calculations during edge creation algorithm point positions are sampled sorted by their angular coordinates and stored in the appropriate slab as determined by their radial coordinates to gather the neighborhood of a point v we then iterate over all slabs and examine possible neighbors within them since each slab limits the radial coordinates of points it contains we can use eq to also bound the angular coordinates of possible neighbors in each slab thus reducing the number of comparisons and running time the parameter governs node dispersion which determines the exponent of the resulting degree distribution after sampling point positions edges are then added to each node pair u v with a probability given in eq depending on their hyperbolic distance and parametrized by a temperature t f x e data structure let c cmax be a set of log n ordered radial boundaries with and cmax we then define a slab si as the area enclosed by ci and a point p rp is contained in slab si exactly if ci rp since slabs are they partition the hyperbolic disk dr for the resulting degree distribution follows a power law with exponent eq given two points in polar coordinates p rp q rq dr log n si ax algorithm graph generation min ci r figure graph in hyperbolic geometry with neighborhood neighbors of the bold blue vertex are in the hyperbolic circle marked in blue the choice of radial boundaries is an important tuning parameter after experimenting with different divisions we settled on a geometric sequence with ratio p the relationship between successive boundary values is then ci p ci from and cmax r we derive the value of log x log n p r p r r plog n the remaining values follow geometrically figure shows an example of a graph in the hyperbolic plane together with slab si the neighbors of the bold blue vertex v are those within a hyperbolic circle of radius r in this visualization marked by the blue area when considering nodes in si as possible neighbors of v the algorithm only needs to examine nodes whose angular coordinate is between and k input number of vertices n average degree k exponent output g v e r gettargetradius n k v n vertices c cmax set of log n ordered radial coordinates with and cmax r b bmax set of log n empty sets for vertex v v do in parallel draw v from u draw r v with density f r sinh cosh insert v r v in suitable bi so that ci r v end for b b do in parallel sort points in b by their angular coordinates end for vertex v v do in parallel for band bi b where r v do getminmaxphi v r v ci r for vertex w bi where w do if disth v w r then add v w to e end end end end return g algorithm algorithm shows the generation of g v e with average degree k and exponent first the radius r of the hyperbolic disk is calculated according to desired graph size and 
density line the value of can be fixed while retaining all degrees of freedom in the model we thus assume we then use binary search with fixed n and desired k to find an r that gives us a close approximation of the desired average degree k note that the above equation is only an approximation and might give wrong results for extreme values our implementation could easily be adapted to skip this step and accept the commonly used parameter c with r ln or even accept r directly for increased usability we accept the average degree k as a parameter in the default version gettargetradius this function is unchanged from our previous work for given values of n and r an approximation of the expected average degree k is given by eq and the notation k n n r vertex positions and bands after settling the disk boundary the radial boundaries ci are calculated line as defined above the disk dr is thus partitioned into log n slabs for each slab si a set bi stores the vertices located in the area of si these sets bi are initially empty line the vertex positions are then sampled randomly in polar coordinates lines and and stored in the corresponding set vertex v is put into set bi iff ci r v line within each set vertices are sorted with respect to their angular coordinates lines to generation algorithm outside the hyperbolic disk dr in this case the movement is inverted and the node bounces off the boundary the different probability densities in the center of the disk and the outer regions can be translated into movement speed a node is less likely to be in the center thus it needs to spend less time there while traversing it resulting in a higher speed we implement this movement in two phases in the initialization step values and are assigned to each node according to the desired movement each movement step of a node then consists of a rotation and a radial movement the rotation step is a straightforward addition of angular coordinates rotated r mod the radial movement is described in algorithm and a visualization is shown in figure getminmaxphi the neighbors of a given vertex v rv are those whose hyperbolic distance to v is at most let bi be the slab between ci and and u ru bi a neighbor of v in bi since u is in bi ru is between ci and with the hyperbolic law of cosines we can conclude cosh r cosh rv cosh ci sinh rv sinh ci cos cosh rv cosh ci cosh r sinh rv sinh ci cos cosh rv cosh ci cosh r cos sinh rv sinh ci cosh r cosh c cosh r v i cos sinh rv sinh ci algorithm radial movement in dynamic model input r r output rnew x sinh r y z asinh y return z to gather the neighborhood of a vertex v rv we iterate over all slabs si and compute for each slab how far the angular coordinate of a possible neighbor in bi can deviate from line we call the vertices in bi whose angular coordinates are within these bounds the neighbor candidates for v in bi since points are sorted according to their angular coordinates we can quickly find the leftmost and rightmost neighbor candidate in each slab using binary search we then only need to check each neighbor candidate line compute its hyperbolic distance to v and add an edge if this distance is below r lines and since edges can be found from both ends we only need to iterate over slabs in one direction we choose outward in our implementation line the process is repeated for every vertex v line not surprisingly the running time of algorithm is dominated by the range queries lines our experiments in section suggest a running time of o n log n m for the complete algorithm this 
should be seen as an empirical observation we leave a mathematical proof for future work if the new node position would be outside the boundary r r or below the origin r the movement is reflected and set to theorem let fr pr be the probability density of point positions given in polar coordinates let move pr be a movement step then the node movement preserves the distribution of angular and radial distributions fr move pr fr pr proof since the distributions of angular and radial coordinates are independent we consider them separately fr pr fr pr as introduced in eq the radial coordinate r is sampled from a distribution with density sinh cosh we introduce random variables x y z for each step in algorithm each is denoted with the upper case letter of its equivalent an additional random variable q denotes the radial coordinate the other variables are defined as x sinh q y x and z asinh y let fq fx fy and fz denote the density functions of these variables sinh fq r cosh asinh r fx r fq cosh fy r fx r cosh sinh fz r fy sinh r cosh fq r cosh dynamic model to model gradual change in networks we design and implement a dynamic version with node movement while deleting nodes or inserting them at random positions is a suitable dynamic behavior for modeling internet infrastructure with sudden site failures or additions change in social networks happens more gradually a suitable node movement model needs to be consistent after moving a node the network may change but properties should stay the same in expectation since the properties emerge from the node positions the probability distribution of node positions needs to be preserved in our implementation movement happens in discrete time steps we choose the movement to be directed if a node i moves in a certain direction at time t it will move in the same direction at t except if the new position would be to for graphs with nodes and edges very roughly the experimental running times fit a complexity of o n log n m while the running times of the faster generator appear to grow more steeply with increasing edge count this is an artifact of the logarithmic plot the same constant increase is relatively larger compared to a smaller running time and thus appears larger in the logarithmic drawing sinh r r n our impl n our impl n our impl n impl of n impl of n impl of n theoretical fit n theoretical fit n theoretical fit figure for each movement step radial coordinates are mapped into the interval sinh where the coordinate distribution is uniform adding and transforming the coordinates back results in correctly scaled movements the distributions of q and z only differ in the constant addition of cosh every cosh steps the radial movement reaches a limit or r and is reflected causing to be multiplied with on average is thus zero and fq r fz r a similar argument works for the rotational step while the rotational direction is unchanged the change in coordinates is balanced by the addition or subtraction of whenever the interval is left leading to an average of zero in terms of change running time in seconds experimental evaluation setup the generation algorithm is implemented in and parallelized with openmp running time measurements were made on a server with gb ram and intel xeon cores at ghz with hyperthreading enabled we use up to threads for memory allocations we use the malloc implementation of intel s threading building blocks library our code is included in the network analysis toolkit networkit to compare performance we generate graphs with and nodes and 
average degrees between and both with the algorithm presented in this work and the implementation of von looz et al to validate the distribution of generated graphs we compare our implementation with the implementation of aldecoa et al we generate graphs with nodes each for a combination of parameters and calculate several network analytic characteristics averaging over runs for the dynamic model we measure the time required for a movement step and again compare the distributions of network analytic properties edges figure comparison of running times to generate networks with vertices and varying k circles represent running times of our implementation diamonds the running times of the implementation of our running times are fitted with the equation t n m n n m seconds the scaling behavior for to threads on cores is shown in figure considering edge sampling alone it shows strong scaling up to the number of physical cores with a speedup of for threads with hyperthreading the speedup increases to combining the edge lists later on into the networkit graph data structure however requires coordination and proves to be a bottleneck in parallel if only edge lists are required this final step can be omitted as done for example in the benchmark running time figure shows the running times to generate graphs with to nodes and to edges the speedup over the previously fastest implementation increases with graph size and sparsity reaching up distribution of generated graphs the average degree assortativity degeneracy clustering coefficient and size and diameter of largest components of our generator and the total edge sampling speedup factor bader berry kahan murphy riedy and willcock graph benchmark search version graph tech chakrabarti zhan and faloutsos a recursive model for graph mining in proc siam intl conf on data mining sdm orlando fl siam apr kolda pinar plantenga and seshadhri a scalable generative graph model with community structure siam j scientific computing vol no pp sep krioukov papadopoulos kitsak vahdat and hyperbolic geometry of complex networks physical review e vol no sep online available http bode fountoulakis and the probability that the hyperbolic random graph is connected random structures and algorithms to appear preprint available at http gugelmann panagiotou and peter random hyperbolic graphs degree sequence and clustering extended abstract in automata languages and programming international colloquium icalp proceedings part ii ser lecture notes in computer science czumaj mehlhorn pitts and wattenhofer vol springer pp online available http bonato a survey of models of the web graph in combinatorial and algorithmic aspects of networking ser lecture notes in computer science springer berlin heidelberg vol pp online available http threads figure speedup curves for n k on a machine with physical cores marked with a vertical line and hyperthreading averaged over runs one by aldecoa et al are shown in plots and in appendix a averaged over runs the network analytic properties show a very close match between the distributions of the two generation algorithms von looz prutkin and meyerhenke generating random hyperbolic graphs in subquadratic time in isaac proc int l symp on algorithms and computation dynamic model our implementation allows updating a graph without rebuilding it from scratch moving up to of nodes and updating an existing graph is still faster than a new static generation the distribution of generated graphs is indistinguishable from the static model appendix b aldecoa 
orsini and krioukov hyperbolic graph generator computer physics communications online available http bringmann keusch and lengler geometric inhomogeneous random graphs arxiv preprint conclusions staudt sazonovs and meyerhenke networkit a tool suite for complex network analysis network science to appear we have provided the fastest implementation so far to generate massive complex networks based on threshold random hyperbolic graphs the running time improvement is particularly large for graphs with realistic densities we have also presented a model extension to cover gradual node movement and have proved its consistency regarding the probability densities of vertex positions both the static and the dynamic model can serve as complex network generators with reasonable realism and fast generation times even for massive networks goldenberg zheng fienberg and airoldi a survey of statistical network models foundations and trends r in machine learning vol no pp anderson hyperbolic geometry ser springer undergraduate mathematics series berlin springer papadopoulos and krioukov sustaining the internet with hyperbolic mapping nature communications no september online available http von looz and meyerhenke querying probabilistic neighborhoods in spatial data sets efficiently arxiv preprint acknowledgements this work is partially supported by german research foundation dfg grant me finca and grant both within the priority programme algorithms for big data kiwi and mitsche a bound for the diameter of random hyperbolic graphs in proceedings of the twelfth workshop on analytic algorithmics and combinatorics analco siam jan pp online available http references newman networks an introduction oxford university press chakrabarti and faloutsos graph mining laws generators and algorithms acm computing surveys csur vol no appendix a comparison tion with previous degree assortativity degree assortativity degree assortativity degree assortativity k clustering coefficient k degeneracy degeneracy clustering coefficient k k cc k vertices in largest component k figure comparison of degree assortativity and degeneracy for the implementation of left and our implementation right degree assortativity describes whether vertices have neighbors of similar degree a value near signifies subgraphs with equal degree a value of structures k in turn are a generalization of connected components and result from iteratively peeling away vertices of degree k and assigning to each vertex the core number of the innermost core it is contained in degeneracy refers to the largest core number values are averaged over runs size of largest component size of largest component k diameter of largest component diameter of largest component k k vertices in largest component diameter of largest component diameter of largest component max core number max core number cc k k k figure comparison of clustering coefficients size of largest component and diameter of largest components for the implementation of left and our implementation right values are averaged over runs appendix b consistency of dynamic model degree assortativity degree assortativity clustering coefficient k k degeneracy max core number k vertices in largest component max core number cc k k cc degeneracy clustering coefficient k k k figure comparison of degree assortativity and degeneracy for graphs with nodes before and after one movement step all nodes were moved with and sampled randomly distribution of graphs after node movement are shown left before node movement right values are 
averaged over runs.
Figure: comparison of clustering coefficients, size of largest component, and diameter of largest components for graphs with nodes before and after one movement step. All nodes were moved, with the movement parameters sampled randomly. Distributions of graphs after node movement are shown left, before node movement right. Values are averaged over runs.
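The comparisons in these appendix figures rest on a handful of standard network-analytic properties: average degree, degree assortativity, degeneracy (the largest core number), clustering coefficient, and the size and diameter of the largest component. The sketch below shows how such a summary could be computed for one generated graph; it uses networkx as an assumed stand-in library rather than the authors' NetworKit-based evaluation pipeline, and the generator named in the trailing comment is hypothetical.

```python
# Minimal sketch (assumption: networkx as the analysis library; this is not the
# authors' NetworKit pipeline) of the properties compared in the appendix figures.
import networkx as nx

def summary_statistics(G: nx.Graph) -> dict:
    """Network-analytic properties used in the before/after comparison plots."""
    largest_cc = max(nx.connected_components(G), key=len)
    H = G.subgraph(largest_cc)
    return {
        "average_degree": 2.0 * G.number_of_edges() / G.number_of_nodes(),
        "degree_assortativity": nx.degree_assortativity_coefficient(G),
        "degeneracy": max(nx.core_number(G).values()),   # largest k-core number
        "clustering_coefficient": nx.average_clustering(G),
        "largest_component_size": H.number_of_nodes(),
        "largest_component_diameter": nx.diameter(H),
    }

# Averaging these dictionaries over repeated runs of a generator, e.g. a hypothetical
# generate(n, k, gamma), and over both implementations mirrors the comparison plots.
```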
the problem on the plane concerning distance constraints aug yu lin and lee institute of information science academia sinica nankang taipei taiwan herbert kero dtlee abstract in drezner proposed the problem on the plane in which two players called the leader and the follower open facilities to provide service to customers in a competitive manner the leader opens the first facility and the follower opens the second each customer will patronize the facility closest to him ties broken in favor of the first one thereby decides the market share of the two facilities the goal is to find the best position for the leader s facility so that its market share is maximized the best algorithm of this problem is an o log n parametric search approach which searches over the space of market share values in the same paper drezner also proposed a general version of centroid problem by introducing a minimal distance constraint r such that the follower s facility is not allowed to be located within a distance r from the leader s he proposed an o log n algorithm for this general version by identifying o points as the candidates of the optimal solution and checking the market share for each of them in this paper we develop a new parametric search approach searching over the o candidate points and present an o log n algorithm for the general version thereby close the o gap between the two bounds keywords competitive facility euclidean plane parametric search introduction in economist hotelling introduced the first competitive location problem in his seminal paper since then the subject of competitive facility location has been extensively studied by researchers in the fields of spatial economics social and political sciences and operations research and spawned hundreds of contributions in the literature the interested reader is referred to the following survey papers hakimi and drezner individually proposed a series of competitive location problems in a framework the framework is briefly described as follows there are n customers in the market and each is endowed research supported by under grants no most yu lin and lee with a certain buying power two players called the leader and the follower sequentially open facilities to attract the buying power of customers at first the leader opens his p facilities and then the follower opens another r facilities each customer will patronize the closest facility with all buying power ties broken in favor of the leader s ones thereby decides the market share of the two players since both players ask for market share maximization two competitive facility location problems are defined under this framework given that the leader locates his p facilities at the set xp of p points the follower wants to locate his r facilities in order to attract the most buying power which is called the problem on the other hand knowing that the follower will react with maximization strategy the leader wants to locate his p facilities in order to retain the most buying power against the competition which is called the problem drezner first proposed to study the two competitive facility location problems on the euclidean plane since then many related results have been obtained for different values of r and due to page limit here we introduce only previous results about the case r p for the problem drezner showed that there exists an optimal solution arbitrarily close to and solved the problem in o n log n time by sweeping technique later lee and wu obtained an n log n lower bound for the problem 
and thus proved the optimality of the above result for the problem drezner developed a parametric search based approach that searches over the space of o possible market share values along with an o test procedure constructing and solving a linear program of o constraints thereby gave an o log n algorithm then by improving the test procedure via megiddo s result for solving linear programs hakimi reduced the time complexity to o log n in drezner also proposed a more general setting for the framework by introducing a minimal distance constraint r into the medianoid problem and the problem such that the follower s facility is not allowed to be located within a distance r from the leader s the augmented problems are respectively called the r problem and r problem in this paper drezner showed that the r medianoid problem can also be solved in o n log n time by using nearly the same proof and technique as for the problem however for the r problem he argued that it is hard to generalize the approach for the problem to solve this general version due to the change of problem properties then he gave an o log n algorithm by identifying o candidate points on the plane which contain at least one optimal solution and performing medianoid computation on each of them so far the o bound gap between the two centroid problems remains unclosed in this paper we propose an o log n algorithm for the r centroid problem on the euclidean plane thereby close the gap last for decades instead of searching over market share values we develop a new approach based on the parametric search technique by searching over the o candidate points the r problem on the plane mentioned in this is made possible by making a critical observation on the distribution of optimal solutions for the r problem given which provides us a useful tool to prune candidate points with respect to we then extend the usage of this tool to design a key procedure to prune candidates with respect to a given vertical line the rest of this paper is organized as follows section gives formal problem definitions and describes previous results in in section we make the observation on the r problem and make use of it to find a local centroid on a given line this result is then extended as a new pruning procedure with respect to any given line in section and utilized in our parametric search approach for the r problem finally in section we give some concluding remarks notations and preliminary results let v vn be a set of n points on the euclidean plane as the representatives of the n customers each point vi v is assigned with a positive weight w vi representing its buying power to simplify the algorithm description we assume that the points in v are in general position that is no three points are collinear and no two points share a common x or let d u w denote the euclidean distance between any p two points u w t for any set z of points on the plane we define w z w v v z suppose that the leader has located his facility at x which is shortened as x for simplicity due to the minimal distance constraint r mentioned in any point y with d y x r is infeasible to be the follower s choice if the follower locates his facility at some feasible point y the set of customers patronizing y instead of x is defined as v v v v y d v x with their total buying power w w v then the largest market share that the follower can capture is denoted by the function w x max d y x w which is called the weight loss of x given a point x the r problem is to find a r which denotes a feasible 
point y maximizing the weight loss of x in contrast the leader tries to minimize the weight loss of his own facility by finding a point such that w w x for any point x the r problem is to find a r which denotes a point minimizing its weight loss note that when r the two problems degenerate to the and problems yu lin and lee previous approaches in this subsection we briefly review previous results for the r and r problems in so as to derive some basic properties essential to our approach let l be an arbitrary line which partitions the euclidean plane into two halfplanes for any point y l we define h l y as the close including y and h l y as the open including y but not l for any two distinct points x y let b denote the perpendicular bisector of the line segment from x to y given an arbitrary point x we first describe the algorithm for finding a r in let y be an arbitrary point other than x and y be some point on the open line segment from y to x we can see that h b y h b y y which implies the fact that w y w h b y y w h b y w it shows that moving y toward x does not diminish its weight capture thereby follows the lemma lemma there exists a r in y y d x y r for any point z let cr x and x be the circles centered at z with radii r and respectively by lemma finding a r can be reduced to searching a point y on cr x maximizing w since the perpendicular bisector b of each point y on cr x is a tangent line to the circle x the searching of y on cr x is equivalent to finding a tangent line to x that partitions the most weight from x the latter problem can be solved in o n log n time as follows for each v v outside x we calculate its two tangent lines to x then by sorting these tangent lines according to the polar angles of their corresponding tangent points with respect to x we can use the angle sweeping technique to check how much weight they partition theorem given a point x the r problem can be solved in o n log n time next we describe the algorithm of the r problem in let s be a subset of v we define c s to be the set of all circles v v s and ch c s to be the convex hull of these circles it is easy to see the following lemma let s be a subset of v for any point x w x w s if x is outside ch c s for any positive number let i be the intersection of all convex hulls ch c s where s v and w s we have the lemma below lemma let be a positive real number for any point x w x if and only if x i the r problem on the plane proof consider first the case that x i by definition x intersects with every ch c s of subset s v with w s let s v be any of such subsets since x ch c s for any point y feasible to x there must exist a point v s such that v h b y implying that no feasible point y can acquire all buying power from customers of s it follows that no feasible point y can acquire buying power larger than or equal to w x if x i there must exist a subset s v with w s such that ch c s by lemma w x w s drezner argued that the set of all r is equivalent to some intersection i for smallest possible we slightly strengthen his argument below let w w x y d x y r the following lemma can be obtained lemma let be the smallest number in w such that i is not null a point x is a r if and only if x i proof let wop t be the weight loss of some r we first show that i is null for any wop t suppose to the contrary that it is not null and there exists a point in i by lemma w wop t which contradicts the optimality of moreover since i is not null we have that wop t we now show that a point x is a r if and only if x i if x is a r we 
have that w x wop t by lemma x i on the other hand if x is not a r we have that w x wop t since by definition w x w we can see that w x thus again by lemma x i although it is hard to compute i itself we can find its vertices as solutions to the r problem let t be the set of outer tangent lines of all pairs of circles in c v for any subset s v the boundary of ch c s is formed by segments of lines in t and arcs of circles in c v since i is an intersection of such convex hulls its vertices must fall within the set of intersection points between lines in t between circles in c v and between one line in t and one circle in c v let t t c v c v and t c v denote the three sets of intersection points respectively we have the lemma below lemma there exists a r in t t c v c v and t c v obviously there are at most o intersection points which can be viewed as the candidates of being r drezner thus gave an algorithm by evaluating the weight loss of each candidate by theorem theorem the r problem can be solved in o log n time we remark that when r ch c s for any s v degenerates to a convex polygon so does i for any given if not null drezner proved yu lin and lee that in this case i is equivalent to the intersection of all h with w h thus whether i is null can be determined by constructing and solving a linear program of o constraints which takes o time by megiddo s result since o by lemma the problem can be solved in o log n time by applying parametric search over w for unfortunately it is hard to generalize this idea to the case r motivating us to develop a different approach local r within a line in this section we analyze the properties of r of a given point x in subsection and derive a procedure that prunes candidate points with respect to x applying this procedure we study a restricted version of the r centroid problem in subsection in which the leader s choice is limited to a given line l and obtain an o n n algorithm the algorithm is then extended as the basis of the test procedure for the parametric search approach in section pruning with respect to a point given a point x and an angle between and let y be the point on cr x with polar angle with respect to we define m a x w y w x that is the set of angles maximizing w y see figure it can be observed that for any m a x and sufficiently small both and belong to m a x because each v v y does not intersect b y by definition this implies that angles in m a x form open angle interval s of length to simplify the terms let w w y and b b y in the remaining of this section also let f be the line passing through x and parallel to b the following lemma provides the basis for pruning lemma let x be an arbitrary point and be an angle in m a x for any point h f y w w x proof since h f y and y h f y by the definition of bisectors the distance between f and b is no less than which implies that h b y h b y therefore we can derive the following inequality w w w h b y w h b y w w x we assume that a polar angle is measured counterclockwise from the positive the r problem on the plane fig the black arcs represent the intervals of angles in m a x whereas the open circles represent the open ends of these intervals which completes the proof this lemma tells us that given a point x and an angle m a x all points not in h f y can be ignored while finding r as their weight losses are no less than that of x by this lemma we can also prove that the weight loss function is convex along any line on the plane as shown below lemma let be two arbitrary distinct points on a given 
line for any point x w x max w w proof suppose by contradiction that w x w and w x w for some point x since w x w by lemma there exists an angle m a x such that is included in h f y however since x and locate on different sides of f it follows that is outside h f y and w w x by lemma which contradicts the assumption thus the lemma holds we further investigate the distribution of angles in m a x let ca x be the minimal angle interval covering all angles in m a x see figure a and ca x be its angle span in radians as mentioned before m a x consists of open angle interval s of length which implies that ca x is an open interval and ca x moreover we can derive the following lemma if ca x x is a r proof we prove this lemma by showing that w w x x let be an arbitrary point other than x and be its polar angle with respect to x obviously any angle satisfying h f y is in the open interval the angle span of which is equal to since yu lin and lee a ca x b w edge x fig ca x and w edge x ca x by its definition there exists an angle m a x such that h f y thus by lemma we have w w x thereby proves the lemma we call a point x satisfying lemma a strong r since its discovery gives an immediate solution to the r problem note that there are problem instances in which no strong r exist suppose that ca x for some point x let w edge x denote the wedge of x defined as the intersection of the two h f y and h f y where and are the beginning and ending angles of ca x respectively as illustrated in figure b w edge x is the infinite region lying between two extending from x including x and the two halflines the defined by f and f are called its boundaries and the counterclockwise ccw angle between the two boundaries is denoted by w edge x since ca x we have that w edge x and w edge x it should be emphasized that w edge x is a computational byproduct of ca x when x is not a strong r in other words not every point has its wedge therefore we make the following assumption or restriction in order to avoid the misuse of w edge x assumption whenever w edge x is mentioned the point x has been found not to be a strong r either by computation or by properties equivalently ca x the following essential lemma makes w edge x our main tool for note that its proof can not be trivially derived from lemma since by definition and do not belong to the open intervals ca x and m a x lemma let x be an arbitrary point for any point w edge x w w x proof by symmetry suppose that h f y we can further divide the position of into two cases h f y and h f y the r problem on the plane consider case the two assumptions ensure that there exists an angle such that f passes through obviously any angle satisfies that h f y by the definition of ca x there must exist an angle infinitely close to such that belongs to m a x thus by lemma we have that w w x in case for any angle m a x we have that h f y since is in again w x w x by lemma finally we consider the computation of w edge x lemma given a point x m a x ca x and w edge x can be computed in o n log n time proof by theorem we first compute w x and those ordered tangent lines in o n log n time then by performing angle sweeping around x we can identify in o n time those open intervals of angles with w w x of which m a x consists again by sweeping around x ca x can be obtained from m a x in o n time now if we find x to be a strong r by checking ca x the r problem is solved and the algorithm can be terminated otherwise w edge x can be constructed in o time searching on a line although computing wedges can be 
used to prune candidate points it does not serve as a stable tool since wedges of different points have indefinite angle intervals and spans however assumption makes it work fine with lines here we show how to use the wedges to compute a local optimal point on a given line a point x with w x w for any point on the line let l be an arbitrary line which is assumed to be for ease of discussion for any point x on l we can compute w edge x and make use of it for pruning purposes by defining its direction with respect to since w edge x by definition there are only three categories of directions according to the intersection of w edge x and l upward the intersection is the of l above and including x downward the intersection is the of l below and including x sideward the intersection is x itself if w edge x is sideward x is a local optimal point on l since by lemma w x w otherwise either w edge x is upward or downward the points on the opposite half of l can be pruned by lemma it shows that computing wedges acts as a predictable tool for pruning on next we list sets of breakpoints on l in which a local optimal point locates recall that t is the set of outer tangent lines of all pairs of circles in c v we define the t as the set l t of intersection points between l and lines in t and the as the set l c v of intersection points between l and circles in c v we have the following lemmas for breakpoints yu lin and lee lemma let be two distinct points on if w w there exists at least a breakpoint on the segment proof let be an arbitrary angle in m a and s be the subset of v located in the h b y by definition is outside the convex hull ch c s and w w s on the other hand since w w w s by assumption we have that is inside ch c s by lemma thus the segment intersects with the boundary of ch c s since the boundary of ch c s consists of segments of lines in t and arcs of circles in c v the intersection point is either a t or a thereby proves the lemma lemma there exists a local optimal point which is also a breakpoint proof let be a local optimal point such that w w for some point adjacent to on note that if no such local optimal point exists every point on l must have the same weight loss and be local optimal and the lemma holds trivially if such and exist by lemma there is a breakpoint on which is itself thus the lemma holds we remark that outer tangent lines parallel to l are exceptional cases while considering breakpoints for any line t t that is parallel to l either t does not intersect with l or they just coincide in either case t is irrelevant to the finding of local optimal points and should not be counted for defining t now by lemma if we have all breakpoints on l sorted in the decreasing order of their a local optimal point can be found by performing binary search using wedges obviously such sorted sequence can be obtained in o log n time since t o and c v o n however in order to speed up the computations of local optimal points on multiple lines alternatively we propose an o log n preprocessing so that a local optimal point on any given line can be computed in o n n time the preprocessing itself is very simple for each point v v we compute a sequence p v consisting of points in v v sorted in increasing order of their polar angles with respect to the computation for all v v takes o log n time in total besides all outer tangent lines in t are computed in o time we will show that for any given line l o n sorted sequences can be obtained from these sequences in o n log n time which can be used to 
replace a sorted sequence of all t in the process of binary search for any two points v v and z let t r be the outer tangent line of v and z to the right of the line from v to z similarly let t l be the outer tangent line to the left see figure moreover let trl and tll be the points at which t r and t l intersect with l respectively we partition t into o n sets t r v t r vi v v and t l v t l vi v v for v v and consider their corresponding t independently by symmetry we only discuss the case about l t r v the r problem on the plane fig outer tangent lines of lemma for each v v we can compute o sequences of t on l which satisfy the following conditions a each sequence is of length o n and can be obtained in o log n time b breakpoints in each sequence are sorted in decreasing c the union of breakpoints in all sequences form l t r v proof without loss of generality suppose that v is either strictly to the right of l or on note that each point vi v v corresponds to exactly one outer tangent line t r vi thereby exactly one breakpoint trl vi such correspondence can be easily done in o time therefore equivalently we are computing sequences of points in v v instead of breakpoints in the following we consider two cases about the relative position between l and v l intersects with v at zero or one point l intersects with v at two points case let be the angle of the upward direction along see figure a we classify the points in v v by their polar angles with respect to let v denote the sequence of those points with polar angles in the interval and sorted in ccw order similarly let v be the sequence of points with polar angles in and sorted in ccw order obviously v and v together satisfy condition c note that points with polar angles and are ignored since they correspond to outer tangent lines parallel to by general position assumption we can observe that for any two distinct points vi vj in v trl vi is strictly above trl vj if and only if vi precedes vj in v thus the ordering of points in v implicitly describes an ordering of their corresponding breakpoints in decreasing similarly the ordering in v implies an ordering of corresponding breakpoints in decreasing it follows that both v and v satisfy condition b as for condition a both v and v are of length o n by definition also since we have the sequence p v as all points in v v yu lin and lee a no intersection b two intersection points fig two subcases about how v intersects sorted in ccw order v and v can be implicitly represented as concatenations of subsequences of p v this can be done in o log n time by searching in p v the foremost elements with polar angles larger than and respectively case suppose that the two intersection points between l and v are and where is above let and in which and are respectively the polar angles of and with respect to see figure b by assumption we have that which implies that and we divide the points in v v into four sequences v v v and v by their polar angles with respect to v consists of points with polar angles in v in v in and v in all sorted in ccw order it follows that the four sequences satisfy conditions c condition a and b hold for v and v from similar discussion as above however for any two distinct points vi vj in v we can observe that trl vi is strictly below trl vj if and only if vi precedes vj in v similarly the argument holds for v thus what satisfy condition b are actually the reverse sequences of v and v which can also be obtained in o log n time satisfying condition a by lemma c searching in l t r v is 
equivalent to searching in the o sequences of breakpoints which can be computed more efficiently than the obvious way besides we can also obtain a symmetrical lemma constructing sequences for l t l v in the following we show how to perform a binary search within these sequences lemma with an o log n preprocessing given an arbitrary line l a local optimal point can be computed in o n n time the r problem on the plane proof by lemma the searching of can be done within l t and l c v l t can be further divided into l t r v and l t l v for each v v by lemma these sets can be replaced by o n sorted sequences of breakpoints on besides l c v consists of no more than breakpoints which can be computed and arranged into a sorted sequence in decreasing ycoordinates therefore we can construct o n sequences of breakpoints each of length o n and sorted in decreasing the searching in the sorted sequences is done by performing parametric search for parallel binary searches introduced in the technique we used here is similar to the algorithm in but uses a different weighting scheme for each sorted sequence pj j we first obtain its middle element xj and associate xj with a weight mj equal to the number of elements in pj then we compute the weighted defined as the middle elements p median of the np p ment x such that m is above x m and m is below x j j j j p mj finally we apply lemma on the point x if x is a strong r centroid of course it is local optimal if not assumption holds and w edge x can be computed if w edge x is sideward a local optimal point x is directly found otherwise w edge x is either upward or downward and thus all breakpoints on the opposite half can be pruned by lemma the pruning makes a portion of sequences that possesses over half of total breakpoints by the definition of weighted median lose at least a quarter of their elements hence at least of breakpoints are pruned by repeating the above process we can find in at most o log n iterations the time complexity for finding is analyzed as follows by lemma constructing sorted sequences for l t r v and l t l v for all v v takes o n log n time computing and sorting l c v also takes o n log n time there are at most o log n iterations of the pruning process at each iteration the middle elements and their weighted median x can be obtained in o o n time by the weighted selection algorithm then the computation of w edge x takes o n log n time by lemma finally the pruning of those sequences can be done in o n time in summary the searching of requires o n log n o log n o n log n o n n time we remark that by lemma it is easy to obtain an intermediate result for the r problem on the plane by lemma there exists a r centroid in t t t c v and c v c v by applying lemma to the o lines in t the local optimum among the intersection points in t t and t c v can be obtained in o n time by applying theorem on the o intersection points in c v c v the local optimum among them can be obtained in o log n time thus we can find a r in o n time a nearly o improvement over the o log n bound in r on the plane in this section we study the r problem and propose an improved algorithm of time complexity o log n this algorithm is as efficient as the yu lin and lee algorithm for the problem but based on a completely different approach in subsection we extend the algorithm of lemma to develop a procedure allowing us to prune candidate points with respect to a given vertical line then in subsection we show how to compute a r in o log n time based on this pruning procedure pruning 
with respect to a vertical line let l be an arbitrary vertical line on the plane we call the strictly to the left of l the left plane of l and the one strictly to its right the right plane of a sideward wedge of some point on l is said to be rightward leftward if it intersects the right left plane of we can observe that if there is some point x l such that w edge x is rightward every point on the left plane of l can be pruned since w w x by lemma similarly if w edge x is leftward points on the right plane of l can be pruned although the power of wedges is not fully exerted in this way pruning via vertical lines and sideward wedges is superior than directly via wedges due to predictable pruning regions therefore in this subsection we describe how to design a procedure that enables us to prune either the left or the right plane of a given vertical line as mentioned above the key point is the searching of sideward wedges on it is achieved by carrying out three conditional phases in the first phase we try to find some proper breakpoints with sideward wedges if failed we pick some representative point in the second phase and check its wedge to determine whether or not sideward wedges exist finally in case of their nonexistence we show that their functional alternative can be computed called the pseudo wedge that still allows us to prune the left or right plane of in the following we develop a series of lemmas to demonstrate the details of the three phases property given a point x l for each possible direction of w edge x the corresponding ca x satisfies the following conditions upward ca x downward ca x rightward ca x leftward ca x proof when w edge x is upward by definition the beginning angle and the ending angle of ca x must satisfy that both h f y and h f y include the of l above x it follows that and thus ca x recall that ca x the case that w edge x is downward can be proved in a symmetric way when w edge x is rightward we can see that h f y must not contain the of l above x and thus by similar arguments therefore counterclockwise covering angles from to ca x must include the angle the case that w edge x is leftward can be symmetrically proved the r problem on the plane lemma let be two points on l where is strictly above for any angle w w symmetrically for w w proof for any angle we can observe that h b y h b y since is strictly above it follows that w w the second claim also holds by symmetric arguments lemma let x be an arbitrary point on if w edge x is either upward or downward for any point edge x w edge has the same direction as w edge x proof by symmetry we prove that if w edge x is upward w edge is also upward for every l strictly below x by property the fact that w edge x is upward means that ca x and thus m a x let be a point on l strictly below x by lemma we have that w w for and w w for it follows that m a and ca so w edge is upward as well following from this lemma if there exist two arbitrary points and on l with their wedges downward and upward respectively we can derive that must be strictly above and that points with sideward wedges or even strong r can locate only between and thus we can find sideward wedges between some specified downward and upward wedges let xd be the lowermost breakpoint on l with its wedge downward xu the uppermost breakpoint on l with its wedge upward and gdu the open segment xd xu xd xu for ease of discussion we assume that both xd and xu exist on l and show how to resolve this assumption later by constructing a bounded box again xd is strictly 
above xu also we have the following corollary by their definitions corollary if there exist breakpoints in the segment gdu for any such breakpoint x either x is a strong r or w edge x is sideward given xd and xu the first phase can thus be done by checking whether there exist breakpoints in gdu and picking any of them if exist supposing that the picked one is not a strong r a sideward wedge is found by corollary and can be used for pruning notice that when there are two or more such breakpoints one may question whether their wedges are of the same direction as different directions result in inconsistent pruning results the following lemma answers the question in the positive lemma let be two distinct points on l where is strictly above and none of them is a strong r if w edge and w edge are both sideward they are either both rightward or both leftward proof we prove this lemma by contradiction by symmetry suppose the case that w edge is rightward and w edge is leftward this case can be further divided into two subcases by whether or not ca and ca intersect yu lin and lee fig ch c s intersects l between and consider first that ca does not intersect ca because w edge is rightward ca by property thus there exists an angle such that m a since is strictly above by lemma we have that w w w w furthermore since w edge is leftward we can see that w edge and therefore w w by lemma it follows that w w and thus m a by definition m a ca and m a ca which implies that ca and ca intersect at contradicting the subcase assumption when ca intersects ca their intersection must be completely included in either or due to assumption by symmetry we assume the latter subcase using similar arguments as above we can find an angle where such that m a and m a this is a contradiction since since both subcases do not hold the lemma is proved the second phase deals with the case that no breakpoint exists between xd and xu by determining the wedge direction of an arbitrary inner point in gdu we begin with several auxiliary lemmas lemma let be two distinct points on l such that w w and is strictly above there exists at least one breakpoint in the segment a if m a intersects but m a does not b if m a intersects but m a does not proof by symmetry we only show the correctness of condition a from its assumption there exists an angle where such that m a let t s v h b y by definition we have that w s w w w and ch c s h f y which implies that ch c s is strictly above f see figure we first claim that ch c s intersects if not there must exist an angle where such that ch c s h f y that is the r problem on the plane s h b y by definition w w w s since w w s w w and thus m a which contradicts the condition that m a does not intersect thus the claim holds when ch c s intersects l locates either inside or outside ch c s since locates outside ch c s in the former case the boundary of ch c s intersects and forms a breakpoint thereby proves condition a on the other hand if is outside ch c s again there exists an angle such that ch c s h f y by similar arguments we can show that m a by assumption must belong to which implies that ch c s is strictly below f since ch c s is strictly above f as mentioned any intersection point between ch c s and l should be inner to therefore the lemma holds lemma let g be a line segment connecting two consecutive breakpoints on for any two distinct points inner to g w w proof suppose to the contrary that w w by lemma there exists at least one breakpoint in which contradicts the definition of thus the lemma 
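The case analysis in these lemmas repeatedly reduces to reading off the direction of wedge(x) for a probe point x on the vertical line l from its covering interval CA(x). The small sketch below makes that classification explicit; it assumes angles are measured counterclockwise from the positive x-axis, that CA(x) is given by its beginning and ending angles with span less than pi, and that the bounds stripped from the direction property stated earlier are (0, pi) for upward and (pi, 2*pi) for downward, with rightward and leftward wedges characterized by CA(x) containing the angle 0 or pi respectively, as that property's proof suggests.

```python
# Sketch of the wedge-direction classification on a vertical line, under the interval
# reconstruction described in the lead-in (an assumption, since the exact bounds are
# not legible in the extracted text).  Assumes the span of CA(x) is less than pi.
import math

TWO_PI = 2.0 * math.pi

def covers(begin: float, end: float, angle: float) -> bool:
    """True iff `angle` lies strictly inside the counterclockwise interval (begin, end)."""
    span = (end - begin) % TWO_PI
    offset = (angle - begin) % TWO_PI
    return 0.0 < offset < span

def wedge_direction(begin: float, end: float) -> str:
    """Classify wedge(x) from CA(x) = (begin, end) for a point x on a vertical line."""
    if covers(begin, end, 0.0):      # CA(x) contains the angle 0 (equivalently 2*pi)
        return "rightward"
    if covers(begin, end, math.pi):  # CA(x) contains the angle pi
        return "leftward"
    mid = (begin + ((end - begin) % TWO_PI) / 2.0) % TWO_PI
    return "upward" if 0.0 < mid < math.pi else "downward"
```

Only a sideward answer licenses pruning a halfplane of l; an upward or downward answer merely narrows where further probes on l have to be made, which is exactly how x_d and x_u are used above.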
holds lemma when there is no breakpoint between xd and xu any two distinct points in gdu have the same wedge direction if they are not strong r centroids proof suppose by contradiction that the directions of their wedges are different by lemmas and there are only two possible cases w edge is downward and w edge is either sideward or upward w edge is sideward and w edge is upward in the following we show that both cases do not hold case because w edge is downward we have that ca by property and thus m a does not intersect on the other hand whether w edge is sideward or upward we can see that ca and m a intersect by again property since w w by lemma the status of the two points satisfies the condition a of lemma so at least one breakpoint exists between and by definitions of and this breakpoint is inner to gdu thereby contradicts the assumption therefore case does not hold case the proof of case is symmetric to that of case the condition b of lemma can be applied similarly to show the existence of at least one breakpoint between and again a contradiction combining the above discussions we prove that the wedges of and are of the same direction thereby completes the proof of this lemma yu lin and lee this lemma enables us to pick an arbitrary point in gdu the bisector point xb of xd and xu as the representative of all inner points in gdu if xb is not a strong r and w edge xb is sideward the second phase finishes with a sideward wedge found otherwise if w edge xb is downward or upward we can derive the following and have to invoke the third phase lemma if there is no breakpoint between xd and xu and w edge xb is not sideward there exist neither strong r nor points with sideward wedges on proof by lemma this lemma holds for points not in gdu without loss of generality suppose that w edge xb is downward for all points in gdu above xb the lemma holds by again lemma consider an arbitrary point x xb xu xb xu we first show that x is not a strong r suppose to the contrary that x really is by definition we have that ca x and thus ca x and m a x intersect on the other hand ca xb and m a xb do not intersect due to downward w edge xb and property since w xb w x by lemma applying the condition a of lemma to xb and x shows that at least one breakpoint exists between them which contradicts the assumption now that x is not a strong r it must have a downward wedge as xb does by lemma therefore the lemma holds for all points on when l satisfies lemma it consists of only points with downward or upward wedges and is said to be obviously our pruning strategy via sideward wedges could not apply to such lines the third phase overcomes this obstacle by constructing a functional alternative of sideward wedges called the pseudo wedge on either xd or xu so that pruning with respect to l is still achievable again we start with auxiliary lemmas lemma if l is the following statements hold a w xd w xu b w x max w xd w xu for all points x gdu proof we prove the correctness of statement a by contradiction and suppose that w xd w xu besides the fact that l is implies that no breakpoint exists in gdu by lemmas and the wedges of all points in gdu are of the same direction either downward or upward suppose the downward case by symmetry and pick an arbitrary point in gdu say xb since w edge xb is downward we have that m a xb does not intersect oppositely by definition w edge xu is upward so ca xu and m a xu are included in because xb is strictly above xu and w xd w xu according to the condition a of lemma there exists at least one 
breakpoint in xb xu xu which is a contradiction therefore statement a holds the proof of statement b is also done by contradiction by symmetry assume that w xd w xu in statement a consider an arbitrary point the r problem on the plane x gdu by lemma we have that w x max w xd w xu w xd suppose that the equality does not hold then by lemma at least one breakpoint exists in the segment xd xd contradicting the fact thus w x w xd and statement b holds let max w xd w xu we are going to define the pseudo wedge on either xu or xd depending on which one has the smaller weight loss we consider first the case that w xd w xu and obtain the following lemma if l is and w xd w xu there exists one angle for xu where such that w h b y proof we first show that there exists at least a subset s v with w s such that xu locates on the upper boundary of ch c s let x be the point strictly above but arbitrarily close to xu on by lemma w x hence w x w xu by case assumption it follows that xu w edge x by lemma and w edge x must be downward by property we have that ca x thus there exists an angle m a x where such that w h y w t b let s v h b y since w xu w s xu is inside ch c s by lemma oppositely by the definition of s x is outside the convex hull ch c s it implies that xu is the topmost intersection point between ch c s and l hence on the upper boundary of ch c s it is possible that xu locates at the leftmost or the rightmost point of ch c s the claimed angle is obtained as follows since xu is a boundary point of ch c s there exists a line f passing through xu and tangent to ch c s let be the angle satisfying that f f and ch c s h f y obviously we have that s h b y and thus w h b y w s let be an arbitrary angle satisfying the conditions of lemma we apply the line f for trimming the region of w edge xu so that a sideward wedge can be obtained let p w xu called the pseudo wedge of xu denote the intersection of w edge xu and h f y deriving from the three facts that w edge xu is upward w edge xu and we can observe that either p w xu is xu itself or it intersects only one of the right and left plane of in the two circumstances p w xu is said to be null or sideward respectively the pseudo wedge has similar functionality as wedges as shown in the following corollary corollary for any point p w xu w w xu proof if w edge xu the lemma directly holds by lemma otherwise we have that h f y and thus h b y contains h b y then by lemma w w h b y w h b y thereby completes the proof yu lin and lee by this lemma if p w xu is found to be sideward points on the opposite with respect to l can be pruned if p w xu is null xu becomes another kind of strong r in the meaning that it is also an immediate solution to the r problem without confusion we call xu a conditional r in the latter case on the other hand considering the reverse case that w xd w xu we can also obtain an angle xd and a pseudo wedge p w xd for xd by symmetric arguments then either p w xd is sideward and the opposite side of l can be pruned or p w xd itself is a conditional r thus the third phase solves the problem of the nonexistence of sideward wedges recall that the three phases of searching sideward wedges is based on the existence of xd and xu on l which was not guaranteed before here we show that by constructing appropriate border lines we can guarantee the existence of xd and xu while searching between these border lines the bounding box is defined as the smallest rectangle that encloses all circles in c v obviously any point x outside the box satisfies that w x w v 
and must not be a r thus given a vertical line not intersecting the box the to be pruned is trivially decided moreover let ttop and tbtm be two arbitrary horizontal lines strictly above and below the bounding box respectively we can obtain the following lemma let l be an arbitrary vertical line intersecting the bounding box and and denote its intersection points with ttop and tbtm respectively w edge is downward and w edge is upward proof consider the case about w edge as described above we know the fact that w xd w v let be an arbitrary angle with we can observe that h f y can not contain all circles in c v that is v h b y this implies that w w and m a therefore we have that m a and w edge is downward by property by similar arguments we can show that w edge is upward thus the lemma holds according to this lemma by inserting ttop and tbtm into t the existence of xd and xu is enforced for any vertical line intersecting the bounding box besides it is obvious to see that the insertion does not affect the correctness of all lemmas developed so far summarizing the above discussion the whole picture of our desired pruning procedure can be described as follows in the beginning we perform a preprocessing to obtain the bounding box and then add ttop and tbtm into t now given a vertical line l whether to prune its left or right plane can be determined by the following steps if l does not intersect the bounding box prune the not containing the box compute xd and xu on the r problem on the plane find a sideward wedge or pseudo wedge via three forementioned phases terminate whenever a strong or conditional r is found a if breakpoints exist between xd and xu pick any of them and check it b if no such breakpoint decide whether l is by checking xb c if l is compute p w xu or p w xd depending on which of xu and xd has smaller weight loss prune the right or left plane of l according to the direction of the sideward wedge or pseudo wedge the correctness of this procedure follows from the developed lemmas any vertical line not intersecting the bounding box is trivially dealt with in step due to the property of the box when l intersects the box by lemma xd and xu can certainly be found in step the three of step correspond to the three searching phases when l is not a sideward wedge is found either at some breakpoint between xd and xu in step a by corollary or at xb in step b by lemma otherwise according to lemma or its symmetric version a pseudo wedge can be built in step c for xu or xd respectively finally in step whether to prune the left or right plane of l can be determined via the sideward wedge or pseudo wedge by respectively lemma or corollary the time complexity of this procedure is analyzed as follows the preprocessing for computing the bounding box trivially takes o n time in step any vertical line not intersecting the box can be identified and dealt with in o time finding xd and xu in step requires the help of the algorithm developed in although the algorithm is designed to find a local optimal point we can easily observe that slightly modifying its objective makes it applicable to this purpose without changing its time complexity thus step can be done in o n n time by lemma in step a all breakpoints between xd and xu can be found in o n log n time as follows as done in lemma we first list all breakpoints on l by o n sorted sequences of length o n which takes o n log n time then by performing binary search with the of xd and xu we can find within each sequence the breakpoints between them in o log n 
time in step a or b checking a picked point x is done by computing ca x that requires o n log n time by lemma to compute the pseudo wedge in step c the angle satisfying lemma or symmetrically can be computed in o n log n time by sweeping technique as in lemma thus p w xu or p w xd can be computed in o n log n time finally the pruning decision in step takes o time summarizing the above these steps require o n n time in total since the invocation of lemma needs an additional o log n preprocessing we have the following result lemma with an o log n preprocessing whether to prune the right or left plane of a given vertical line l can be determined in o n n time yu lin and lee searching on the euclidean plane in this subsection we come back to the r problem recall that by lemma at least one r can be found in the three sets of intersection points t t c v t and c v c v which consist of total o points let l denote the set of all vertical lines passing through these o intersection points by definition there exists a vertical line l such that its local optimal point is a r conceptually with the help of lemma can be derived by applying approach to l pick the vertical line l from l with median determine by lemma whether the right or left plane of l should be pruned discard lines of l in the pruned and repeat above until two vertical lines left obviously it costs too much if this approach is carried out by explicitly generating and sorting the o lines however by separately dealing with each of the three sets we can implicitly maintain sorted sequences of these lines and apply the approach let lt lm and lc be the sets of all vertical lines passing through the intersection points in t t c v t and c v c v respectively a local optimal line of lt is a vertical line such that its local optimal point has weight loss no larger than those of points in t t the local optimal lines and can be similarly defined for lm and lc respectively we will adopt different techniques to find the local optimal lines in the three sets as shown in the following lemmas lemma a local optimal line of lt can be found in o log n time proof let by definition there are intersection points in t t and vertical lines in lt for efficiently searching within these vertical lines we apply the ingenious idea of parametric search via parallel sorting algorithms proposed by megiddo consider two arbitrary lines tg th t if they are not parallel let tgh be their intersection point and lgh be the vertical line passing through tgh suppose that tg is above th in the left plane of lgh if applying lemma to lgh prunes its right plane tg is above th in the remained left plane on the other hand if the left plane of lgh is pruned th is above tg in the remained right plane therefore lgh can be treated as a comparison between tg and th in the sense that applying lemma to lgh determines their ordering in the remained it also decides the ordering of their intersection points with the undetermined local optimal line since the pruning ensures that a local optimal line stays in the remained it follows that by resolving comparisons the process of pruning vertical lines in lt to find can be reduced to the problem of determining the ordering of the intersection points of the lines with or say the sorting of these intersection points on while resolving comparisons during the sorting process we can simultaneously maintain the remained by two vertical lines as its boundaries thus after resolving all comparisons in lt one of the two boundaries must be a local optimal line 
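The subsection above describes the search as a prune-and-search loop over the set L of candidate vertical lines: repeatedly pick the line with median x-coordinate, apply the pruning lemma to discard one of its half-planes together with every candidate line lying in it, and stop when at most two lines remain (their local optimal points are then checked directly). The sketch below restates that loop in Python purely as an illustration; `prune` is a hypothetical stand-in for the oracle supplied by the pruning lemma, and the explicit list of candidate x-coordinates is for exposition only, since the paper never materialises L explicitly.

```python
def prune_and_search(candidate_xs, prune):
    """Prune-and-search over candidate vertical lines.

    candidate_xs : x-coordinates of the candidate vertical lines (the set L).
    prune        : hypothetical oracle; given an x-coordinate it returns
                   'left' or 'right', the half-plane that the pruning lemma
                   says can be discarded for that vertical line.
    Returns the (at most two) surviving candidate lines.
    """
    candidates = sorted(set(candidate_xs))
    while len(candidates) > 2:
        mid = candidates[len(candidates) // 2]   # line with median x-coordinate
        if prune(mid) == 'left':
            # everything strictly to the left of the median line is discarded
            candidates = [x for x in candidates if x >= mid]
        else:
            # everything strictly to the right of the median line is discarded
            candidates = [x for x in candidates if x <= mid]
    return candidates
```

Each round keeps the median line itself but removes at least one candidate, and in the intended use removes about half of them, which is what gives the logarithmic number of oracle calls.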
as we know the most efficient way to the r problem on the plane obtain the ordering is to apply some optimal sorting algorithm as which needs to resolve only o log comparisons instead of comparisons since resolving each comparison takes o n n time by lemma the sorting is done in o n n o o n time so is the finding of however megiddo observed that when multiple comparisons can be indirectly resolved in a batch simulating parallel sorting algorithms in a sequential way naturally provides the scheme for batching comparisons thereby outperform the case of applying as let ap be an arbitrary parallel sorting algorithm that runs in o log n steps on o n processors the parallel merge sort in using ap to sort the lines in lt on takes o log parallel steps at each parallel step there are k o comparisons lk to be resolved we select the one with median among them which is supposed to be some li if applying lemma to li prunes its left plane for each comparison lj to the left of li the ordering of the corresponding lines of lj in the remained right plane of li is directly known thus the o comparisons to the left of li are indirectly resolved in o time if otherwise the right plane of li is pruned the o comparisons to its right are resolved in o time by repeating this process of selecting medians and pruning on the remaining elements o log k times all k comparisons can be resolved which takes o n n o log k o k o n n o o time therefore going through o log parallel steps of ap requires o log o log n time which determines the ordering of lines in lt on and also computes a local optimal line lemma a local optimal line of lm can be found in o log n time proof to deal with the set lm we use the ideas similar to the proofs of lemmas and in order to divide c v t into sorted sequences of points given a fixed circle c for some point v we show that the intersection points in c t r v and c t l v for each v v can be grouped into o sequences of length o n which are sorted in increasing summarizing over all circles in c v there will be total o sequences of length o n each of which maps to a sequence of o n vertical lines sorted in increasing then finding a local optimal line can be done by performing to the o sequences of vertical lines via parallel binary searches the details of these steps are described as follows first we discuss about the way for grouping intersection points in c t r v and c l v for a fixed point v v so that each of them can be represented by o subsequences of p v by symmetry only c t r v is considered similar to lemma we are actually computing sequences of points in v v corresponding to these intersection points for each vi v v the outer tangent line t r vi may intersect c at two one or zero point let vi and vi denote the first and second points respectively at which t r vi intersects c along the direction from v to vi note that when t r vi intersects c at less than two points vi or both of them will be null in the following we consider the sequence computation under two cases about the relationship between and v v and yu lin and lee a no intersection b two intersection points fig two subcases about how v intersects case since v coincides with c t r v is just the set of tangent points vi for all vi v v it is easy to see the the angular sorted sequence p v directly corresponds to a sorted sequence of these n tangent points in ccw order p v can be further partitioned into two v and v which consist of points in v v with polar angles with respect to v in the intervals and respectively since they are 
sorted in ccw order we have that intersection points corresponding to v and to the reverse of v are sorted in increasing as we required obviously v and v are of length o n and can be obtained in o log n time case suppose without loss of generality that v locates on the lower left quadrant with respect to and let be the polar angle of with respect to this case can be further divided into two subcases by whether or not v intersects c at less than two points consider first the subcase that they intersect at none or one point see figure a let and be the angles such that t r y and t r y are inner tangent to v and c where note that only when the two circles intersect at one point for each vi v v t r vi does not intersect c if the polar angle of vi with respect to v is neither in nor in we can implicitly obtain from p v two subsequences v and v consisting of points with polar angles in and in respectively it can be observed that the sequence of points vi listed in v corresponds to a sequence of intersection points vi listed in clockwise cw order on c and moreover a sequence of vi listed in ccw order on symmetrically the sequence of points vj in v corresponds to a sequence of vj in ccw order and a sequence of vj in cw order the four implicit sequences of intersection points on c can be further partitioned by a horizontal line lh passing through the r problem on the plane its center so that the resulted sequences are naturally sorted in either increasing or decreasing therefore we can implicitly obtain at most eight sorted sequences of length o n in replace of c t r v by appropriately partitioning p v in o log n time consider that v intersects at two points and where is to the upper right of see figure b let and be the angles such that t r y and t r y are tangent to v at and respectively again p v can be implicitly partitioned into three subsequences v v and v which consists of points with polar angles in and respectively by similar observations v corresponds to two sequences of intersection points listed in cw and ccw order respectively and v corresponds to two sequences listed in ccw and cw order respectively however the sequence of points vi in v corresponds to the sequences of vi and vi listed in both ccw order these sequences can also be partitioned by lh into sequences sorted in it follows that we can implicitly obtain at most twelve sorted sequences of length o n in replace of c t r v in o log n time according to the above discussion for any two points u v v u r v and u t l v can be divided into o sequences in o log n time each of which consists of o n intersection points on u sorted in increasing xcoordinates thus c v t can be as o sorted sequences of length o n in o log n time which correspond to o sorted sequences of o n vertical lines now we can perform parametric search for parallel binary search to these sequences of vertical lines by similar techniques used in lemma for each of the o sequences its middle element is first obtained and assigned with a weight equal to the sequence length in o time then the weighted median l of these o elements are computed in o time by applying lemma to l in o n n time at least of total elements can be pruned from these sequences taking another o time therefore a single iteration of pruning requires o time after o log n such iterations a local optimal line can be found in total o log n time thereby proves the lemma lemma a local optimal line of lc can be found in o log n time proof there are at most o points in c v v thus lc can be obtained and sorted 
according to in o log n time then by simply performing binary search with lemma a local optimal line can be easily found in o log n iterations of pruning which require total o n n time in summary the computation takes o log n time and the lemma holds by definition can be found among and which can be computed in o log n time by lemmas and respectively then a r centroid can be computed as the local optimal point of in o n n time by lemma combining with the o log n preprocessing for computing the angular sorted sequence p v s and the bounding box enclosing c v we have the following theorem yu lin and lee theorem the r problem can be solved in o log n time concluding remarks in this paper we revisited the problem on the euclidean plane under the consideration of minimal distance constraint between facilities and proposed an o log n algorithm which close the bound gap between this problem and its unconstrained version starting from a critical observation on the medianoid solutions we developed a pruning tool with indefinite region remained after pruning and made use of it via structured parametric search approach which is quite different to the previous approach in considering distance constraint between facilities in various competitive facility location models is both of theoretical interest and of practical importance however similar constraints are rarely seen in the literature it would be good starting points by introducing the constraint to the facilities between players in the and problems maybe even to the facilities between the same player references cole slowing down sorting networks to obtain faster sorting algorithms journal of the acm vol no pp cole parallel merge sort siam journal on computing vol no pp dasci conditional location problems on networks and in the plane in eiselt marianov eds foundations of location analysis springer new york pp davydov kochetov and plyasunov on the complexity of the centroid problem in the plane top vol no pp drezner competitive location strategies for two facilities regional science and urban economics vol no pp drezner and eitan competitive location in the plane annals of operations research vol no pp eiselt and laporte sequential location problems european journal of operational research vol pp eiselt laporte and thisse competitive location models a framework and bibliography transportation science vol pp eiselt marianov vladimir and drezner tammy competitive location models in laporte nickel saldanha da gama eds location science springer international publishing pp hakimi on locating new facilities in a competitive environment european journal of operational research vol no pp hakimi locations with spatial interactions competitive locations and games in mirchandani francis eds discrete location theory wiley new york pp hansen thisse and wendell equilibrium analysis for voting and competitive location problems in mirchandani francis rl eds discrete location theory wiley new york pp the r problem on the plane hotelling stability in competition economic journal vol lee wu geometric complexity of some location problems algorithmica vol no pp megiddo applying parallel computation algorithms in the design of serial algorithms journal of the acm vol no pp megiddo algorithms for linear programming in and related problems siam journal on computing vol no pp plastria static competitive facility location an overview of optimisation european journal of operational research vol no pp reiser a linear selection algorithm for sets of elements with weights 
Information Processing Letters, vol., no., pp.; and the location model, Networks and Spatial Economics, vol., no., pp.
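The lemmas for the sets L_M and L_C earlier in this section prune implicitly maintained sorted sequences of candidate lines: take the middle element of every sequence, weight it by the sequence length, compute the weighted median of these middles, apply the pruning decision to that single line, and thereby discard a constant fraction of all remaining elements per round. The following is a minimal sketch of one such round, assuming the sequences are handed over as explicit sorted lists and reusing the same hypothetical `prune` oracle; the paper instead keeps the sequences implicit and finds the weighted median with a linear-time weighted selection routine (Reiser's algorithm in the references).

```python
def weighted_median(items):
    """items: list of (value, weight) pairs. Returns a value m such that the
    total weight strictly below m and strictly above m are each at most half
    of the total weight (lower weighted median)."""
    items = sorted(items)
    total = sum(w for _, w in items)
    acc = 0.0
    for value, weight in items:
        acc += weight
        if 2 * acc >= total:
            return value
    return items[-1][0]

def prune_round(sequences, prune):
    """One pruning round over sorted sequences of candidate x-coordinates.

    sequences : list of lists, each sorted in increasing x-coordinate.
    prune     : hypothetical oracle returning 'left' or 'right' for a vertical
                line at the given x, i.e. the half-plane to be discarded.
    """
    mids = [(seq[len(seq) // 2], len(seq)) for seq in sequences if seq]
    if not mids:
        return sequences
    m = weighted_median(mids)
    side = prune(m)
    trimmed = []
    for seq in sequences:
        if not seq:
            continue
        mid = seq[len(seq) // 2]
        if side == 'left' and mid <= m:
            # the lower half of this sequence lies in the discarded half-plane
            seq = seq[len(seq) // 2:]
        elif side == 'right' and mid >= m:
            # the upper half of this sequence lies in the discarded half-plane
            seq = seq[:len(seq) // 2 + 1]
        trimmed.append(seq)
    return trimmed
```

At least half of the total weight of the middles sits on the discarded side of the weighted median, and each such sequence loses about half of its elements, so roughly a quarter of all remaining candidates disappear per oracle call.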
8
hybrid fuel cells power for long duration robot missions in field environments jekanthan danielle daniel steven mechanical engineering department massachusetts institute of technology massachusetts cambridge ma jekan dstrawse dubowsky department of mechanical aerospace and nuclear engineering rensselaer polytechnic institute jonsson engineering center troy new york mobile robots are often needed for long duration missions these include search rescue sentry repair surveillance and entertainment current power supply technology limit walking and climbing robots from many such missions internal combustion engines have high noise and emit toxic exhaust while rechargeable batteries have low energy densities and high rates of in theory fuel cells do not have such limitations in particular proton exchange membrane pems can provide very high energy densities are clean and quiet however pem fuel cells are found to be unreliable due to performance degradation this can be mitigated by protecting the fuel cell in a battery hybrid configuration using filtering electronics that ensure the fuel cell is isolated from electrical noise and a battery to isolate it from power surges simulation results are presented for a hoap humanoid robot that suggests a fuel cell powered hybrid power supply superior to conventional batteries i introduction mobile robots including walking robots are needed to perform long duration missions that are difficult dangerous and tedious these include search and rescue repair entertainment sentry surveillance applications continuous operation of these robots lasting days and weeks not hours would be ideal for these applications typical power demands for field robots will vary significantly during a mission often with high peak power demands these field systems often have constraints on their mass volume and noise current power supply technology is a key limiting factor for long duration field robotic applications internal combustion engines can provide high power for long durations but produce toxic exhaust noise and strong thermal signatures making them inappropriate for many important applications current rechargeable batteries have very low energy densities and high rates of selfdischarge requiring systems to stop and recharge every few hours making them ineffective for continuous long duration missions hence there is a significant need for a power supply that can provide the high total energy required for long duration missions that is quiet and clean figure left boston dynamics big dog a supply robot right robonaut repair robot ii fuel cell power for mobile robots fuel cells are high energy sources of power that have been suggested for robots they are a promising alternative mobile source of power and have the potential to overcome limitations of current batteries and internal combustion engines they are simple electrochemical devices that convert chemical energy into electricity figure unlike a battery fuel cells require a constant supply of fuel and oxidant to produce electricity proton exchange membrane pem fuel cells are particularly attractive for robotics these devices consist of simple solid state components sandwiched together as shown in figure they combine hydrogen fuel and oxygen from breathing air through the most energy releasing reaction known to produce electricity and water it has been demonstrated pem fuel cells can reach or higher operating efficiencies at room temperature and produce clean water exhaust iii the challenges of fuel cells for robots while pem 
fuel cells are simple and sound great in theory they have three fundamental problems for practical robotics applications these problems are storage of hydrogen fuel reliability of fuel cells and low power hydrogen fuel due to its high energy content and low density is difficult to store in our research we have developed simple innovative hydrogen storage technologies that promise energy storage densities better than the best batteries of today figure a pem fuel cell consumes reactants hydrogen and oxygen to produce electricity water and heat second pem fuel cells have been found to be unreliable our studies of pem fuel cells show that they are delicate and unreliable due to degradation of their components resulting in short lives and premature failure however our physical models and experiments suggests that pem fuel cells controlled to operate within narrow operating can be made robust have long lives of years or more and high operating efficiencies among the factors known to degrade fuel cells are high operating voltages and electrical noise as discussed below mobile field robots operating in unstructured environments are subject to very substantial variation which without proper control can result in fuel cell degradation that shortens their lives a solution to this problem is discussed in the section below a third problem with fuel cells is that while they are high energy devices they have relatively low power this is a problem for robotics where typical power requirements can vary substantially over a mission with rest periods and short bursts at peak power these varying power demands are known to stress the fuel cells resulting in short lives a solution is to use in a hybrid system for mobile robots that maintains a fuel cell at optimal operating conditions to maximize life and efficiency by protecting it from external electrical load variations noises and meeting peak power requirement using a battery see figure fuel cell hybrid systems have been subjected to meet rapid transient power demands in large and stationary applications and in robotics to meet power surges however these hybrid system designs have not considered the effects of fuel cell degradation figure proposed fuel hybrid power supply for robots iv research the research presented here is focused on developing hybrid system design concept for mobile robots that have energy densities that exceed the best battery technology the hybrid system is designed to meet the required peak power demands and isolate the fuel cell from degrading stresses of high and low frequency noises generated by conditioning circuits required for battery management physical models are used to simulate expected conditions and control systems are developed to demonstrate the concept it is shown that the results are vast improvement over conventional batteries in terms of life efficiency energy density and power density case study power for humanoid walking robot here a hybrid fuel cell power supply for a humanoid walking robot figure developed by fujitsu is presented the is a kg robot with a maximum rated power of it contains a kg nickel metal hydride rechargeable battery pack by default the system contains servo actuators for each leg and for each arm for the head and one for the waist the robot has onboard computer equivalent to a pentium iii system a vision system consisting of ccd cameras onboard accelerometers gyroscope and pressure sensors on each feet figure right fujitsu s hoap robot left cyberbotics webotstm model of the a simulation model 
of from cyberbotics webotstm is used for our power demand calculations the robot system consists of three different subsystems for power calculations namely system computer sensors and power system for the system the simulator model provides mechanical power output of the servo motors the servo motors are assumed to have a electrical to mechanical efficiency the computer and sensor system are assumed to be always powered and consume w based on the specifications below power demand profiles of the robot s walking behavior is shown figure for these scenarios alternative power sources are compared with the default nickel metal hydride battery packs that weigh kg vi fuel cell hybrid system the fuel cell hybrid system consists of a fuel cell stack that provides steady power source and rechargeable lithium ion nanophosphate tm battery that meets peak power demands the fuel cells within the stack are operated at constant operating voltage of providing a operating efficiency our research into the degradation of fuel cells based on models and experimental results shows that increased operating voltages exponentially decreases the life of the fuel cell operating the fuel cell at constant voltage of v or less ensures while providing sufficiently high operating efficiency the fuel cell trickle charges the battery during idle times ensuring the battery is fully charged to meet power peaks the nanophosphatetm battery handles peak demands and can better handle deep battery discharges compared to conventional lithium ion batteries ensuring the battery is nearly fully charged maximizes its life the battery for the hybrid system is sized based on its specific power density to meet the maximum possible power requirements of the robot an oscillation suppression circuit interfaces a fuel cell to the power management system consisting of power switching circuitry and a convertor the interface circuit effectively extracts the energy from the fuel cell and transfers it into the battery the oscillation suppression circuits prevents any voltage oscillation from electrical circuits particularly convertors from being noticed by the fuel cell this ensures the fuel cell operates at steady operating voltage without any electrical load oscillations figure power demand of a robot walking at vii hybrid system sizing based on the power demand profiles figure the fuel cell will provide a constant steady source of power of while the power peaks will be handled by the battery for w steady supply of power a fuel cell stack weighing g will be required for a w peak required by the robot a g lithium ion nanophosphatetm battery is required with specific power of for cells another g is allocated for mass of power electronics and other items leaving g for the lithium hydride fuel supply with an energy density of viii power system comparison four power supply configurations are compared including a nickel metal hydride and lithium ion battery system a fuel cell system and a fuel cell hybrid system see table table power supply comparison for humanoid robot power supply fc stack fc fuel energy system runmass mass density life time nimh battery year hours li ion battery year hours fuel cell g g days hours fuel cell hybrid g g years hours nickel metal hydride and lithium ion batteries have the lowest energy densities and thus provide short before requiring recharging the system life of the batteries is computed based on expected lifetime of multiplied by the hours a direct fuel cell system has the longest runtime however the life of the 
system based on our degradation models is expected to last just days making this option impractical the fuel cell hybrid system offers a good between both and system life ix summary and conclusions based on these results the fuel cell hybrid system concept offers high energy density and would meet the required peak power demands of a battery the key to our hybrid system concept is effective control and design where a fuel cell and battery are optimally sized to minimize stress on the fuel cell while enabling a battery to meet power demand peaks by minimizing stresses on the fuel cell the system can be operated for at high operating efficiencies references contribution to world robotics technical report european robotics network asada et al a roadmap to us robotics from internet to robotics technical report editor computing community research association rubio urquia and dormida diagnosis of performance degradation phenomena in pem fuel cells international journal of hydrogen energy pp thangavelautham dubowsky on the catalytic degradation in fuel cell power supplies for mobile field sensors fuel cells vol no pp thangavelautham strawser dubowsky lithium hydride powered pem fuel cells for small mobile robotic missions ieee international conference on robots and automation michel o cyberbotics webotstm professional mobile robot simulation international journal of advanced robotic systems pp vol no kesner plante boston fabian dubowsky mobility and power feasibility of a microbot team system for extraterrestrial cave exploration proceedings of the ieee international conference robotics and automation rome italy april joh et a direct methanol fuel cell system to power a humanoid robot journal of power sources vol no pp barbir pem fuel cells theory and practice academic press view publication stats
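Sections VI and VII above size the hybrid supply by letting the fuel cell cover the steady part of the power-demand profile at a fixed cell voltage, letting the battery (chosen by its specific power) cover the peaks, and giving the remaining mass budget to the hydride fuel. The sketch below only illustrates that sizing arithmetic; every numeric argument is a placeholder to be filled from data sheets and is not a value taken from the paper.

```python
def size_hybrid(power_profile_w,
                fc_specific_power_w_per_kg,
                batt_specific_power_w_per_kg,
                fuel_energy_density_wh_per_kg,
                budget_kg,
                electronics_kg):
    """Rough sizing of a fuel-cell/battery hybrid for a sampled power profile.

    power_profile_w : sampled power demand over a mission [W].
    The remaining arguments are technology figures of merit (placeholders).
    Returns component masses [kg] and an estimated runtime [h].
    """
    steady_w = sum(power_profile_w) / len(power_profile_w)  # fuel cell covers this
    peak_w = max(power_profile_w)                           # battery covers the rest

    fc_mass = steady_w / fc_specific_power_w_per_kg
    batt_mass = max(peak_w - steady_w, 0.0) / batt_specific_power_w_per_kg
    fuel_mass = budget_kg - fc_mass - batt_mass - electronics_kg
    if fuel_mass <= 0:
        raise ValueError("mass budget too small for this power profile")

    # runtime is limited by stored fuel energy divided by the average draw
    runtime_h = fuel_mass * fuel_energy_density_wh_per_kg / steady_w
    return {"fc_kg": fc_mass, "batt_kg": batt_mass,
            "fuel_kg": fuel_mass, "runtime_h": runtime_h}
```

During idle stretches the steady fuel-cell output exceeds the instantaneous demand, which is what allows the surplus to trickle-charge the battery back to full before the next peak.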
3
using a hierarchy of domain specific languages in complex software systems design lugovsky vslougovski arxiv sep february abstract a new design methodology is introduced with some examples on building domain specific languages hierarchy on top of scheme introduction nized as true macros as they hide an access to the host language this situation looks like a paradox on the one hand industry uses the metaprogramming ideas and tools and it is easy to imagine how it would suffer without them on the other hand industry does not want to hear anything related to the metaprogramming it does not want people inventing new programming languages plenty of industry coders barely use only one language and it managers believe without any reason that they can not be taught to use more industry prefers to a wheel and to express any sort of complexity in the form of libraries for static steady languages for some strange reason learning complicated libraries for a language which barely fits problem domain needs is preferred to learning a small new language specifically designed for it in this paper i am trying to advocate the metaprogramming approach as the major design methodology for complex systems sounds like another one silver bullet invention there were many methodologies claiming to solve all possible problems of the mankind rup extreme programming etc why do we need another one simply because the previous approaches did not succeed they were too tied to particular programming technologies mostly oop varieties which are definitely not silver bullets metaprogramming programs that write programs that write programs too complicated a hackers technique which can not be applied to the real world problems this is exactly how it industry specialists think about metaprogramming and this is a completely wrong notion metaprogramming is the only known way to reduce the complexity significantly in some areas programs that write programs are accepted by the industry due to the enormous level of complexity of the corresponding handwritten code regular expressions lexers and parsers generators to name a few and code wizards and templates in popular integrated development environments are also widely used but this does not help at all in the overall methodology recognition the industry s most beloved and widely buzzworded language java does not have even such a rudimentary preprocessor as c does very few programmers have an idea on how to use the templates they just utilize stl without any understanding of the true source of the power even in the enlightened world of lisp programming the misunderstanding is surprisingly wide almost all of lisp dialects and scheme implementations have problems with macros not so many people are using them and even the current scheme standard contains only hygienic macros that can hardly be methodology is different it strongly encourages of domain specific languages is not very tricky the use of all possible programming an approach i will describe here is based on metaprogramming techniques it requires a so gies and to invent the impossible ones called core language on top of which we will build a hierarchy of our domain specific domain specific guages the core language should possess the following properties guages below i am providing an outline of the proposed methodology any problem domain can be best expressed using a language mathematical programming natural specially designed for it in most cases there should be one entity in a language for every entity in a problem domain for 
example if a problem domain is the recognition of syntax constructions in a characters stream the domain specific language should contain characters and characters sets as a primary entity and automata constructions for expressing syntax that is enough regular expressions language is designed it is hard to believe that somebody will ever invent anything better for this purpose than this most optimal dsl if a problem domain is already specified as an algebra we even do not have to design the dsl it will be this algebra itself galvanised with any underlying computational semantics this is the way sql was born if a problem domain is graphics linear algebra and stereometry should be used all the languages and data formats dedicated to contain subsets of this formal theories as it is stated in the object of a software architecture is to minimise the semantic distance between the system s specification and its core language for any problem it is convenient to have a language that best fits it there already exist specialized languages for some common problems but what to do if none is available the answer is trivial implement it implementation true macros that is we must have an access to a complete programming language preferably the same as a host language or a different one inside the macro definitions macros should be real programs which can do anything that the programs written in the host language can do macros are producing the code in the host language in the form of text or directly as an abstract syntax tree true runtime eval programs that are generated in the runtime should be evaluated this can be a different language than the host language or better the same one this should be a real programming language equivalent in its expressive power to the general purpose languages simplicity it is an extensible core and should not contain any unnecessary complexity that can be later added by a user who really needs it comprehensive and easy to use data types system if a type system is well suited for expressing any possible abstract syntax trees the language fits this requirement on top of the core language we have to build functionality that will be needed to implement programming languages it is lexing parsing intermediate languages that fit well computational models different from the model of the core language if the core language is imperative or an eager functional we will need a graph reduction engine to implement lazy functional dsls or a term unification engine to implement logical languages and a stack machine if we have to go to lower levels the core language enriched with this swiss army knife for programming languages development then becomes a major tool for any project new methodology the development process must fit in the following chain divide the problem into possibly using some object oriented design techniques or whatever fits better formalize each scheme example a good example of a practical core language is scheme with addition of common macros it uses as an ast and composition is very natural are good enough to represent any possible ast for example xml is naturally represented as sxml it provides a true runtime eval hosting the same language as in compile time there exist some practical and efficient scheme implementations which provide performance acceptable for most tasks good ffi and thus integration with legacy libraries implement the domain specific language after this formalization using the let us start with adding the functionality core language and other dsl 
with the described above to scheme first of all we will same semantics need parsing not all of our team members solve the problem using the best possible are fond of parentheses so we have to implelanguage ment many complicated syntaxes the most this way any project will grow into a tree natural way for a functional programming lanarchy of domain specific languages any guage is to implement a set of parsing combinaguage is a subset or a superset of another tors for building recursive descendant parsers guage in the hierarchy or may be mostly ll but it is not such a fixed limit as tion of several languages and the amount of lalr for automata generators coding for a new language if we already have a deep and comprehensive hierarchy is quite small a development team working within this methodology should consist of at least one specialist who maintains this hierarchy an architect who formalizes problems and a number of coders who specialize in particular problem domains they even may not be programmers at all they just have to know well their domains and operate them in terms that are as close as possible to the native problem domain terminology for example html designer will be happy operating tags for his templates that is why jsp custom tags are so popular mathematician will find a language modelled after the standard mathematical notation intuitive for this reason wolfram mathematica is so popular among game script writer will operate a language expressing characters their properties and action rules stating not programming this list can be continued infinitely of course we will use metaprogramming wherever possible all the parsers should be functions which consume a list of tokens characters as an input and return the result in the following form result anyresult or fail reason input to access the parsing result we will provide the following macros success r not eq caar r fail and if we are sure that we have some result we will use the following macro to extract it otherwise this will return a fail message result r cdar r the last definition looks surprisingly comin any case we can access the rest of the pact thanks to the pselect macro from this stream after the parsing pass stage the power of metaprogramming becomes rest r more and more obvious cdr r just as a reference we will show here the these macros could also be implemented as definition of a choice combinator functions but all the macros are available in the context of macro definitions while functions l are not let l almost all of the parsers should fail on the if success end of the input so the following safeguard macro will be extremely useful l parser p l if l fail empty p l and its nested version is obvious por pselect por we will skip the rest of the combinators now this game becomes more interesting definitions and just show what we gained after here is a very handy macro that nests a all for example now to define a floating point quence of applications into the form of m number recognizer we can use this definition m m px pn define pselect m por if let car pmany pdigit px cdr or p pcharx m pselect m px pmany pdigit sequence parsing combinator with two guments can be declared as follows it looks like bnf but still too schemish this is already a domain specific language on l top of scheme but it does not conform to the let l perfectionist requirement however we can use if success this still not perfect parsing engine to imple let rest ment an intermediate regular expressions lan if success guage as a macro omitting the 
definitions we cons cons result will show the previous recognizer implemented append in a new way result define result regexp rest cons list fail car l pdigit pdigit and it will be immediately turned into the sequence parsing combinator with an arbitrary this new domain specific language can be number of arguments used in many ways for example we can build pselect a simple infix for constants v defparsers letrec epr let body regexp num lst aprs epr regexp body scm psym epr list body scm psym epr list body scm psym epr list body scm psym epr list body car result epr v only languages with a computational model which is close to the model of scheme eager dynamically typed functional languages with imperative features but any possible languages providing small intermediate dsls which simulate alternative computational models for those who need very lowlevel power it is possible to produce an intermediate code in c language for example the bigloo scheme implementation allows to include c code when compiling through c backend for implementing complicated runtime models it is easy to produce an intermediate dsl on top of scheme and then use both scheme and forth metaprogramming powers alternatives to make the picture complete it is necesand then wherever we want to calculate a sary to mention other possible choices for the numerical constant in the compilation time we core language the popular programming may use the macro language could become such a core language relatively easily it has a turingthis language does not look like scheme complete macro system unfortunately featurany more and we can go even further ing the language different from the host lanplementing a pascal or rlisp language guage so only one stage preprocessing is poson top of scheme using just the same regexp sible it lacks a good type system but it could macro to describe both a lexer and a parser be simulated on top of the existing lowlevel feaand then to compile the resulting code to the tures there exist some implementations of the recursive descendant parsing combinators underlying scheme for boost spirit library implementation of the functional programming pasqualish boost lambda and even lisp compilers on top of the template system the runtime function fac x evaluation is available in different ways using begin pluggable scripting languages other than if x then using the interpreter an interesting x fac x approach is described in else another choice is forth it is a powerful end metalanguage but the core language remains too lowlevel and unsafe forth is often the only no more parenthesis that frighten choice available for the embedded systems with programmers so much now even pascal limited resources it is worth mentioning modern experimengrammers can use scheme the code samples above demonstrate some tal extensions for strictly typed functional lanof the techniques available in this approach guages template haskell and metaocaml the complete implementation can be both of them conform well to all of loaded from it is possible to produce not the core language requirements objective caml also provides metaprogramming too ming using a sophisticated preprocessing engine and ocaml is quite good for conclusion implementing interpreters using the based technique some examples can be found the idea of metaprogramming is not something in esoteric metaprogramming is used widely by no doubt that common lisp would also commercial programmers they just do not rebe a very good platform since it shares almost alize it the methodology 
proposed in this paall the features with scheme with exception of per is an attempt of uncovering all the hidsimplicity the killing feature of common lisp den power of the metaprogramming techniques is advanced runtime compilation in some of the available major implementations cmu cl and its the scheme example presented above is descendant sbcl are good examples and part of the working project which already the defmacro is guaranteed to be working in all proved the supremacy of this approach a subthe implementations available which is a great set of the domain specific languages hierarchy advantage over scheme designed for the www data acquiring project for relatively small projects tcl would is shown on the fig be a good choice its computational model the subject discussed requires future reis based on rewrites and primary data search and practical approbation whose final tures are just the strings of text which result may be a completely formalized matheders an extremely powerful metaprogramming matically strict methodology description and a tool javascript language is also based on core language which will best fit this methodthe rewrites semantics so it could be used for ology references diomidis spinellis reliable software implementation using domain specific languages in and kafka editors proceedings esrel the tenth european conference on safety and reliability pages rotterdam september esra vdi tum a balkema draft http the boost project http the bigloo practical scheme implementation http lugovsky dslengine project home http graham the language http graham the python paradox http lugovsky publications list http cmu common lisp http steel bank common lisp http tcl programming language resource http metaocaml project home http sheard jones template metaprogramming for haskell http tempo project home http cint project home http core language parsing combinators stack machine lexer graph machine unification machine templates language parser generator regular expressins data aquision regexps rule engine sql templates figure a sample dsls hierarchy subset for the web crawler project
2
acd term rewriting arxiv aug gregory j duck peter stuckey and sebastian brand nicta victoria laboratory department of computer science software engineering university of melbourne australia abstract in this paper we introduce associative commutative distributive term rewriting acdtr a rewriting language for rewriting logical formulae acdtr extends ac term rewriting by adding distribution of conjunction over other operators conjunction is vital for expressive term rewriting systems since it allows us to require that multiple conditions hold for a term rewriting rule to be used acdtr uses the notion of a conjunctive context which is the conjunction of constraints that must hold in the context of a term to enable the programmer to write very expressive and targeted rewriting rules acdtr can be seen as a general logic programming language that extends constraint handling rules and ac term rewriting in this paper we define the semantics of acdtr and describe our prototype implementation introduction term rewriting is a powerful instrument to specify computational processes it is the basis of functional languages it is used to define the semantics of languages and it is applied in automated theorem proving to name only a few application areas one difficulty faced by users of term rewriting systems is that term rewrite rules are local that is the term to be rewritten occurs in a single place this means in order to write precise rewrite rules we need to gather all relevant information in a single place example imagine we wish to program an overloaded ordering relation for integers variables real variables and pair variables in order to write this the type of the variable must be encoded in the as in int x int y intleq int x int y real x real y realleq real x real y pair pair in a more standard language the type information for variables and other information would be kept separate and looked up when required operator precedences used throughout this paper are binds tighter than and all other operators bind tighter than term rewriting systems such as constraint handling rules chrs and associative commutative ac term rewriting allow look up to be managed straightforwardly for a single conjunction example in ac term rewriting the above example could be expressed as int x int y x y int x int y intleq x y real x real y x y real x real y realleq x y pair x pair y x y pair x pair y where each rule replaces the x y by an appropriate specialised version in the conjunction of constraints the associativity and commutativity of is used to easily collect the required type information from a conjunction one difficulty remains with both ac term rewriting and chrs the look up is restricted to be over a single large conjunction example given the term int int pair x pair y x y then after rewriting x y to we could not rewrite since the types for appear in a different level in order to push the type information inside the disjunction we need to distribute conjunction over disjunction simply adding distribution rules like a b c a b a c a b a c a b c does not solve the problem rule creates two copies of term a which increases the size of the term being rewritten adding rule to counter this effect results in a rewriting system conjunctive context we address the size explosion problem due to distributivity rewrite rules in a similar way to how commutativity is dealt with by handling distributivity on the language level we restrict ourselves to dealing with expanding distributivity of conjunction over any other operator 
and we account for idempotence of thus we are concerned with distribution rules of the form p f qn p f p p qn this means that conjunction is distributive over any function f in presence of a redundant copy of p p p f qn p f p p qn we use idempotence to simplify the rhs and derive let us introduce the conjunctive context of a term and its use in rewrite rules informally for now consider a term t and the conjunction c t modulo idempotence of that would result from exhaustive application of rule to the superterm of t by the conjunctive context of t we mean the conjunction example the conjunctive context of the boxed occurrence of x in the term x y x u v w is x u w we allow a rewrite rule p t to refer to the conjunctive context c of the rule head p we use the following notation c p this facility provides without the undesirable effects of rule on the term size example we can express that an equality can be used anywhere in its scope by viewing the equality as a conjunctive context x a x a using this rule on the term of example results in x y u v w without dissolving the disjunction motivation and applications constraint model simplification our concrete motivation behind associative commutative distributive term rewriting acdtr is constraint model mapping as part of the project a key aim of is the mapping of solver independent models to efficient solver dependent models we see acdtr as the basis for writing these mappings since models are not flat conjunctions of constraints we need to go beyond ac term rewriting or chrs example consider the following simple constraint model inspired by the social golfers problem for two groups and playing in the same week there can be no overlap in players maxoverlap the aim is to maximise the number of times the overlap between two groups is less than in other words minimise the number of times two players play together in a group constraint maxoverlap w maximise x holds maxoverlap consider the following acdtr program for optimising this constraint model maxoverlap a b maxoverlap a b true holds true holds false the first rule removes redundant maxoverlap constraints the next two rules implement partial evaluation of the holds auxiliary function which coerces a boolean to an integer by representing the constraint model as a giant term we can optimise the model by applying the acdtr program for example consider the trivial case with one week and two groups and the model becomes maxoverlap maximise holds maxoverlap the subterm holds maxoverlap simplifies to using the conjunctive context maxoverlap it is clear that pure chrs are insufficient for constraint model mapping for at least two reasons namely a constraint model example is typically not a flattened conjunction some rules rewrite functions rules and rewriting function holds which is outside the scope of chrs which rewrite constraints only global definitions as we have seen conjunctive context matching provides a natural mechanism for making global information available in a constraint model structured data and constraint definitions are typically global on the top level while access to the data and the use of a defined constraint is local the type information from example another example is partial evaluation example the solver independent modelling language has support for arrays take a model having an array a of given values it could be represented as the term array a deeper inside the model accesses to the array a occur such as in the constraint x y lookup a the following rules expand such an array 
lookup array a array lookup a index list element array index list element x list element n n list element xs n referring to the respective array of the lookup expression via its conjunctive context allows us to ignore the direct context of the lookup the concrete constraint or expression in which it occurs propagation rules when processing a logical formula it is often useful to be able to specify that a new formula q can be derived from an existing formula p without consuming p in basic term rewriting the obvious rule p p q causes trivial this issue is recognised in chrs which provide support for inference or propagation rules we account for this fact and use rules of the form p q to express such circumstances example the following is the classic chr leq program reimplemented for acd term rewriting we omit the basic rules for logical connectives leq x x true leq x y leq y x x y leq x y leq x y true leq x y leq y z leq x z reflexivity antisymmetry idempotence transitivity these rules are almost the same as the chr version with the exception of the second and third rule antisymmetry and idempotence which generalise its original by using conjunctive context matching propagation rules are also used for adding redundant information during model mapping the rest of the paper is organised as follows section covers the standard syntax and notation of term rewriting section defines the declarative and operational semantics of acdtr section describes a prototype implementation of acdtr as part of the project section compares acdtr with related languages finally in section we conclude preliminaries in this section we briefly introduce the notation and terminology used in this paper much of this is borrowed from term rewriting we use t x to represent the set of all terms constructed from a set of function symbols and set of variables x assumed to be countably infinite we use n to represent the set of function symbols of arity a position is a string sequence of integers that uniquely determines a subterm of a term t where represents the empty string we define function t which returns the subterm of t at position p as t t f ti tn ti we similarly define a function t s p which replaces the subterm of t at position p with term we define the set pos t to represent the set of all positions of subterms in t an identity is a pair s t t x t x which is usually written as s given a set of identities e we define to be the set of identities closed under the axioms of equational logic symmetry transitivity etc we define the congruence class t s t x t as the set of terms equal to t with respect to finally we define function vars t to return the set of variables in t syntax and semantics the syntax of acdtr closely resembles that of chrs there are three types of rules of the following form simplification propagation simpagation r h g b r h g b r c h g b where r is a rule identifier and head h conjunctive context c guard g and body b are arbitrary terms the rule identifier is assumed to uniquely determine the rule a program p is a set of rules we assume that vars g vars h or vars g vars h vars c for simpagation rules the rule identifier can be omitted if g true then the guard can be omitted we present the declarative semantics of acdtr based on equational logic first we define the set of operators that acdtr treats specially definition operators we define the set of associate commutative operators as ac the set ac must satisfy ac and ac for our examples we assume that ac we also treat the operator as distributive as 
explained below acdtr supports a simple form of guards definition guards a guard is a term we denote the set of all true guards as g a guard g is said to hold iff g we assume that true g and false we can now define the declarative semantics for acdtr in order to do so we employ a special binary operator where to explicitly attach a conjunctive context to a term intuitively the meaning of t where c is equivalent to that of t provided c is true otherwise the meaning of t where c is unconstrained for boolean expressions it is useful to interpret where as conjunction therefore identity below becomes equivalent to the advantage of distinguishing where and is that we are not forced to extend the definition of to arbitrary functions we denote by b the following set of identities a b c a b c t t where true a b a where b b t where t where where f ai an where w f ai where w an where w for all ac functions f n and i n definition declarative semantics for acdtr the declarative semantics for an acdtr program p represented as a multiset of rules is given by the function jk defined as follows jp k r k r p guard r g b jh g bk b h h b jc h g bk b c h h where c b where c jh g bk b h h h b where function guard r returns the guard of a rule the function jk maps acdtr rules to identities between the head and the body terms where variables are existentially note that there is a new identity for each possible binding of guard r that holds in a propagation rule is equivalent to a simplification rule that re introduces the head h in conjunction with the body b in the rhs this is analogous to propagation rules under chrs a simpagation rule is equivalent to a simplification rule provided the conjunctive context is satisfied the rules b from definition contain identities for and and distributing a conjunctive context in terms of the where operator the set b also contains identities and for the properties of the ac operators example consider the following acdtr rule and the corresponding identity jx y x y k y where x y x where x y under this identity and using the rules in b we can show that f a a b f b a b as follows f a a b f a where a b a b f a where a b where a b a b f b where a b where a b a b f b where a b a b f b a b operational semantics in this section we describe the operational semantics of acdtr it is based on the theoretical operational semantics of chrs this includes support for identifiers and propagation histories and conjunctive context matching for simpagation rules all other variables are implicitly universally quantified where the universal quantifiers appear outside the existential ones propagation history the chr concept of a propagation history which prevents trivial of propagation rules needs to be generalised over arbitrary terms for acdtr a propagation history is essentially a record of all propagation rule applications which is checked to ensure a propagation rule is not applied twice to the same sub term in chrs each constraint is associated with a unique identifier if multiple copies of the same constraint appear in the chr store then each copy is assigned a different identifier we extend the notion of identifiers to arbitrary terms definition identifiers an identifier is an integer associated with each sub term we use the notation t i to indicate that term t has been associated with identifier i a term t is annotated if t and all subterms of t are associated with an identifier we also define function ids t to return the set of identifiers in t and term t to return the version of t for 
example t f a b is an annotated term where ids t and term t f a b identifiers are considered separate from the term we could be more precise by separating the two explicitly maintain a map between pos t and the identifiers for t we do not use this approach for space reasons we extend and overload all of the standard operations over terms from section to annotated terms in the obvious manner for example the subterm relation t over annotated terms returns the annotated term at position the exception are elements of the congruence class t formed by the ac relation which we assume satisfies the following constraints a i b j b j a i a i b j c k a i b j c k we have neglected to mention the identifiers over ac operators these identifiers will be ignored later so we leave them unconstrained a propagation history is a set of entries defined as follows definition entries a propagation history entry is of the form r e where r is a propagation rule identifier and e is a string of identifiers we define function entry r t to return the propagation history entry of rule r for annotated term t as follows entry r t r entry t entry entry entry entry f tn i i entry entry tn ac otherwise this definition means that propagation history entries are unaffected by associativity but are effected by commutativity example consider the annotated term t f a b we have that t t and t f b a t although t and t belong to t they have different propagation history entries entry r t r while entry r t r when a sub term is rewritten into another the new term is assigned a set of new unique identifiers we define the auxiliary function annotate p t ta to map a set of identifiers p and term t to an annotated term ta such that ids ta p and ta t these conditions ensure that all identifiers are new and unique when a rule is applied the propagation history must be updated accordingly to reflect which terms are copied from the matching for example the rule f x g x x essentially clones the term matching x the identifiers however are not cloned if a term is cloned we expect that both copies will inherit the propagation history of the original likewise terms can be merged g x x f x merges two instances of the term matching x in this case the propagation histories of the copies are also merged to achieve this we duplicate entries in the propagation history for each occurrence of a variable in the body that also appeared in the head definition updating history define function update h ha b ba where h and b are terms ha and ba are annotated terms and and are propagation histories is a minimal propagation history satisfying the following conditions pos h such that v x where x is the set of variables and pos b such that v then define identifier renaming such that ha and ba are identical annotated terms then if e we have that e example consider rewriting the term ha f a b with a propagation history of r using the rule f x g x x the resulting term is ba g a a and the new propagation history is r r r conjunctive context according to the declarative semantics a term t with conjunctive context c is represented as t where c operationally we will never explicitly build a term containing a where clause instead we use the following function to compute the conjunctive context of a subterm on demand definition conjunctive context given an annotated term t and a position p pos t we define function cc t p to return the conjunctive context at position p as follows cc t true cc a b b cc a p cc a b a cc b p cc f ti tn ip cc ti p f states and transitions the 
operational semantics are defined as a set of transitions on execution states definition execution states an execution state is a tuple of the form hg t v pi where g is a term the goal t is the propagation history v is the set of variables appearing in the initial goal and p is a set of identifiers we also define initial and final states as follows definition initial and final states given an initial goal g for program p the initial state of g is hga vars g ids ga i where ga annotate g a final state is a state where no more rules are applicable to the goal we can now define the operational semantics of acdtr as follows definition operational semantics v i v i simplify there exists a renamed rule from p h g b such that there exists a matching substitution and a term such that pos h g g ba annotate b then ba p ids and update h b ba propagate there exists a renamed rule from p r h g b such that there exists a matching substitution and a term such that pos h g g entry r ba annotate b then ba p update h b ba entry r and ids simpagate there exists a renamed rule from p c h g b such that there exists a matching substitution and a term such that h leq leq leq h leq leq leq leq t i h leq leq leq t i y h leq leq leq f t i y h leq leq f t i y h leq f t i y h f t i fig example derivation for the leq program pos h c d cc p g g ba annotate b then ba p update h b ba and ids example consider the leq program from example with the goal leq x y leq y z x z figure shows one possible derivation of this goal to the final state representing f alse for brevity we omit the v and p fields and represent identifiers as subscripts t i ti also we substitute t transitivity we can state a soundness result for acdtr theorem soundness if v pi t v p i with respect to a program p then jp k this means that for all algebras a that satisfy jp k and are equivalent for some assignment of the fresh variables in implementation we have implemented a prototype version of acdtr as part of the mapping language of the project called cadmium in this section we give an overview of the implementation details in particular we will focus on the implementation of conjunctive context matching which is the main contribution of this paper cadmium constructs normalised terms from the bottom up here a normalised term is one that can not be reduced further by an application of a rule given a goal f tn we first must recursively normalise all of tn to say sn and then attempt to find a rule that can be applied to the of f sn this is the standard execution algorithm used by many trss implementations this approach of normalising terms bottom up is complicated by the consideration of conjunctive context matching this is because the conjunctive context of the current term appears higher up in the overall goal term thus conjunctive context must be passed top down yet we are normalising bottom up this means there is no guarantee that the conjunctive context is normalised example consider the following acdtr program that uses conjunctive context matching x v x var x nonvar v one x x not one f alse consider the goal not one a a which we expect should be normalised to f alse assume that the not one a is selected for normalisation first the conjunctive context for not one a and its subterm a is one a no rule is applicable so not one a is not reduced next the subterm one a is reduced the second rule will fire resulting in the new term a now the conjunctive context for the first term not one a has changed to a so we expect that a should be rewritten to the number 
however not one a has already being considered for normalisation the current cadmium prototype solves this problem by terms when and if the conjunctive context changes for example when the conjunctive context one a changes to a the term not one x will be renormalised to not one by the first rule the general execution algorithm for cadmium is shown in figure function normalise takes a term t a substitution a conjunctive context cc and a boolean value ch which keeps track of when the conjunctive context of the current subterm has changed if ch true then we can assume the substitution maps variables to normalised terms for the initial goal we assume is empty otherwise if we are executing a body of a rule then is the matching substitution operationally normalise splits into three cases depending on what t is if t is a variable and the conjunctive context has changed ch true then t is no longer guaranteed to be normalised in this case we return the result of renormalising t with respect to cc otherwise if ch f alse we simply return t which must be already normalised if t is a conjunction we repeatedly call normalise on each conjunct with the other added to the conjunctive context this is repeated until a fixed point further normalisation does not result in either conjunct changing is reached and then return the result of apply rule on the which we will discuss below this fixed point calculation accounts for the case where the conjunctive context of a term changes as shown in example otherwise if t is any other term of the form f tn construct the new term t by normalising each argument finally we return the result of apply rule applied to t the function call apply rule t cc will attempt to apply a rule to normalised term t with respect to conjunctive context cc if a matching rule is found then normalise t cc ch if is var t if ch return normalise t cc f alse else return t else if t do normalise cc true normalise cc true while return apply rule cc else t f tn t f normalise cc ch normalise tn cc ch return apply rule t cc fig pseudo code of the cadmium execution algorithm the result of normalise b cc f alse is returned where b is the renamed rule body and is the matching substitution otherwise t is simply returned related work acdtr is closely related to both trs and chrs and in this section we compare the three languages ac term rewriting systems the problem of dealing with associative commutative operators in trs is well studied a popular solution is to perform the rewriting modulo some permutation of the ac operators although this complicates the matching algorithm the problem of trivial by continually rewriting with respect to commutativity is solved acdtr subsumes actrs associative commutative trs in that we have introduced distributivity via simpagation rules and added some concepts such as identifiers and propagation rules given an actrs program we can map it to an equivalent acdtr program by interpreting each actrs rule h b as the acdtr rule h b we can now state the theorem relating actrs and acdtr theorem let p be an actrs program and t a ground term then t s under p iff hta ids ta i hsa pi under p where ta annotate t for some p and term sa chrs and acdtr has been deliberately designed to be an extension of chrs several chr concepts propagation rules have been adapted there are differences between chrs and acdtr the main difference is that acdtr does not have a or underlying solver acdtr is not a constraint programming language however it is possible to encode solvers directly as rules the 
simple leq solver from example another important difference is that chrs is based on predicate logic where there exists a distinction between predicate symbols the names of the constraints and functions used to construct terms acdtr is based on equational logic between terms hence there is no distinction between predicates and functions a predicate is just a boolean function to overcome this we assume the existence of a set pred which contains the set of function symbols that are boolean functions we assume that ac pred the mapping between a chr program and an acdtr program is simply p p x true x however we assume program p is restricted as follows rules have no guards apart from implicit equality guards and the only constraint is true and the initial goal g is also restricted g must be of the form gn for n each gi is of the form fi am for m and fi pred for all p pos aj j m we have that if aj g bq then g q ac and g q pred these conditions disallow predicate symbols from appearing as arguments in chr constraints theorem let p be a chr program and g an initial goal both satisfying v the above conditions then hg true s true t ii for some t i and v vars g under the theoretical operational semantics for chrs iff hga v ids ga i hsa t v pi for some t p under acdtr where term sa and s sn in for some identifiers in we believe that theorem could be extended to include chr programs that extend an underlying solver provided the rules for handling tell constraints are added to the acdtr program for example we can combine rules for rational tree unification with the leq program from example to get a program equivalent to the traditional leq program under chrs acdtr generalises chrs by allowing other operators besides conjunction inside the head or body of rules one such extension of chrs has been studied before namely which allows disjunction in the body unlike acdtr there is one slight difference in syntax chrs use to represent conjunction whereas acdtr uses which manipulates disjunction syntactically typically finds solutions using backtracking search one notable implementation of is which has an operational semantics described as an tree rewriting system a limited form of conjunctive context matching is used similar to that used by acdtr based on the knowledge that conjunction distributes over disjunction acdtr generalises this by distributing over all functions future work and conclusions we have presented a powerful new programming language acdtr that naturally extends both ac term rewriting and chrs the main contribution is the ability to match a rule against the conjunctive context of a sub term taking advantage of the distributive property of conjunction over all possible functions we have shown this is a natural way of expressing some problems and by building the distributive property into the matching algorithm we avoid issues that arise from naively implementing distribution as rewrite rules we intend that acdtr will become the theoretical basis for the cadmium constraint mapping language as part of the project work on acdtr and cadmium is ongoing and there is a wide scope for future work such as confluence termination and issues references abdennadher operational semantics and confluence of constraint propagation rules in gert smolka editor proceedings of the third international conference on principles and practice of constraint programming lncs pages abdennadher and a flexible query language in international conference on flexible query answering systems number in lncs pages roskilde 
denmark baader and nipkow term rewriting and all that cambridge univ press duck stuckey garcia de la banda and holzbaur the refined operational semantics of constraint handling rules in demoen and lifschitz editors proceedings of the international conference on logic programming lncs pages september theory and practice of constraint handling rules journal of logic programming menezes vitorino and aurelio a high performance execution engine in second workshop on constraint handling rules sitges spain stuckey garcia de la banda maher marriott slaney somogyi wallace and walsh the project mapping solver independent models to efficient solutions in gabrielli and gupta editors proceedings of the international conference on logic programming number in lncs pages a examples further motivating examples example conjunctive normal form one of the roles of mapping models is to convert a model written in an expressive language into a restricted language which is easy to solve many standard approaches to solving propositional formulae require that the formulae are in conjunctive normal form cnf disjunction is distributive over which can be used to establish cnf in a direct way using the oriented rule p q r p q p r cnf conversion based on this rule can exponentially increase the size of the formula this undesirable circumstance means that in practice cnf conversions are preferred that replace subformulae by new propositional atoms which increases the formula size at most linearly let us formulate this approach in rewrite rules to keep this example simple we assume that the subformula p q r occurs in a positive context for example by a preprocessing into negation normal form we replace q r by a new atom s defined by the logical implication s q r in rewrite rule form we have p q r p s q r unit resolution and unit subsumption can be formalised in rewrite rules here are two versions one using conjunctive context and a regular one with conj context regular p p true p p p p q p p false p false p q p q we furthermore assume rules eliminating the logical constants true and false from conjunctions and disjunctions in the obvious way let us contrast the two rule sets for the formula the following is a terminating rewrite history with conj context regular a b c d d a b c d d a b c true d a b true d a s b c d d a s b true d a b d a s b d to obtain the simple conjunct a b using the regular rule format a rule expressing binary resolution from p s q follows p q would be required however such a rule is undesirable as it would create arbitrary binary resolvents increasing formula size moreover the superfluous atom s remains in the formula example type remapping one of the main model mappings we are interested in expressing is where the type of a variable is changed from a high level type easy for modelling to a low level type easy to solve a prime example of this is mapping a set variable x ranging over finite subsets of some fixed set s to an array of variables indexed by so for variable x we have e x e for this example we use the more concrete modelling syntax t x indicates variable x has type t the types we are interested are l u an integers in the range l to u set of s a set ranging over elements in s and array i of e an array indexed by set i of elements of type we use f orall and sum looping constructs which iterate over sets this is expressed in acdtr as follows set of s x array s of map x map x x array s of x card x sum e in s x e array s of x z array s of z array s of y x y f orall e in s z e x e y e array s of 
x z array s of z array s of y x y f orall e in s z e x e y e array s of x x f orall e in s x e card t c card t c c c c c c c c c c c c c c c c c t t maxoverlap x y c card x y c typec vsubs card cap cup emptyset card cupl cupr capl capr eql eqr leql leqr cc maxo the constructor adds some local conjunctive context to an arbitrary term like where and the last rules bar move this context outwards to the nearest predicate scope the last rule defines the maxoverlap predicate they are used to introduce new variables z and their type and the constraints upon then as an example consider the following derivation set of n x set of n y maxoverlap x y set of n x set of n y card x y array n of map x set of n y card x y array n of map x set of n y card y array n of map x array n of y map y y card y array n of map x array n of y map y y card y array n of map x array n of y map y y card z array n of z f orall e in n z e e y e array n of map x array n of y map y y card z array n of z f orall e in n z e e y e array n of map x array n of y map y y card z array n of z f orall e in n z e e y e the final goal is a flat conjunction of constraints and types it can be similarly translated into a conjunction of constraints that can be sent to a finite domain solver by unrolling f orall and replacing the arrays by sequences of n variables example rational tree unification we can directly express the rational tree unification algorithm of as an acd term rewriting system f sn f tn sn tn f sn g tm f alse split f ail the split rule must be defined for each constructor f and the fail rule for each pair of different constructors f and the remaining rules are x x var x true t x var x nonvar t x t x s x t var x nonvar s size s size t s t x y x var x var y x y y id f lip tsubs vsubs where size t is the size of the term t in terms of number of symbols and is syntactic identity even though the goals are a single conjunction of constraints acd is used for succinctly expressing the vsubs rule which replaces one variable by another in any other position colmerauer prolog and infinite trees logic programming apic studies in data processing academic press the following derivation illustrates the unification process in action the underlined part show the matching elements lip x y f f x x y f f f y x y x f f x y f f f y x y y f f x y f f f y x y y f f y y f f f y x y y f f y f f y f f f y x y y f f y f y f f y x y y f f y y f y x y f y f f y y f y x y y f y y f y x y y f y f y f y x y y f y f y f y x y y f y true expanded examples the purpose of this section is to show some example derivations under the operational semantics of acdtr rather than descriptions we allow for some shorthand namely t i ti identifiers and conjunctive context in this section we explain parts of the derivation from example in more detail the initial goal is x y f f x x y f f f y which corresponds to the initial state h f f f f f x y i the initial state is a quadruple contained an annotated version of the goal an empty propagation history the set of variables in the goal and a set of used identifiers the first derivation step is a simplify transition with the f lip rule h f f f f f x y i h f f f f f x y i we have replaced the annotated subterm f f with f f flipped the operands to the equality and reannotated the new term with fresh identifiers these were also added to the set of used identifiers since the propagation history is empty it remains unchanged the next derivation step is a simpagate transition with the vsubs rule h f f f f f x y i h f f f f f x y i 
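The simpagate step shown above is driven by the conjunctive context of the rewritten subterm, which is computed on demand by the function cc defined earlier rather than by ever building a where-term explicitly. The following is a minimal Python sketch of that computation; the nested-tuple term representation, the constructor name CONJ and the encoding of the example goal are our own assumptions for illustration, not part of the Cadmium implementation.

# Terms are nested tuples: ('var', name) for variables, otherwise (functor, (arg1, ..., argn)).
# Conjunction uses the assumed functor name CONJ; a position is a tuple of 1-based indices.

CONJ = 'and'
TRUE = ('true', ())

def cc(t, p):
    # cc(t, epsilon) = true
    if p == ():
        return TRUE
    i, rest = p[0], p[1:]
    f, args = t
    if f == CONJ:
        # cc(a /\ b, 1.p) = b /\ cc(a, p)   and   cc(a /\ b, 2.p) = a /\ cc(b, p)
        other = args[1] if i == 1 else args[0]
        return (CONJ, (other, cc(args[i - 1], rest)))
    # cc(f(t1, ..., tn), i.p) = cc(ti, p) for every other functor f
    return cc(args[i - 1], rest)

# The goal not(one(A)) /\ A = 1 from the normalisation example, with '=' as a binary functor:
goal = (CONJ, (('not', (('one', (('var', 'A'),)),)),
               ('=', (('var', 'A'), ('1', ())))))

# The conjunctive context of the subterm one(A), at position (1, 1), is (A = 1) /\ true.
print(cc(goal, (1, 1)))

As in the definition, the context collects the sibling conjuncts encountered on the path from the root down to the subterm, and everything below a non-conjunctive functor contributes nothing further.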
the conjunctive context for subterm is cc ga p f f f true where ga is the current goal and p is the position of the first conjunct matches the conjunctive context of the vsubs rule thus subterm is replaced with identifier is added to the list of used identifiers execution proceeds until the final state h x y y f y true x y pi is reached for some annotation of the goal and some set of identifiers this is a final state because no more rules are applicable to it ac matching and propagation histories consider the propagation rule from the leq program trans leq x y leq y z x y y z leq x z and the initial state hleq leq a b i we can apply propagate directly without permuting the conjunction to arrive at the state h leq leq leq trans i the propagation history prevents the rule from firing on the same terms again however we can permute the terms to find a new matching namely we can permute the annotated goal which we call ga leq leq leq to leq leq leq the latter is an element of ga ac and the identifiers have been preserved in the correct way the entry trans is not in the propagation history so we can apply propagate again to arrive at h leq leq leq leq trans trans i now the propagation history prevents the rule trans being applied to the first two leq constraints the guard also prevents the trans rule firing on either of the two new thus we have reached a final state updating propagation histories consider a modified version of the previous example now with two rules x x x trans leq x y leq y z leq x z the first rule enforces idempotence of conjunction consider the initial state hleq leq leq a i we apply the trans rule to the first two copies of the leq constraint with identifiers and hleq leq leq leq trans a i next we apply idempotence to leq constraints with identifiers and hleq leq leq trans trans a i an extra entry trans is added to the propagation history in order to satisfy the requirements of definition this is because we have replaced the annotated constraint leq with the newly annotated term leq which defines an identifier renaming since e trans is an element of the propagation history we have that e trans must also be an element and hence the history is expanded without the guard both acdtr and chrs are not guaranteed to terminate
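Both appendix examples above turn on the propagation-history check: a propagation rule may fire on a given ordered combination of identifiers only once, so re-bracketing a conjunction (associativity) never enables a new application, while reordering its conjuncts (commutativity) can. The following is a minimal Python sketch of this bookkeeping; the function names, the concrete identifiers and the string encoding of the leq constraints are ours, chosen only for illustration.

# A matched head is passed as the flattened sequence of its conjuncts, each a pair
# (identifier, constraint).  Flattening first makes an entry insensitive to how the
# conjunction was bracketed, while the left-to-right order of the conjuncts still matters.

def entry(rule, matched):
    # Propagation-history entry: the rule identifier followed by the matched identifiers.
    return (rule,) + tuple(ident for ident, _ in matched)

def try_propagate(rule, matched, history):
    # Fire the propagation rule only if its entry is not already recorded.
    e = entry(rule, matched)
    if e in history:
        return False
    history.add(e)
    return True

history = set()
goal = [(1, 'leq(A,B)'), (2, 'leq(B,A)')]             # identifiers 1 and 2 are illustrative

print(try_propagate('trans', goal, history))           # True : entry (trans, 1, 2) is recorded
print(try_propagate('trans', goal, history))           # False: the same entry blocks refiring
print(try_propagate('trans', list(reversed(goal)), history))  # True : the commuted matching gives (trans, 2, 1)

As in the example above, it is the guard of the rule, not the history, that prevents trans from firing on the constraints it produces itself.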