BETTI NUMBERS OF CERTAIN SUMS OF IDEALS

JOYDIP SAHA, INDRANATH SENGUPTA, AND GAURAB TRIPATHI

Abstract. In this paper we compute the Betti numbers of ideals of the form $I_1(XY)+J$, where $J$ is an ideal generated by the $2\times 2$ minors of a matrix consisting of two rows.

1. Introduction

Let $K$ be a field and let $x_{ij}$ be indeterminates; let $R=K[x_{ij}]$ denote the polynomial algebra over $K$. Let $X$ denote a matrix whose entries belong to the ideal generated by the $x_{ij}$ and let $Y$ be a column matrix. Let $I_1(XY)$ denote the ideal generated by the entries of the product matrix $XY$, and consider ideals generated by the $2\times 2$ minors of a two-row matrix of indeterminates. Primality, primary decomposition and Betti numbers of ideals of this form have been studied with the help of Gröbner bases. Ideals of this form are particularly interesting because they occur in several geometric considerations, like linkage and generic residual intersections of polynomial ideals, especially in the context of syzygies. The ideal $I_1(XY)$ has been resolved when $X$ is a generic matrix and $Y$ is a generic column matrix, and certain properties of ideals of this form have been proved when $X$ is a generic symmetric matrix and $Y$ is either generic or generic alternating.

We say that two ideals $I$ and $J$ intersect transversally if $I\cap J=IJ$. Suppose that $F$ resolves $R/I$ minimally and that $G$ resolves $R/J$ minimally. It is interesting to note that if $I$ and $J$ intersect transversally, then the tensor product complex $F\otimes G$ resolves $R/(I+J)$ minimally (see the homological lemmas in Section 4). It is therefore useful to know when two ideals intersect transversally, especially when one is trying to compute minimal free resolutions and Betti numbers of ideals of the form $I+J$ by iterated techniques.

Key words and phrases: Gröbner basis, Betti numbers, transversal intersection, mapping cone. The first author thanks UGC for a Senior Research Fellowship. The second author is the corresponding author and is supported by a research project sponsored by SERB, Government of India. The third author thanks CSIR for a Senior Research Fellowship.

Notation and main theorem. Let $X=(x_{ij})$ be the generic $n\times n$ matrix, or the generic symmetric matrix in which $x_{ij}=x_{ji}$. For $i<j$ let $E_{ij}$ denote the two-row matrix formed by the $i$-th and $j$-th rows of $X$; in the generic case the rows are $(x_{i1},\dots,x_{in})$ and $(x_{j1},\dots,x_{jn})$, and in the symmetric case they are read off from $X$ as $(x_{ii},x_{ij},\dots,x_{in})$ and $(x_{ij},x_{jj},\dots,x_{jn})$. Let $G_{ij}$ denote the set of $2\times 2$ minors of $E_{ij}$ and let $I(E_{ij})$ denote the ideal generated by $G_{ij}$. Our aim in this paper is to prove the following theorem.

Theorem 1.1. Let $X=(x_{ij})$ be either a generic or a generic symmetric matrix of order $n$, and let $E_{ij}$ and $G_{ij}$ be as above. Then the total Betti numbers of the ideals $I(E_{ij})+\langle g\rangle$ can be computed at every stage: if $\beta_k$ denotes the $k$-th total Betti number and $g$ is chosen from the smallest admissible set for every given $E_{ij}$, then the total Betti numbers of the sum ideal are determined by those of the summands. In particular, the total Betti numbers of the ideal $I_1(XY)+I(E_{ij})$ are obtained this way.

2. Preliminaries

Determinantal ideals. We first recall some useful results on determinantal ideals pertaining to our work; we refer to the literature for detailed discussions.

Theorem 2.1. Let $K$ be a field, let $x_{ij}$ be indeterminates and let $X$ be an $m\times n$ matrix of indeterminates. If $I_m(X)$ denotes the ideal generated by the maximal minors of $X$, then the set of maximal minors is a universal Gröbner basis for the ideal $I_m(X)$.

The Eagon–Northcott complex. We present the relevant portion from the book. Let $F$ and $G$ be free modules of finite rank over the polynomial ring $R$ and let $\alpha\colon F\to G$ be a map with representing matrix $A$. The Eagon–Northcott complex is assembled from the modules $\wedge^{k}F\otimes(\operatorname{Sym}_{k}G)^{*}$, with maps defined as follows: one first defines a diagonal map on $(\operatorname{Sym}G)^{*}$, dual to the multiplication map of the symmetric algebra $\operatorname{Sym}G$, and next an analogous diagonal map on $\wedge F$, dual to the multiplication of the exterior algebra.

Theorem 2.2. The Eagon–Northcott complex of $\alpha$ is a free resolution if and only if the grade of $I(\alpha)$ is as large as possible, where $I(\alpha)$ denotes the ideal of maximal minors of a matrix representing $\alpha$.

The mapping cone. We present the relevant portion from the book. Let $R$ be a polynomial ring and let $\alpha\colon F\to G$ be a map of complexes of finitely generated $R$-modules. The mapping cone of $\alpha$ is the complex whose differential is built from the differentials of $F$ and $G$ together with $\alpha$.

Theorem 2.3. Let $I$ be an ideal minimally generated by polynomials $f_1,\dots,f_k$ and set $I'=\langle f_1,\dots,f_{k-1}\rangle$. There is a short exact sequence relating $R/(I':f_k)$, $R/I'$ and $R/I$; thus, if resolutions of $R/(I':f_k)$ and $R/I'$ are known, one can construct a resolution of $R/I$ by the mapping cone construction.
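The differential of the mapping cone admits a compact description; the following display is a standard formulation, included for reference (sign conventions vary between sources, and the grading shown is an assumption matching common usage):
\[
\operatorname{Cone}(\alpha)_k \;=\; F_{k-1}\oplus G_k,
\qquad
\partial(f,g) \;=\; \bigl(-\partial_F(f),\ \alpha(f)+\partial_G(g)\bigr).
\]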
3. Gröbner bases and transversal intersection of ideals

Lemma 3.1. Let $f_1,\dots,f_k\in R$ be such that, with respect to a suitable monomial order, their leading terms are mutually coprime. Then $f_1,\dots,f_k$ form a regular sequence.

Lemma 3.2. Suppose that $X$ is either generic or generic symmetric. Then, with respect to a suitable monomial order, the set $G_{ij}$ is a Gröbner basis for the ideal $I(E_{ij})$.

Proof. Choose the lexicographic monomial order given by a suitable ordering among the variables $x_{st}$ and apply Buchberger's criterion to the minors of the matrix $E_{ij}$.

Definition. For a set of monomials $M$, define $\operatorname{supp}(M)$ to be the set of variables $x_{ij}$ dividing some monomial of $M$; for a single polynomial $f$ we write $\operatorname{supp}(f)$ instead of $\operatorname{supp}(\{f\})$.

Lemma 3.3. Let $\leq$ be a monomial ordering and let $I$, $J$ be ideals; let $G(I)$, $G(J)$ denote the unique minimal generating sets of the leading ideals of $I$ and $J$ respectively. If $\operatorname{supp}(G(I))\cap\operatorname{supp}(G(J))=\emptyset$, in other words if the set of variables occurring in $G(I)$ is disjoint from the set of variables occurring in $G(J)$, then the ideals $I$ and $J$ intersect transversally.

Proof. Let $f\in I\cap J$; we show that $f\in IJ$. Membership in both leading ideals implies that the leading monomial of $f$ is divisible by monomials from $G(I)$ and from $G(J)$; since these have disjoint support, there exists a monomial divisible by their product. Replacing $f$ by the corresponding remainder, the proof follows by induction.

4. Homological lemmas

Lemma 4.1. Let $I$ and $J$ be graded ideals of a graded ring $R$, and suppose that $F$ and $G$ are minimal free resolutions of $R/I$ and $R/J$ respectively. Then $F\otimes G$ is a minimal free resolution of the graded ideal $I+J$ if and only if $I$ and $J$ intersect transversally.

Proof. Consider a suitable short exact sequence and tensor it with $R/J$; one gets an exact sequence whose terms on the left vanish, since the relevant module is flat; moreover the kernel of the induced map is identified with $(I\cap J)/IJ$. Therefore transversality implies $\operatorname{Tor}_i^R(R/I,R/J)=0$ for all $i\geq 1$; this proves that $F\otimes G$ resolves $R/(I+J)$, and the resolution is minimal since $F$ and $G$ are minimal.

Lemma 4.2. Let a sequence of free modules with differentials be exact, and let $U$ and $V$ be invertible matrices of the appropriate sizes. Then the sequence obtained by replacing a differential $A$ with $UAV$ is also an exact sequence of free modules.

Proof. The corresponding diagram of free modules, with vertical maps the isomorphisms given by $U$ and $V$, is commutative; therefore one row is exact if and only if the other is.

Corollary 4.3. The same conclusion holds when invertible matrices are applied to several differentials at once: consider the sequence, apply Lemma 4.2 to one differential, note that the entire new sequence is exact as well since the matrices are invertible, and apply Lemma 4.2 again to arrive at the conclusion.

Lemma 4.4. Let an exact sequence of free modules have a differential $A=(a_{ij})$, and suppose that some entry $a_{lm}$ is a unit while the other entries $a_{li}$ and $a_{jm}$ in its row and column vanish. Let $A'$ be the matrix obtained by deleting the $l$-th row and $m$-th column of $A$, and modify the adjacent differentials by deleting the corresponding column, respectively row. Then the resulting sequence is exact.

Proof. That the latter sequence is a complex is self-evident; we only need to prove exactness. By the previous lemma we may assume $l=m=1$, choosing elementary matrices that permute rows and columns (such matrices are always invertible). Exactness of the first complex shows that right exactness is preserved; a similar argument, extracting from any element of the kernel the tuple of its remaining entries, proves that left exactness is preserved as well.

Lemma 4.5. Let $A=(a_{ij})$ be a matrix over $R$. Then there exist an invertible matrix $\mathbb{X}$ and an invertible matrix $\mathbb{Y}$ such that the entries of $\mathbb{X}A\mathbb{Y}$ satisfy explicit relations of the form $c_{kl}=c_{jl}+\sum_t a_{it}c_{tl}$ and $b_{kl}=b_{ki}+\sum_t a_{tj}b_{kt}$.

Proof. It suffices to prove the statement when $\mathbb{X}$ and $\mathbb{Y}$ are elementary matrices of the form $\mathrm{Id}+E_{jk}$, where $E_{jk}$ denotes the matrix with a single nonzero entry; the relations are then easy to verify.

Lemma 4.6. Say that a matrix satisfies property $P_{ij}$ if its entries satisfy the conditions $a_{ij}=a_{ik}a_{kj}$ together with the corresponding conditions on $b_{ki}$ and $c_{jl}$. If $A$ satisfies property $P_{ij}$, then the matrices $\mathbb{X}A\mathbb{Y}$ of Lemma 4.5 satisfy property $P_{ij}$ as well.

Proof. This follows from Lemma 4.5, since the new entries belong to the ideal generated by $a_{ik}$ and $a_{kj}$.

5. A minimal free resolution of $I(E_{ij})+\langle g\rangle$

Lemma 5.1. Let $X$ be generic or generic symmetric and let $F_{ij}$ denote the Eagon–Northcott complex of $E_{ij}$. Then $F_{ij}$ minimally resolves the ideal $I(E_{ij})$.

Proof. We show that the relevant entries $x_{ik}$, $x_{jk}$ form a regular sequence. Let us first assume that $X$ is generic. Take the lexicographic monomial order induced by an ordering among the variables in which $x_{in}$ and $x_{jn}$ and the other variables appearing in the minors are smaller than $x_{ik}$; then the leading terms are mutually coprime, so the sequence is regular by Lemma 3.1, and Theorem 2.2 applies. If $X$ is generic symmetric, choose the lexicographic monomial order induced by $x_{ii}>x_{ij}>\dots>x_{in}>x_{jj}>\dots>x_{jn}$, with the variables appearing in $F_{ij}$ smaller than the remaining variables $x_{kl}$; the maximum possible height is attained, hence the Eagon–Northcott complex $F_{ij}$ minimally resolves the ideal.

Lemma 5.2. Let $X$ be generic or generic symmetric. Then the ideals $I(E_{ij})$ and $\langle g\rangle$ intersect transversally.

Proof. Let $X$ be generic and choose the lexicographic monomial order given by a suitable ordering among the variables $x_{st}$. By Lemma 3.2 the set $G_{ij}$ of minors forms a Gröbner basis of the ideal $I(E_{ij})$, and it is clearly a minimal generating set; its elements do not involve the indeterminates $x_{1n},\dots,x_{nn}$ appearing in $g$, whereas the leading term of $g$ does. Hence the supports are disjoint and Lemma 3.3 applies. If $X$ is generic symmetric, it suffices to choose the correct monomial order; the rest of the proof is similar to the generic case. In one case we choose the lexicographic order given by $x_{nn}>\dots>x_{st}$, and in the other the lexicographic order given by $x_{ii}>x_{ij}>\dots>x_{in}>x_{jj}>\dots>x_{jn}>x_{st}$.

Lemma 5.3. Let $X$ be generic; then the colon ideal $(I(E_{ij})+\langle g\rangle):x_{in}$ can be computed explicitly, and similarly in the generic symmetric case with $x_{ii},\dots,x_{in}$.

Proof. For $X$ generic, the relations $x_{it}x_{jk}-x_{ik}x_{jt}\in G_{ij}$ show that suitable multiples of $x_{in}$ lie in $I(E_{ij})+\langle g\rangle$; hence the colon ideal contains the indicated ideal generated by $x_{in}$ and the remaining generators. Since that ideal is a prime ideal, the claim follows. The proof in the generic symmetric case is similar.
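The homological content of transversality used throughout this section can be recorded in the following standard display (a sketch; the grading conventions are assumptions consistent with Lemma 4.1):
\[
\operatorname{Tor}_1^R(R/I,R/J)\;\cong\;(I\cap J)/IJ,
\qquad
\operatorname{Tor}_{i}^R(R/I,R/J)=0\ \text{for } i\geq 1
\;\Longrightarrow\;
\beta_k\bigl(R/(I+J)\bigr)\;=\;\sum_{p+q=k}\beta_p(R/I)\,\beta_q(R/J).
\]
In particular, when the minimal free resolutions $F$ and $G$ are known, the total Betti numbers of the sum ideal are obtained by multiplying those of the summands.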
6. Resolution of the sum of ideals

Our aim is to construct a minimal free resolution of the ideal $I(E_{ij})+\langle g\rangle$. We have proved that the ideals $I(E_{ij})$ and $\langle g\rangle$ intersect transversally (Lemma 5.2); therefore the sum can be resolved by the tensor product of their resolutions, and $I(E_{ij})$ is resolved minimally by Theorem 2.2. We have also proved that the ideal $\langle g_i\rangle$ has a linear quotient with respect to $\langle g_j\rangle$; therefore the ideal $I(E_{ij})+\langle g\rangle$ can be resolved by the mapping cone construction, and a minimal free resolution can be extracted from that resolution by applying Lemma 4.4. We next show that this ideal intersects transversally the ideal generated by the minimal set of minors (see the lemmas of Section 7); therefore the new sum is again resolved minimally. Proceeding in this manner we are able to show that the ideals intersect transversally at every stage, with respect to the smallest admissible set; this finally gives a minimal free resolution of the ideal $I(E_{ij})+\langle g\rangle$ for every admissible $g$. Let us assume $X$ generic; the proofs in general are similar according to the aforesaid scheme, the proofs in the case of $X$ generic symmetric are similar as well, and comments on the general symmetric case are made whenever necessary.

A minimal free resolution of $I(E_{ij})$ is given by the Eagon–Northcott complex, with the differential defined on every ordered tuple of basis elements; a minimal resolution of $\langle g\rangle$ is given by the Koszul complex. The two ideals intersect transversally by Lemma 5.2; therefore, by Lemma 4.1, a minimal free resolution of the sum is the tensor product complex, with the map defined on basis elements $e_{i_1}\wedge\dots\wedge e_{i_k}$ in the natural way. To find a minimal free resolution by the mapping cone, let $K$ denote the Koszul complex with its $k$-th differential. We first construct the connecting map: choose a lexicographic ordering among tuples to order the basis of each free module, define a lexicographic ordering among the tuples indexing the other factor, and order the basis elements of each free module in such a way that one family appears first. With respect to the chosen ordered bases, the matrix representation of the connecting map satisfies the following theorem.

Theorem 6.1. The corresponding diagram of complexes commutes in every homological degree.

Proof. It suffices to prove the statement on basis elements; without loss of generality consider the first ones, and compute both compositions explicitly; the two computations agree.

Hence the mapping cone gives a resolution of the ideal described above; however, this resolution need not be minimal. To construct a minimal free resolution from the constructed free resolution, recall the connecting homomorphism of complexes and the differentials of the Koszul resolution and of the Eagon–Northcott resolution. Order the bases with respect to the lexicographic ordering, and finally order the basis of the mapping cone in such a way that the basis elements of one summand appear first, followed by the basis elements of the other. The matrix representation of the differential is then a block matrix: the entries of the blocks representing the differentials of the two minimal free resolutions belong to the maximal ideal $\langle x_{ij}\rangle$, the block representing the connecting map also has entries in the maximal ideal except for an identity block, and no other elements appear outside the maximal ideal. Therefore it is clear from the matrix representation that we can apply Lemma 4.4 repeatedly to get rid of the identity blocks, and hence we get a minimal free resolution.
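The Koszul differential entering the connecting map above is the standard one; for a sequence $g_1,\dots,g_k$ it reads (a reference display; the indexing is the usual one):
\[
\partial\bigl(e_{i_1}\wedge\cdots\wedge e_{i_p}\bigr)
\;=\;\sum_{t=1}^{p}(-1)^{t+1}\,g_{i_t}\;
e_{i_1}\wedge\cdots\wedge\widehat{e_{i_t}}\wedge\cdots\wedge e_{i_p}.
\]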
7. Total Betti numbers of the ideal $I(E_{ij})+\langle g\rangle$

Lemma 7.1. With the notation of the previous sections, the set of minors together with the extra generators forms a Gröbner basis of the ideal with respect to a suitable monomial order.

Proof. Take the lexicographic monomial ordering induced by the ordering $x_{nn}>\dots>x_{tt}>\dots>x_{st}$ among the indeterminates. Observe that every leading term of the minors is coprime to every leading term of the extra generators, and also coprime to every other relevant leading term. By the Gröbner basis test of Lemma 3.2, it suffices to write the S-polynomials and note that in every case the leading terms are mutually coprime; the next expression shows the same for the remaining pairs, and similarly the leading terms within each family are mutually coprime. The proof of the other part is similar.

Remark. The corresponding result in general would be the following.

Lemma 7.2. Let $G_{ij}$ be the smallest admissible set for $E_{ij}$, where $G_{ij}$ denotes the set of minors with the notation of the previous sections. Then this set is a Gröbner basis of the ideal with respect to a suitable monomial order.

Proof. For the statement with arbitrary indices it suffices to choose the following monomial orders; the rest of the proof remains similar. If $X$ is generic, choose the lexicographic monomial ordering induced by the following ordering among the indeterminates: $x_{nn}>\dots>c_{ii}>\dots>x_{in}>\dots>x_{jn}>x_{st}$. If $X$ is generic symmetric, choose the lexicographic monomial ordering induced by $x_{nn}>\dots>c_{ii}>x_{ii}>x_{ij}>\dots>x_{in}>x_{jj}>\dots>x_{jn}>x_{st}$.

Lemma 7.3. The ideals appearing at each stage intersect transversally, for every admissible index.

Proof. Suppose not; then there exists a polynomial $f$ witnessing the failure. Choose the monomial order defined in Lemma 7.2. Upon division by the Gröbner basis elements we may assume that every monomial of $f$ avoids the leading ideal; since, by Lemma 7.2, the chosen set is a Gröbner basis of the ideal, this forces $f$ to lie in the product ideal. On the other hand, since the leading terms involved are mutually coprime, we reach a contradiction.

Remark. The corresponding result in general would be the following: the ideals $I(E_{ij})+\langle g_i\rangle+\dots+\langle g_l\rangle$ intersect transversally with the ideal generated by the smallest admissible set, for every stage; the proof essentially uses Lemma 7.2 and the argument of the proof of Lemma 7.3.

Proof of Theorem 1.1. The first part of the theorem has been proved above; we now prove the second part. By our assumption, let $F$ be the minimal free resolution of Lemma 5.1 and let $G$ be the minimal free resolution given by the Koszul complex; by Lemma 4.1 a minimal free resolution of the sum is given by the tensor product $F\otimes G$. More precisely, if $\beta_k$ denotes the $k$-th total Betti number of an ideal, the total Betti numbers of the sum are given by the product formula recorded after Section 5. The proof in general follows similarly, according to the strategy discussed at the beginning of Section 6. In particular, the total Betti numbers of the ideal $I(E_{ij})+\langle g\rangle$ are as claimed.

Example. We illustrate the computation by showing the Betti numbers at each stage of the iteration.

References

W. Bruns, A. Kustin, M. Miller, The resolution of the generic residual intersection of a complete intersection, Journal of Algebra.
A. Conca, E. De Negri, E. Gorla, Universal Gröbner bases for maximal minors, International Mathematics Research Notices.
D. Eisenbud, The Geometry of Syzygies.
P. Gimenez, I. Sengupta, H. Srinivasan, Minimal graded free resolutions for monomial curves defined by arithmetic sequences, Journal of Algebra.
Johnson, Equations defining Veronese rings, Arch. Math. (Basel).
On the vanishing of Tor in regular local rings, Illinois J. Math.
H. Matsumura, Commutative Ring Theory, Cambridge University Press.
I. Peeva, Graded Syzygies, Springer London.
J. Saha, I. Sengupta, G. Tripathi, Ideals of the form $I_1(XY)$.
J. Saha, I. Sengupta, G. Tripathi, Primality of certain determinantal ideals.

Department of Mathematics, RKM Vivekananda University, Belur Math, Howrah, India.
Discipline of Mathematics, IIT Gandhinagar, Palaj, Gandhinagar, Gujarat, India.
Department of Mathematics, Jadavpur University, Kolkata, India.
SUPERRIGIDITY OF ACTIONS ON FINITE RANK MEDIAN SPACES

ELIA FIORAVANTI

Abstract. Finite rank median spaces are a simultaneous generalisation of finite dimensional CAT(0) cube complexes and real trees. If $\Gamma$ is an irreducible lattice in a product of rank one simple Lie groups, we show that every action of $\Gamma$ on a complete, finite rank median space has a global fixed point. This is in sharp contrast with the behaviour of actions on infinite rank median spaces. The fixed point property is obtained as a corollary to a superrigidity result; the latter holds for irreducible lattices in arbitrary products of compactly generated groups. We exploit Roller compactifications of median spaces; these were introduced here and generalise the construction in the case of cube complexes. We provide a reduced 1-cohomology class that detects group actions with a finite orbit in the Roller compactification; even for CAT(0) cube complexes, the only classes previously known to do this were second bounded cohomology classes. As a corollary, we observe that, in the Gromov density model, random groups at low density do not have Shalom's property $H_{FD}$.

Contents: Introduction. Preliminaries: median spaces and median algebras; bridges. The Haagerup class: the Haagerup class; elementarity of actions; main statement; elementarity and Shalom's property $H_{FD}$. Superrigidity: the superrigidity result; homomorphisms of coarse median groups. Appendix: the structure of UBSs. References.

1. Introduction

A metric space $X$ is median if, for every three points of $X$, there exists a unique point that lies between each pair of them. Simple examples are provided by real trees and by $\mathbb{R}^n$ with the $\ell^1$ metric. Under finite dimensionality assumptions, to every connected median space corresponds a canonical CAT(0) space with a bi-Lipschitz equivalent metric, and isometries induce isometries; for instance, to the $\ell^1$ metric one associates the Euclidean distance. More elaborate examples are provided by simply connected cube complexes satisfying Gromov's link condition: in this case we obtain a median space by endowing each cube with the $\ell^1$ metric, while the $\ell^2$ metric yields the corresponding CAT(0) cube complex.

Median spaces can generally display wilder features than cube complexes, a bit like real trees compared with simplicial trees. Note in this regard that the class of median spaces is closed under ultralimits, which also preserve a notion of dimension, usually called rank (see Section 2 for a precise definition). Despite this, finite rank median spaces retain many of the good combinatorial properties of cube complexes, in addition to their CAT(0)-like metric: there is a notion of boundary compatible with the median property, and many groups of isometries contain free subgroups. One would thus expect many known results on CAT(0) cube complexes to extend to finite rank median spaces without significant complications. A notable exception to this pattern concerns general group actions on median spaces and their clear connection with the existence of codimension one subgroups.

The close similarities between cube complexes and general median spaces can be ascribed to the existence of a collection of walls, which encode the geometry of the space in the same way as hyperplanes encode the geometry of a CAT(0) cube complex. The set of walls cannot, however, be thought of as discrete: it needs to be endowed with a measure, which encodes the thickness of sets of walls. Indeed, the concept of median space is in a certain sense dual to the notion of space with measured walls, extending the duality between spaces with walls and CAT(0) cube complexes.

Our main theorem is a superrigidity result for irreducible lattices in products of locally compact, compactly generated groups: namely, under weak assumptions, every action of such a lattice on a finite rank median space essentially arises from continuous actions on median spaces of lower rank. For cube complexes this is known. In the more general context of CAT(0) spaces, similar results were obtained long ago; unfortunately, applying them to a median space only provides actions of the factors on CAT(0) subspaces, and these subspaces might bear no relation to the median structure. This might seem like an irrelevant subtlety; on the contrary, it is key to the fixed point properties of this paper, as the following illustrates. Consider an irreducible lattice $\Gamma$ in a product of rank one simple Lie groups; $\Gamma$ acts properly and cocompactly on a CAT(0) space, and in particular cannot in general be recognised as cubulated, but it can be shown that $\Gamma$ acts properly and coboundedly on an infinite rank median space; the group is moreover coarse median in the sense of Bowditch. It thus appears particularly striking that every action of $\Gamma$ on a connected, complete, finite rank median space fixes a point.
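In terms of intervals, the median property mentioned at the beginning of the introduction can be written as follows (with $I(x,y)$ denoting the interval between $x$ and $y$, as defined again in the preliminaries):
\[
I(x,y)=\{z\in X : d(x,z)+d(z,y)=d(x,y)\},
\qquad
I(x_1,x_2)\cap I(x_2,x_3)\cap I(x_1,x_3)=\{m(x_1,x_2,x_3)\}.
\]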
This follows from our superrigidity result; see Corollary D below. The proof of our superrigidity theorem follows an outline similar to Monod's proof of superrigidity for actions on CAT(0) spaces; the analogy is mostly hidden in our application of a theorem of Shalom, and we thus believe it important to highlight it. As already mentioned, to a finite rank median space one can associate a finite dimensional CAT(0) space; however, one can also consider certain infinite dimensional CAT(0) spaces, close relatives of spaces introduced earlier in the literature. Retracing Monod's proof in our context, one would first induce a continuous action on such an infinite dimensional CAT(0) space; one would then prove that it splits off a product action on a subspace, with a factor that only depends on the projection to one of the direct factors of the ambient group; finally, the information gained this way would be carried back to the median space. In our application of Shalom's machinery one instead constructs a continuous action on an infinite dimensional space, proves a splitting theorem for this action, and carries the gained insight back. Our main contribution lies in the techniques for transferring information back and forth between the two actions: indeed, Shalom's machinery is set in motion by a nonvanishing reduced cohomology class, and similarly the superrigidity statement it produces needs to be translated back. We now describe our results in greater detail.

A cohomological characterisation of elementary actions. A median space $X$ has a distinguished collection of subsets, called halfspaces: every wall gives rise to two halfspaces, and every halfspace arises from a wall. The collection $\mathscr{H}$ of halfspaces is equipped with a measure $\nu$ (see Section 2); in the case of cube complexes one recovers the usual notion of halfspace, with $\nu$ simply a counting measure. Given a topological group $G$ and an isometric action $G\curvearrowright X$, one naturally obtains a unitary representation on $L^2(\mathscr{H},\nu)$ and a cocycle, referred to as the Haagerup cocycle; the construction appears for instance in earlier work on the Haagerup property. If the action has continuous orbits, the cocycle is continuous and thus induces a reduced continuous cohomology class, the Haagerup class.

We introduce a notion of elementarity for actions on median spaces: we say that $G\curvearrowright X$ is Roller elementary if there is at least one finite orbit within a certain compactification $\overline{X}$ of $X$, the Roller compactification. Roller elementarity implies the existence of a finite orbit in the visual compactification of the associated CAT(0) space. For isometric actions with continuous orbits, Roller elementarity is completely described by the reduced cohomology class above.

Theorem A. Let $X$ be a complete finite rank median space. The Haagerup class of $G\curvearrowright X$ vanishes if and only if the action is Roller elementary.

Theorem A extends various known results: in the case of simplicial trees it is classical, and for CAT(0) cube complexes the implication from Roller nonelementarity is implicit in the work of Chatterji, Fernós and Iozzi, where the authors construct a family of second bounded cohomology classes detecting Roller elementarity.

Remark. Theorem A equally holds if we replace $L^2(\mathscr{H},\nu)$ with $L^p(\mathscr{H},\nu)$, although it is slightly simpler to exploit the richer structure of Hilbert spaces. The proof of our superrigidity result relies only on the implication of Theorem A that yields nonvanishing classes; we believe the full statement of Theorem A to be of independent interest. The proof of the converse implication turns out to be quite technical and requires a careful study of the structure of UBSs in median spaces; these are a generalisation of the simplices of Hagen's simplicial boundary of a CAT(0) cube complex. The details are relegated to the appendix.

Superrigidity of actions. The nontrivial reduced cohomology class provided by Theorem A allows us to apply Shalom's machinery and obtain superrigidity results. Let $X$ be a complete finite rank median space. The Roller compactification $\overline{X}$ is partitioned into components; the subset $X\subseteq\overline{X}$ forms a full component, and every component is itself a complete median space, of strictly lower rank. This aspect shares similarities with the refined boundaries of CAT(0) spaces. Every component is in particular a median subalgebra, that is, a subset of a median space such that the restriction of the median map takes values in it. We are now ready to state our main superrigidity result.

Theorem B. Let $X$ be a complete finite rank median space and let $\Gamma$ be a uniform irreducible lattice in a product $G=G_1\times\dots\times G_\ell$ of compactly generated, locally compact groups. Suppose that $\Gamma\curvearrowright X$ is a Roller nonelementary action. Then there exist a finite index subgroup of $\Gamma$, an invariant component $Z$ of $\overline{X}$ and a closed median subalgebra $Y\subseteq Z$ such that the action on $Y$ extends to a continuous action of an open finite index subgroup of $G$.

Remarks. (1) Theorem B also applies to nonuniform lattices, as long as a technical integrability condition is satisfied; the condition implies finite generation and ensures that the theorem still holds.
This is the case, for instance, for irreducible lattices in a product where each factor is the group of points of a semisimple, almost simple linear algebraic group defined over a local field. Examples of nonuniform lattices to which the theorem applies include minimal Kac–Moody groups over sufficiently large finite ground fields, regarded as irreducible lattices in the product of the closed automorphism groups of the two associated buildings.

(2) Theorem B should be compared with Shalom's superrigidity result for actions on simplicial trees. For a simplicial tree, $Y$ can always be taken to be a subcomplex; the complications in the statement of Theorem B reflect phenomena that do not happen in the world of trees. However, as soon as we leave the context of rank one median spaces, our result is optimal, even if one restricts to CAT(0) square complexes; see the examples in Section 5. In a general CAT(0) square complex, a median subalgebra might be very far from a subcomplex. Moreover, $Y$ can be taken to be all of $Z$ as long as $Z$ does not split as a nontrivial product of median spaces and a certain finite orbit condition holds (see the relevant theorem below); however, even in this case, the action in general only extends on a proper median subalgebra. For CAT(0) cube complexes, a superrigidity result slightly more general than Theorem B applies to all nonuniform lattices, due to the use of bounded cohomology rather than reduced cohomology; the strategy of proof was hinted at above.

Fixed point properties for irreducible lattices. Unlike automorphism groups of CAT(0) cube complexes, the isometry group of a median space need not be totally disconnected. It is still possible to exploit Theorem B to derive a fixed point property for irreducible lattices in connected groups. Given a locally compact topological group $G$, denote by $G^0$ the connected component of the identity. We say that $G$ satisfies condition (*) if $G^0$ is amenable and $G$ has Shalom's property $H_{FD}$ (see Section 4 for the definition); in particular, a wide class of groups satisfies condition (*).

Theorem C. Let $X$ be a complete finite rank median space and let $\Gamma$ be an irreducible lattice in a product $G=G_1\times\dots\times G_\ell$. Suppose that every $G_i$ is compactly generated and satisfies condition (*). Then every action $\Gamma\curvearrowright X$ is Roller elementary. If $\Gamma$ does not virtually map onto $\mathbb{Z}$, every action has a finite orbit within $\overline{X}$; if moreover $X$ is connected, every action has a global fixed point. For actions on real trees, Theorem C also follows from known results.

Remark. Every group that virtually maps onto $\mathbb{Z}$ admits a Roller elementary action on a median space with unbounded orbits.

Corollary D. Let $X$ be a complete, connected, finite rank median space and let $\Gamma$ be an irreducible lattice in a connected, higher rank, semisimple Lie group. Then every action $\Gamma\curvearrowright X$ fixes a point.

An analogous result for CAT(0) cube complexes was proved when every simple factor has rank at least two, in which case it also follows from property (T). Corollary D follows from Theorem C. The assumption of finite rank is essential: Corollary D does not hold if at least one simple factor is locally isomorphic to a rank one group, since then the lattice admits an action on an infinite rank median space with unbounded orbits; moreover, when the ambient group is locally isomorphic to a product of such rank one groups, the lattice even admits a proper cobounded action on an infinite rank median space.

Homomorphisms of coarse median groups. Coarse median spaces were introduced by Bowditch in an attempt to formulate a coarse notion of nonpositive curvature; they have recently received a lot of attention and proved instrumental in striking results. A group is said to be coarse median if its Cayley graphs are coarse median spaces. Examples of finite rank coarse median groups include hyperbolic groups, cubulated groups, fundamental groups of closed irreducible 3-manifolds not modelled on Nil or Sol, mapping class groups and, more generally, all groups that are hierarchically hyperbolic spaces (HHS). We will mainly be interested in equivariantly coarse median groups. In our view, coarse median groups are a generalisation of groups that are HHS, while equivariantly coarse median groups generalise hierarchically hyperbolic groups (HHG); in particular, hyperbolic groups, cubulated groups and mapping class groups are also equivariantly coarse median of finite rank. More precisely, we say that a group is equivariantly coarse median if, equipped with a finite generating set and the induced word metric, it admits a coarse median map that is moved a uniformly bounded amount by left translation by any element of the group.
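The equivariance condition in the definition just given can be sketched as follows ($d$ denotes the word metric, $\mu$ the coarse median and $C$ a constant; the precise formulation in the source is lost, so this display is an assumption based on standard usage):
\[
d\bigl(g\cdot\mu(x,y,z),\ \mu(gx,gy,gz)\bigr)\ \le\ C
\qquad\text{for all } g,x,y,z\in\Gamma.
\]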
Note that the definition does not depend on the choice of finite generating set. Equivariantly coarse median groups have already been considered under a different name. If a coarse median group has finite rank, every asymptotic cone can be endowed with a bi-Lipschitz equivalent median metric (for example, asymptotic cones of mapping class groups are median, since geodesics are limits of hierarchy paths); if the group is equivariantly coarse median, the median metric on the asymptotic cone is preserved by the natural action of the ultrapower. Given a group and an infinite sequence of pairwise nonconjugate homomorphisms into it, one can apply the Bestvina–Paulin construction; the result is an isometric action on a median space with unbounded orbits, obtained from the canonical median space bi-Lipschitz equivalent to an asymptotic cone constructed along the way. Together with Theorem C, this implies the following.

Corollary E. Let $\Delta$ be an equivariantly coarse median group of finite rank and let $\Gamma$ be as in the second part of Theorem C. Then there exist only finitely many pairwise nonconjugate homomorphisms $\Gamma\to\Delta$.

Corollary E applies in particular to the case where $\Gamma$ is an irreducible lattice in a connected higher rank semisimple Lie group with property (T); in that setting the result already appears in the literature, and if in addition $\Delta$ is a HHG, a much stronger statement is available. Note, however, that the latter methods cannot be applied in the context of lattices in products of rank one groups, since such groups admit nonelementary actions on hyperbolic spaces; compare also the corresponding theorem in the case where $\Delta$ is hyperbolic and the ambient group is an arbitrary product of locally compact, second countable groups.

Remark. A stronger conclusion can be reached if $\Delta$ acts freely on a complete finite rank median space; indeed, the following is an immediate consequence of Theorem C.

Proposition F. Let $\Delta$ be a group admitting a free action on a complete finite rank median space. Suppose that every action of $\Gamma$ on such a space is Roller elementary. Then every homomorphism $\Gamma\to\Delta$ factors through a virtually abelian subgroup.

Proposition F applies, for instance, whenever $\Gamma$ has no nonabelian free subgroups, or has property (T), or satisfies the hypotheses of the first part of Theorem C; in particular, if $\Gamma$ is an irreducible lattice in a connected higher rank semisimple Lie group, every such homomorphism has finite image. This motivates a certain interest in groups acting freely on complete finite rank median spaces. If a group acts freely on a finite dimensional CAT(0) cube complex, it clearly falls in this class; however, it is unclear at this stage whether there are finitely generated examples that do not (see the literature for partial results in this direction). Note that there is an infinitely generated group that even admits a proper action on a rank two median space splitting as a product of a simplicial tree and a real line (see the examples below); however, since it is a divisible group, its elements must act elliptically on any, possibly infinite dimensional, CAT(0) cube complex. Even within finitely generated groups, actions on median spaces tend to be more flexible than actions on CAT(0) cube complexes.

For every group $\Delta$, consider $\dim_f(\Delta)$, the minimum rank of a complete median space admitting a free $\Delta$-action; if $\Delta$ does not act freely on any complete median space, set $\dim_f(\Delta)=+\infty$. Restricting to CAT(0) cube complexes one similarly defines $\dim_f^c(\Delta)$, and considering metrically proper actions one obtains $\dim_{pm}$ and $\dim_{pc}$; thus $\dim_{pc}\geq\dim_f$ and $\dim_{pm}\geq\dim_f$. These quantities differ for many finitely generated groups: for instance, $\dim_f(\Delta)=1$ for free groups, while on the other hand, by the work of Rips, $\dim_f(\Delta)=1$ exactly for free products of free abelian groups and surface groups (excluding a few nonorientable surfaces; see Rips' theorem). One can use this observation to construct free actions of various RAAGs on median spaces of rank strictly lower than the dimension of the Salvetti complex. Considering more general actions, we mention that there exist finitely generated groups admitting actions on real trees with unbounded orbits, all of whose actions on simplicial trees, and in fact on all finite dimensional CAT(0) cube complexes, must have a global fixed point.

Shalom's property $H_{FD}$ and random groups. Theorem A also allows us to prove that various groups do not have property $H_{FD}$. The latter was introduced by Shalom: a topological group has property $H_{FD}$ if every unitary representation with nonvanishing reduced cohomology has a finite dimensional subrepresentation. Property $H_{FD}$ is trivially satisfied by every locally compact group with property (T), but it also appears at the opposite end of the universe of groups: it holds for a large class of amenable groups, which includes polycyclic groups, lamplighter groups and connected locally compact amenable groups. An example of an amenable group without property $H_{FD}$ is provided by the wreath product $\mathbb{Z}\wr\mathbb{Z}$. We prove the following; see the corresponding proposition in Section 4 for a more general result.

Corollary G. Let $\Gamma$ be a discrete group with property $H_{FD}$. If $\Gamma$ acts freely and cocompactly on a CAT(0) cube complex, then $\Gamma$ is virtually abelian.

Property $H_{FD}$ has been studied almost exclusively within the class of amenable groups, where it happens to be a quasi-isometry invariant. It is a key ingredient, implicitly or explicitly, in recent elementary proofs of Gromov's theorem on groups of polynomial growth.
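In symbols, Shalom's property can be sketched as follows ($\pi$ ranges over continuous unitary representations of $G$; the formulation matches the prose definition above):
\[
G\ \text{has}\ H_{FD}
\quad\Longleftrightarrow\quad
\Bigl(\,\overline{H}^1(G,\pi)\neq 0\ \Longrightarrow\ \pi\ \text{admits a finite dimensional subrepresentation}\Bigr).
\]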
Moreover, it has interesting applications to the study of uniform embeddings into Hilbert spaces. Property $H_{FD}$ is inherited by uniform lattices and it is stable under direct products and central extensions. It is satisfied by groups falling into two extremely different classes, namely amenable groups and Kazhdan groups; it would be reasonable to expect a wide variety of other groups with property $H_{FD}$, but this does not seem to be the case, and no answer is known to the following question.

Question. Does every finitely generated group with property $H_{FD}$ virtually split as a direct product of an amenable group and finitely many groups with property (T)? Does every word hyperbolic group with property $H_{FD}$ also satisfy property (T)?

Our results imply that random groups at low density do not satisfy property $H_{FD}$.

Corollary H. With overwhelming probability, random groups at low density in the Gromov density model do not have property $H_{FD}$.

Note, however, that at higher densities random groups are Kazhdan, hence they satisfy property $H_{FD}$.

Acknowledgements. The author warmly thanks Brian Bowditch, Pierre-Emmanuel Caprace, Indira Chatterji, Yves Cornulier, Thomas Delzant, Mark Hagen, Masato Mimura, Narutaka Ozawa, Romain Tessera, Pierre Pansu and Alain Valette for helpful conversations. The author expresses special gratitude to Cornelia Druţu and Talia Fernós for contributing many ideas to this paper. This work was undertaken at the Mathematical Sciences Research Institute in Berkeley during the Fall program in Geometric Group Theory, where the author was supported by a National Science Foundation grant and by the GEAR Network. Part of this work was also carried out at the Isaac Newton Institute for Mathematical Sciences in Cambridge, during the programme on nonpositive curvature, group actions and cohomology, supported by an EPSRC grant. The author was also supported by the Clarendon Fund and the Merton Moussouris Scholarship.

2. Preliminaries

2.1. Median spaces and median algebras. Let $X$ be a metric space. Given points $x,y\in X$, the interval $I(x,y)$ is the set of points that lie between $x$ and $y$, namely those that satisfy $d(x,z)+d(z,y)=d(x,y)$. We say that $X$ is a median space if for every triple of points there exists a unique point, the median, that lies in all three pairwise intervals; the median map $m$ endows $X$ with a structure of median algebra. Most definitions in the theory of median spaces can also be given for arbitrary median algebras; we follow this approach when introducing the necessary notions, and the reader can consult the literature for background on median spaces and median algebras.

In a median algebra $M$, we say that a subset is convex if it contains the interval between any two of its points. The intersection of a finite family of pairwise intersecting convex sets is always nonempty; this is known as Helly's theorem. A subset $\mathfrak{h}\subseteq M$ is a halfspace if both $\mathfrak{h}$ and its complement $\mathfrak{h}^*$ are convex; we denote by $\mathscr{H}(M)$, or simply $\mathscr{H}$ if there is no ambiguity, the set of halfspaces. Two distinct halfspaces are said to be transverse if they are not comparable in the poset given by inclusion, or equivalently if all four pairwise intersections of $\mathfrak{h},\mathfrak{h}^*,\mathfrak{k},\mathfrak{k}^*$ are nonempty.

A subset $\sigma\subseteq\mathscr{H}$ is said to be an ultrafilter if any two halfspaces in $\sigma$ intersect and $\sigma$ contains exactly one side of each wall; for instance, the set $\sigma_x$ of halfspaces containing a given point $x$ is an ultrafilter. Given subsets $A,B\subseteq M$, we write $\mathscr{H}(A|B)$ for the set of halfspaces containing $B$ and disjoint from $A$, and we refer to sets of the form $\mathscr{H}(x|y)$ as halfspace intervals. If $A$ and $B$ are disjoint convex sets, the set $\mathscr{H}(A|B)$ is nonempty; in particular, two points coincide if and only if no halfspace separates them. A subset of $\mathscr{H}$ is inseparable if, whenever it contains two halfspaces, it contains every halfspace lying between them; given a subset, its inseparable closure is the smallest inseparable subset containing it, and it coincides with the union of the corresponding halfspace intervals.

A wall is a set of the form $\mathfrak{w}=\{\mathfrak{h},\mathfrak{h}^*\}$, and we say that $\mathfrak{h}$ and $\mathfrak{h}^*$ are the sides of $\mathfrak{w}$. The wall separates subsets $A$ and $B$ if one of its sides contains $A$ and the other contains $B$; we denote by $\mathscr{W}(A|B)$, or simply $\mathscr{W}$, the set of walls separating $A$ and $B$. A wall is contained in a halfspace if one of its sides is; two walls are transverse if a side of one is transverse to a side of the other. The rank of a median algebra is the maximum cardinality of a set of pairwise transverse walls; various alternative equivalent definitions of rank can be found in the literature.

Remark. A median algebra has rank zero if and only if it consists of a single point. If a median space has finite rank, the topological dimension of every locally compact subset is bounded by the rank. Moreover, to every complete, connected, finite rank median space corresponds a canonical finite dimensional CAT(0) space with bi-Lipschitz equivalent metric, so that one can speak of its visual boundary; this yields a homomorphism $\mathrm{Isom}(X)\to\mathrm{Isom}(X_{CAT(0)})$, as every isometry extends to an isometry of the CAT(0) space. Every convex subset of the median space is also convex in the CAT(0) metric, but the converse is not true: the Euclidean convex hull of a set of points need not even be a median subalgebra.

Halfspaces of finite rank median spaces are fairly well behaved.

Proposition 2.1. Let $X$ be a complete median space of finite rank. Then every halfspace is either open or closed (possibly both); moreover, every chain of halfspaces is well ordered once the trivial repetitions are removed.
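For later reference, the basic ultrafilter attached to a point and the halfspace interval between two points can be written explicitly (notation as in the preliminaries above):
\[
\sigma_x=\{\mathfrak{h}\in\mathscr{H} : x\in\mathfrak{h}\},
\qquad
\mathscr{H}(x|y)=\sigma_y\setminus\sigma_x=\{\mathfrak{h}\in\mathscr{H} : y\in\mathfrak{h},\ x\notin\mathfrak{h}\}.
\]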
The following is a simple but extremely useful observation: given ultrafilters $\sigma_1,\sigma_2$ and halfspaces $\mathfrak{h}_1,\mathfrak{h}_2\in\sigma_1\setminus\sigma_2$, the halfspaces $\mathfrak{h}_1$ and $\mathfrak{h}_2$ are either transverse or comparable. Along with Dilworth's theorem, this yields the following lemma.

Lemma 2.2. Let $M$ be a median algebra of finite rank $r$ and let $\sigma_1,\sigma_2$ be ultrafilters. Then we can decompose $\sigma_1\setminus\sigma_2$ as a union of at most $r$ nonempty subsets, each totally ordered by inclusion; in particular, every infinite subset of $\sigma_1\setminus\sigma_2$ contains an infinite subset totally ordered by inclusion.

A subset $C\subseteq M$ is gate-convex if every point of $M$ has a gate in $C$; gates are unique when they exist, and when a gate exists for every point we can define the gate-projection $\pi_C$, associating to each point its unique gate. A gate-projection is always a morphism of median algebras and satisfies a compatibility with the median map. Gate-convex subsets are always convex, but the converse is not true in general; every interval is gate-convex, with gate-projection given by the median. Proofs of the following statements can be found in the literature.

Proposition 2.3. Let $C_1,C_2$ be gate-convex sets. There is a natural bijection between $\mathscr{H}(C_1|C_2)$ and the halfspaces of either projection; there exists a pair of gates, i.e. a pair of points of $C_1\times C_2$ realising the distance; in particular, the set $\mathscr{W}(C_1|C_2)$ is well controlled; moreover, the composition of the gate-projections behaves as expected on each factor.

A topological median algebra is a median algebra endowed with a Hausdorff topology such that the median map is continuous, where the cube of the algebra is given the product topology; median spaces always provide topological median algebras. In a complete median space, a subset is gate-convex if and only if it is closed and convex; moreover, gate-projections are continuous.

The measure on halfspaces. Let $X$ be a complete finite rank median space. The set $\mathscr{H}$ is endowed with a $\sigma$-algebra of measurable sets and a measure, usually denoted $\nu$; unlike elsewhere, we simply refer to its elements as measurable sets. The map sending a halfspace to its complement is measure preserving, every inseparable subset is measurable, and in particular ultrafilters are measurable. Almost every halfspace is thick, meaning that both the halfspace and its complement have nonempty interior.

Proposition 2.4. Let $X$ be a complete finite rank median space and let $\sigma$ be an ultrafilter with $\nu(\sigma\,\triangle\,\sigma_x)<+\infty$ for some $x\in X$; then there exists a point of $X$ whose ultrafilter differs from $\sigma$ by a null set. Thus $X$ can equivalently be described as a collection of ultrafilters, where we identify ultrafilters whose symmetric difference is null. Considering the space of all ultrafilters up to null symmetric difference, we obtain a set $\overline{X}$ into which $X$ embeds. It carries a structure of median algebra, defined by taking medians of ultrafilters, and we endow it with the topology in which ultrafilters converge when they converge in the sense of lim sup; we refer to $\overline{X}$ as the Roller compactification of $X$.

Proposition 2.5. The Roller compactification $\overline{X}$ is a compact topological median algebra; the inclusion $X\hookrightarrow\overline{X}$ is a continuous morphism of median algebras with dense convex image. In general, the inclusion is neither open nor a homeomorphism onto its image. The Roller boundary is defined as $\partial X=\overline{X}\setminus X$.

A point of $\overline{X}$ is in general represented by several distinct ultrafilters, but there is a unique preferred ultrafilter $\sigma_\xi$ representing $\xi\in\overline{X}$; this can be seen as a generalisation of the ultrafilters $\sigma_x$. Halfspaces of $X$ extend to halfspaces of $\overline{X}$: it suffices to define $\widetilde{\mathfrak{h}}=\{\xi : \mathfrak{h}\in\sigma_\xi\}$; to save notation we write $\mathfrak{h}$ instead of $\widetilde{\mathfrak{h}}$. An analogous statement holds for closed median subalgebras: if $Y\subseteq X$ is a closed median subalgebra, the restriction of the metric turns $Y$ into a complete median space with $\mathrm{rank}(Y)\leq\mathrm{rank}(X)$; moreover, we have the following.

Lemma 2.6. There is a canonical injective morphism of median algebras $\overline{Y}\to\overline{X}$.

Proof. Intersecting halfspaces with $Y$ gives a map $\mathscr{H}(X)\to\mathscr{H}(Y)$; Lemma 2.2 implies that it is essentially surjective, and thus every ultrafilter on $\mathscr{H}(Y)$ induces a unique ultrafilter on $\mathscr{H}(X)$; applying this to canonical ultrafilters yields the required embedding.

Given ultrafilters $\sigma,\sigma'$, their symmetric difference defines an extended metric on $\overline{X}$: it satisfies the axioms of a metric even though the value $+\infty$ is allowed, and on points of $X$ it restores the original median metric. A component of $\overline{X}$ is a maximal set of points with finite pairwise distances; components are convex subsets of $\overline{X}$; one component always coincides with $X$ and all other components are contained in $\partial X$. The following appears in the author's earlier work.

Proposition 2.7. Let $X$ be a complete median space of finite rank $r$ and let $Z\subseteq\partial X$ be a component, with $d$ the extended metric. Then $(Z,d)$ is a complete median space of rank at most $r-1$; every thick halfspace of $Z$ is of the form $\mathfrak{h}\cap Z$ for a unique halfspace $\mathfrak{h}$ of $X$. The closure of $Z$ in $\overline{X}$ is gate-convex and naturally identified with the Roller compactification $\overline{Z}$, so the notation $\overline{Z}$ is not ambiguous; we denote by $\pi_Z$ the corresponding gate-projection, which extends the usual one. In terms of ultrafilters, $\pi_Z$ takes the point represented by $\sigma$ to the point represented by the ultrafilter obtained from $\sigma$ by the appropriate intersection; this makes sense since, by the first part, almost every halfspace of $Z$ arises from a halfspace of $X$.

Let $\Gamma$ be a group with an isometric action $\Gamma\curvearrowright X$. The action is said to be Roller elementary if there exists a finite orbit within $\overline{X}$, and Roller minimal if $X$ has positive rank and $\Gamma$ does not preserve any proper closed convex subset of $\overline{X}$. Roller elementarity implies, and is in general much stronger than, the existence of a finite orbit in the visual compactification of the associated CAT(0) space. For a CAT(0) cube complex, an action that is Roller minimal in our terminology is in particular essential and fixes no point of the visual boundary.
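With the measure $\nu$ on halfspaces, the extended metric on ultrafilters mentioned above takes the following form; the normalisation is an assumption, consistent with the fact that $\sigma_x\,\triangle\,\sigma_y$ consists of both sides of each wall separating $x$ and $y$:
\[
d(\sigma,\sigma')=\tfrac{1}{2}\,\nu\bigl(\sigma\,\triangle\,\sigma'\bigr),
\qquad
d(\sigma_x,\sigma_y)=d(x,y)\ \text{for } x,y\in X.
\]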
Neither Roller elementarity nor Roller minimality implies the other; however, Roller minimal actions naturally arise from Roller nonelementary ones.

Proposition 2.8. Let $X$ be a complete finite rank median space with an isometric action $\Gamma\curvearrowright X$. Either $\Gamma$ fixes a point of $\overline{X}$, or there exist a component $Z\subseteq\overline{X}$ and a closed convex subset $C\subseteq Z$ such that $\Gamma\curvearrowright C$ is Roller minimal.

Every closed convex subset $C$ gives rise to the measurable decomposition of $\mathscr{H}$ introduced above; note that, by Proposition 2.3, the corresponding measure spaces are isomorphic. We say that the action is without wall inversions if no element takes a halfspace to its complement.

Proposition 2.9. Every action on a connected, complete, finite rank median space is without wall inversions. Moreover, if $X$ is a complete finite rank median space with thick halfspaces $\mathfrak{h}\subseteq\mathfrak{k}$ and $\Gamma\curvearrowright X$ is Roller minimal and without wall inversions, then there exists $g\in\Gamma$ with $g\mathfrak{k}\subseteq\mathfrak{h}$.

If $G$ is a topological group, isometric actions $G\curvearrowright X$ are implicitly required to have continuous orbit maps; equivalently, the homomorphism $G\to\mathrm{Isom}(X)$ is continuous, where we endow $\mathrm{Isom}(X)$ with the topology of pointwise convergence.

Remark. $\mathrm{Isom}(X)$ is a Hausdorff, sequentially complete topological group as soon as $X$ is complete.

Products. The product of median algebras is a median algebra, with the median defined factorwise via the projections onto the factors; for median spaces we endow the product with the $\ell^1$ product metric, so that the median algebra associated to a product of median spaces is the product of the median algebras arising from the factors. A median space is said to be irreducible if it is not isometric to any nontrivial such product.

Proposition 2.10. Let $X=X_1\times\dots\times X_k$ with each $X_i$ an irreducible complete finite rank median space. (1) The decomposition induces a canonical measurable partition of $\mathscr{H}(X)$, with halfspaces from different factors transverse. (2) Every isometry of $X$ permutes the members of the partition; the product $\mathrm{Isom}(X_1)\times\dots\times\mathrm{Isom}(X_k)$ sits inside $\mathrm{Isom}(X)$ as an open finite index subgroup. (3) Every closed convex subset of $X$ is of the form $C_1\times\dots\times C_k$ with each $C_i$ a closed convex subset of $X_i$. (4) The Roller compactification $\overline{X}$ is naturally identified with the product of the $\overline{X_i}$, as median algebras and with the product topology.

Proof. For part (1) and the first half of part (2) see the author's earlier work. We conclude the proof of part (2) by showing that $\mathrm{Isom}(X_1)\times\dots\times\mathrm{Isom}(X_k)$ is open in $\mathrm{Isom}(X)$. Choose points and a small real number, and denote by $p_i$ the projection onto the $i$-th factor; consider a point together with its images under the projections. If an isometry $g$ displaces these points by less than the chosen amount but did not preserve the partition, $g$ would induce an isometry between pieces of two distinct factors, contradicting irreducibility of the factors. For part (3) it suffices to consider the projections of $C$, which are gate-convex, and observe that any point of the product of the projections lies in $C$. Part (4) follows from part (1) and Lemma 2.2.

We say that halfspaces $\mathfrak{h}_1,\dots,\mathfrak{h}_k$ form a facing $k$-tuple if they are pairwise disjoint; two halfspaces are strongly separated if they are disjoint and no halfspace is transverse to both.

Proposition 2.11. Let $X$ be an irreducible complete finite rank median space admitting a Roller minimal action $\Gamma\curvearrowright X$ without wall inversions. (1) Every thick halfspace contains a thick halfspace strongly separated from its complement. (2) If moreover the action is Roller nonelementary, there exist facing triples of pairwise strongly separated thick halfspaces.

The barycentric subdivision. Every complete finite rank median space $X$ isometrically embeds into a complete median space $X'$ of the same rank, its barycentric subdivision; when $X$ is a CAT(0) cube complex with the $\ell^1$ metric, $X'$ is the customary barycentric subdivision. There is a natural homomorphism $\mathrm{Isom}(X)\to\mathrm{Isom}(X')$, and given an isometric action on $X$ the induced action on $X'$ is without wall inversions; we write $\mathscr{H}'$ for the halfspaces of $X'$. There is a natural inclusion-preserving, equivariant surjection $\mathscr{H}'\to\mathscr{H}$ whose fibres have cardinality at most two; when the fibre is a single element we call it an atom, and in the other case we refer to each of the two elements as a hemiatom.

Lemma 2.12. Let $X$ be a complete finite rank median space with an action $\Gamma\curvearrowright X$. The action is Roller elementary if and only if the induced action on $X'$ is.

Faces. Certain subsets of $\overline{X}$ inherit a median algebra structure making them median spaces; in particular, one can consider products of median algebras inside $\overline{X}$, and for every point $\xi$ there exists a canonical subset isomorphic to such a product, via an isomorphism that takes the centre to the intersection of the corresponding halfspaces; see the author's earlier work for details.

Lemma 2.13. Let $X$ be a complete finite rank median space. Every infinite gate-convex subset of $\overline{X}$ intersects a face of strictly lower rank in the expected way.

Proof. Let a point of the subset minimise the rank of the face containing it; otherwise there would exist a point not lying in it, which in particular would be contained in a face of strictly lower rank, a contradiction. In particular, we obtain the following extension of Lemma 2.12.
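The strong separation condition used repeatedly below can be written out explicitly (for halfspaces; the extension to arbitrary convex subsets appears in the section on bridges):
\[
\mathfrak{h},\ \mathfrak{k}\ \text{strongly separated}
\quad\Longleftrightarrow\quad
\mathfrak{h}\cap\mathfrak{k}=\emptyset
\ \text{ and no halfspace }\mathfrak{j}\ \text{is transverse to both }\mathfrak{h}\text{ and }\mathfrak{k}.
\]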
Lemma 2.14. If $\Gamma\curvearrowright X$ is Roller nonelementary and Roller minimal, so is the induced action on the barycentric subdivision $X'$.

Proof. Suppose for the sake of contradiction that there is a nonempty, proper, closed, invariant convex subset of $\overline{X'}$. By the corollary above there exists a component meeting it; note that it is unbounded, since the action is Roller nonelementary. By Lemma 2.12 applied to this component of the barycentric subdivision, the corresponding component of $\overline{X}$ is invariant; Lemma 2.13 then implies that the subset meets $X$, and since the action is Roller minimal it must be all of $\overline{X'}$; hence, by Proposition 2.10, we conclude.

UBSs. Let us fix a basepoint $x\in X$; the following discussion is independent of this choice. A diverging chain of halfspaces is a nested sequence whose distances from the basepoint diverge; we use Hagen's terminology. A UBS is an inseparable set of halfspaces containing a diverging chain. Given UBSs $\Omega_1,\Omega_2$, we say that $\Omega_1$ is almost contained in $\Omega_2$, denoted $\Omega_1\preceq\Omega_2$, if the halfspaces in $\Omega_1\setminus\Omega_2$ lie at uniformly bounded distance from $x$; the UBSs are equivalent if each is almost contained in the other, and we write $[\Omega]$ for the equivalence class. On the set of equivalence classes of UBSs, the relation $\preceq$ descends to a partial order. A UBS is said to be minimal if its class is a minimal element of this poset; a minimal UBS is equivalent to the inseparable closure of any diverging chain it contains.

We define a directed graph $\mathcal{G}$ as follows. Its vertex set is identified with the set of minimal classes; given diverging chains representing two classes, we draw an oriented edge from the first to the second if almost every halfspace of the first chain is transverse to almost every halfspace of the second, but not vice versa; this is independent of the choices involved. A subset of the vertices is inseparable if every directed path between vertices of the subset only crosses vertices of the subset. The following is found in the author's earlier work.

Proposition 2.15. Let $X$ be a complete median space of finite rank $r$. (1) The graph $\mathcal{G}$ has at most $r$ vertices and contains no directed cycles. (2) The poset of classes of UBSs is isomorphic to the poset of inseparable subsets of the vertex set of $\mathcal{G}$, ordered by inclusion; the isomorphism maps the class of a UBS to the set of equivalence classes of minimal UBSs almost contained in it. In particular, the set of classes is finite, and given a UBS $\Omega$ with a set of representatives $\Omega_1,\dots,\Omega_k$ of the equivalence classes of minimal UBSs almost contained in it, $\Omega$ is equivalent to $\Omega_1\cup\dots\cup\Omega_k$.

Given a minimal UBS $\Omega$ and a diverging chain inside it, denote by $\Omega^\pitchfork$ the subset of halfspaces of $\Omega$ transverse to the diverging chain. We say that $\Omega$ is reduced if the halfspaces of $\Omega^\pitchfork$ lie at uniformly bounded distance from the basepoint, and strongly reduced if we can write $\Omega$ as the union of a subset totally ordered by inclusion containing a diverging chain and a bounded remainder.

Figure 1 and Figure 2 (pictures of subsets of rank two median spaces, with the restriction of the metric). In both pictured cases the UBS is minimal; Figure 1 shows a UBS that is reduced but not strongly reduced; the UBS of Figure 2 is strongly reduced and exhibits the decomposition just described.

We will require the following lemma.

Lemma 2.16. Let $X$ be a complete finite rank median space and let $\Omega$ be a minimal UBS. (1) $\Omega$ contains a reduced UBS equivalent to it. (2) There exists a strongly reduced UBS contained in $\Omega$. (3) Strongly reduced UBSs are reduced.

Proof. Halfspaces transverse to a diverging chain are transverse to an infinite subchain; this implies that the relevant subset is inseparable. Moreover, it cannot contain a diverging chain, since otherwise $\Omega$ would contain two inequivalent UBSs; this proves part (1). To obtain part (2), decompose $\Omega$ as in Lemma 2.2 as a union of sets, all but one of which contain no diverging chain; there exists an index such that the corresponding set is a strongly reduced UBS. Regarding part (3), decompose the totally ordered part containing a diverging chain; if an unbounded transverse family existed, it would be transverse to a diverging chain, and since $\Omega$ is minimal this yields a contradiction.

Transfer characters. Given $\xi\in\overline{X}$, denote by $\mathrm{Isom}_\xi(X)$ the group of isometries fixing $\xi$, and let $K$ be the kernel of the induced action on the relevant set of classes of UBSs. For a class of UBSs fixed by a subgroup, one defines by an explicit formula a homomorphism to $\mathbb{R}$, the transfer character; it depends only on the equivalence class and not on the set of representatives. One also considers the full transfer homomorphism obtained by assembling these characters.

Proposition 2.17. Let $X$ be a complete finite rank median space. (1) The relevant subgroup is open. (2) The full transfer homomorphism is continuous. (3) Every finitely generated subgroup of its kernel has a finite orbit in $\overline{X}$. (4) If $X$ is connected, every finitely generated subgroup of the kernel fixes a point.

Before proving Proposition 2.17 we need to obtain the following lemma; note that for every point and every halfspace the relevant set of halfspaces is a UBS.

Lemma 2.18. For every thick halfspace and every positive tolerance there exists a neighbourhood of the identity in $\mathrm{Isom}(X)$ whose elements displace the corresponding sets of halfspaces by at most the given tolerance in measure.

Proof. Pick a point in the interior and a neighbourhood of the identity as in Proposition 2.4; we conclude that for every tolerance there exists a suitable neighbourhood of the identity. Decompose the relevant set as in Lemma 2.2: the nonempty halfspace intervals contained in it form a subset of finite measure, and the intersection of the various pieces is a set of small measure consisting of halfspaces at uniformly bounded distance; it suffices to consider these.

In the proof of Proposition 2.17 we only need to prove that the subgroup is open and the homomorphism continuous; the rest of the statement is contained in the theorem quoted above. For every vertex of $\mathcal{G}$, the relevant vertices are precisely those at the end of an incoming edge: indeed, given a diverging
chain in a UBS representing the vertex, almost every halfspace of the chain can be chosen transverse to the chains of those vertices and no others. Let then a set of vertices consist of those reachable by a directed path of positive length ending at the given vertex; note that, by Proposition 2.15, it suffices to show that the subgroup fixing these classes pointwise is open and every transfer character continuous. We proceed by induction on the number of vertices, the base step being trivial. For the inductive step, consider the halfspace intervals from the setting of Lemma 2.18: the lemma provides a neighbourhood of the identity whose elements take each minimal UBS to one almost contained in it, hence project to elements fixing the corresponding class; since, by the inductive hypothesis, the smaller subgroup is open, openness follows, and continuity of the transfer characters is obtained by a similar argument.

2.2. Bridges. Let $M$ be a median algebra and let $C_1,C_2$ be two gate-convex subsets, fixed throughout this section; the following results are analogues of facts for CAT(0) cube complexes. We denote by $S_1\subseteq C_1$ and $S_2\subseteq C_2$ the sets of gates, i.e. points realising pairs of gates, and we refer to them as the shores; by Proposition 2.3 the compositions of gate-projections restrict to inverse bijections between the shores, and when $M$ arises from a median space this bijection is an isometry. The bridge is the set $B$, the union of the intervals between pairs of gates; the following facts are consequences of Proposition 2.3 and the observation above.

Proposition 2.19. The bridge is gate-convex; given a pair of gates, the bridge is canonically isomorphic to the product of a shore and the interval between the gates, and when $M$ arises from a median space this isomorphism is an isometry.

Proof. Pick a pair of gates $(x_1,x_2)$ and consider the morphism of median algebras sending a point of the bridge to the pair given by its projection to a shore and its projection to the interval; another pair of gates provides the isomorphism in the opposite direction. The observation above and the decomposition imply that the restriction is a bijective map, and surjectivity follows from Proposition 2.3. Every wall of the bridge arises either from a wall cutting the shores or from a wall cutting the interval, and the latter correspond to walls separating $C_1$ and $C_2$; this shows the product decomposition. When $M$ arises from a median space, the measure on halfspaces induces measures on the two factors, and the fact that the map is an isometry follows from the decomposition.

We extend the notion of strong separation to arbitrary subsets of median algebras: we say that gate-convex sets are strongly separated if they are disjoint and no halfspace is transverse to both. Note that this condition already implies that the shores consist of a single point. For a median space, two halfspaces are strongly separated in the sense of the previous section if and only if their closures are strongly separated according to this definition; see the following lemma for a stronger result. Proposition 2.19 implies that if two disjoint gate-convex sets are strongly separated, their shores are singletons; this yields the following result.

Corollary 2.20. Let $C_1,C_2$ be strongly separated gate-convex sets. Then there exists a unique pair of gates.

We will also need the following.

Lemma 2.21. Let $\mathfrak{h},\mathfrak{k}$ be strongly separated halfspaces. Then their closures are strongly separated subsets.

Proof. Since the halfspaces are disjoint, the closures of the corresponding sets are disjoint (see the proof quoted above). By Proposition 2.19 it suffices to prove that the shore is a singleton. Suppose for the sake of contradiction that it contains two distinct points and let their projections be given; in particular, by the argument at the beginning of the proof, the closures of suitable halfspaces intersect nontrivially. By Lemma 2.2, almost every halfspace separating the two points intersects both; considering complements, we conclude that almost every such halfspace is transverse to $\mathfrak{h}$, and similarly to $\mathfrak{k}$; since a positive measure set of such halfspaces exists, this contradicts the fact that $\mathfrak{h}$ and $\mathfrak{k}$ are strongly separated.

3. The Haagerup class

Let $X$ be a median space and $G$ a topological group. Given a Banach space $E$, we denote by $\mathrm{O}(E)$ the group of linear isometries of $E$. An isometric action $G\curvearrowright X$ corresponds to a measure preserving action on $(\mathscr{H},\nu)$, and we obtain a continuous representation $\pi$ of $G$ on $L^2(\mathscr{H},\nu)$; we simply write $gf$ for $\pi(g)f$, using the notations interchangeably, and denote by $H^*_c$ continuous cohomology. Given a point $x\in X$, consider the continuous map $b_x\colon G\to L^2(\mathscr{H},\nu)$ defined below; it satisfies the cocycle relation and a norm identity, and we refer to it as the Haagerup cocycle. Its cohomology class does not depend on the point $x$, and we simply denote it by $[b]$. If the action $G\curvearrowright X$ has bounded orbits, the affine action induced by $b_x$ fixes a point; this follows, for instance, from standard fixed point theorems; thus in that case $[b]=0$. We also consider the projection of $[b]$ to reduced continuous cohomology $\overline{H}^1_c(G,\pi)$, which carries interesting geometrical information (see Theorem A of the introduction); we refer to this class as the Haagerup class. The choice of $L^2$ is not particularly relevant and the discussion could equally be carried out in $L^p$ with minor complications; see the remark below. We conclude this section by collecting a few straightforward lemmata for later use; here $\mathcal{H}$ is a Hilbert space with a continuous unitary representation.

Lemma 3.1. Passing to an open subgroup, functoriality induces an injective map on reduced cohomology.

Lemma 3.2. Given an orthogonal decomposition of the representation, the projections onto the two factors induce a corresponding decomposition of reduced cohomology.
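The Haagerup cocycle just introduced can be sketched explicitly as follows; the normalisation $\|b_x(g)\|^2=2\,d(x,gx)$ is an assumption consistent with the extended metric convention adopted above:
\[
\pi(g)f=f\circ g^{-1},
\qquad
b_x(g)=\mathbb{1}_{\sigma_{gx}}-\mathbb{1}_{\sigma_x}\in L^2(\mathscr{H},\nu),
\qquad
\|b_x(g)\|^2=\nu\bigl(\sigma_x\,\triangle\,\sigma_{gx}\bigr)=2\,d(x,gx),
\]
and one checks directly that the cocycle relation $b_x(gh)=\pi(g)\,b_x(h)+b_x(g)$ holds, since $\sigma_{gx}=g\,\sigma_x$.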
Recall that $X'$ denotes the barycentric subdivision of $X$, with the standard inclusion $X\hookrightarrow X'$; we write $\mathscr{H}'$ for its halfspaces. Every isometric action $G\curvearrowright X$ also induces a continuous representation on $L^2(\mathscr{H}',\nu')$.

Lemma 3.3. Let $X$ be complete and of finite rank, with an isometric action $G\curvearrowright X$. The projection $\mathscr{H}'\to\mathscr{H}$ induces an isometric embedding $L^2(\mathscr{H},\nu)\hookrightarrow L^2(\mathscr{H}',\nu')$ and a monomorphism in reduced cohomology taking the Haagerup class of $G\curvearrowright X$ to the Haagerup class of $G\curvearrowright X'$.

Proof. The fact that we get an isometric embedding follows from the observation on atoms and hemiatoms; injectivity follows from Lemma 3.2 applied to the orthogonal complement; finally, the corresponding Haagerup cocycle of $X$ is taken to the Haagerup cocycle of $X'$ relative to the same point.

3.1. The Haagerup class and elementarity of actions: main statement. Let $X$ be a complete finite rank median space with an isometric action of a topological group $G$; the goal of this section is to prove Theorem A. By the lemmata above, it suffices to consider the case of actions without wall inversions, a standing assumption throughout the rest of the section.

Lemma 3.4. Suppose that $X$ is irreducible and that $G\curvearrowright X$ is Roller minimal and Roller nonelementary. Then there exists a free subgroup $F\leq G$ such that $L^2(\mathscr{H},\nu)$ has no $F$-almost-invariant vectors and $F$ has unbounded orbits in $X$.

Proof. Proposition 2.11 provides a free subgroup $F=\langle g,h\rangle$ and a measurable partition of a positive measure subset of $\mathscr{H}$ on which the translates under reduced words in $g,h$ behave like a ping-pong dynamics; it is immediate from the construction that $F$ acts with unbounded orbits. If there existed a sequence of almost invariant unit vectors $f_n$, we could define functions on the pieces of the partition and check that, for every group element, the displacement of these functions is controlled; thus the regular representation of $F$ would contain almost invariant vectors, implying amenability of $F$, a contradiction.

We can already prove half of Theorem A.

Proposition 3.5. If $G\curvearrowright X$ is Roller nonelementary, the Haagerup class does not vanish.

Proof. By functoriality of reduced cohomology it suffices to consider the case of $G$ discrete, so we need not worry about continuity issues. We proceed by induction on the rank, since actions on rank zero spaces are Roller elementary. For the rest of the proof we may also assume that the action is Roller minimal: indeed, it suffices to pass to the subsets provided by Proposition 2.8, where the action is without wall inversions and the rank may only decrease. A measurable partition of $\mathscr{H}$ induces an orthogonal decomposition of $L^2(\mathscr{H},\nu)$, and Lemma 3.2 provides a splitting of reduced cohomology; the projections of the Haagerup cocycle are precisely the Haagerup cocycles of the corresponding actions, so by Proposition 2.10 we may conclude, and we can thus assume for the rest of the proof that $X$ is irreducible. Lemma 3.4 provides a free subgroup with unbounded orbits and no almost invariant vectors; the first condition implies that the cocycle is unbounded and the second, by a theorem of Guichardet, that nonvanishing of the class is equivalent to nonvanishing of the reduced class, so these together yield the claim for the subgroup. If instead the representation splits off a nontrivial product, pass to the subgroup preserving the decomposition: writing the space as a product, Proposition 2.10 and Lemma 3.2 imply that one of the factor actions is Roller nonelementary, and since its rank is strictly smaller, the inductive hypothesis guarantees the nonvanishing; this concludes the proof.

To prove the rest of Theorem A we need to obtain a few more results.

Lemma 3.6. Stabilisers of points of $\overline{X}$ with finite orbit are open.

Proof. Suppose $g$ displaces a point with small displacement in the extended metric; by Proposition 2.4 we can find, denoting by $\pi$ the relevant projection, a neighbourhood of the identity whose elements satisfy the required estimates at finitely many points $x_i,y_i$; if membership failed, the consequence would contradict the choice of the tolerance. We conclude that the neighbourhood is contained in the stabiliser, which must therefore be open.

The proof of the following fact is rather lengthy and technical and is carried out in the appendix.

Proposition 3.7. Let $\mathcal{K}$ be a compact set of isometries acting trivially on the relevant classes of UBSs. Then there exists a point $\xi\in\partial X$ such that, up to null sets, the ultrafilter $\sigma_\xi$ coincides with a set built from strongly reduced minimal UBSs, compatibly under every $g\in\mathcal{K}$.

Given points and a UBS, define a function recording measures of the relevant halfspace intervals; the dependence on the point is particularly relevant, so we record it in the notation, and we consider the corresponding sets from the appendix. We obtain the following result; see the appendix for the proof.

Proposition 3.8. Suppose that the UBS is minimal and strongly reduced, and let $\mathcal{K}$ be a compact set of isometries as above. Then the functions converge uniformly on $\mathcal{K}$, and if the parameter diverges they converge to the expected limit function.

We are finally ready to complete the proof of Theorem A.

Proof of Theorem A. By Proposition 3.5 and the earlier lemmata it suffices to consider the case of an action with a finite orbit in $\overline{X}$, and we may actually assume that $G$ fixes a point $\xi\in\overline{X}$; suppose instead that the Haagerup class does not vanish. By Proposition 2.17, an open subgroup acts trivially on the classes of UBSs, and by Lemma 3.1 it suffices to consider this case. Fix $\varepsilon>0$ and a compact subset $\mathcal{K}\subseteq G$; we need to construct a function $f$ with $\|b_x(g)-(f-gf)\|<\varepsilon$ for $g\in\mathcal{K}$. Considering the point $\xi$ provided by Proposition 3.7, it suffices to find a function supported on the sets $\sigma_{x}\,\triangle\,\sigma_\xi$; considering equalities up to null sets, and introducing notation for the relevant subsets, we can rewrite $b_x(g)$ as a sum of characteristic functions of such sets. Consider then the truncated functions of Proposition 3.8: the proposition shows that it suffices to take the parameter sufficiently large.

Remark. Theorem A also holds for the analogous class in $L^p(\mathscr{H},\nu)$: indeed, Lemma 3.2 applies to decompositions of Banach spaces into closed subspaces, in the proof of Lemma 3.3 a closed complement is always provided by the subspace of functions taking opposite values on hemiatoms, Theorem A's argument goes through for representations on quite general Banach spaces, and finally the free group of Lemma 3.4 has no almost invariant vectors
for every value of $p\in[1,+\infty)$. This is of little importance for the material in the appendix; note, however, that Proposition 3.8 fails for $p=1$, where one needs to consider functions with quicker decay.

3.2. Elementarity and Shalom's property $H_{FD}$. Let $X$ be a complete finite rank median space and $G$ a topological group. The main result here is the following.

Proposition 3.9. If $G$ has property $H_{FD}$, every isometric action $G\curvearrowright X$ is Roller elementary.

We need the following lemma.

Lemma 3.10. Suppose that $X$ is irreducible and let $G\curvearrowright X$ be a Roller nonelementary, Roller minimal action. If a measurable subset of $\mathscr{H}$ is invariant, then either it is null or its complement is.

Proof. Without loss of generality we can assume that the action is without wall inversions, as Lemma 2.12 allows us to pass to the barycentric subdivision if necessary. Suppose both the set and its complement have positive measure. Since $X$ has finite rank, we can find a thick halfspace of positive measure density in the set; replacing it if necessary, the set satisfies the conditions of part (1) of Proposition 2.11 and we find a halfspace with part of a facing tuple inside it. By Proposition 2.11 there exist facing tuples whose translates give infinitely many pairwise disjoint sets of equal positive measure, all contained in a set of finite measure, a contradiction.

Proof of Proposition 3.9. Suppose for the sake of contradiction that some action is Roller nonelementary; without loss of generality we may assume that $X$ has minimal rank among complete median spaces admitting Roller nonelementary $G$-actions. In particular, $X$ must be irreducible (see the proof of Proposition 3.5), and by Proposition 2.8 we can also assume that the action is Roller minimal. Theorem A guarantees that the Haagerup class does not vanish, so, since $G$ has property $H_{FD}$, there exists a finite dimensional subrepresentation $V\subseteq L^2(\mathscr{H},\nu)$. We construct from it a measurable invariant subset of positive finite measure, thus violating Lemma 3.10. Let measurable functions be chosen whose equivalence classes form an orthonormal basis of $V$ and define the pointwise norm function; since membership in $V$ can be tested on a countable dense subset of $G$, we conclude that the relevant sets are measurable and essentially invariant; hence, for sufficiently small thresholds, there exist real numbers such that, outside a measure zero set, every $g$ takes the superlevel set to itself; we conclude that this set is invariant, of positive finite measure.

Recall that in the Gromov density model at low density, random groups are, with overwhelming probability, nonelementarily cubulated; together with Theorem A and Proposition 3.9, this immediately implies Corollary H. We also obtain the following.

Corollary 3.11. Let $\Gamma$ be a discrete group with property $H_{FD}$ acting freely and cocompactly on a CAT(0) cube complex. Then $\Gamma$ is virtually abelian.

Proof. Cocompactness of the action implies that the complex is finite dimensional, and by Propositions 3.9 and 2.17 there exists a finite index subgroup whose relevant normal subgroup consists of elliptic elements and whose quotient is abelian; since the action is free, the normal subgroup is trivial.

4. Superrigidity

4.1. The superrigidity result. Let $X$ be a complete median space of finite rank with an action of a discrete group $\Gamma$ by isometries.

Lemma 4.1. Suppose that thick halfspaces form a facing triple and are pairwise strongly separated. Then there exists a point whose distance to the relevant gates is uniformly controlled: whenever $\|\cdot\|$ of the data is bounded, so is the location of the point.

Proof. Let the intersection of the closures inside $\overline{X}$ be considered; it is nonempty, closed and convex. Given points in the three halfspaces, their median lies in the complement of each halfspace by convexity; hence, permuting the indices, we obtain the claim; in particular, denoting by $\pi$ the gate-projection to the closures, the pairwise strongly separated sets have singleton shores by Lemma 2.21 and Corollary 2.20, so the gates do not depend on the points chosen.

Lemma 4.2. Suppose that $X$ is irreducible and that $\Gamma\curvearrowright X$ is Roller nonelementary and Roller minimal. Given a sequence $g_n$ with unbounded displacement, for every finite measure set of halfspaces there exists a subsequence along which the translates eventually avoid it.

Proof. By Proposition 2.10, the barycentric subdivision of $X$ is an irreducible median space of the same rank, and the action on it is without wall inversions, Roller nonelementary and Roller minimal; as usual we write $\mathscr{H}$ for its halfspaces. Any $L^2$ function can be approximated by a linear combination of characteristic functions of halfspace intervals, and Proposition 2.4 implies that halfspace intervals have finite measure. Propositions 2.11 and 2.9 provide a thick halfspace strongly separated from a translate, which in particular contains every wall of the relevant set, and they also provide further pairwise strongly separated halfspaces at distance at least any prescribed bound; assume without loss of generality that the displacements diverge. Let the support of the approximating function be considered and let the maximum distance from the basepoint be fixed; for a suitable integer, repeated applications of the triangle inequality yield the required estimate. Now let a wall of the corresponding halfspace be considered: since it is contained in the halfspace and the action is without wall inversions, we conclude; a similar argument shows the claim for the other side. If the wall were contained in neither, in the former case the claim immediately follows, while in the latter we will derive a contradiction.
Indeed, the walls cannot intersect: if the wall were instead contained in the complement, strong separation would apply. Let the minimal index be chosen so that the corresponding halfspace contains the wall; since one side must be contained in the previous halfspace, either it lies inside or there exists a transverse halfspace, and strong separation together with minimality of the index forces the wall to be contained where claimed; otherwise we would violate the choice of the index. Consider now the intersection of the closures of the halfspaces in the chain: it consists of a single point, since any other point would give halfspaces transverse to almost all of the chain, violating strong separation; strong separation also implies that the point actually lies in the component under consideration. Given the approximating set, and removing finitely many elements if necessary, the displacements $\|g_n\|$ diverge; let a natural number be chosen accordingly; it was shown above that in this case strong separation implies that the relevant set consists of halfspaces whose corresponding walls are contained in the fixed halfspace; this shows that the lim sup vanishes and we conclude, in the topology of $\overline{X}$, for every subsequence.

Finally, we construct the required point: let thick halfspaces be strongly separated as in part (1) of Proposition 2.11, which provides a facing triple consisting of translates; choose thick halfspaces strongly separated from them using Proposition 2.11 again, and find the point provided by Lemma 4.1. In particular, since the set of data is closed under conjugation by the relevant elements, for every tolerance there exists an index beyond which the conclusion holds.

For the rest of the section, consider a locally compact group $G$ and a lattice $\Gamma\leq G$; a Borel fundamental domain defines a cocycle $\alpha\colon G\times\Gamma\to\Gamma$. We say that $\Gamma$ is square-integrable if it is finitely generated and, for some (equivalently, any) choice of Haar measure, the integral of $|\alpha(g,\cdot)|^2$ is finite, where $|\cdot|$ denotes the word length with respect to a finite generating set; integrability does not depend on these choices. Uniform lattices are always square-integrable; nonuniform examples were mentioned in the introduction (see the literature for details). We assume that $G$ splits as a product $G=G_1\times\dots\times G_\ell$ of compactly generated groups, and we also require the lattice to be irreducible, i.e. to project densely to each factor. Consider a unitary representation of $\Gamma$ and the induced representation of $G$; denote by $\mathcal{U}^{G_i}$ the subspace of vectors invariant under all factors other than $G_i$. We make use of the following result of Shalom in an essential way.

Theorem 4.3 (Shalom). Suppose that $\overline{H}^1$ of the induced representation does not vanish. Then there exist closed subspaces such that the restriction of the representation to each extends to a continuous representation of $G$ factoring through the projection onto one factor $G_i$; furthermore, suitable cocycles represent the class.

We first prove the following version of Theorem B, under stronger hypotheses.

Theorem 4.4. Suppose that $X$ is irreducible and that $\Gamma\curvearrowright X$ is Roller nonelementary and Roller minimal. Then there exists a closed median subalgebra $Y\subseteq X$ such that the action $\Gamma\curvearrowright Y$ extends to a continuous action $G\curvearrowright Y$; moreover, the extension factors through a projection $G\to G_i$.

Proof. Theorem A and Lemma 3.4 imply that the induced representation has nonvanishing reduced cohomology and no nonzero invariant vectors; thus Theorem 4.3 provides a subspace on which the action extends to a continuous action factoring through a projection. Pick a point and consider the set $Y$ introduced via Lemma 4.2: if a sequence of elements converges in $G$, the corresponding points lie at bounded distance, and Lemma 4.2 implies that the set $Y$ is nonempty; note that it is a median subalgebra, so the restriction of the metric gives it a structure of median space, complete since it is a closed subset of a complete median space. Finally, the relevant proposition provides the continuous extension, which factors as claimed.

If the assumption that the action be Roller minimal and Roller nonelementary is replaced by the stronger requirement that there be no finite orbit in the visual boundary of the associated CAT(0) space, the homomorphism to $\mathrm{Isom}(Y)$ provided by Theorem 4.4 is continuous with respect to the topology of pointwise convergence.

Remark. In the proof of Theorem 4.4, Lemma 4.2 actually yields a smaller nonempty set, and the homomorphism is continuous with respect to the topology on $\mathrm{Isom}(Y)$ generated by stabilisers of points; in the statement of Theorem 4.4 one can always take $Y$ to be closed in this topology. This might seem a lot finer than the topology of pointwise convergence; to clarify the phenomenon, we mention the following fact without proof: if an irreducible complete finite rank median space admits a Roller nonelementary, Roller minimal action, there exists a dense convex subset all of whose point stabilisers are open in the topology of pointwise convergence on $\mathrm{Isom}(Y)$; this essentially follows from Lemma 3.6.

Relaxing the hypotheses of Theorem 4.4, we obtain Theorem B for square-integrable lattices.

Corollary 4.5. Suppose that $\Gamma\curvearrowright X$ is Roller nonelementary. Then there exist a finite index subgroup of $\Gamma$, an invariant component $Z$ and a closed median subalgebra $Y\subseteq Z$ such that the action on $Y$ extends to a continuous action of an open finite index subgroup of $G$.

Proof. We proceed by induction on the rank, there being nothing to prove in rank zero; assume the statement holds for median spaces of lower rank. By Proposition 2.8 there exist a closed convex subset of a component on which the action is Roller minimal and Roller nonelementary; if the component is proper, its rank is strictly smaller and we conclude by induction.
Thus assume the action on $X$ itself is Roller minimal, and let $X=X_1\times\dots\times X_k$ be the splitting into irreducible factors. If $k=1$ the result follows from Theorem 4.4; otherwise, let $\Gamma'$ be the finite index subgroup preserving the splitting rather than permuting the factors. If some factor action is Roller elementary, a further finite index subgroup fixes a point; denote by $Z$ the component containing it. Note that $\Gamma'$ is an irreducible square-integrable lattice in an open finite index subgroup of $G$; since the rank of each factor is smaller, the inductive hypothesis yields, for each factor, a finite index subgroup, an open finite index subgroup of $G$, and a closed median subalgebra of a component. Let the intersections of all these be taken: the intersection of the sets is a closed median subalgebra of a component of $\overline{X}$, in particular a closed convex subset on which the action trivially extends to the required continuous action.

We describe two examples illustrating Theorem B: in the first, the subalgebra $Y$ cannot be taken to coincide with a convex subset; the second shows that passing to a finite index subgroup cannot be avoided, even when the action is Roller minimal. Both examples are actions on CAT(0) square complexes. Since Burger–Mozes groups play an important role in the construction, we briefly recall a few facts. Given an integer $n$, denote by $T$ the $n$-regular tree and by $A_n$ the group of even permutations of $n$ elements. Fix a legal colouring of $T$, i.e. a way of associating an integer to every edge so that the $n$ integers around each vertex are in bijection with $\{1,\dots,n\}$; every vertex $v$ then has a well defined local action. Let $U(A_n)\leq\mathrm{Isom}(T)$ be the subgroup of isometries whose local action at every vertex lies in $A_n$, and denote by $U(A_n)^+$ the subgroup of $U(A_n)$ generated by edge stabilisers, a finite index subgroup (see the literature). The subgroup $U(A_n)^+$ is closed in $\mathrm{Isom}(T)$, in particular locally compact, second countable and compactly generated. By a theorem of Burger and Mozes, there exists a uniform irreducible lattice $\Gamma\leq U(A_n)^+\times U(A_n)^+$ for every sufficiently large integer $n$. For the next two examples we fix such a lattice $\Gamma$, let $p_1,p_2$ denote the projections to the two factors, and set things up so that $\Gamma$ is an irreducible lattice in the open finite index subgroup; let moreover a homomorphism with the appropriate kernel be fixed.

Example 1. Given the tree $T$, blow up every edge to a square, thus obtaining a tree of squares $Q$ (pictured in the figure: adjacent squares share a vertex; each square has a pair of opposite vertices shared with other squares and a pair of opposite vertices that are leaves). The space $Q$ is a complete rank two median space, and $T$ embeds in $Q$ as a median subalgebra: the edges of $T$ correspond to the diagonals joining the shared pairs of vertices of the squares. We embed $\mathrm{Isom}(T)$ into $\mathrm{Isom}(Q)$ by extending each isometry so that its restriction to each square is orientation preserving; let $\rho\in\mathrm{Isom}(Q)$ be the isometry that fixes the image of $T$ pointwise and acts on each square as the reflection in the diagonal. Viewing the embedding as a homomorphism $\mathrm{Isom}(T)\to\mathrm{Isom}(Q)$, we define a twisted homomorphism into $\mathrm{Isom}(Q)$ using $\rho$, and denote by $\Phi$ the composition with the projection; the action of $\Gamma$ on $Q$ induced by $\Phi$ is Roller nonelementary and Roller minimal, and since $Q$ is irreducible, Theorem 4.4 guarantees a continuous extension on a median subalgebra: indeed, one can take the image of $T$. However, $Y$ cannot be taken to be a convex subspace, or even a subcomplex: indeed, convexity would force $Y$ to be the whole of $Q$, but the action cannot extend on $Q$ while factoring through one of the projections: whenever elements $\gamma_n$ of $\Gamma$ satisfy suitable convergence conditions, the corresponding sequence must diverge for one factor while it cannot for the other; the extension also cannot factor through the other projection, since the image is contained in the relevant closed subgroup of $\mathrm{Isom}(Q)$.

Example 2. Choose an element exchanging the two factors and consider the action of $\Gamma$ on $T\times T$ given by twisting with this element. Since the actions of the factors do not preserve any proper closed subtree, the same holds for this action; part (3) of Proposition 2.10 implies that $\Gamma$ does not leave any proper closed convex subset of $\overline{T\times T}$ invariant; note that a preserved component would correspond to a fixed point in one factor's boundary, hence to a fixed point, so we conclude that the action is Roller minimal; the same argument also shows that it is Roller nonelementary. One easily checks that the extended action on the whole product, in this setting, is also Roller minimal and Roller nonelementary. We show, however, that there exists no isometric embedding of a median space into $T\times T$ such that the action extends continuously while factoring through one of the factors. Let such an embedding be given; note that its image is entirely contained in one component, and in particular the previous discussion shows the following: by Lemma 2.2, every wall of the image arises from a wall of one of the two factors (see part (1) of Proposition 2.10); since the two factors are exchanged by the twisting element, we conclude that the image splits as a product, preserving the decomposition while exchanging the factors. Suppose now, for the sake of contradiction, that the action extends, factoring through one of the two factors as in Example 1; such an extension would factor via one projection; however, since $\Gamma$ is dense in the relevant group and preserves the splitting, part (2) of Proposition 2.10 implies that the extension would also preserve the splitting; this contradicts the fact that the twisting element exchanges the two factors.
fact exchanges conclude section proving theorem proof theorem begin observing part follows part proposition suppose sake contradiction admits roller nonelementary action proof proposition assume irreducible roller minimal theorem yields factor closed median subalgebra actions without loss generality assume closure inside remark stabilisers points open thus identity component must fix pointwise dense entire action vanishes descends action group since satisfies condition proposition corollary imply action roller elementary however lemma actions roller nonelementary contradiction homomorphisms coarse median groups defined equivariantly coarse median groups introduction simply prove corollary proof corollary fix ultrafilter let corresponding ultrapower endow word metric arising finite generating set given denote asymptotic cone obtained taking basepoints identity sequence scaling factors let denote metric induces geodesic metric preserved natural action coarse median induces structure finite rank median algebra see section denote corresponding median map action automorphisms median algebra structure propositions endow median metric bilipschitz equivalent preserved furthermore median algebra structure associated given map suppose sake contradiction exist pairwise nonconjugate homomorphisms correspond homomorphism hence action every asymptotic cone preserves median metric construction superrigidity actions finite rank median spaces provides sequence modifying within conjugacy class necessary induced action global fixed point however contradicts theorem appendix structure ubs let complete median space finite rank fix points let minimal reduced ubs lemma let isometry satisfying consider ubs proof observe fix diverging chain halfspaces lie pairwise transverse thus exists either hence every either suppose set cofinite subchain diverging chain hence sufficiently large since reduced particular conclude lies proposition guarantees diverging chain let inseparable closure ubs equivalent satisfies observe since conclude argument applied shows exists proves part shows latter case every sufficiently large since reduced thus corollary let isometry define previous lemma consider ubs rest appendix also consider compact subset every lemma exists constant every lies every minimal reduced ubs exists ubs disjoint elia fioravanti proof let diverging chain thick every cofinite subchain contained since reduced exists proposition assume lemma provides neighbourhood particular exist maximum let inseparable closure ghn every sufficiently large since reduced shows contained since exists constant every lies prove part let supremum distances since let maximum distance consider max define set existed would thus part implies contradicts fact recall introduced function defined formula sets observe whenever particular measurable say small otherwise large cat cube complex every ubs large example small ubs rank two median space appears figure small every isometry fixing note supremum precisely lemma let halfspaces diverging chain halfspaces proof fact diverging chain follows fact reduced implication let diverging chain since suffices consider case small every set measure proposition large hence exists particular since arbitrary shows diverging chain lemma every set ubs proof since reduced contains almost every halfspace diverging chain provides diverging chain inseparability follows monotonicity superrigidity actions finite rank median spaces prove part decompose lemma hence lemma implies inseparable closure measure conclude 
lemma assume particular constant every proof indeed take maximum exists due proposition regarding part observe lemma assume strongly reduced every exists constant ubs proof decompose totally ordered inclusion contains diverging chain pick halfspaces part lemma halfspace transverse diverging chain thus part lemma guarantees halfspaces transverse lie uniformly bounded distance conclude exists every contained hence also proves part part immediate consequence rest section assume minimal strongly reduced lemma suppose elia fioravanti exists constant every halfspace every exists constant whenever proof first observe given minimal ubs isometry since note thus equals consider set corollary part lemma exists constant lies every part lemma part lemma conclude whenever maximum distances prove part let part part lemma provides constant lies every let supremum values max lemma satisfies max hence particular observe choice constant thus conversely clear consider functions superrigidity actions finite rank median spaces lemma assume every exists constant instead proof observe analyse three summands separately lemma part lemma part lemma exists constant lies every particular part lemma maximum exists proposition finally part lemma part lemma instead previous discussion shows large conclusion follows applying lemma let ubs let pairwise inequivalent reduced ubs representing minimal equivalence classes ubs almost contained every set exist increasing sequences inseparable proof proceed induction lemma immediate suppose without loss generality assume corresponds vertex incoming edges full subgraph elia fioravanti vertices fix construct satisfying inseparability condition pick diverging chain replacing cofinite subchain assume halfspace transverse almost every lemma exists contained inseparable closure consequence every every halfspaces transverse almost every halfspaces lying inseparable closure neither uniformly bounded distance part proposition say distances bounded enlarge lie part proposition exists ubs contained equivalence classes minimal ubs almost contained inductive hypothesis lemma imply find inseparable inseparable would exist halfspaces would lie particular observe otherwise would transverse diverging chains thus moreover halfspace must transverse almost every since lie inseparable since fact reduced implies choice contradiction lemma exists ubs every ubs form null set proof let pairwise inequivalent minimal ubs representing minimal elements assume reduced lemma halfspaces transverse diverging chain lie uniformly bounded distance part proposition say distances bounded let consist ubs equivalent observe ubs equivalent halfspaces indeed otherwise would contain halfspace inseparability would therefore transverse diverging chain hence contradiction superrigidity actions finite rank median spaces given consider set observe ultrafilter indeed since suffices check whenever halfspaces disjoint would contradicting observation made since finally lemma decompose set particular since disjoint hence proposition implies exists null thus measure zero ready prove proposition proof proposition let pairwise inequivalent ubs representing minimal elements poset lemma assume strongly reduced replacing smaller ubs part lemma guarantees assume part lemma lemma provides constants inseparable thus ubs equivalent part proposition conclude lemma enlarging constants necessary possible lemma references goulnara arzhantseva graham niblo nick wright jiawen zhang characterization asymptotic dimension growth jacek brodzki sarah 
campbell erik guentner graham niblo nick wright property cat cube complexes funct bachir bekka pierre harpe alain valette kazhdan property volume new mathematical monographs cambridge university press cambridge jason behrstock cornelia mark sapir addendum median structures asymptotic cones homomorphisms mapping class groups proc lond math soc jason behrstock cornelia mark sapir median structures asymptotic cones homomorphisms mapping class groups proc lond math soc mladen bestvina degenerations hyperbolic space duke math mladen bestvina mark feighn stable actions groups real trees invent uri bader alex furman boundaries rigidity representations lyapunov exponents uri bader tsachik gelander nicolas monod fixed point theorem spaces invent elia fioravanti martin bridson haefliger metric spaces curvature volume grundlehren der mathematischen wissenschaften fundamental principles mathematical sciences berlin jason behrstock mark hagen alessandro sisto hierarchically hyperbolic spaces combination theorems distance formula jason behrstock mark hagen alessandro sisto hierarchically hyperbolic spaces curve complexes cubical groups geom jason behrstock mark hagen alessandro sisto quasiflats hierarchically hyperbolic spaces marc burger shahar mozes groups acting trees local global structure inst hautes sci publ marc burger shahar mozes lattices product trees inst hautes sci publ marc burger nicolas monod continuous bounded cohomology applications rigidity theory geom funct brian bowditch coarse median spaces groups pacific brian bowditch invariance coarse median spaces relative hyperbolicity math proc cambridge philos brian bowditch rigidity properties mapping class groups preprint brian bowditch properties median metric spaces groups geom caprace amenable groups hadamard spaces totally disconnected isometry group comment math indira chatterji cornelia median geometry spaces measured walls groups indira chatterji cornelia haglund kazhdan haagerup properties median viewpoint adv yves cornulier pierre harpe metric geometry locally compact groups volume ems tracts mathematics european mathematical society ems winner ems monograph award indira chatterji talia alessandra iozzi median class superrigidity actions cat cube complexes appendix caprace caprace alexander lytchak infinity finitedimensional cat spaces math caprace nicolas monod isometry groups nonpositively curved spaces structure theory cherix florian martin alain valette spaces measured walls haagerup property property ergodic theory dynam systems caprace bertrand simplicity superrigidity twin building lattices invent caprace bertrand twin building lattices geom dedicata montserrat ilya kazachkov limit groups partially commutative groups group actions real cubings geom superrigidity actions finite rank median spaces caprace michah sageev rank rigidity cat cube complexes geom funct yves cornulier romain tessera alain valette isometric group actions hilbert spaces growth cocycles geom funct yves cornulier romain tessera alain valette isometric group actions banach spaces representations vanishing infinity transform groups patrick delorme des unitaires des groupes lie produits tensoriels continus bull soc math france dilworth decomposition theorem partially ordered sets ann math thomas delzant pierre cubulable groups talia boundary cat cube complexes elia fioravanti roller boundaries median spaces algebras elia fioravanti tits alternative finite rank median spaces talia alain valette sequence graphs groups property first number victor gerasimov 
groups actions cubings algebra geometry analysis mathematical physics russian novosibirsk pages izdat ross akad nauk sib otd inst novosibirsk erik guentner nigel higson weak amenability cat groups geom dedicata gromov asymptotic invariants infinite groups geometric group theory vol sussex volume london math soc lecture note pages cambridge univ press cambridge alain guichardet sur cohomologie des groupes topologiques bull sci math thomas haettel higher rank lattices coarse median algebr geom thomas haettel hyperbolic rigidity higher rank lattices haglund isometries cat cube complexes mark hagen simplicial boundary cat cube complex algebr geom mark hagen corrigendum simplicial boundary cat cube complex haglund paulin groupes automorphismes espaces courbure epstein birthday schrift volume geom topol pages geom topol coventry mark hagen tim susse hierarchical hyperbolicity cubical groups marcin kotowski kotowski random groups property theorem revisited lond math soc elia fioravanti bruce kleiner new proof gromov theorem groups polynomial growth amer math aditi kar michah sageev ping pong cat cube complexes comment math bernhard leeb characterization irreducible symmetric spaces euclidean buildings higher rank asymptotic geometry volume bonner mathematische schriften bonn mathematical publications bonn mathematisches institut bonn florian martin reduced connected locally compact groups applications lie theory ashot minasyan new examples groups acting real trees nicolas monod superrigidity irreducible lattices geometric splitting amer math bogdan nica group actions median spaces amos nevo michah sageev poisson boundary cat cube complex groups groups geom graham niblo nick wright jiawen zhang four point characterisation coarse median spaces yann ollivier sharp phase transition theorems hyperbolicity random groups geom funct yann ollivier daniel wise cubulating random groups density less trans amer math narutaka ozawa functional analysis proof gromov polynomial growth theorem paulin outer automorphisms hyperbolic groups small actions arboreal group theory berkeley volume math sci res inst pages springer new york bertrand construction acad sci paris bertrand integrability induction cocycles groups math martin roller poc sets median algebras group actions extended study dunwoody construction sageev theorem preprint university southampton michah sageev ends group pairs curved cube complexes proc london math soc yehuda shalom rigidity commensurators irreducible lattices invent yehuda shalom harmonic analysis cohomology geometry amenable groups acta jan spakula nick wright coarse medians property property kazhdan constants discrete groups geom funct rudolf zeidler coarse median structures homomorphisms kazhdan groups geom dedicata
| 4 |
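As a concrete anchor for the median map invoked throughout the text above: R^n with the l1 metric is the textbook example of a median space of rank n, and its median map is the coordinatewise median. The sketch below is illustrative only (it is not taken from the paper; names and values are invented for the example) and checks numerically that the median of three points lies on an l1-geodesic between each pair. In this example the halfspaces are exactly the coordinate half-spaces {x : x_i <= t}, which is the picture behind the walls and halfspaces manipulated in the row.

import numpy as np

# a minimal sketch, assuming only the textbook fact (not taken from the paper
# above) that R^n with the l1 metric is a median space of rank n whose median
# map m(a, b, c) is the coordinatewise median
def l1(x, y):
    return float(np.abs(x - y).sum())

def median_map(a, b, c):
    return np.median(np.stack([a, b, c]), axis=0)

rng = np.random.default_rng(0)
a, b, c = rng.integers(-5, 6, size=(3, 4)).astype(float)
m = median_map(a, b, c)
# the median lies on an l1-geodesic between each pair of the three points:
# d(x, m) + d(m, y) == d(x, y) for every pair (x, y)
for x, y in [(a, b), (b, c), (a, c)]:
    assert abs(l1(x, m) + l1(m, y) - l1(x, y)) < 1e-12
print(m)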
massive mimo performance comparison beamforming multiplexing terahertz band sayed amir ming mahbub oct school computer science engineering university new south wales sydney australia csiro sydney australia email paper compare performance two main mimo techniques beamforming multiplexing terahertz thz band main problem thz band huge propagation loss caused tremendous signal attenuation due molecule absorption electromagnetic wave overcome path loss issue massive mimo suggested employed network expected provide tbps distance within meters context beamforming studied recently main technique take advantage mimo thz overcome high path loss assumption thz communication channel los significant multipath rays hand recent studies also showed absorbed energy molecules reradiated immediately frequency signal correlated main signal provide rich scattering paths communication channel means significant mimo multiplexing gain achieved even los scenario thz band simulation results reveal surprising observation mimo multiplexing could better choice mimo beamforming certain conditions thz communications ntroduction respond huge increasing demand wireless data traffic recently terahertz thz band thz envisioned make tbps wireless link feasible spite wide unused bandwidth spectrum high propagation loss main issue using spectrum thus potential applications thz link limited short range communications nanosensors wireless communications wireless personal area networks moreover part radio signal attenuation thz frequencies due molecular absorption frequency selective increases total loss frequencies distance basically overcome high path loss transmit power could largely increased unfortunately feasible current technology limited alternately channel gain significantly improved means beamforming technique indeed due small footprint large number antennas thz band beamforming using large scale multiple input multiple output mimo systems considered field practical solution provide channel gain thz however beamforming comes cost system complexity signaling overhead transmitter receive channel state information continuously align beam receiver hand achieve significant mimo beamforming gain high frequency spectrum beam would become narrow sometimes described pencil beam makes beamforming vulnerable mobility difficult perform beam short time interval another approach take advantage mimo mimo multiplexing technique beamforming technique strives focus transmission energy achieve large channel gain specific direction multiplexing technique builds strength creating parallel information channels however multiplexing gain significant enough multipath signal components rich scattering environment huge path loss thz communication usually assumed applied los dominant channel thus research focus beamforming rather multiplexing however recent studies show channel medium molecules absorb electromagnetic energy thz band transforms los channel environment usually considered noise theorical model shows highly correlated main signal paper theoretically investigate thz channel capacity cases beamforming multiplexing mimo find multiplexing technique provide considerable capacity gain comparison beamforming technique certain conditions also conditions beamforming yields higher capacity multiplexing technique still preferable choice due easier implementation note work assume multiplexing technique using blind precoding scheme without channel state information csi contrast beamforming technique always requires accurate csi smartly direct 
energy spatial domain rest paper structured follows section present molecular absorption model calculation channel transfer function section iii analyzes mimo channel model considering molecular followed simulation results section finally conclude paper section channel model mimo capacity molecular absorption model defines different species molecules communication channel absorb energy electromagnetic signals back environment section first explains concept absorption coefficient used characterize absorption capacity given molecule species followed attenuation models built upon coefficient molecular absorption coefficient medium absorption coefficient frequency weighted sum molecular absorption coefficients medium formulated molecular absorption coefficient species condition temperature pressure obtained hitran work get values use predefined standard atmosphere conditions corresponding ratio molecules air tabulated attenuation radio signal attenuation radio signal thz frequencies due spreading molecular absorption detail spreading attenuation given molecular existing molecules communication medium excited electromagnetic waves specific frequencies excitement temporary energy level molecules come back steady state absorbed energy frequency waves usually considered noise literature molecular absorption white power spectral density psd flat different resonant frequencies various species molecules psd molecular absorption noise affects transmission signal snabs contributed atmospheric noise noise addressed snabs absorption coefficient medium frequency reference temperature boltzmann constant power spectral density transmitted signal speed light first term called sky noise defined independent signal wave however noise highly correlated signal wave considered distorted copy signal wave thus equation revised received power reradiated signal molecules receiver since phase wave depends phase molecular vibration varies molecules molecules received power case affected large number photons thus assume uniformly distributed random phase received signal power given channel transfer function channel transfer function single los channel given absorption coefficient medium frequency thus los received power receiver becomes los partial channel transfer function resulted molecular absorption excluding los component represented aspread speed light attenuation due molecular absorption characterized aabs hence total channel transfer function superposition partial channel transfer functions written mimo channel model capacity paper consider mimo system consisted transmitting antennas receiving ones received signal vector receiving antennas formulated transmitted signal vector form transmitting antennas vector independent noises variance channel matrix elements complex value denoting transfer coefficient associated jth transmitter antenna ith receiver antenna note obtained frequency distance dij capacity mimo channel written $C = \log_2 \det\big(\mathbf{I}_{n_r} + \frac{P}{n_t\sigma^2}\,\mathbf{H}\mathbf{H}^{*}\big)$ $P$ total transmitting power $\mathbf{I}_{n_r}$ identity matrix since determinant $\det\big(\mathbf{I}_{n_r} + \frac{P}{n_t\sigma^2}\,\mathbf{H}\mathbf{H}^{*}\big)$ computed product eigenvalues matrix $\mathbf{H}\mathbf{H}^{*}$ mimo capacity thus written form $C = \sum_i \log_2\big(1 + \frac{P}{n_t\sigma^2}\,s_i^2\big)$ $s_i$ denotes singular values matrix $\mathbf{H}$ hence squared singular values $s_i^2$ denotes eigenvalues matrix $\mathbf{H}\mathbf{H}^{*}$ characterize equivalent information channel corresponding ratio snr channel receiver note denotes number beamforming technique equal one multiplexing technique could rank min however use blind precoding uniform power allocation multiplexing technique therefore equation valid uniform power allocation transmitter furthermore equivalent channel snr meet
minimum receiver threshold reliably detectable receiver paper assumed snr threshold uniform power allocation transmitter main difference beamforming multiplexing techniques tune exploit eigenvalue distribution details beamforming technique aims maximize improve channel snr single data stream multiplexing technique uniform eigenvalue distribution preferable way multiplexing technique utilize parallel data streams mimo maximize data rate complexity beamforming comes eigenvalues tuning means channel state information csi measured sent back transmitter periodically optimum precoding also results protocol overhead channel hand multiplexing gain take advantage eigenvalue distribution even blind precoding beneficial rich scattering environment channel next section discuss provide rich scattering environment iii analysis channel molecular absorption analyze mimo channel capacity characterize scattering richness channel quantitatively lets decompose normalize channel transfer function $\mathbf{h} = \sqrt{\kappa/(\kappa+1)}\,\mathbf{h}_{\rm los} + \sqrt{1/(\kappa+1)}\,\mathbf{h}_{\rm nlos}$ $\mathbf{h}_{\rm los}$ normalized corresponding channel gain $\mathbf{h}_{\rm nlos}$ uniformly distributed random phase received signal elements independent identically distributed complex gaussian random variables zero mean unit magnitude variance $\kappa$ ratio powers los signal components assume channel distance much longer antenna space obtained los rician channel model called rician equivalently shows much channel rich term scattering multipath rays equation shows function absorption coefficient channel medium distance transmitter receiver longer distance higher absorption result smaller shown figure capacity mimo channel considering rician studied several works authors showed lower bound rician channel expected capacity large number antennas expected capacity channel considering nlos component denotes expectation clear lower bound increasing function absorption coefficient [fig: emin for mimo capacity using beamforming and for mimo capacity using multiplexing, each an increasing function of distance and of the absorption coefficient; illustrates how the performance gain of the multiplexing and beamforming techniques is affected; capacity calculated for the mimo system] simulation discussion simulation section evaluate molecular absorption impact thz mimo capacity consider simple mimo system square uniform arrays transmitter receiver spacing equal half wavelength channel distance moreover consider uniform power allocation transmitter arrays operating los scenario default values parameters listed table different values explained necessary since apply random phases nlos components created molecular conduct evaluation mimo capacity molecular times show average result use online browsing plotting based hitran databases generate absorption coefficients different single gas predefined standard gas mixture atmosphere sea level shown table since water molecules play main roles normal air environment thz bands use highest lowest water ratio table usa model high latitude winter usa model tropics corresponding absorption coefficients thz bands shown figure ambient temperature sea level pressure atm tropic atmosphere water ratio higher winter atmosphere thus see significant increase absorption coefficient among two gas mixtures simulation assume constant transmit power entire frequency spectrum display mimo capacity thz bands consider mimo antennas side uniform square planar array aim compare beamforming multiplexing [table: simulation parameters; transmitter receiver distance, spacing, transmitter arrays angle, receiver arrays angle, number of arrays per side, transmit power (dbm), noise power (dbm), wave length]
calculate channel capacity beamforming totally ignored channel next beamforming capacity taken account finally multiplexing gain calculated without consideration scenarios capacity obtained first step simulation run ghz practical range absorption coefficient thz spectrum shown figure noted actual value absorption coefficient ghz shown figure beamforming multiplexing techniques capacity calculated range distance transmit power secondly channel simulated two different transmit power three distances realistic absorption coefficients assumption transmit power based current technology previous work thz massive mimo furthermore distances chosen cover various application scenarios example thz nanosensors considered communicate short distance order less thz communications also nominated provide terabit per second ultra high video communication link around distance home entertainment devices like virtual reality addition longer distances meters characterize wireless personal local networks simulation results presented figure mimo capacity figure illustrates channel transformed los dominant channel rayleigh channel effects mimo beamforming multiplexing capacity gain seen figure beamforming gain decreasing absorption coefficient increases high absorption channel los dominant anymore significant nlos signal component generated molecule equivalently lower contrast figure shows multiplexing technique takes advantage higher absorption reach huge data rate however low snr limit multiplexing gain longer distances drops sharply zero beyond figure results thz spectrum realistic absorption coefficients presented [table: atmosphere standard gas mixture ratio (percentage) for different climates; usa model mean latitude summer, usa model mean latitude winter, usa model high latitude summer, usa model high latitude winter, usa model tropics] mimo capacity transmit power distance channel attenuation including molecular attenuation spreading attenuation illustrated figure spreading attenuation increasing linearly distance frequency molecular attenuation also increasing distance frequency selective example total loss ghz total attenuation ghz grows mostly high absorption water molecules channel medium frequency note channel atmosphere case tropic data ratio water molecules air shown table figure illustrate capacity investigated transmission techniques distance transmit power increased figure figure seen huge performance difference exists multiplexing beamforming thanks tremendous multiplexing gain provided rich scattering environment due molecule furthermore high absorption frequencies existing studies consider infeasible windows thz communications significant capacity improvement observed absorption leads transforms los dominant channel rayleigh channel details found section iii discussed decreases creates rich scattering environment sum improves multiplexing gain fundamentally supported better eigenvalue distribution channel matrix rank mathematical analysis figure distance increased relatively large distance thz communications seen beamforming gain comparable multiplexing gain however see multiplexing gain high absorption windows ghz significantly higher rest spectrum transmit power different story transmit power capacity drops zero high absorption windows equivalent snr parallel channels created multiplexing technique less practically parallel channels useless receiver reliably detect received signals results surprising since shown several works conventional communication band multiplexing performance drops dramatically low snr
however considering implementation challenges beamforming multiplexing technique might still preferable choice frequency thz example observed figure thz capacity multiplexing beamforming techniques respectively finally figures present results distance distance path loss leads low reception snr thus beamforming performance significantly better multiplexing performance wellknown beamforming technique effective strong multipath rays thus observed high absorption frequency windows beamforming performance drops sharply receiving strong nlos rays caused molecule also due los signal attenuation note multiplexing technique take advantage windows high snr discussed figure conclusion paper compared beamforming multiplexing techniques mimo terahertz band showed high snr high transmit power lower distance multiplexing technique provide considerable capacity gain compared beamforming however beyond meters meters enough transmitting power possibility use multiplexing technique otherwise capacity drops zero beamforming technique still provide effective spectrum efficiency cost complexity protocol overhead theoretical model also showed molecules thz band helpful massive mimo system improve channel performance using multiplexing technique provide significantly strong multipath components achieve full spatial multiplexing gain receiver enough snr coverage means high absorption frequency windows formerly pointed feasible communication might preferable choices mimo certain applications references akyildiz jornet realizing mimo communication terahertz band nano communication networks vol zarepour hassan chou adesina semon sensorless event monitoring wireless nanosensor networks acm transactions sensor networks tosn vol akyildiz jornet han terahertz band next frontier wireless communications physical communication vol kokkoniemi juntti discussion molecular absorption noise terahertz band nano communication networks jornet akyildiz modulation terahertz band communication nanonetworks ieee transactions communications vol may [fig: absorption coefficient (atm) and signal attenuation versus frequency (ghz), usa model tropics giving high absorption and usa model high latitude winter low; fig: mimo channel performance in the tropic atmosphere; capacity versus frequency (ghz) for beamforming and multiplexing at several transmit powers] jornet montana fundamentals electromagnetic nanonetworks terahertz band dissertation georgia institute technology rothman gordon babikov barbe molecular spectroscopic database journal quantitative spectroscopy radiative transfer vol barron molecular light scattering optical activity cambridge university press tse viswanath chapter mimo spatial multiplexing channel modeling fundamentals wireless communication farrokhi foschini lozano valenzuela processing multiple transmit receive antennas ieee communications letters vol lebrun faulkner shafi smith mimo ricean channel capacity asymptotic analysis ieee transactions wireless communications vol gesbert shafi shiu smith theory practice overview mimo coded wireless systems ieee journal selected areas communications vol
| 7 |
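The capacity comparison above rests on two standard expressions whose displayed forms did not survive: blind uniform-power multiplexing gives C = sum_i log2(1 + (P / (n_t sigma^2)) s_i^2) over the squared singular values s_i^2 of the channel matrix H, while beamforming puts all power on the top singular mode, C = log2(1 + (P / sigma^2) s_max^2). The sketch below is an illustration under assumed settings, not the paper's simulation: the 16x16 array, the SNR of 100, the rank-one all-ones LOS matrix and the kappa values are all invented for the example. It reproduces the qualitative finding of the text: as the scattered, molecule re-radiated fraction of a Rician channel grows (kappa falls), multiplexing overtakes beamforming.

import numpy as np

def capacities(H, snr):
    """Beamforming vs. blind uniform-power multiplexing, in bits/s/Hz."""
    nt = H.shape[1]
    s = np.linalg.svd(H, compute_uv=False)  # singular values of H
    c_bf = float(np.log2(1.0 + snr * s[0] ** 2))                # all power on top mode
    c_mux = float(np.sum(np.log2(1.0 + (snr / nt) * s ** 2)))   # uniform power split
    return c_bf, c_mux

def rician(nr, nt, kappa, rng):
    """kappa = LOS/NLOS power ratio; the rank-one LOS part is an idealisation."""
    h_los = np.ones((nr, nt), dtype=complex)
    h_nlos = (rng.standard_normal((nr, nt))
              + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2.0)
    return np.sqrt(kappa / (kappa + 1.0)) * h_los + np.sqrt(1.0 / (kappa + 1.0)) * h_nlos

rng = np.random.default_rng(1)
for kappa in (100.0, 1.0, 0.01):  # LOS-dominant through scattering-rich
    c_bf, c_mux = capacities(rician(16, 16, kappa, rng), snr=100.0)
    print(f"kappa={kappa:7.2f}  beamforming={c_bf:7.2f}  multiplexing={c_mux:7.2f}")

With a near-rank-one channel (large kappa) only one strong eigenmode exists and beamforming wins; with rich scattering the singular values spread out and the uniform split across parallel channels dominates, which is the eigenvalue-distribution argument the row makes.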
emptiness algorithm regular types set operators arxiv nov lunjin john cleary department computer science university waikato hamilton new zealand phone lunjin jcleary abstract algorithm decide emptiness regular type expression set operators given set parameterised type definitions presented algorithm also used decide equivalence two regular type expressions inclusion one regular type expression another algorithm strictly generalises previous work tuple distributivity assumed set operators permitted type expressions keywords type emptiness prescriptive type introduction types play important role programming languages make programs easier understand help detect errors types introduced logic programming forms type checking inference type analysis typed languages recent logic programming systems allow programmer declare types predicates type errors detected either compile time run time reader referred details types logic programming type possibly infinite set ground terms finite representation integral part type system type language specifies sets ground terms types useful types closed intersection union complement operations decision problems emptiness type inclusion type another equivalence two types decidable regular term languages called regular types satisfy conditions used widely used types type systems use tuple distributive regular types strictly less powerful regular types tuple distributive regular types regular types closed tuple distributive closure intuitively tuple distributive closure set terms set terms constructed recursively permuting argument position among terms function symbol paper gives algorithm decide type expression denotes empty set terms correctness algorithm proved complexity analysed algorithm works prescriptive types prescriptive types mean meaning type determined given set type definitions allow parametric overloading polymorphism type definitions prescriptive types useful compilers program manipulation tools debuggers easy understand programmers type expressions may contain set operators usual interpretations thus algorithm used decide equivalence two type expressions inclusion one type expression another introduction set operators type expressions allows concise intuitive representation regular types though using regular term languages types allow make use theoretical results field tree automata algorithms testing emptiness tree automata applied directly type definitions may parameterised instance order decide emptiness type expression given set type definitions would necessary construct tree automaton type expression set type definitions algorithm determining emptiness tree automaton used type definitions parameterised would make necessary construct different automaton time emptiness type expression tested thus algorithm works directly type definitions desirable avoids repeated construction automata attempts made past find algorithms regular types knowledge dart zobel work one present decision algorithms emptiness inclusion problems prescriptive regular types without tuple distributive restriction unfortunately decision algorithm inclusion problem incorrect regular types general see counterexample moreover type language dart zobel less expressive considered paper since allow set operators parameterised type definitions set constraint solving also used type checking type inference however set constraint solving methods intended infer descriptive types rather testing emptiness prescriptive types therefore useful different settings algorithm presented paper 
moreover algorithms proposed set constraint solving applicable emptiness problem considered paper take type definitions account remainder paper organised follows section describes language type expressions type definitions section presents algorithm testing type expression denotes empty set terms section addresses algorithm section presents complexity algorithm section concludes paper lemmas presented appendix type language let fixed ranked alphabet symbol called function symbol fixed arity assumed contains least one constant function symbol arity arity symbol denoted arity may considered set function symbols program let set terms set possible values program variable take shall use regular term languages types type represented ground term constructed another ranked alphabet called type constructors assumed thus type expression term denotations type constructors determined type definitions whilst fixed denotations given soon several equivalent formalisms tree automata regular term grammars regular unary logic programs used define regular types define types type rules type rule production rule form different type parameters restriction every type parameter righthand side type rule must occur lefthand side type rule often referred type preserving used type definition formalisms note overloading function symbols permitted function symbol appear righthand sides many type rules def denote set type rules define restricted form term grammar example let nil cons nat even list defines natural numbers even numbers lists nat nat even even list nil cons list instance nat nat abbreviation two rules nat nat nat called simplified production rule form either form shall assume simplified loss generality use simplified set type rules since every set type rules simplified introducing new type constructors rewriting adding type rules spirit example following simplified version set type rules example nil cons nat even odd list nat nat even odd odd even list nil cons list type valuation mapping instance production rule obtained replacing occurrence type parameter list cons list instance list cons list type valuation maps let def ground ground set ground instances grammar rules plus rules form every given set type definitions type denoted type expression determined following meaning function def def def def def def gives fixed denotations interpreted set intersection set union set complement respect denotes empty set example let example nat even list cons nil cons nil lemma appendix states every type expression denotes regular term language regular type extend sequences type expressions follows def def hei empty sequence infix sequence concatenation operator hei sequence consisting type expression cartesian product operator sequence type expressions thought consisting zero instance use denote sequence consisting zero instance define shall call sequence type expressions simply sequence sequence expression expression consisting sequences length length sequences sequence expression called dimension denoted let sequence expressions length def def def times conjunctive sequence expression sequence expression form sequences emptiness algorithm section presents algorithm decides type expression denotes empty set respect given set type definitions algorithm also used decide denotation one type expression included denotation another included iff empty first introduce terminology notations type atom type expression principal type constructor set operator type literal either type atom complement type atom conjunctive type 
expression form type literal let type atom defined set principal function symbols terms def ground let define def ground finite even though ground usually finite algorithm repeatedly reduces emptiness problem type expression emptiness problems sequence expressions reduces emptiness problem sequence expression emptiness problems type expressions tabulation used break possible loop ensure termination let type def expression sequence expression define empty two reduction rules shall first sketch two reduction rules add tabulation form algorithm initially algorithm decide validity formula form empty type expression first reduction rule rewrites formula form conjunction formulae following form reduction rule one empty sequence expression applied type expressions sequence expression obvious type expression unique modulo equivalence denotation disjunctive normal form let dnf disjunctive normal form empty written empty conjunctive type expression assume contains least one positive type literal cause loss generality conjunctive type expression also assume contain repeated occurrences type literal let type atoms def set positive type literals denoted pos set complemented type atoms denoted def neg lit denotes set literals occurring lemma appendix empty equivalent empty intuition behind equivalence follows empty iff every function symbol set sequences terms empty function symbols need considered note following two special cases formula formula true true particular thus pos hence formula true neg thus effect subformula order get rid complement operators sequence subexpressions complement operator pushed inwards function push defined following def push push def push def push follows morgan law definition push substituting push formula gives rise formula form second reduction rule rewrites formula form conjunction disjunctions formulae form formula written disjunction formulae form empty reduction rule two conjunctive sequence expression case lemma appendix empty decided without reduction empty true otherwise empty false case empty equivalent empty def letting component note type expression empty form algorithm two reduction rules previous section form core algorithm however alone used algorithm formula empty may reduce formula containing empty leading nontermination suppose null null null clearly empty null true however first reduction rule empty null reduces empty hnulli reduces empty null second reduction rule process terminate solution inspired remember table particular kind formulae truth tested formula kind tested table first looked formula implied formula table determined true otherwise formula added table reduced reduction rule emptiness algorithm presented remembers every conjunctive type expression emptiness tested thus table set conjunctive type expressions let def conjunctive type expressions define lit lit since implies hence empty implies empty adding tabulation two reduction rules obtain following algorithm testing emptiness prescriptive regular types let bcf push def etype etype def etype dnf conj def etype conj pos neg true true otherwise def eseq dnf conj def eseq conj true false equation initialises table empty set equations implement first reduction rule equations implement second reduction rule etype etype conj test emptiness arbitrary type expression conjunctive type expression respectively eseq tests emptiness sequence expression consisting sequences operators eseq conj tests emptiness conjunctive sequence expression expression emptiness tested passed first argument functions 
table passed second argument used etype conj detect conjunctive type expression emptiness implied emptiness tabled conjunctive type expression shall show later ensures termination algorithm four binary functions returns true iff emptiness first argument implied second argument set type definitions tabling kind expressions arbitrary type expressions also ensure termination however tabling conjunctive type expressions makes easier detect implication emptiness one expression another lit easily computed given conjunctive type expression implementation conjunctive type expression table represented lit first two definitions etype conj equation terminates algorithm emptiness decided without using type definitions first definition also excludes table conjunctive type expression contains type atom complement examples illustrate algorithm examples example let type definitions given example tree figure depicts evaluation etype algorithm nodes labeled function calls identity node label arcs node children labeled number equation used evaluate node abbreviations used labels defined legend right tree though syntactically different type expressions evaluation returns true verifying consider etype conj lit lit thus equation etype conj true etype etype etype conj eseq eseq eseq conj eseq conj true etype etype conj true legend fig evaluation etype example let type definitions given example tree figure depicts evaluation etype list algorithm evaluation returns false verifying list indeed list nil rightmost node evaluated sibling returns false enough establish falsity parent node etype etype etype conj eseq eseq eseq conj false legend list fig evaluation etype list example following simplified version type definitions used show incorrectness algorithm dart zobel testing inclusion one regular type another let let see example details verified algorithm follows let applying equations order etype etype conj equation etype eseq eseq eseq choose simplify expressions make example easy follow applying equations eseq true eseq true etype eseq let show etype false suffices show eseq conj false equation dnf etype eseq figure depicts evaluation eseq conj node linked parent dashed line evaluated one siblings returns false sufficient establish falsity parent clear figure etype conj false hence etype false correctness section addresses correctness algorithm shall first show tabulation ensures termination algorithm table finite size establish partial correctness algorithm etype conj etype etyp etyp conj etype conj eseq eseq eseq eseq eseq conj eseq conj eseq conj true false false legend fig evaluation etype conj termination given type expression type atom type atom type atom set type atoms denoted tla instance letting nat ree tla list nat ree extend tla sequences def tla tla given type expression evaluation tree etype contains nodes form etype etype conj eseq eseq conj addition root etype nodes form etype conj add conjunctive type expressions table forms nodes pass table around therefore suffices show type atoms occurring first argument nodes finite set conjunctive type expression added table first argument node form etype conj set rta type atoms relevant type expression smallest set type atoms satisfying tla rta rta ground tla rta height ground thus height type atom rta finite finite number type constructors thus rta finite size follows examining algorithm type atoms first argument nodes evaluation tree etype rta finite therefore algorithm terminates partial correctness partial correctness algorithm established showing etype 
true iff empty let set conjunctive type def expressions define empty following two lemmas form core proof partial correctness algorithm lemma let set conjunctive type expressions type expression conjunctive type expression sequence expression conjunctive sequence expression empty empty empty empty etype conj true etype true etype true etype true proof proof done induction size complement respect set possible conjunctive type expressions type atoms rta type expression basis complement empty contains possible conjunctive type expressions type atoms rta hence etype conj true equation therefore holds follows equation follows equation lemma appendix follows equation induction lemma appendix empty implies empty bcf thus empty bcf complement smaller complement induction hypothesis eseq bcf true equation etype conj true therefore holds follows equation follows equation lemma appendix follows equation completes proof lemma lemma establishes completeness etype etype conj eseq eseq conj following lemma establishes soundness lemma let set conjunctive type expressions type expression conjunctive type expression sequence expression conjunctive sequence expression empty empty empty empty etype conj true etype true etype true etype true proof suffices prove since follow lemma proof done induction depth evaluation tree etype conj basis etype conj true implies either pos neg case empty true empty consider case definition etype conj true implies empty induction assume etype conj true lemma bcf bcf bcf induction hypothesis etuple bcf false otherwise bcf equation etype conj false contradicts etype conj true empty etype conj true completes induction proof lemma following theorem corollary lemmas theorem type expression etype true iff empty proof equation etype etype lemma lemma etype true iff empty result follows since true complexity address issue complexity algorithm consider time complexity algorithm time spent evaluating etype given type expression measured terms number nodes evaluation tree etype algorithm cycles etype etype conj eseq eseq conj thus children node form etype form etype conj let number elements given set largest possible table evaluation etype contains conjunctive type expressions type atoms rta therefore table contain conjunctive type expressions height tree bounded show branching factor tree also bounded equation number children etype bounded two power number type atoms bounded contain type atoms rta equation number children etype conj bounded largest number children node eseq bounded two power number sequences bcf neg arity thus number sequences arity hence number children eseq since arity constant equation number children eseq conj bounded maxf arity therefore branching factor tree bounded discussion leads following conclusion proposition time complexity algorithm fact algorithm exponential time expected complexity coincides complexity deciding emptiness tree automaton constructed type expression type definitions deterministic tree automaton recognising consist states observed proof lemma decision emptiness language deterministic tree automaton takes time polynomial number states tree automaton therefore complexity algorithm best expect algorithm deciding emptiness regular types contain set operators conclusion presented algorithm deciding emptiness prescriptive regular types type expressions constructed type constructors set operators type definitions prescribe meaning type expressions algorithm uses tabulation ensure termination though tabulation inspired dart zobel decision problem 
consider paper complex type expressions may contain set operators reason algorithm also used inclusion equivalence problems regular types way use tabulation leads correct algorithm regular types algorithm proved incorrect regular types general best knowledge algorithm correct algorithm prescriptive regular types addition correctness algorithm generalises work dart zobel type expressions contain set operators type definitions parameterised parameterised type definitions natural monomorphic type definitions set operators makes type expressions concise combination two features allows natural type declarations instance type logic program append declared inferred append list list list algorithm exponential time coincides deciding emptiness language recognised tree automaton constructed type expression type definitions however algorithm avoids construction tree automaton constructed priori type definitions parameterised another related field set constraint solving however set constraint solving methods intended infer descriptive types rather testing emptiness prescriptive type therefore useful different settings gorithm presented paper addition algorithms proposed solving set constraints applicable emptiness problem considered paper take example constructor rule states emptiness equivalent emptiness however empty list equivalent empty latter true former false since list nil constructor rule apply deals function symbols take type definitions account references aiken kozen vardi wimmers complexity set constraints proceedings computer science logic conference pages aiken lakshman directional type checking logic programs charlier editor proceedings first international static analysis symposium pages aiken wimmers solving systems set constraints proceedings seventh ieee symposium logic computer science pages ieee computer society press aiken wimmers type inclusion constraints type inference proceedings conference functional programming languages computer architecture pages copenhagen denmark june beierle type inferencing polymorphic logic programs sterling editor proceedings twelfth international conference logic programming pages mit press cardelli wegner understanding types data abstraction polymorphism acm computing surveys codish lagoon type dependencies logic programs using aciunification proceedings israeli symposium theory computing systems pages ieee press june comon dauchet gilleron lugiez tison tommasi tree automata techniques applications draft dart zobel efficient type checking typed logic programs journal logic programming dart zobel regular type language logic programs frank pfenning editor types logic programming pages mit press devienne talbot tison set constraints membership expressions jaffar editor proceedings joint conference symposium logic programming pages mit press fruhwirth shapiro vardi yardeni logic programs types logic programs proceedings sixth annual ieee symposium logic computer science pages ieee computer society press gallagher waal fast precise regular approximations logic programs bruynooghe editor proceedings eleventh international conference logic programming pages mit press steinby tree automata steinby tree languages rozenberg salomma editors handbook formal languages pages hanus horn clause programs polymorphic types semantics resolution theoretical computer science heintze jaffar finite presentation theorem approximating logic programs proceedings seventh annual acm symposium principles programming languages pages acm press heintze jaffar decision procedure 
class set constraints technical report university february later version paper proc ieee symposium lics heintze jaffar semantic types logic programs frank pfenning editor types logic programming pages mit press heintze jaffar set constraints analysis alan borning editor principles practice constraint programming volume lecture notes computer science springer may ppcp second international workshop orcas island seattle usa jacobs type declarations subtype constraints logic programming sigplan notices type analysis logic programs presence type definitions proceedings acm sigplan symposium partial evaluation program manipulation pages acm press polymorphic type analysis logic programs abstract interpretation journal logic programming cleary algorithm testing regular type inclusion technical report department computer science university waikato october http mishra towards theory types prolog proceedings ieee international symposium logic programming pages ieee computer society press mycroft keefe polymorphic type system prolog artificial intelligence frank pfenning editor types logic programming mit press cambridge massachusetts reddy types logic programs debray hermenegildo editors logic programming proceedings north american conference pages mit press soloman type definitions parameters conference record fifth acm symposium principles programming languages pages tiuryn type inference problems survey roven editor proceedings fifteenth international symposium mathematical foundations computer science pages yardeni fruehwirth shapiro polymorphically typed logic programs furukawa editor logic programming proceedings eighth international conference pages mit press yardeni shapiro type system logic programs journal logic programming zobel derivation polymorphic types prolog programs lassez editor logic programming proceedings fourth international conference pages mit press appendix lemma let conjunctive type expression empty iff empty proof let sequence terms function symbol definition iff iff thus empty iff empty lemma let conjunctive sequence expression empty iff kempty proof let iff iff iff empty lemma regular term language type expression proof proof done constructing regular term grammar first consider case let hrta ground rta regular term grammar suffices prove iff sufficiency assume proof done induction derivation steps basis must constant implies ground definition induction suppose induction hypothesis hence definition necessity assume proof done height denoted height height implies constant implies ground hence therefore let height implies ground definition definition rta rta induction hypothesis therefore consider case complete proof induction height height contain set operator already proved regular term language suppose height contain set operator lemma already proved principal type constructor one set operators result follows immediately regular term languages closed union intersection complement operators suffices prove case let different new type constructor arity let regular term language contain set operators induction hypothesis regular term language definition regular term language set terms obtained term replacing occurrence possibly different term syj completes induction proof proof also indicates tree automaton recognises states deterministic tree automaton recognises states
| 6 |
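A much smaller cousin of the emptiness test above, and the one behind its closing comparison with tree-automaton emptiness, is the least-fixpoint computation sketched below. This is not the paper's algorithm: it assumes monomorphic type definitions with no set operators (so none of the etype/eseq machinery is needed), and the rules format is invented for the example. A type is nonempty iff some production for it has every argument type already known to be nonempty.

def nonempty_types(rules):
    """rules: dict mapping a type name to a list of (constructor, [arg type names]).
    Returns the set of type names with a nonempty denotation, computed as the
    least fixpoint: start from nothing and keep adding any type that has at
    least one production whose argument types are all already known nonempty."""
    known, changed = set(), True
    while changed:
        changed = False
        for t, prods in rules.items():
            if t not in known and any(all(a in known for a in args)
                                      for _c, args in prods):
                known.add(t)
                changed = True
    return known

rules = {
    "nat":  [("zero", []), ("s", ["nat"])],
    "list": [("nil", []), ("cons", ["nat", "list"])],
    "bad":  [("c", ["bad"])],  # no base case, so its denotation is empty
}
print(sorted(nonempty_types(rules)))  # ['list', 'nat']

The tabulation described above plays the dual role of this fixpoint's known set: instead of growing a set of provably nonempty types, it records the conjunctive expressions currently under test and deems a revisited one empty, which is what stops etype from looping on recursive definitions such as bad here.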
study allan variance processes haotian guerrier roberto molinari yuming zhang allan variance widely used quantity areas focusing error measurement well general analysis variance autocorrelated processes domains engineering specifically metrology form quantity widely used detect noise patterns indications stability within signals however properties quantity known commonly occurring processes whose covariance structure cases erroneous interpretation could lead misleading conclusions paper generalizes theoretical form processes time valid also weakly stationary processes simulation examples show new form help understand processes able distinguish stationary cases hence allow better interpretation quantity applied cases index sensor calibration longitudinal studies haar wavelet variance heteroscedasticity introduction allan variance widely used quantity areas going engineering physics interest studying stochastic stability error measurements various instruments among others clocks oscillators usefulness resides fact provides extremely informative summary variance time series generally autocorrelated processes especially infinite variance indeed underlined better measure uncertainty compared standard methods moving average variance processes random walks fractional average arfima models considerably useful also stationary processes processes well known form help detect kind process example plot observed signal behaviour forms stationary processes studied used detect understand process underlying signal issued different voltage measurements however many applications interest [author footnote: phd student geneva school economics management university geneva switzerland; guerrier assistant professor department statistics pennsylvania state university usa; molinari visiting assistant professor department statistics applied probability university california santa barbara usa; zhang graduate student department statistics pennsylvania state university usa] detection noise terms characterising inertial sensors see many others see overview however although extremely useful settings known behaves presence types processes whether able distinguish paper intend investigate form particular class processes includes processes constant mean structure example generalized autoregressive conditional heteroscedasticity garch models see processes specific forms mean considered since either dealt statistical regression techniques simply detected particular focus processes characterized dependence structure blocks since common settings longitudinal studies sensor calibration navigation engineering latter cases often approximated known stationary processes example process whose often approximated firstorder autoregressive process see example moreover clear whether actually help distinguish processes processes form currently known latter aspect particular relevance since could lead erroneous interpretation observed process example assuming stationarity case reaching false conclusions order deal mentioned processes paper intends study theoretical form covariance structure consequent advantage study considering varying covariance structure definition extends applicability approaches make use raises awareness limitations inappropriate interpretation distinguishing identifying processes stationary ones mind section briefly defines describes theoretical form processes considered section iii introduces new theoretical form overlapping processes whose covariance structure shows form stationary processes special case new form section three case
studies presented highlight importance findings order better interpret processes finally section concludes overview llan variance cov mean estimate different kind processes see example however many commonly encountered processes whose known unclear point actually distinguished stationary processes next section delivers general form includes processes studies quantity actually helpful detecting depends solely distance observations process variance consequently define autocorrelation function iii llan variance ean tationary rocesses introduce let first define weakly stationary discrete time regularly spaced stochastic process constant mean autocovariance function defined follows consider computed dyadic scales starting local averages process denoted therefore determines number consecutive observations considered average process constant mean implies also mean based averages following moav avarn underlined previous sections particularly useful measuring uncertainty processes especially infinite variance nevertheless forms properties unknown consist processes constant mean independent time covariance structure implies covariance function observations distance also function time therefore denoted type process common different areas going engineering see economics see study theoretical form class processes let first define following vector consecutive observations starting whose corresponding estimator given avar denotes sample equivalent based realization process another version noav whose estimator however statistically efficient moav see app details keeping mind definitions moav delivered general theoretical form quantity applied weakly stationary processes given avarn based equation exact form different stationary processes general class autoregressive moving average arma models derived moreover provided theoretical nonstationary processes random walk arfima models mentioned earlier represents better measure uncertainty compared methods using known theoretical forms therefore possible detect distinguish different processes based pattern due quantity similar quantities haar wavelet variance used contains observations used build average using vector define matrix var define matrix cov matrices represent covariance matrices obser vations contained consecutive average indeed represents covariance matrix observations within average used definitions represents two sets observations visual representation quantities given app section consider moav form noav given app based matrices also define different quantities according matrix reference lags observations specifically let first consider case interested lags overlapping nature observations lags belong sets observations within matrix matrix sets observations within matrix define following quantity cov cov however observations considered lags among set observations within matrix define quantity cov finally considering lags set observations lags considered within matrix final case define quantity cov definitions seen generalized definitions autocovariance consists average autocovariance given lag must underlined definitions equivalent covariance function correspond average function times specified provide following lemma emma moav given avarn proof lemma given app considering expression aspect must underlined definitions functions given earlier simplify autocovariance function dealing weakly stationary process latter case form consequently reduces expression detailed discussion given app final note result also highlighted considered cases estimators 
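To make the maximal-overlap estimator described above concrete, here is a minimal Python sketch, assuming the standard form avar(n) = (1/(2(N - 2n + 1))) * sum_t (xbar_{t+n} - xbar_t)^2 with local averages of n = 2^j consecutive observations at dyadic scales; the function name moav and its interface are illustrative choices, not the authors' code.

```python
import numpy as np

def moav(x, max_level=None):
    """Maximal-overlap Allan variance of a 1-D signal at dyadic scales.

    For scale n = 2^j, form all overlapping averages of n consecutive
    observations and take half the mean squared difference of averages
    that start n steps apart.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    max_level = max_level or int(np.log2(N)) - 1
    scales, av = [], []
    for j in range(1, max_level):
        n = 2 ** j
        if 2 * n >= N:
            break
        # overlapping local averages; length N - n + 1
        xbar = np.convolve(x, np.ones(n) / n, mode="valid")
        d = xbar[n:] - xbar[:-n]        # averages n steps apart
        scales.append(n)
        av.append(0.5 * np.mean(d ** 2))
    return np.array(scales), np.array(av)
```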
moav defined noav see app necessarily expectation underlined points general definition given lemma investigate properties assuming process interest within class processes treated paper next sections report simulation studies regarding processes attempt understand also whether useful quantity detect distinguish weakly stationary processes cases simulated process length except process simulated times estimated represented plots along theoretical stationary forms order understand behaviour different assumptions white noise first process study white noise intend without loss generality processes whose variance changes time evolution variance time either completely random follow specific fig logarithm moav white noise process scales estimated moav lines theoretical moav black line dots theoretical stationary based average variance red line triangles parametric model example garch process goal studying types processes understand whether able detect structure time series distinguished stationary white noise process purpose true process considered simulation study generated following model theoretical stationary form based average variances used simulate processes example fig represents estimated avs along theoretical forms stationary case seen theoretical forms correspond estimated avs closely follow quantities example confirms therefore unable distinguish stationary white noise process white noise process whose secondorder behaviour process commonly known process engineering domain specifically inertial sensor calibration navigation characteristic process consists different concatenated sequences blocks within block realization random variable repeated constant formally let represent set time indices belong iid ith block within time series let define process one realization process illustrated top panel fig length block since theoretical form process known exactly often approximated autoregressive process although approximation useful nevertheless still approximation using form given lemma obtain theoretical form process represented bottom panel fig fig top realization process length block bottom logarithm moav process scales estimated moav lines theoretical moav black line dots theoretical stationary moav approximating biasinstability moav red line triangles fig top realization autoregressive process length block bottom logarithm moav firstorder autoregressive process scales estimated moav lines theoretical moav black line dots theoretical stationary moav assuming block structure red line triangles process indeed latter plot shows estimated avs closely follow theoretical form given earlier red line represents stationary process supposed approximate true latter result averaging theoretical stationary process estimated via simulated processes clear although close scales approximation good enough considering logarithmic representation therefore knowing exact form process would allow better interpret signals characterised autoregressive processes final example consider process similarly process within paper define process process whose parameters fixed made concatenated time periods blocks observations within block generated independently blocks example given settings longitudinal studies subject measured time although subjects independent measurements explained autocorrelated process within subject define process formally let denote following parameter vector iid let denote ith block process defined independent defining simulation study top panel fig shows realization process bottom panel fig 
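As an illustration of the block-constant (bias-instability) process described in this case study, the following hypothetical snippet simulates one realization and evaluates its empirical AV with the moav sketch given earlier; the block length, sample size, and seed are arbitrary choices rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def bias_instability(n_samples, block_len, sigma=1.0):
    """Block-constant process: within each block of block_len
    observations the signal equals one N(0, sigma^2) draw, repeated."""
    n_blocks = -(-n_samples // block_len)      # ceiling division
    levels = rng.normal(0.0, sigma, size=n_blocks)
    return np.repeat(levels, block_len)[:n_samples]

x = bias_instability(10_000, block_len=100)
scales, av = moav(x)    # moav() from the earlier sketch
```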
illustrates results simulations particular process observed stationary form consider block structure close estimated avs form provided paper adequately represents process therefore allow distinguish stationary autoregressive process one onclusions within paper wanted underline issue concerning yet studied indeed behaviour commonly occurring settings covariance structure processes unknown many cases either ignored dealt approximations consequence latter approaches would probably consist erroneous interpretations conclusions drawn analysis reason paper studied form class processes thereby generalizing form also weakly stationary processes based several examples provided properties studied highlighting ability detect processes eventually distinguish stationary ones making researchers practitioners aware issues related interpretation use quantity general common settings trix var eferences bollerslev generalized autoregressive conditional heteroskedasticity journal econometrics hou niu analysis modeling inertial sensors using allan variance ieee transactions instrumentation measurement gallegati semmler wavelet applications economics finance vol springer guerrier skaloud stebler estimation composite stochastic processes journal american statistical association percival wavelet perspective allan variance ieee transactions ultrasonics ferroelectrics frequency control percival guttorp processes allan variance wavelets wavelets geophysics unsal demirbas estimation deterministic stochastic imu error parameters position location navigation symposium plans ieee zhang allan variance time series models measurement data metrologia ppendix raphical illustration moav graphically illustrate quantities defined section iii fig represents true covariance matrix given process highlights related matrix overlapping square matrices along diagonal composed quantities defined fig graphical illustration matrices estimator less efficient moav mainly based fewer averages therefore smaller sample size define theoretical form noav non stationary processes interest first define vector consecutive observations starting xjn using define matrices follows var var cov appendix matrices graphically represented fig opposed moav matrices overlap along diagonal covariance matrix process let denote averages matrices respectively ppendix heoretical form noav tationary rocesses moav corresponding estimator quantity given avar noav defined avarn define follows emma define noav avarn proof lemma proof lemma direct definitions indeed var var cov trix var fig graphical illustration matrices noav using obtain avarn based earlier defined matrices moav also define different quantities according matrix reference lags observations specifically let first consider case interested lags observations lags belong sets observations within matrix sets observations within matrices define following quantity cov however observations considered lags among set observations within matrix define quantity cov finally considering lags set observations lags considered within matrix final case define quantity concludes proof ppendix proof lemma order prove lemma let denote averages matrices respectively define follows cov based definitions moav case definitions seen var var generalized definitions autocovariance consists average autocovariance given lag using cov notations definitions provide following result using obtain avarn concludes proof
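The non-overlapping variant (NOAV) described in the appendix uses disjoint rather than overlapping block averages, which is why it is statistically less efficient: fewer terms enter each average. A matching sketch, under the same assumptions and naming conventions as the moav sketch above:

```python
import numpy as np

def noav(x, max_level=None):
    """Non-overlapping Allan variance: averages over disjoint blocks of
    length n, differenced between adjacent blocks."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    max_level = max_level or int(np.log2(N)) - 1
    scales, av = [], []
    for j in range(1, max_level):
        n = 2 ** j
        m = N // n                      # number of disjoint block averages
        if m < 2:
            break
        xbar = x[: m * n].reshape(m, n).mean(axis=1)
        d = xbar[1:] - xbar[:-1]        # adjacent non-overlapping averages
        scales.append(n)
        av.append(0.5 * np.mean(d ** 2))
    return np.array(scales), np.array(av)
```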
| 10 |
accepted ieee transactions cybernetics learning subspace using domain features independence maximization jun yan kou david zhang fellow ieee adaptation algorithms useful distributions training test data different paper focus problem instrumental variation drift field sensors measurement viewed discrete continuous distributional change feature space propose maximum independence domain adaptation mida mida smida address problem domain features first defined describe background information sample device label acquisition time mida learns subspace maximum independence domain features reduce discrepancy distributions feature augmentation strategy also designed project samples according backgrounds improve adaptation proposed algorithms flexible fast effectiveness verified experiments synthetic datasets four realworld ones sensors measurement computer vision greatly enhance practicability sensor systems well extend application scope existing domain adaptation algorithms uniformly handling different kinds distributional change index reduction domain adaptation drift correction independence criterion machine olfaction transfer learning ntroduction many machine learning problems labeled training data source domain test ones target domain samples two domains collected different conditions thus different distributions labeling samples target domain develop new prediction models often therefore domain adaptation transfer learning needed improve performance target domain leveraging unlabeled maybe labeled target samples topic receiving increasing attention recent years due broad applications computer vision text classification also important field sensors measurement variations work partially supported grf fund hksar government central fund hong kong polytechnic university nsfc fund shenzhen fundamental research fund key laboratory network oriented intelligent computation shenzhen china yan department electronic engineering graduate school shenzhen tsinghua university shenzhen china yankethu kou department computing hong kong polytechnic university kowloon hong kong cslkou zhang shenzhen graduate school harbin institute technology shenzhen china also department computing biometrics research centre hong kong polytechnic university kowloon hong kong csdzhang fabrication sensors devices responses signal source may identical different instruments known instrumental variation furthermore sensing characteristics sensors operating condition even signal source change time leads complex drift result prediction model trained samples initial device earlier time period source domain suitable new devices latter time target domains typical application plagued problem machine olfaction uses electronic noses pattern recognition algorithms predict type concentration odors applications machine olfaction range agriculture food environmental monitoring robotics biometrics disease analysis however owing nature chemical sensors many prone instrumental variation drift mentioned greatly hamper usage applications traditional methods dealing two kinds drift drift correction methods hereinafter require set transfer samples predefined gas samples needed collected device time period often used learn regression models map features target domain source domain nevertheless collecting transfer samples repeatedly demanding job especially nonprofessional users cases domain adaptation techniques unlabeled target samples desirable intuitive idea reduce discrepancy feature level learn feature representation example pan proposed transfer 
component analysis tca finds latent feature space minimizes distributional difference two domains sense maximum mean discrepancy related methods introduced section applied drift correction however existing domain adaptation algorithms faced two difficulties first designed handle discrete source target domains drift however samples come stream change data distribution often continuous one solution split data several batches lose temporal order information second variation sensitivity chemical sensors signal different conditions may indicate different concepts words conditional probability may change samples different backgrounds background means device sample collected methods like accepted ieee transactions cybernetics tca project samples common subspace hence samples similar appearance different concepts distinguished paper present simple yet effective algorithm called maximum independence domain adaptation mida algorithm first defines domain features sample describe background finds latent feature space samples domain features maximally independent sense independence criterion hsic thus discrete continuous change distribution handled uniformly order project samples according backgrounds feature augmentation performed concatenating original feature vector domain features also propose mida smida exploit label information hsic mida smida flexible applied situations single multiple source target domains thanks use domain features fact notion domain extended background informative although designed unsupervised domain adaptation problems labeled sample target domains proposed methods naturally allow unlabeled labeled samples domains thus applied unlabeled labeled samples target domains supervised labeled samples target domains problems well label information either discrete classification continuous regression illustrate effect algorithms first evaluate several synthetic datasets drift correction experiments performed two datasets one spectroscopy dataset note spectrometers suffer instrumental variation problem finally domain adaptation experiment conducted object recognition benchmark results confirm effectiveness proposed algorithms rest paper organized follows related work unsupervised domain adaptation hsic briefly reviewed section section iii describes domain features mida smida detail experimental configurations results presented section along discussions section concludes paper information samples binary domain labels viewed primitive version domain features used paper also minimized negated mutual information target samples cluster labels reduce expected classification error transfer subspace learning ltsl algorithm presented reconstruction guided knowledge transfer method aligns source target data representing target sample local combination source samples projected subspace label geometry information retained embedding different subspace learning methods ltsl another class methods first project source target data separate subspaces build connections fernando utilized transformation matrix map source subspace target one subspace represented eigenvectors pca geodesic flow kernel gfk method measures geometric distance two different domains grassmann manifold constructing geodesic flow infinite number subspaces combined along flow order model smooth change source target domain liu adapted gfk correct timevarying drift sample stream first split batches according acquisition time first latest batches domains connected every intermediate batch using gfk another improvement gfk domain 
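Since the biased empirical estimate HSIC = tr(Kx H Ky H) / (n - 1)^2 is the working criterion in what follows, a short sketch may help; the rbf_kernel helper and its default bandwidth are illustrative assumptions, not prescribed by the paper.

```python
import numpy as np

def rbf_kernel(X, gamma=None):
    """Gaussian RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    gamma = 1.0 / X.shape[1] if gamma is None else gamma
    return np.exp(-gamma * d2)

def hsic(Kx, Ky):
    """Biased empirical HSIC: tr(Kx H Ky H) / (n - 1)^2,
    with centering matrix H = I - (1/n) * 1 1'."""
    n = Kx.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(Kx @ H @ Ky @ H) / (n - 1) ** 2
```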
adaptation shifting covariance dasc observing modeling one domain subspace sufficient represent difference distributions dasc characterizes domains covariance matrices interpolates along geodesic bridge domains independence criterion hsic hsic used convenient method measure dependence two sample sets let two kernel functions associated rkhss respectively pxy joint distribution hsic defined square norm operator cxy hsic pxy kcxy elated ork unsupervised domain adaptation two good surveys domain adaptation found section focus typical methods extract features order reduce discrepancy preserving useful information researchers developed many strategies algorithms project samples common latent space transfer component analysis tca tries learn transfer components across domains reproducing kernel hilbert space rkhs using maximum mean discrepancy extended tca sstca encode label information preserve local geometry manifold shi measured domain difference mutual expectation independent pairs drawn pxy proved characteristic kernels hsic pxy zero independent large hsic suggests strong dependence respect choice kernels hsic biased empirical estimate suppose kernel matrices respectively hsic hky centering matrix due simplicity power hsic adopted feature extraction feature selection accepted ieee transactions cybernetics researchers typically use maximize dependence features label however knowledge utilized domain adaptation reduce dependence extracted features domain features iii roposed ethod domain feature aim reduce dependence extracted features background information sample background information naturally exist thus easily obtained different distributions training test samples correlate distribution original features domain label domain sample belongs common domain adaptation problems example information according characteristics information clearly interferes testing performance prediction model thus minimizing aforementioned dependence desirable first group new features need designed describe background information features called domain features perspective drift correction two main types background information device label device sample collected acquisition time sample collected actually encode information place collection operation condition useful domain adaptation problems formally consider instrumental variation following coding scheme used suppose ndev devices result ndev different related domains domain feature vector thus rndev sample pth device otherwise drift also considered acquisition time added sample collected pth device time otherwise according kernel matrix domain features needs computed hsic apply linear kernel suppose rmd dimension domain feature vector note traditional domain adaptation problems several discrete domains coding scheme applied construct domain features problems similar instrumental variation feature augmentation feature augmentation used paper learn subspaces author proposed feature augmentation strategy domain adaptation replicating original features however strategy requires data lie discrete domains deal timevarying drift propose general efficient feature augmentation strategy concatenating original features domain features role strategy demonstrated linear dimensionality reduction example suppose projection matrix learned augmented feature vector dimension subspace two parts rmd embedwd ding expressed wxt wdt means bias wdt added dimension embedding another perspective feature augmentation strategy maps samples augmented space higher dimension projecting 
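The coding scheme and the augmentation step described above translate directly into a few lines. In this sketch, domain_features builds the domain feature vector (a one-hot device indicator, optionally extended with the acquisition time placed in the device's slot, giving 2 * ndev dimensions), and augment concatenates it to the original features; the names and signatures are illustrative.

```python
import numpy as np

def domain_features(device_ids, times=None, n_devices=None):
    """One-hot device indicator; with drift, the acquisition time is
    added in the corresponding device's slot (the coding scheme above)."""
    n = len(device_ids)
    n_devices = n_devices or int(max(device_ids)) + 1
    width = 2 * n_devices if times is not None else n_devices
    D = np.zeros((n, width))
    for i, p in enumerate(device_ids):
        D[i, p] = 1.0
        if times is not None:
            D[i, n_devices + p] = times[i]
    return D

def augment(X, D):
    """Feature augmentation: concatenate domain features to each sample."""
    return np.hstack([X, D])
```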
subspace easier find projection direction augmented space align samples well subspace take machine olfaction example situations conditional probability changes along background instance sensitivity chemical sensors often decays time signal indicates low concentration earlier time actually suggests high concentration later time cases feature augmentation important allows samples similar appearance different concepts treated differently bias strategy also helps align domains better projected dimension effect illustrated several synthetic datasets section analyzed complementary materials maximum independence domain adaptation mida section introduce formulation mida detail suppose matrix samples training test samples pooled together importantly explicitly differentiate domain sample feature vectors augmented use notations instead brevity linear nonlinear mapping function used map new space based kernel trick need know exact form inner product represented kernel matrix projection matrix applied project subspace dimension leading projected samples similar kernel dimensionality reduction algorithms key idea express projection direction linear combination samples space namely projection matrix actually learned thus projected samples intuitively projected features independent domain features distinguish background accepted ieee transactions cybernetics sample projected features suggesting interdomain discrepancy diminished subspace therefore omitting scaling factor get expression minimized hkd hkd kernel matrix domain adaptation goal minimizing difference distributions also preserving important properties data variance achieved maximizing trace covariance matrix project samples covariance matrix cov cov hkx orthonormal constraint added learning problem becomes max hkd hkx hkx using lagrangian multiplier method find eigenvectors corresponding largest eigenvalues note conventional constraint requiring orthonormal lead generalized eigenvector problem however find strategy inferior proposed one adaptation accuracy training speed practice used computing proper kernel function needs selected common kernel functions include linear polynomial gaussian radial basis function rbf exp different kernels indicate different assumptions type dependence using hsic according polynomial rbf kernels map original features higher infinite dimensional space thus able detect types dependence however choosing suitable kernel width parameter also important powerful kernels maximum mean discrepancy mmd criterion used tca measure difference two distributions song showed hsic mmd applied measure dependence features labels classification problem identical constant factor label kernel matrix hsic properly designed however tca feasible two discrete domains hand mida deal variety situations including multiple domains continuous distributional change stationary subspace analysis ssa algorithm able identify temporally stationary components multivariate time series however ssa ensures mean covariance components stationary may suitable preserving important properties data concept drift adaptation algorithms able correct continuous drift however rely newly arrived labeled data update prediction models mida works unsupervisedly mida smida mida aligns samples different backgrounds without considering label information however labels samples known incorporated subspace learning process may beneficial prediction therefore extend mida mida smida since explicitly differentiate domain labels samples unlabeled labeled samples exist domain 
similar hsic adopted maximize dependence projected features labels biggest advantage strategy types labels exploited discrete labels classification continuous ones regression label matrix defined follows classification problems coding scheme used labeled belongs jth class otherwise regression problems target values centered first equals target value labeled otherwise linear kernel function chosen label kernel matrix objective smida max solution eigenvectors corresponding largest eigenvalues outline mida smida summarized algorithm statements brackets correspond specialized smida algorithm mida smida input matrix samples background information labels samples kernel function output projected samples construct domain features according background information section augment original features domain features compute kernel matrices obtain namely eigenvectors corresponding largest eigenvalues besides variance label dependence another useful property data geometry structure preserved manifold regularization conveniently incorporated smida experiments adding generally increases accuracy slightly cost three consequently adopted paper accepted ieee transactions cybernetics xperiments section first conduct experiments synthetic datasets verify effect proposed methods drift correction experiments performed two enose datasets spectroscopy dataset show universality proposed methods evaluate visual object recognition dataset comparison made recent unsupervised domain adaptation algorithms learn features synthetic dataset fig tca mida compared dataset two discrete domains domain labels used construct domain features mida according coding scheme introduced section similar definition used synthetic datasets methods linear kernel used original features set order quantitatively assess effect domain adaptation logistic regression models trained labeled source data tested target data accuracies displayed caption showing order performance mida tca original feature tca aligns two domains first projected dimension however two classes large overlap dimension direction alignment different discrimination incorporating label information source domain sstca help contrary mida align two domains well projected dimensions domainspecific bias second dimension brought feature augmentation played key role explanation included supplementary materials thus good accuracy obtained using two dimensions classification fig ssa mida compared dataset continuous distributional change resembles timevarying drift machine olfaction samples classes drift upper right chronological order samples used construct domain features mida first sample second sample etc parameter setting mida fig whereas number stationary components ssa set classification accuracies obtained training logistic regression model first halves data classes testing last halves ssa succeeds finding direction free drift however two classes well separated direction plot randomly scattered colors suggest drift totally removed subspace mida first mapped data space third dimension time projected plane orthogonal direction drift space label information used last two experiments keeping label dependence subspace priority smida adopted instead mida synthetic dataset fig best direction align two domains also mixes two classes results output mida plot labels source domain used learning subspace plot observe classes separated fact class separation still found third dimension space learned mida however purpose dimensionality reduction generally hope keep important information first 
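Collecting the terms above, (S)MIDA reduces to a single symmetric eigendecomposition: maximize tr(W' K (-H Kd H + mu H [+ gamma H Ky H]) K W) subject to W'W = I, whose maximizer is the top-h eigenvectors of the symmetric matrix inside, which is why no generalized eigenproblem is needed. Below is a sketch using scipy's eigh as one possible solver; the function name and defaults are assumptions, not the authors' code.

```python
import numpy as np
from scipy.linalg import eigh

def mida(K, Kd, Ky=None, h=2, mu=1.0, gamma=1.0):
    """(S)MIDA sketch: K is the kernel on augmented features, Kd the
    (linear) kernel on domain features, Ky the optional label kernel."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    M = -H @ Kd @ H + mu * H            # domain independence + variance
    if Ky is not None:
        M = M + gamma * H @ Ky @ H      # label dependence (SMIDA)
    A = K @ M @ K
    A = (A + A.T) / 2.0                 # symmetrize for numerical safety
    vals, vecs = eigh(A)
    W = vecs[:, np.argsort(vals)[::-1][:h]]   # top-h eigenvectors
    return K @ W                               # projected samples Z = K W
```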
dimensions nonlinear kernels often applied machine learning algorithms data linearly separable besides also useful domain adaptation domains linearly alignable shown fig plot changes distributions different two classes hence difficult find linear projection direction align two domains even domainspecific biases mida actually rotation matrices needed since target labels available rotation matrices obtained accurately however nonlinear kernel used map original features space higher dimensions domains may linearly alignable applied rbf kernel width although domains perfectly aligned plot classification model trained source domain better adapted target domain comparison different kernel kernel parameters two synthetic datasets included supplementary materials gas sensor array drift dataset gas sensor array drift collected vergara dedicated research drift correction total samples collected gas sensors course months six different kinds gases different concentrations split batches authors according acquisition time table supplementary material details dataset aim classify type gases despite concentrations similar took samples batch labeled training samples whereas batches unlabeled test ones evaluation strategy resembles situation applications dataset sample represented features extracted sensors response curves feature first normalized zero mean unit variance within batch timevarying drift preprocessed features across batches visually inspected fig obvious samples different batches different distributions next labeled samples batch adopted source domain unlabeled ones batch target domain proposed algorithms together several recent ones used learn features based samples logistic regression model trained source domain tested target one multiclass classification strategy utilized displayed table compared methods include kernel pca kpca transfer component analysis tca tca sstca subspace alignment http accepted ieee transactions cybernetics pos source data neg source data pos target data neg target data fig comparison tca mida synthetic dataset plots show data original space projected spaces tca mida respectively classification accuracies using first projected dimension pos old data neg old data pos new data neg new data fig comparison ssa mida synthetic dataset plots show data original space projected spaces ssa mida respectively chronological order sample indicated color classification accuracies using first projected dimension pos source data neg source data pos target data neg target data fig comparison mida smida synthetic dataset plots show data original space projected spaces mida smida respectively classification accuracies pos source data neg source data pos target data neg target data fig comparison different kernels synthetic dataset plots show data original space projected spaces mida linear rbf kernels respectively classification accuracies accepted ieee transactions cybernetics average classification accuracy batch batch batch batch batch fig scatter ethanol dots acetone plus signs samples batches gas sensor array drift dataset samples projected subspace using pca different colors indicate different batches geodesic flow kernel gfk manifold regularization combination gfk informationtheoretical learning itl structural correspondence learning scl marginalized stacked denoising autoencoder msda methods tuned best accuracy kpca tca sstca proposed mida smida polynomial kernel degree used kpca learned subspace based union source target data tca sstca mida smida eigenvalue decomposition needs 
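A toy end-to-end run, reusing the hypothetical helpers sketched earlier (domain_features, augment, mida), on two synthetic domains where the target is a shifted copy of the source; the sizes and the shift are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(1)
Xs = rng.normal(0.0, 1.0, (100, 2))                          # source domain
Xt = rng.normal(0.0, 1.0, (100, 2)) + np.array([3.0, 0.0])   # shifted target
X = np.vstack([Xs, Xt])

D = domain_features([0] * 100 + [1] * 100)   # one-hot domain labels
Xa = augment(X, D)                           # feature augmentation
K = Xa @ Xa.T                                # linear kernel on augmented features
Kd = D @ D.T                                 # linear kernel on domain features
Z = mida(K, Kd, h=2)                         # aligned 2-D embedding
```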
done kernel matrices order reduce computational burden randomly chose samples target domain using methods twice number samples source domain gfk used pca generate subspaces source target domains subspace dimension gfk determined according subspace disagreement measure results copied scl pivot features binarized training pivot predictors using logistic regression also compared several variants methods table notation discrete means two discrete domains source target used mida smida similar compared methods domain feature vector sample thus source domain target however strategy make use samples intermediate batches intuitive assumption distributions adjacent batches similar adapting information batch taking samples batches consideration may improve generalization ability learned subspace concretely samples randomly selected batches instead batch alone sample domain feature defined batch index viewed proxy acquisition time mida smida maximized independence kpca tca sstca itl mida continuous smida continuous projected dimensions fig performance comparison gas sensor array drift dataset respect subspace dimension learned subspace batch indices results labeled continuous table besides accuracies continuous smida without feature augmentation also shown table find batch index increases accuracies methods generally degrade confirms influence drift continuous smida achieves best average domain adaptation accuracy continuous versions mida smida outperform discrete versions proving proposed methods effectively exploit chronological information samples also surpass uses samples intermediate batches build connections source target batches feature augmentation important dataset since removing continuous smida causes drop four percentage points average accuracy fig average classification accuracies varying subspace dimension shown mida smida better methods features extracted breath analysis dataset noninvasive approach disease screening monitoring attracting attention concentration biomarkers breath proved related certain diseases makes possible analyze person health state conveniently example concentration acetone diabetics breath often higher healthy people however instrumental variation drift hinder popularization technology applications unsupervised domain adaptation algorithms applied solve problem collected breath analysis dataset years using two model paper samples five diseases selected experiments including diabetes chronical kidney disease ckd cardiopathy lung cancer breast cancer proved related certain breath biomarkers performed accepted ieee transactions cybernetics table lassification accuracy gas sensor array drift dataset old values indicate best results batch average kpca tca sstca gfk itl scl msda mida discrete smida discrete mida continuous smida smida continuous sensor response volt five classification tasks distinguish samples one disease healthy samples sample represented steady state responses nine gas sensors gas sensor used sense gas sample response reach steady state minutes steady state response close relationship concentration measured gas therefore feature vector contains information needed disease screening show instrumental variation drift dataset draw steady state responses two sensors ckd samples fig data point indicates breath sample plot sensitivity sensor devices gradually decayed time elapsed plot aging effect significant replace sensors two devices new ones day case signal suggest low concentration day high concentration day addition responses different devices 
different plot day numbers samples six classes healthy five diseases mentioned respectively chose first samples collected device class labeled training samples among samples samples randomly selected class validation rest testing tuned validation sets logistic regression adopted classifier accuracy criterion results compared table kpca tca sstca mida smida rbf kernel used methods stationary subspace analysis ssa mida smida capable handling chronological information simply regarded device discrete domain learned features strategy used discrete mida smida continuous mida smida sensor response volt original feature acquisition time day device device acquisition time day fig illustration instrumental variation drift breath analysis dataset plots show steady state responses ckd samples sensors respectively domain features defined according exact acquisition time converted years number devices ndev ssa naturally considers chronological information treating sample stream multivariate time series identifying temporally stationary components however ssa deal time series multiple sources case dataset thus samples arranged chronological order despite device labels table find improvement made ssa little possibly stationary criterion suitable preserving important properties data example noise data also stationary mida smida achieved obviously better results methods address instrumental variation accepted ieee transactions cybernetics table lassification accuracy breath analysis dataset old values indicate best results task average original feature kpca tca sstca gfk itl ssa scl msda mida discrete smida discrete mida continuous smida smida continuous drift bias brought feature augmentation compensate change conditional probability dataset smida better mida label information first samples class better kept corn dataset similar data collected spectrometers signals indicating concentration analytes instrumental variation also problem section test methods corn spectroscopy dataset collected three spectrometers designated moisture oil protein starch contents corn samples measured device ranges measured values respectively sample represented spectrum features dataset resembles traditional domain adaptation datasets drift three discrete domains defined based three devices adopt source domain target ones domain samples assigned test set rest training set tuning applied crossvalidation training sets three domains best determined algorithm regression model trained training set source domain applied test set target domains regression algorithm ridge regression regularization parameter table iii displays root mean square error rmse four prediction tasks average two target domains also plot overall average rmse http two domains respect subspace dimension fig itl investigated applicable classification problems kpca tca sstca mida smida rbf kernel used semisupervised methods sstca smida target values normalized zero mean unit variance subspace learning domain features defined according device indices using coding scheme find domain adaptation done prediction error large domain adaptation algorithms managed significantly reduce error kpca also good performance probably source target domains similar principal directions also contain discriminative information therefore source regression models fit target samples well dataset different domains identical data composition result corresponding data aligned subspaces alignment explains small error however condition may hold datasets mida smida obtained lowest average errors 
target domains aiming exploring prediction accuracy instrument variation trained regression models training set two target domains tested domain results listed train target table iii found smida outperforms results could attributed three reasons discrepancy dataset relatively easy correct use rbf kernel smida improves accuracy smida learned subspace basis training test samples although test samples unlabeled provide information distribution samples make learned subspace generalize better viewed merit learning testify assumption conducted another experiment multiple target domains training samples source domain test ones target domains leveraged together subspace learning mida smida average rmse two target domains mida smida compared results table iii single target domain results improved showing incorporating unlabeled samples target domains beneficial visual object recognition dataset gong evaluated domain adaptation algorithms four visual object recognition datasets namely amazon dslr webcam ten common classes selected samples per class per domain images total image encoded histogram using surf features normalized histograms zero mean unit variance dimension following experimental setting provided sample code authors experiments conducted random trials pair domains unsupervised trail labeled samples per class accepted ieee transactions cybernetics table iii egression rmse corn dataset old values indicate best results target domain target domain moisture oil moisture oil protein starch average original feature kpca tca sstca gfk scl msda mida smida train target kpca tca sstca mida smida average regression rmse protein starch average projected dimensions fig performance comparison corn dataset respect subspace dimension randomly chosen source domain training set samples used unsupervisedly domain adaptation unlabeled samples target domain made test set trails three labeled samples per class target domain also assumed labeled averaged accuracies pair domains well standard errors listed tables gfk transfer subspace learning ltsl domain adaptation shifting covariance dasc recent method called integration global local metrics domain adaptation iglda copied best results reported original papers methods tested tuned best accuracy logistic regression adopted classifier polynomial kernel degree used kpca tca sstca mida smida domain features defined according domain labels using onehot coding scheme mida smida achieve best average accuracies unsupervised visual object recognition experiments observe tca sstca comparable performance mida smida may explained fact hsic criterion used mida mmd used tca identical certain conditions one source one target domain besides feature augmentation strategy mida crucial dataset change conditional probability hand tca sstca handle one source one target domains sstca uses manifold regularization strategy preserve local geometry information hence introduces three smida moreover computing data adjacency graph sstca matrix inversion operation tca sstca make slower mida smida compared speed domain adaptation experiment run server intel xeon ghz cpu ram parallel computing used codes algorithms written matlab average running times trial mida smida tca sstca respectively therefore mida smida practical use tca sstca besides initially designed drift correction dataset used show universality onclusion paper introduced maximum independence domain adaptation mida learn features main idea mida reduce discrepancy maximizing independence learned features domain features samples 
The domain features describe the background information of a sample, such as the domain label in traditional domain adaptation problems. In the field of sensors and measurement, the device label and the acquisition time of a collected sample can be expressed as domain features, so that unsupervised drift correction can be achieved using MIDA. The feature augmentation strategy proposed in this paper adds domain-specific biases to the learned features, which helps MIDA to align the domains better.
dimensionality reduction ieee transactions cybernetics gretton bousquet smola schlkopf measuring statistical dependence norms algorithmic learning theory springer feudale woody tan myles brown transfer multivariate calibration models review chemometr intell vol gong shi sha grauman geodesic flow kernel unsupervised domain adaptation computer vision pattern recognition cvpr ieee conference ieee liu drift compensation electronic nose domain adaption ieee sens vol song smola gretton bedo borgwardt feature selection via dependence maximization mach learn vol song gretton borgwardt smola colored maximum variance unfolding advances neural information processing systems barshan ghodsi azimifar jahromi supervised principal component analysis visualization classification regression subspaces submanifolds pattern vol daum iii frustratingly easy domain adaptation proc ann meeting assoc computational linguistics schlkopf smola mller nonlinear component analysis kernel eigenvalue problem neural computation vol scholkopft mullert fisher discriminant analysis kernels neural networks signal processing vol von meinecke finding stationary subspaces multivariate time series physical review letters vol gama bifet pechenizkiy bouchachia survey concept drift adaptation acm computing surveys csur vol belkin niyogi sindhwani manifold regularization geometric framework learning labeled unlabeled examples mach learn vol vergara vembu ayhan ryan homer huerta chemical gas sensor drift compensation using classifier ensembles sens actuators vol
| 2 |
ieee transactions pattern analysis machine intelligence submitted matrix learning nonconvex regularizers sep quanming yao member ieee james kwok fellow ieee taifeng wang member ieee liu fellow ieee modeling many important applications computer vision machine learning matrix rank often approximated convex nuclear norm use nonconvex regularizers demonstrated better empirical performance however resulting optimization problem much challenging recent requires expensive full svd iteration paper show many nonconvex regularizers singular values obtained proximal operator automatically threshold allows proximal operator efficiently approximated power method develop fast proximal algorithm accelerated variant inexact proximal step convergence rate number iterations guaranteed furthermore show proposed algorithm parallelized resultant algorithm achieves nearly linear speedup number threads extensive experiments performed matrix completion robust principal component analysis significant speedup observed index matrix learning nonconvex regularization proximal algorithm parallel algorithm matrix completion robust principle component analysis ntroduction matrix learning central issue many machine learning computer vision problems example matrix completion one successful approaches collaborative filtering assumes target rating matrix besides collaborative filtering matrix completion also used tasks video image processing another important use matrix learning robust principal component analysis rpca assumes target matrix also corrupted sparse noise rpca popularly used computer vision applications shadow removal background modeling robust photometric stereo besides matrix learning also used face recognition subspace clustering however minimization matrix rank alleviate problem common approach use convex surrogate nuclear norm sum singular values matrix known nuclear norm tightest convex lower bound rank though nuclear norm resultant optimization problem solved efficiently using modern tools proximal algorithm algorithm active subspace selection method despite success nuclear norm recently numerous attempts use nonconvex surrogates better approximate rank function key idea yao kwok department computer science engineering hong kong university science technology clear water bay hong kong qyaoaa jamesk wang liu microsoft research asia beijing china taifengw tyliu larger thus informative singular values less penalized example nonconvex regularizers include penalty penalty lsp truncated nuclear norm tnn smoothly clipped absolute deviation scad minimax concave penalty mcp applied various computer vision tasks image denoising background modeling empirically nonconvex regularizers achieve better recovery performance convex nuclear norm regularizer recently theoretical results also established however resultant nonconvex optimization problem much challenging existing optimization algorithms work nuclear norm applied general approach still used procedure decomposes nonconvex regularizer difference convex functions however sequence relaxed optimization problems solved computationally expensive efficient approach recently proposed iteratively nuclear norm irnn algorithm based observation existing nonconvex regularizers concave irnn iteration involves computing supergradient regularizer singular value decomposition svd however performing svd matrix takes time assuming expensive large matrices recently proximal algorithm used nonconvex matrix learning however requires full svd solve proximal operator expensive paper 
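For reference, the basic proximal iteration just described is only a few lines; grad_f, prox_g, and the fixed step size tau below are placeholders, and the soft-thresholding example is the familiar convex (l1) special case rather than one of the nonconvex operators studied in this paper.

```python
import numpy as np

def proximal_gradient(grad_f, prox_g, x0, tau, n_iter=100):
    """Proximal algorithm: x_{t+1} = prox_{tau*g}(x_t - tau * grad_f(x_t))."""
    x = x0
    for _ in range(n_iter):
        x = prox_g(x - tau * grad_f(x), tau)
    return x

def soft(z, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
```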
observe nonconvex regularizers singular values obtained corresponding proximal operator automatically ieee transactions pattern analysis machine intelligence submitted thresholded one needs find leading singular order generate next iterate moreover instead computing proximal operator large matrix one needs use matrix projected onto leading subspace matrix size significantly reduced proximal operator made much efficient besides using power method good approximation subspace efficiently obtained proposed procedure readily used standard proximal algorithm convergence properties directly applicable proximal step approximately solved sequel show inexactness proximal step controlled convergence rate still guaranteed moreover algorithm speeded using acceleration effectiveness proposed algorithms demonstrated two popular matrix learning applications namely matrix completion robust principal component analysis rpca matrix completion show additional speedup possible exploring problem sparse plus structure whereas rpca extend proposed algorithm handle two parameter blocks involved rpca formulation popularity multicore platforms parallelize proposed algorithms handle much larger data sets show achieve almost linear speedup number threads experiments performed synthetic realworld data sets results show proposed nonconvex matrix learning algorithms several orders faster outperform approaches including factorization use nuclear norm regularization preliminary results paper reported full version speed algorithm acceleration demonstrate applied two important instances matrix learning problems namely matrix completion rpca besides show proposed algorithms parallelized extensive empirical evaluations also performed sequential parallel versions algorithms notation sequel vectors denoted lowercase boldface matrices uppercase boldface transpose superscript square matrix trace rectangle matrix kxkf frobenius norm ith leading singular value nuclear norm given diag constructs diagonal matrix whose ith diagonal element denotes identity matrix differentiable function use gradient nonsmooth function use subdifferential smooth loss nonsmooth regularizer regularization parameter make following assumptions necessarily convex differentiable continuous gradient without loss generality assume bounded inf limkxkf recent years proximal algorithm popularly used solving iteration produces prox stepsize prox arg min proximal operator proximal step also rewritten arg miny convex proximal algorithm converges optimal solution rate number iterations accelerated rate replacing proper linear combination recently accelerated proximal algorithm extended problems may nonconvex nonmonotone accelerated proximal gradient nmapg algorithm algorithm iteration may perform two proximal steps steps acceleration performed step objective checked determine whether accepted step problem nonconvex convergence rate still open however empirically much faster algorithm nonmonotone apg nmapg input choose initialize xat prox background else prox otherwise end end return proximal algorithm paper consider matrix learning problems form min nonconvex regularizers proximal algorithm successful proximal operator efficient following shows ieee transactions pattern analysis machine intelligence submitted proximal operator nuclear norm closedform solution proposition svd max aij convex nuclear norm makes optimization easier may good enough approximation matrix rank mentioned section number nonconvex surrogates recently proposed paper make following assumption 
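As a concrete instance of GSVT with automatic thresholding, the sketch below applies a scalar proximal map to the singular values (as in Proposition 2) and keeps only those above the threshold (Proposition 3). The MCP case is shown because its scalar map has a well-known closed form (firm thresholding, assuming unit step size and theta > 1); both function names are illustrative, not from the paper.

```python
import numpy as np

def mcp_prox(sigma, lam, theta=2.0):
    """Scalar proximal map of the MCP penalty (firm thresholding):
    zero below lam, linear rescaling up to theta*lam, identity beyond."""
    return np.where(sigma <= lam, 0.0,
           np.where(sigma <= theta * lam,
                    theta * (sigma - lam) / (theta - 1.0), sigma))

def gsvt(Z, lam, scalar_prox=mcp_prox, **kw):
    """Generalized singular value thresholding via an exact SVD: the
    proximal operator thresholds the singular values one by one."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s = scalar_prox(s, lam, **kw)
    k = int(np.count_nonzero(s))   # singular values below the threshold vanish
    return (U[:, :k] * s[:k]) @ Vt[:k]
```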
regularizer satisfied nonconvex regularizers table possibly nonconvex form concave table popular nonconvex regularizers tnn regularizer number leading singular values penalized scad others min lsp log tnn scad mcp otherwise otherwise otherwise recently iteratively reweighted nuclear norm irnn algorithm proposed handle nonconvex matrix optimization problem iteration solves subproblem original nonconvex regularizer approximated weighted version nuclear norm kxkw subproblem solution svd needed takes time solvers designed specific nonconvex regularizers include tnn mcp including irnn perform svd iteration takes time slow proximal algorithm mostly used convex problems recently also applied nonconvex problems generalized proximal gradient gpg algorithm first proximal algorithm handle nonconvex regularizers particular proximal operator computed follows proposition generalized singular value thresholding gsvt satisfying assumption udiag svd arg min problem solved iteration however solutions indeed exist regularizers table nevertheless proposition still involves svd takes time roposed lgorithm section show proximal algorithm made much faster using approximate gsvt automatic thresholding singular values following proposition shows becomes zero smaller threshold proof found appendix proposition exists threshold together proposition solving proximal operator needs leading singular nonconvex regularizers table simple solutions obtained examining optimality conditions proof found appendix corollary values following regularizers lsp min tnn max scad mcp otherwise also used nuclear norm shown max however since focus nonconvex regularizers case nuclear norm pursued sequel approximate gsvt proposition computes proximal operator exact svd section show one use approximate svd efficient reducing size svd assume singular values larger need svd let svd following proposition shows obtained proximal operator smaller proof found appendix proposition assume orthogonal span span obtaining approximate gsvt obtain use power method algorithm recently used approximate svt nuclear norm minimization set number power iterations used via matrix algorithm particularly useful iterative nature proximal algorithm obtaining approximate using algorithm takes mnk time propack noticed similar result conference version paper accepted however considers case nuclear norm regularizer ieee transactions pattern analysis machine intelligence submitted algorithm also used obtain mnk time however finds exactly benefit hence though time complexity power method empirically much less efficient proximal gradient algorithm approximate proximal step solution generated step try ensure algorithm powermethod note less stringent condition lemma holds accept otherwise improve using next iterate following proposition shows convergence algorithm proof found appendix input number power iterations decomposition returning matrix end return approximate gsvt procedure shown algorithm step uses power method efficiently obtain orthogonal matrix approximates span step performs small svd though svd still exact much smaller svd takes time step singular values thresholded using corollary steps obtains approximate using proposition time complexity gsvt reduced mnk proposition number singular values larger prox algorithm inexact proximal step inexactps input approxgsvt break end end return algorithm approximate gsvt approxgsvt input powermethod svd number corollary leading columns leading columns obtain end return components qua diag inexact proximal step section 
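The two routines at the core of the proposed speedup, subspace approximation by the power method and the reduced GSVT of Proposition 4, can be sketched as follows; the warm-start matrix R would come from the previous iterate's column space, and n_iter plays the role of the small fixed number of power iterations used in the experiments.

```python
import numpy as np

def power_method(Z, R, n_iter=3):
    """Orthonormal Q approximating the span of Z's leading left singular
    vectors, warm-started from R (subspace / power iteration)."""
    Q, _ = np.linalg.qr(Z @ R)
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(Z @ (Z.T @ Q))
    return Q

def approx_gsvt(Z, R, lam, scalar_prox, n_iter=3, **kw):
    """Approximate GSVT: SVD only the small projected matrix Q'Z, then
    map back by Q (Proposition 4); Q is returned for reuse next iteration."""
    Q = power_method(Z, R, n_iter)
    U, s, Vt = np.linalg.svd(Q.T @ Z, full_matrices=False)
    s = scalar_prox(s, lam, **kw)
    k = int(np.count_nonzero(s))
    return (Q @ U[:, :k] * s[:k]) @ Vt[:k], Q
```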
Inexact proximal step. When we utilize the approximate GSVT, the proximal step becomes inexact. Inexact proximal steps have been considered before; however, one line of work assumes that both f and g are convex, while Attouch et al. considered the nonconvex case but require a difficult and expensive condition to control the inexactness (an example is provided in the Appendix). Let Z_t = X_t - tau grad f(X_t). The following shows that the objective is always decreased by an exact proximal step.

Lemma. F(prox_{tau lambda r}(Z_t)) <= F(X_t) - c ||prox_{tau lambda r}(Z_t) - X_t||_F^2, where c > 0 is a constant depending on the stepsize and the Lipschitz constant rho.

Motivated by this lemma, we propose to control the inexactness of the proximal step with the inexactPS routine (Algorithm 4): the approximate solution Xtilde returned by the approximate GSVT is accepted once the sufficient-decrease condition F(Xtilde) <= F(X_t) - c ||Xtilde - X_t||_F^2 holds; otherwise, the approximation is improved and used in the next trial (a minimal sketch of this step is given at the end of this subsection). Note that this condition is much less stringent, and cheaper to check, than the one of Attouch et al.

The complete procedure for solving the low-rank learning problem is shown as Algorithm 5, called FaNCL (Fast NonConvex Low-rank learning): the iterates are initialized from random Gaussian matrices, each iteration performs a gradient step followed by a warm-started inexactPS call, and, similar to prior work, warm-starting uses the column spaces of the previous iterates. To further speed up the algorithm, we employ a continuation strategy: the regularization parameter is initialized to a large value and decreases gradually to the target lambda.

Time complexity: assume that evaluations of f and grad f take O(mn) time, which is valid in many applications such as matrix completion and RPCA. Let k_t be the rank of the iterate at the t-th iteration, and p the number of extra columns used in the warm-start matrix. The gradient step takes O(mn) time, and the inexact proximal step takes O(mn p k_t) time, so the iteration time complexity is O(mn p k_t). In the experiments, a small p is enough to guarantee acceptance of the inexact step, so empirically the iteration time complexity of Algorithm 5 reduces to O(mn k_t). In contrast, the exact GSVT takes O(m^2 n) time, which is much slower. Besides, the space complexity of the algorithm is O(mn).
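The following sketch shows the inexactPS control loop built on the `approx_gsvt` helper above. It is a minimal illustration under stated assumptions: the sufficient-decrease constant `delta`, the cap `max_inner`, and the strategy of enriching the warm-start basis by one random column per failed trial are illustrative choices, not values prescribed by the paper.

```python
import numpy as np

def inexact_ps(F, grad_f, X, lam, tau, R, prox_scalar, gamma,
               delta=1e-4, max_inner=10):
    """Inexact proximal step: refine the approximate GSVT until the
    sufficient-decrease condition
        F(X_new) <= F(X) - delta * ||X_new - X||_F^2
    suggested by the descent lemma holds."""
    Z = X - tau * grad_f(X)
    FX = F(X)
    for j in range(max_inner):
        X_new, U = approx_gsvt(Z, lam * tau, prox_scalar, gamma, R,
                               n_iter=3 + j)        # more accurate each trial
        if F(X_new) <= FX - delta * np.linalg.norm(X_new - X, 'fro') ** 2:
            return X_new, U                          # accepted
        # enrich the warm-start subspace before retrying
        R = np.hstack([R, np.random.randn(Z.shape[0], 1)])
    return X_new, U                                  # return best effort
```

In FaNCL, the returned left factor `U` is passed back in as the warm start `R` of the next iteration, so successive proximal steps become progressively cheaper to accept.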
Convergence analysis. Inexact proximal algorithms were first analyzed under the assumption that f and g are convex; a nonconvex extension was considered by Attouch et al. However, as discussed above, their expensive condition for controlling inexactness is not used here, and thus their analysis cannot be applied. It is known that many nonconvex regularizers can be decomposed as a difference of convex functions; the following proposition shows that our objective also admits such a decomposition (proof in the Appendix).

Proposition. F can be decomposed as a difference of two convex functions.

Based on this decomposition, we introduce the standard definition of a critical point for differences of convex functions. The following proposition shows that Algorithm 5 generates a bounded sequence (proof in the Appendix).

Proposition. The sequence {X_t} generated by Algorithm 5 is bounded and has at least one limit point.

It is known that X is a fixed point of the exact proximal mapping if and only if it is a critical point, which motivates using the distance between X_t and its exact proximal image as a measure of convergence. This cannot be used here, however, as our proximal step is inexact; since the proposition above guarantees the existence of limit points, we use the successive difference ||X_{t+1} - X_t||_F instead. The following corollary shows the convergence of Algorithm 5 (proof in the Appendix).

Corollary. min over t = 1, ..., T of ||X_{t+1} - X_t||_F^2 is O(1/T).

The following theorem shows that any limit point is also a critical point (proof in the Appendix).

Theorem. Assume that the accepted inexact proximal steps of Algorithm 5 become exact in the limit, i.e., the input and the returned output of the proximal mapping asymptotically coincide. Let {X_{t_j}} be a subsequence of the iterates with limit Xbar. Then Xbar is a critical point of F.

Acceleration. In convex optimization, acceleration is commonly used to speed up the convergence of proximal algorithms; recently, it has also been extended to nonconvex optimization in the nmAPG algorithm (Algorithm 1). In this section, we integrate nmAPG with FaNCL. The whole procedure, FaNCL-acc, is shown as Algorithm 6: an accelerated iterate Y_t is first obtained; if the resultant inexact proximal step solution achieves a sufficient decrease of the objective, the accelerated iterate is accepted; otherwise, the inexact proximal step solution obtained from the non-accelerated iterate X_t is used. Although an iteration of Algorithm 6 may perform two proximal steps and thus be up to twice as expensive as one of Algorithm 5, the order of the iteration time complexity is unchanged.

There are several major differences between Algorithm 6 and nmAPG. First, the proximal steps in Algorithm 6 are inexact. Second, to make the algorithm more robust, we do not allow the nonmonotonous updates of nmAPG, in which the objective may become larger. Moreover, we use a simpler acceleration scheme; on matrix completion problems in particular, this allows using the "sparse plus low-rank" structure below to greatly reduce the iteration complexity. Finally, we do not require nmAPG's extra comparison of objectives, which further reduces the iteration complexity.

The following proposition shows that Algorithm 6 also generates a bounded sequence (proof in the Appendix).

Proposition. The sequence {X_t} generated by Algorithm 6 is bounded and has at least one limit point.

As the proximal step may use either the accelerated iterate Y_t or the plain iterate X_t, we measure progress with ||X_{t+1} - Y_t||_F when the accelerated step is accepted and with ||X_{t+1} - X_t||_F otherwise. Similar to the earlier corollary, the following shows the convergence rate (proof in the Appendix).

Corollary. For Algorithm 6, the minimum over t = 1, ..., T of the squared progress measure is O(1/T).

For nonconvex optimization problems, O(1/T) is the optimal convergence rate of first-order methods; thus, the rate of Algorithm 6 in this corollary cannot improve upon that of Algorithm 5. In practice, however, acceleration can still significantly reduce the number of iterations on nonconvex problems. On the other hand, Algorithm 6 may need a second proximal step per iteration, so its iteration time complexity is higher; this is much compensated by the speedup in convergence, as demonstrated in the experiments, where Algorithm 6 is empirically much faster.

The following theorem shows that limit points of the iterates of Algorithm 6 are also critical points (proof in the Appendix).

Theorem. Let {X_{t_j}} be a subsequence generated by Algorithm 6 with limit Xbar. Under the assumption of the previous theorem, Xbar is a critical point of F.

Applications. We now consider two important instances of the low-rank learning problem, namely matrix completion and RPCA. As the accelerated FaNCL-acc is usually faster than the non-accelerated variant, we consider only the accelerated variant here. For matrix completion, we show that the algorithm can be made even faster, and require much less memory, by using the "sparse plus low-rank" structure of the problem; for RPCA, we show how the algorithm can be extended to deal with the two parameter blocks involved.

Matrix completion. Matrix completion attempts to recover a low-rank matrix O by observing only some of its elements. Let the observed positions be indicated by Omega, with Omega_ij = 1 if O_ij is observed and 0 otherwise. Matrix completion can be formulated as

    min_X (1/2) ||P_Omega(X - O)||_F^2 + lambda r(X),

where [P_Omega(A)]_ij = A_ij if Omega_ij = 1, and 0 otherwise. In the following, we show how the time and space complexities of the algorithm can be reduced by utilizing the problem structure.

First, consider the step that checks objectives: computing f relies only on the observed positions, and computing r relies only on the singular values. Hence, instead of explicitly constructing X_t, we maintain its low-rank factorization together with the sparse matrix P_Omega(X_t); computing P_Omega(X_t) takes time linear in the number of observed entries times the rank k_t, and computing f then takes time linear in the number of observed entries. Next, since the accelerated iterate Y_t is a linear combination of X_t and X_{t-1}, the acceleration step can use the factorized form and be computed in O((m+n) k_t) time.

The inexact proximal steps operate on Z_t = Y_t - tau grad f(Y_t), which can be rewritten as a low-rank part (involving the factors of Y_t) plus a sparse part (involving P_Omega terms). This special "sparse plus low-rank" structure speeds up all matrix multiplications inside the approximate GSVT: products of Z_t or its transpose with a thin matrix of k columns can be obtained in O((m+n) k_t k + ||Omega||_1 k) time instead of O(mnk). As a result, the inexact proximal step takes O((m+n) k_t^2 + ||Omega||_1 k_t) time, and the objective check is slightly cheaper than the terms involved. Summarizing, the iteration time complexity of Algorithm 6 becomes O((m+n) k_t^2 + ||Omega||_1 k_t); as usually ||Omega||_1 << mn, this is much cheaper than the O(mn k_t) complexity of the standard version. The space complexity is also reduced, as we only need to store the factorizations and sparse matrices, which take O((m+n) k_t + ||Omega||_1) space in total instead of O(mn). These techniques can also be used with the non-accelerated Algorithm 5; it is easily shown that the same orders of iteration time and space complexity are obtained. A minimal sketch of this structure is shown below.
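The following sketch illustrates the two ingredients of the "sparse plus low-rank" trick: evaluating a low-rank iterate only at the observed positions, and multiplying by Z without ever forming it densely. The class and function names are illustrative assumptions for the example.

```python
import numpy as np
from scipy import sparse

def omega_project(U, V, rows, cols):
    """P_Omega(U V^T): evaluate the low-rank iterate only at the observed
    positions (rows[i], cols[i]); costs O(||Omega||_1 * k) instead of O(mnk)."""
    return np.einsum('ij,ij->i', U[rows], V[cols])

class SparsePlusLowRank:
    """Z = U V^T + S with S sparse (e.g., S = P_Omega(O - U V^T)); supports
    the fast products needed by the power method without forming Z."""
    def __init__(self, U, V, S):
        self.U, self.V, self.S = U, V, S     # S: scipy.sparse matrix

    def matmat(self, B):                      # Z @ B
        return self.U @ (self.V.T @ B) + self.S @ B

    def rmatmat(self, B):                     # Z.T @ B
        return self.V @ (self.U.T @ B) + self.S.T @ B
```

In matrix completion, the gradient step produces exactly this structure: the low-rank part comes from the factors of Y_t, and the sparse part is supported on the observed entries only, so both products cost O((m+n)k_t k + ||Omega||_1 k).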
Comparison with existing algorithms. Table 2 compares the convergence rates, iteration time complexities, and space complexities of the various matrix completion solvers that are empirically compared in the experiments (constants and the number of inner iterations required by active subspace selection are omitted): for the convex nuclear norm, APG and active subspace selection ("active"); for factorization models, LMaFit; and, for nonconvex regularizers, IRNN, GPG, and the proposed FaNCL. Overall, the proposed algorithms enjoy fast convergence, cheap iterations, and low memory cost; among the algorithms with a convergence-rate guarantee, the variant that uses acceleration is significantly faster.

Robust principal component analysis. Given a noisy data matrix O, RPCA assumes that O can be approximated by the sum of a low-rank matrix X and a sparse noise matrix S. The optimization problem is

    min_{X,S} f(X, S) + lambda r(X) + upsilon g(S),

where r is a low-rank regularizer as above and the sparsity regularizer g is allowed to be nonconvex and nonsmooth. This can thus be seen as a nonconvex extension of standard RPCA, which uses the nuclear norm regularizer for X. Examples of nonconvex r are shown in Table 1; examples of nonconvex g include the l0 norm and the penalties of Table 1 applied elementwise.

The problem involves two blocks of parameters that are coupled in f but separable in the regularizers. Thanks to the separable property of the proximal operator for many popular sparsity regularizers, the proximal step on S can be computed elementwise in O(mn) time; for example, with the l1 norm, each entry of prox_{upsilon g}(A) is sign(A_ij) max(|A_ij| - upsilon, 0). However, directly computing the proximal step on X still requires an expensive full SVD. To alleviate this problem, the accelerated algorithm is easily extended to Algorithm 7 (FaNCL-acc for RPCA), which interleaves inexact proximal steps on X (using the approximate GSVT, with the same accept/reject logic on the accelerated iterates Y_t^X and Y_t^S) with exact, cheap proximal steps on S. The iteration time complexity is dominated by the inexact proximal steps and is O(mn k_t); the space complexity is O(mn).

Theorem. Let {(X_{t_j}, S_{t_j})} be a subsequence generated by Algorithm 7 with limits Xbar and Sbar. Under the assumption of the earlier theorem, (Xbar, Sbar) is a critical point of the RPCA objective.

Parallel FaNCL for matrix completion. In this section, we show how the proposed algorithms can be parallelized. We consider the matrix completion problem; the extension to problems such as RPCA can be performed similarly. Moreover, for simplicity of discussion, we focus on the simpler FaNCL algorithm; the accelerated variant can be parallelized similarly, as shown in the Appendix. Parallel algorithms for matrix completion have been proposed before; however, they are based on stochastic gradient descent for matrix factorization and cannot be used directly with the proposed algorithm. The convergence results of the earlier sections easily extend to the algorithms of this section; the proofs of the following results can be found in the Appendices.

The operations on matrices are often of the form of multiplications, plus the evaluation of f. A popular scheme in parallel linear algebra is block distribution: assuming q threads are used for parallelization, it partitions the rows and columns into q parts each, leading to q^2 blocks in total. Figure 3 shows how the computations of each such operation are easily parallelized (each dotted path denotes the work of one thread). In the parallelized version of FaNCL, shown as Algorithm 8, the important variables, namely the factorized form of the iterate and the sparse matrices, are partitioned using block distribution (Figure 4); the parallelized steps are marked, and two new subroutines are introduced, one of which replaces the QR factorization. Note that Algorithm 8 is equivalent to Algorithm 5 except for being parallelized; thus, the convergence results of the earlier sections still hold:

Proposition. The sequence generated by Algorithm 8 is bounded and has at least one limit point.

Corollary. The minimum over t = 1, ..., T of ||X_{t+1} - X_t||_F^2 is O(1/T).

Identifying span(Y). The QR factorization used to find the span of a matrix Y could be parallelized with Householder transformations or Gaussian elimination; however, these are typically complex. The following proposition proposes a simpler method (proof in the Appendix).

Proposition. Given Y, let the small matrix Y^T Y have SVD V Diag(lambda) V^T, and let lambdatilde_i = lambda_i if lambda_i is nonzero and 1 otherwise. Then Q = Y V Diag(lambdatilde)^{-1/2} is orthogonal and contains span(Y).

The resultant parallel routine is shown as Algorithm 9: Y^T Y is formed in parallel, its SVD is computed on a single thread (the matrix involved is small), and the final product is again computed in parallel.
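The span-identification proposition avoids a parallel QR entirely. A minimal NumPy sketch follows (the function name and the rank tolerance are illustrative; for symmetric positive semidefinite Y^T Y we use an eigendecomposition, which coincides with its SVD):

```python
import numpy as np

def span_basis(Y, tol=1e-10):
    """Orthonormal basis Q for span(Y) without QR: form the small Gram
    matrix B = Y^T Y (parallelizable), factor it on one thread, and set
    Q = Y V Diag(lambda)^(-1/2), dropping directions with lambda ~ 0."""
    B = Y.T @ Y                          # k x k, with k << m
    lam, V = np.linalg.eigh(B)           # B is symmetric PSD
    keep = lam > tol
    return Y @ (V[:, keep] / np.sqrt(lam[keep]))
```

The only serial work is on a k x k matrix, so for k much smaller than m the routine parallelizes as well as a matrix product does.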
Approximate GSVT in parallel. The key steps of the approximate GSVT (Algorithm 3) are the power method and the small SVD. The power method is parallelized straightforwardly by replacing its matrix products and span identification with the parallel subroutines above (Algorithm 10). For the SVD, multiple factorizations would usually be needed, whose parallelization is complex; the following proposition performs it in a simpler manner (proof in the Appendix).

Proposition. Given a matrix A and an orthogonal Q whose span equals span(A), the SVD of A can be recovered from the SVD of the small matrix Q^T A.

The resultant parallelized procedure for the approximate GSVT is shown as Algorithm 11: the small SVD is performed on a single thread, the singular values are thresholded using the corollary, and the output is returned in factorized form. Moreover, when the input has the "sparse plus low-rank" structure mentioned earlier, that structure is also used to speed up the matrix multiplications. Each parallel step's cost is divided nearly evenly across the q threads, while the single-threaded small SVD is negligible, so the total time complexity of Algorithm 11 is about 1/q of its sequential counterpart.

Checking objectives. As shown in the partition figures, the computation of f is directly parallelized over the blocks of observed entries, while evaluating the regularizer relies on the singular values and needs only one thread; the low-rank factorized forms are again utilized. Hence, the time complexity of the objective-checking steps is also divided by q. Summarizing, the iteration time complexity of Algorithm 8 is about 1/q of that of Algorithm 5, i.e., the speedup with the number of threads is almost linear.

Experiments. In this section, we perform experiments on matrix completion and RPCA; experiments on the parallelized variant of the accelerated algorithm are reported with the Appendix material. All experiments are performed on a Windows server with an Intel Xeon CPU, multiple cores, and ample memory. The algorithms of the earlier sections are implemented in MATLAB; the parallel variant uses C++ matrix operations and standard thread programming.

Matrix completion. We compare a number of matrix completion solvers, including (i) models based on the commonly used convex nuclear norm regularizer; (ii) factorization models, which decompose the observed matrix into a product of two low-rank matrices U and V and solve the resulting problem in U and V; and (iii) models with the nonconvex regularizers of Table 1 (we use capped-l1, LSP, and TNN). The nuclear norm minimization algorithms compared include the accelerated proximal gradient (APG) algorithm, with a partial SVD by PROPACK and the efficient tricks of inexact acceleration and a "sparse plus low-rank" iterate used to speed up its computation, and active subspace selection (denoted "active"), which uses an active set of subspaces in each iteration so that the nuclear norm optimization problem is reduced to a smaller problem defined on the active set. We do not compare with stochastic gradient descent approaches, which have been shown to be less efficient. For factorization models, the rank is tuned on a validation set, and we compare with two algorithms: matrix fitting (LMaFit) and economical rank-one matrix pursuit (ER1MP), which pursues a rank-one basis in each iteration. For models with nonconvex regularizers, we compare the following solvers: the iteratively reweighted nuclear norm (IRNN) algorithm; the generalized proximal gradient (GPG) algorithm, with the underlying scalar problems solved using the closed-form solutions; and the proposed FaNCL (Algorithm 5) and its accelerated variant FaNCL-acc (Algorithm 6). We do not compare with the concave-convex procedure, since it has been shown to be inferior to IRNN. All algorithms are stopped when the difference in objective values between consecutive iterations becomes smaller than a small tolerance.

Synthetic data. The observed m x m matrix is generated as O = U V^T plus noise, where the elements of the low-rank factors are sampled i.i.d. from the standard normal distribution and the noise elements from a normal distribution; a total of O(m k log m) random elements are observed. Half of them are used for training and the rest as the validation set for parameter tuning; testing is performed on the unobserved elements. For performance evaluation, we use (i) the normalized mean squared error NMSE = ||P(Xhat - Xtrue)||_F / ||P(Xtrue)||_F, where Xhat is the recovered matrix and P restricts to the unobserved positions; (ii) the recovered rank; and (iii) the training CPU time. We vary m over a range, and each experiment is repeated five times.
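For concreteness, the NMSE criterion above can be computed as follows (a minimal sketch; the boolean-mask interface is an assumption for the example):

```python
import numpy as np

def nmse(X_hat, X_true, unobserved_mask):
    """Normalized mean squared error on the unobserved positions:
    ||P(X_hat - X_true)||_F / ||P(X_true)||_F, with P the mask."""
    diff = (X_hat - X_true)[unobserved_mask]
    return np.linalg.norm(diff) / np.linalg.norm(X_true[unobserved_mask])
```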
Table 3: matrix completion performance on synthetic data (NMSE, recovered rank, and CPU time in seconds; data sparsity in brackets; best results according to pairwise comparison highlighted; numerical values omitted here).

Results are shown in Table 3. As can be seen, nonconvex regularization (capped-l1, LSP, and TNN) leads to much lower NMSE than convex nuclear norm regularization and factorization; moreover, the nuclear norm solvers output much higher ranks. In terms of speed, among the nonconvex solvers, FaNCL is fast and FaNCL-acc is the fastest; the larger the matrix, the higher the speedup of FaNCL over GPG and IRNN.

MovieLens. Experiments are next performed on the popular MovieLens data sets (Table 4), which contain ratings of different users on movies. We follow the standard setup: a portion of the observed ratings is used for training, a portion for validation, and the rest for testing. For performance evaluation, we use the root mean squared error (RMSE) on the test set for the recovered matrix. Each experiment is repeated five times.

Table 4: recommendation data sets used in the experiments (numbers of users, movies, and ratings): the MovieLens variants, Netflix, and Yahoo.

Results are shown in Table 5 (RMSE, rank, and time in seconds; best results highlighted). Again, the nonconvex regularizers lead to the lowest RMSE. Moreover, FaNCL-acc is also the fastest among the nonconvex solvers, and even faster than GPG and IRNN by a wide margin; in particular, FaNCL and its accelerated variant are the only nonconvex-regularization solvers that can be run on all the data sets. The accompanying figure compares the objectives versus CPU time of the nonconvex-regularization solvers for the LSP regularizer (the plots for TNN and capped-l1 are similar and thus not shown); as can be seen, FaNCL-acc decreases the objective and the testing RMSE much faster than the others.

Netflix and Yahoo. Next, we perform experiments on two large recommendation data sets, Netflix and Yahoo (Table 4). We randomly split the observed ratings into training, validation, and test portions, and each experiment is repeated five times. Results are shown in Table 6 (RMSE, rank, and CPU time in minutes; best results highlighted). APG, GPG, and IRNN cannot be run, as the data sets are too large; ER1MP has running time similar to LMaFit but inferior performance, and is thus not compared. The nonconvex regularizers converge faster and yield lower RMSE, with solutions of much lower ranks. A figure shows the objectives and RMSE versus time on Netflix and Yahoo. The factorization model easily overfits as the rank increases; hence its validation set selects a smaller rank relative to that obtained with the nuclear norm, and it stops earlier; however, as can be seen, its RMSE is much worse.

Robust principal component analysis: synthetic data. We first perform RPCA experiments on a synthetic data set. The observed m x m matrix is generated as the sum of a low-rank part U V^T, a sparse matrix Stilde whose few nonzero elements are randomly set to positive and negative values with equal probabilities, and noise; the low-rank factors and the noise are sampled from normal distributions. The whole data set is randomly split into training and test sets of equal size. The standard l1 regularizer is used as the sparsity regularizer, different low-rank regularizers are used for r, and the hyperparameters are tuned using the training set. For performance evaluation, we use (i) the NMSE of the recovered low-rank and sparse components, respectively; (ii) the accuracy of locating the sparse support (the percentage of entries on which the indicator of S_ij being nonzero or zero agrees with the ground truth); together with (iii) the recovered rank and the CPU time. We vary m, and each experiment is repeated five times. Note that IRNN and active subspace selection cannot be used here, as their objectives consist of a smooth function plus a single low-rank regularizer, whereas RPCA also has a nonsmooth sparsity regularizer; similarly to matrix completion, FaNCL without acceleration has been shown to be slower and is not compared.

Table 7: RPCA performance on synthetic data (NMSE, rank, and CPU time in seconds; best results highlighted; values omitted here). Results are shown in Table 7. The accuracies of locating the sparse support are always near-perfect for all methods and are thus not shown.
Moreover, both the convex and nonconvex regularizers perfectly recover the matrix rank and the sparse locations, but the nonconvex regularizers achieve lower NMSE. As in matrix completion, FaNCL-acc is much faster; the larger the matrix, the higher the speedup.

Background removal in videos. In this section, we use RPCA for background removal on four benchmark videos (Table 8): bootstrap, campus, escalator, and hall; example image frames are shown in the accompanying figure. The image background is considered low-rank, while the moving objects in the foreground contribute to the sparse component.

Table 8: videos used in the experiment (pixels per frame and total numbers of frames for bootstrap, campus, escalator, and hall; values omitted here).

Given a video, each image frame is first reshaped as a column vector, and all frames are then stacked together to form a matrix; the pixel values are normalized and Gaussian noise is added. The experiment is repeated five times. For performance evaluation, we use the commonly used peak signal-to-noise ratio (PSNR) of the recovered video.

Results are shown in Table 9 (PSNR and CPU time in seconds for the four input videos; best results highlighted). As can be seen, the nonconvex regularizers lead to better PSNR than the convex nuclear norm; moreover, FaNCL-acc is much faster than GPG. A figure shows PSNR versus CPU time on the bootstrap and campus data sets: FaNCL-acc converges to a higher PSNR much faster, and the results on hall and escalator are similar.

Parallel matrix completion. In this section, we experiment with the proposed parallel algorithm on the Netflix and Yahoo data sets (Table 4). We do not compare with the other algorithms, which showed inferior performance in the earlier sections. The machine has multiple cores, and we use one thread per core. As suggested in the literature, we randomly shuffle the matrix columns and rows before partitioning. We use the LSP penalty and fix the total number of iterations; the hyperparameters are as before. The experiments are repeated five times.

The convergence of the objective in a typical run is shown in the accompanying figure; as multiple threads run on a single CPU, we report clock time instead of CPU time. As can be seen, the accelerated algorithms are much faster than the non-accelerated ones, and parallelization provides further speedup. A second figure shows the speedup with different numbers of threads (the red dashed line indicating linear speedup): the parallelized variants scale well with the number of threads, and the scaling is better on Yahoo, whose observed entries are more evenly distributed over the partitioned data submatrices, which improves the performance of parallel algorithms. Another observation is that the speedup can exceed the linear rate: as discussed in the literature on sparse matrix-vector multiplication, a significant amount of time is spent indexing the nonzero elements of a large sparse matrix; when the matrix is partitioned, each submatrix becomes smaller and easier to index, and the memory cache also becomes more effective.

Conclusion. In this paper, we considered the challenging problem of nonconvex low-rank matrix optimization. The key observations are that, for the popular nonconvex low-rank regularizers, the singular values obtained from the proximal operator are automatically thresholded, and that the proximal operator can be computed on a smaller matrix; this allows the proximal operator to be efficiently approximated with the power method. We extended the proximal algorithm to this nonconvex optimization setting, with acceleration and an inexact proximal step, and parallelized it; the proposed algorithm scales well with the number of threads. Extensive experiments on matrix completion and RPCA show that the proposed algorithm is much faster than the state of the art, and also demonstrate that nonconvex low-rank regularizers outperform the standard convex nuclear norm regularizer. In the parallel setting, the observed entries are typically distributed evenly over the partitioned matrices, so the workloads of the different threads are well balanced; one future direction is to allow asynchronous updates in the parallel algorithm, which can help reduce the waiting time of threads with light workloads and make more efficient use of the CPU. Moreover, while parallel algorithms on multicore machines are easier to implement and have fewer communication issues, they are less scalable than distributed algorithms, which allow scaling up to massive data sets; we will consider extending the proposed algorithms to distributed computing environments.

References
[1] E. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics.
[2] Ji, Liu, Shen, and Xu. Robust video denoising using low rank matrix completion. In Proc. CVPR.
[3] Hu, Zhang, Ye, Li, and He. Fast and accurate matrix completion via truncated nuclear norm regularization. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[4] Liu, Musialski, Wonka, and Ye. Tensor completion for estimating missing values in visual data. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[5] Lu, Tang, Yan, and Lin. Nonconvex nonsmooth low rank minimization via iteratively reweighted nuclear norm. IEEE Transactions on Image Processing.
[6] Gu, Xie, Meng, Zuo, Feng, and Zhang. Weighted nuclear norm minimization and its applications to low level vision. International Journal of Computer Vision.
[7] Candès, Li, Ma, and Wright. Robust principal component analysis? Journal of the ACM.
[8] Sun, Xiang, and Ye. Robust principal component analysis via capped norms. In Proc. International Conference on Knowledge Discovery and Data Mining.
[9] Oh, Tai, Bazin, Kim, and Kweon. Partial sum minimization of singular values in robust PCA: algorithm and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[10] Wu, Ganesh, Shi, Matsushita, Wang, and Ma. Robust photometric stereo via low-rank matrix completion and recovery. In Proc. Asian Conference on Computer Vision.
[11] Yang, Luo, Qian, Tai, and Zhang. Nuclear norm based matrix regression with applications to face recognition with occlusion and illumination changes. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[12] Liu, Lin, Yan, Sun, Yu, and Ma. Robust recovery of subspace structures by low-rank representation. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[13] Ji and Ye. An accelerated gradient method for trace norm minimization. In Proc. International Conference on Machine Learning.
[14] Mazumder, Hastie, and Tibshirani. Spectral regularization algorithms for learning large incomplete matrices. Journal of Machine Learning Research.
[15] Yao and Kwok. Accelerated inexact Soft-Impute for fast large-scale matrix completion. In Proc. International Joint Conference on Artificial Intelligence.
[16] Zhang, Yu, and Schuurmans. Accelerated training for matrix-norm regularization: a boosting approach. In Proc. Advances in Neural Information Processing Systems.
[17] Hsieh and Olsen. Nuclear norm minimization via active subspace selection. In Proc. International Conference on Machine Learning.
[18] Zhang. Analysis of multi-stage convex relaxation for sparse regularization. Journal of Machine Learning Research.
[19] Candès, Wakin, and Boyd. Enhancing sparsity by reweighted l1 minimization. Journal of Fourier Analysis and Applications.
[20] Fan and Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association.
[21] Zhang. Nearly unbiased variable selection under minimax concave penalty. Annals of Statistics.
[22] Gui and Han. Towards faster rates and oracle property for low-rank matrix estimation. In Proc. International Conference on Machine Learning.
[23] Yuille and Rangarajan. The concave-convex procedure. Neural Computation.
[24] Gong, Zhang, Lu, Huang, and Ye. A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In Proc. International Conference on Machine Learning.
[25] Li and Lin. Accelerated proximal gradient methods for nonconvex programming. In Proc. Advances in Neural Information Processing Systems.
[26] Lu, Zhu, Xu, Yan, and Lin. Generalized singular value thresholding. In Proc. AAAI Conference on Artificial Intelligence.
[27] Halko, Martinsson, and Tropp. Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review.
[28] Yao, Kwok, and Zhong. Fast low-rank matrix learning with nonconvex regularization. In Proc. International Conference on Data Mining.
[29] Parikh and Boyd. Proximal algorithms. Foundations and Trends in Optimization.
[30] Beck and Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences.
[31] Ghadimi and Lan. Accelerated gradient methods for nonconvex nonlinear and stochastic programming. Mathematical Programming.
[32] Cai, Candès, and Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization.
[33] Oh, Matsushita, Tai, and Kweon. Fast randomized singular value thresholding for nuclear norm minimization. In Proc. CVPR.
[34] Toh and Yun. An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pacific Journal of Optimization.
[35] Lin, Chen, and Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Technical report, School of EECS, Peking University.
[36] Larsen. Lanczos bidiagonalization with partial reorthogonalization. Technical report, Department of Computer Science (DAIMI), Aarhus University.
[37] Attouch, Bolte, and Svaiter. Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Mathematical Programming.
[38] Schmidt, Roux, and Bach. Convergence rates of inexact proximal-gradient methods for convex optimization. In Proc. Advances in Neural Information Processing Systems.
[39] Hiriart-Urruty. Generalized differentiability, duality and optimization for problems dealing with differences of convex functions. In Convexity and Duality in Optimization.
[40] Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Springer.
[41] Wen, Yin, and Zhang. Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Mathematical Programming Computation.
[42] Wang, Lai, Lu, Fan, Davulcu, and Ye. Orthogonal rank-one matrix pursuit for low rank matrix completion. SIAM Journal on Scientific Computing.
[43] Gemulla, Nijkamp, Haas, and Sismanis. Large-scale matrix factorization with distributed stochastic gradient descent. In Proc. International Conference on Knowledge Discovery and Data Mining.
[44] Hsieh and Dhillon et al. Scalable coordinate descent approaches to parallel matrix factorization for recommender systems. In Proc. International Conference on Data Mining.
[45] Recht and Ré. Parallel stochastic gradient algorithms for large-scale matrix completion. Mathematical Programming Computation.
[46] Demmel, Heath, and van der Vorst. Parallel numerical linear algebra. Acta Numerica.
[47] Avron, Kale, Sindhwani, and Kasiviswanathan. Efficient and practical stochastic subgradient descent for nuclear norm regularization. In Proc. International Conference on Machine Learning.
[48] Zhuang, Chin, Juan, and Lin. A fast parallel SGD for matrix factorization in shared memory systems. In Proc. ACM Conference on Recommender Systems.
[49] Goumas, Kourtis, Anastopoulos, Karakasis, and Koziris. Understanding the performance of sparse matrix-vector multiplication. In Proc. Euromicro Conference on Parallel, Distributed and Network-based Processing.
[50] Bertsekas and Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Athena Scientific.
[51] Bertsekas. Nonlinear Programming. Athena Scientific.
[52] Rao. Separation theorems for singular values of matrices and their applications in multivariate analysis. Journal of Multivariate Analysis.
[53] Arbenz. Solving Large Scale Eigenvalue Problems. Lecture notes, Department of Mathematics, ETH. Available online.
[54] Lewis and Sendov. Nonsmooth analysis of singular values. Set-Valued Analysis.

Appendix A: Parallel FaNCL-acc. Algorithm 12 shows the parallel version of FaNCL-acc. As in the sequential variant, acceleration is performed first and a first inexact proximal step is then taken from the accelerated iterate; the algorithm checks whether the accelerated iterate can be accepted and, if the condition fails, a second inexact proximal step is performed from the non-accelerated iterate. The iterates are initialized from random Gaussian matrices and partitioned across threads. Note that Algorithm 12 is equivalent to Algorithm 6; thus, the convergence analysis of the earlier sections still holds.

Appendix B: Checking the condition of Attouch et al. Their condition for accepting an approximate proximal step requires decomposing the objective into two convex functions and bounding subgradient quantities of the form sup over the subdifferential of one part against inf over the other. As an example, consider the LSP regularizer rhat(alpha) = log(1 + alpha/theta) and use the difference-of-convex decomposition: evaluating the required sup and inf terms involves the leading columns of the singular vector matrices of the full iterate; hence a full SVD is needed, which is expensive and impractical for large matrices. Moreover, the resulting scalar conditions may have multiple solutions, and to ensure that the first case holds one must take the largest solution among the possible candidates, which grows as the iterate grows.

Appendix C: Proofs.
Proof of the thresholding proposition. For simplicity of notation, write sigma for the singular value under consideration. We first recall the definition of a concave function and introduce two lemmas: the first bounds the supergradients of the concave, nondecreasing rhat (they are nonnegative and nonincreasing); the second shows that the scalar minimizer y* is expressible through a max with zero and increases as sigma increases. For the first part, let y* be a minimizer of (1/2)(y - sigma)^2 + lambda rhat(y) over y >= 0; by the optimality condition there are two possibilities, namely the minimum is attained at the boundary y = 0 or at an interior stationary point. In other words, the optimal solution is either zero or achieved away from the boundary, and combining the two cases, the relationship can be expressed through a max. For the second part, as sigma becomes smaller, the interior stationary candidate decreases and eventually reaches zero; this comes from two facts: first, as sigma gets smaller the quadratic term pulls the minimizer down (first lemma), and second, the supergradient bound of rhat does not decrease (second lemma). Hence there exists a threshold gamma such that y* = 0 for all sigma <= gamma, and conversely, as sigma gets larger, y* also becomes larger.

Proof of the corollary. We show how to derive the threshold for one penalty; the derivations for the other penalties are obtained similarly. Note that the problem considers each singular value separately. For simplicity of notation, let sigma denote the i-th singular value. The scalar objective is piecewise quadratic, so there are a small number of candidate minimizers: zero, the unconstrained quadratic solution, and the breakpoint of the penalty; note that the candidate solutions can be taken arbitrarily close to the breakpoint, so all possibilities of the argmin are covered by these candidates. Comparing the objective values of the candidates, the minimizer is zero exactly when sigma is at most a min of two expressions in lambda and theta; combining the two cases, we conclude that the threshold gamma equals this min.

Proof of the small-matrix proposition (prox from Q^T Z). We first recall a separation theorem for singular values: for conformable partitions, the singular values of a matrix interlace those of its submatrices. Let Z have SVD U Sigma V^T, with the leading khat columns of U and V collected separately from the remaining columns; since span(Q) contains the leading left singular subspace, Q^T Z can be rewritten so that its leading singular values and vectors coincide with those of Z, while the discarded directions correspond to singular values at most gamma. Substituting into the GSVT proposition and combining, the optimal solution for Z is obtained by applying the GSVT to Q^T Z and lifting by Q, with the corresponding left and right singular vectors contained in Q and V respectively; the separation theorem guarantees that no singular value larger than gamma is lost, since thresholding zeroes all the others.
Proofs of the power-method lemmas and of the approximate-GSVT proposition. Two auxiliary lemmas quantify the accuracy of the subspace produced by the power method: with warm-starting, the basis obtained after the power iterations captures the leading left singular subspace of the input up to a factor that decays with the number of iterations; their proofs follow the analysis of randomized SVD. The approximate-GSVT proposition then follows by combining these lemmas with the small-matrix proposition: the span returned by the power method contains the required subspace, so the prox of the projected matrix, lifted back, equals the prox of the original matrix.

Proof of the difference-of-convex proposition. We first introduce a lemma: if h is convex, then so is its spectral extension to matrices. By assumption (A1), f(X) plus a multiple of the squared Frobenius norm is convex; rewriting the regularizer as a constant multiple of the nuclear norm minus an obviously convex remainder (using the concavity of rhat and the lemma), F can thus also be written as a difference of convex functions.

Proofs of the boundedness propositions. For Algorithm 5, the acceptance condition of inexactPS ensures a sufficient decrease at every iteration; summing these inequalities and using inf F > -infinity shows that the sum of squared successive differences is finite and the objective values are nonincreasing, so by assumption (A2) the iterates remain in a bounded sublevel set and have at least one limit point. For Algorithm 6, partition the iterations into the two sets in which the accelerated step is or is not accepted; summing the respective decrease inequalities over each set, and considering the cases in which either set is finite or infinite, gives the same conclusion. For the RPCA variant, the same argument applies jointly to the pair of iterates, which must therefore be bounded with at least one limit point.

Proofs of the convergence corollaries. Let Delta_t denote the progress measure. The minimum over t of Delta_t squared is at most the average, which, by the telescoped sufficient-decrease inequality, is bounded by a constant times (F at the first iterate minus inf F) divided by T; hence the O(1/T) rate holds.

Proofs of the critical-point theorems. Consider a convergent subsequence of iterates; the summability of the squared progress measure forces the successive differences along it to vanish, and, when acceleration is used, the accelerated iterates converge to the same limit. Under the assumption that the accepted proximal steps are asymptotically exact, the limits of the inexactPS outputs coincide with exact proximal points; passing to the limit in the proximal fixed-point relation and using the closedness of the proximal mapping (via the difference-of-convex decomposition and the preceding lemma) shows that every such limit is a critical point. For the RPCA variant, critical points are defined blockwise through the difference-of-convex decomposition with the two blocks of variables; since the proximal operator on S is always exact, the same argument applies to each block, and combining the two shows that the limit pair is a critical point. In all cases, whether the sets of accelerated and non-accelerated iterations are finite or infinite, every limit point of the sequence is a critical point.

Proof of the span proposition. Let Y^T Y have eigendecomposition V Diag(lambda) V^T. By construction, Q^T Q = Diag(lambdatilde)^{-1/2} V^T (Y^T Y) V Diag(lambdatilde)^{-1/2} reduces to the identity on the directions with nonzero lambda, so Q is orthogonal after discarding the null directions; and since Q is obtained from Y by an invertible transform on its column space, span(Q) covers span(Y). Considering separately the full-rank and rank-deficient cases, the proposition follows.

Proof of the parallel-SVD proposition. Let A = Q B with Q orthogonal and span(Q) equal to span(A), and let B = Q^T A have SVD U_B Sigma V^T. Then A = (Q U_B) Sigma V^T, where Q U_B has orthonormal columns since both factors do; hence this is an SVD of A, and the second equality follows from the SVD of B.
When 3D-Aided 2D Face Recognition Meets Deep Learning: An Extended UR2D for Pose-Invariant Face Recognition

Xiang Xu, Pengfei Dou, Ioannis A. Kakadiaris
Computational Biomedicine Lab, University of Houston, Calhoun Rd., Houston, TX, USA

Abstract. Most face recognition works focus on specific modules or demonstrate a research idea. This paper presents a pose-invariant 3D-aided 2D face recognition system (UR2D) that is robust to pose variations as large as 90 degrees by leveraging deep learning technology. The architecture and the interface of UR2D are described, and each module is introduced in detail. Extensive experiments are conducted on the UHDB31 and IJB-A datasets, demonstrating that UR2D outperforms existing 2D face recognition systems such as VGG-Face, FaceNet, and a commercial off-the-shelf software (COTS) on the UHDB31 dataset and on the IJB-A dataset on average in face identification tasks. UR2D also achieves state-of-the-art performance on the IJB-A dataset when comparing the Rank-1 accuracy score from template matching. It fills a gap by providing a 3D-aided 2D face recognition system with results compatible with 2D face recognition systems that use deep learning techniques.

Keywords: 3D-aided 2D face recognition; deep learning; pipeline.
Preprint submitted to the Image and Vision Computing special issue on Biometrics in the Wild, September.

Figure 1: Depiction of the pose problem: selected samples and the distribution of yaw angles in a constrained dataset (UHDB31) and a wild dataset (IJB-A).

Introduction

Face recognition is an application in which a computer either classifies human identity according to the face (face identification) or verifies whether two images belong to the same subject (face verification). A common face recognition system has two steps: enrollment and matching. Specifically, in the enrollment stage, features are obtained from a facial image or a set of images to build a signature or template for the subject; enrollment usually consists of three steps: (i) face detection, (ii) face alignment, and (iii) signature generation. In the matching stage, signatures are compared to obtain a distance for either the identification or the verification problem.

Recently, face recognition technology has advanced significantly with the deployment of deep learning technology, especially convolutional neural networks (CNNs). Pure 2D face recognition systems have achieved human performance or even better. DeepFace, proposed by Taigman et al., first reported performance on the Labeled Faces in the Wild (LFW) standard benchmark better than human efforts. FaceNet, proposed by Schroff et al., used a triplet loss to train a deep neural network on roughly 200 million labeled faces and obtained state-of-the-art verification accuracy on LFW. (Corresponding author: Ioannis A. Kakadiaris, email: ikakadia.)

The success of deep learning techniques in face recognition relies on the following four aspects: (i) a large amount of data, either public datasets (e.g., CASIA-WebFace) or private datasets; (ii) advanced network architectures (e.g., VGG, ResNet); (iii) discriminative learning approaches (e.g., triplet loss, center loss, range loss, SphereFace); and (iv) regularization methods (e.g., noisy softmax). However, face recognition is still not a solved problem under conditions unlike those of datasets such as LFW, which use a face detector that is not designed to work on the whole pose distribution. In an unconstrained scenario, especially with a surveillance camera, there is a plethora of images with large variations in head pose, expression, illumination, and occlusions. To overcome these challenges, a 3D face model can be applied to assist face recognition, as the facial model is intrinsically invariant to pose and illumination. When a 3D face model is used, the model is fitted to the facial images and a 3D-to-2D projection matrix is estimated; with the help of the projection matrix and the fitted model, it is easy to rotate the face and align input images from arbitrary large pose positions to the frontal position for feature extraction and signature matching.

In the last few years, researchers have focused on 2D face recognition from the pure image view and have developed numerous loss-function approaches to learn discriminative features across different poses. Only a limited number of face recognition systems have been developed that use a 3D model to help align the images. Kakadiaris et al. proposed a pose- and illumination-invariant system that frontalizes the face image using an Annotated Face Model (AFM); a unified 3D morphable model with an additional PCA subspace perturbation has also been proposed to address the same problem. As mentioned above, this paper presents a 3D-aided 2D face recognition system, called UR2D, that significantly improves face recognition
performance by using the AFM and deep learning technology, especially in large-pose scenarios. There is an enormous demand for such systems, since frontal face recognition can be considered a solved problem. UR2D consists of several independent modules: face detection, landmark detection, 3D model reconstruction, pose estimation, texture lifting, signature generation, and matching. Besides state-of-the-art face detection methods, it integrates methods developed in the Computational Biomedicine Lab and provides sufficient tools and interfaces so that different modules can be used or swapped in the designed system. The core code is written in efficient C++ and provides bindings for Python. The system leverages several libraries: OpenCV, glog, gflags, pugixml, JSON for Modern C++, and Caffe. After detecting the face and landmarks in an image, a 3D model is constructed from the image (or several images) and the projection matrix is estimated; with the correspondence between the model and the image computed, the model is used to help frontalize the face, and features with occlusion encodings are extracted to represent the face. In matching, cosine similarity is used to compute the similarity of two signature vectors.

In summary, this paper extends the conference version and makes the following contributions: (i) a brief survey of recent face recognition pipelines, with each module summarized; (ii) a 3D-aided 2D face recognition system using deep learning, in which the intrinsic value of the 3D model is explored to frontalize face features and extract a pose-robust representation; and (iii) a demonstration that the resulting system exhibits performance comparable to state-of-the-art 2D systems on frontal faces while its results outperform VGG-Face, FaceNet, and COTS on the UHDB31 and IJB-A datasets on average; in addition, we demonstrate how to generate template signatures from multiple images and achieve state-of-the-art performance on the IJB-A dataset.

The rest of the paper is organized as follows: modern face recognition systems are reviewed in the related-work section; the architecture and functionalities of UR2D are then discussed; each module is separately introduced in detail; and detailed evaluations on constrained (indoor) and in-the-wild datasets are reported last.

Related work

We divide the current existing work into two categories: we first discuss in detail the recent related work on each module of the common face recognition pipeline from an academic view, and then discuss system-level papers and implementations.

Modules

Face detection. Face detection is the first step of the pipeline and a well-studied topic in the face recognition domain. Zafeiriou et al. presented a comprehensive survey of the topic and divided approaches into two categories, rigid templates and deformable models. In addition to the methods summarized there, approaches from object detection such as the regions-with-convolutional-neural-network (R-CNN) framework and its well-developed successors have been integrated directly into face detection. One line of work uses a mean face model divided into parts and joins facial-part proposals into a single detection model; Hu and Ramanan explored context and image resolution with residual networks (ResNet) and demonstrated the detection of faces as small as three pixels. Besides face detectors that use the proposal-plus-classification technique, single-stage detectors have also been developed: SSD and YOLO classify a fixed grid of boxes and learn regression functions that map the boxes onto objects simultaneously. Lin et al. addressed the performance issue of single-stage detectors, namely the unbalanced positive and negative samples, with the focal loss, and also trained a strong object detector with it. Recently, SSH was proposed by Najibi et al., which uses classification and regression losses in a single-stage, headless network.

Face alignment. Face alignment refers to aligning the face image to a specific canonical position; researchers usually include landmark detection in this topic. Jin and Tan summarized the categories of popular approaches to this task. Cascaded regression has been the major trend in the topic, while classification frameworks have recently tended to become popular. Zhu et al. searched for similar shapes from exemplars and regressed the shapes using SIFT features while updating the probability of candidate shapes. An ensemble of random ferns has been used to learn local binary discriminative features. Kakadiaris et al. proposed to jointly learn the head pose estimation and face alignment tasks in a single framework (JFA) using global and local CNN features. Other researchers treat face alignment as a classification problem: KEPLER joins CNN features from different layers and captures response maps to localize the landmarks, and the GoDP algorithm localizes landmarks with a fully convolutional network (FCN)
framework, exploring both low- and high-resolution information.

Recent works use generative adversarial networks (GANs) to frontalize faces: Huang et al. used a two-pathway GAN for photorealistic frontal synthesis that keeps identity details and texture; Yin et al. incorporated a 3D morphable model into the GAN to frontalize faces in large poses in the wild; and DR-GAN, proposed by Tran et al., generates a frontalized face from face images under different poses and also demonstrated its usage in face recognition.

Signature generation. Generating a discriminative representation of a subject by training on millions of face images with deep learning technology is an emerging topic of face recognition research, and many feature descriptors have been proposed recently. Parkhi et al. proposed the VGG-Face descriptor trained within VGG-style architectures with a triplet loss. The triplet loss was proposed by Schroff et al. to train a deep neural network on roughly 200 million labeled faces at Google. Masi et al. developed pose-aware face recognition for unconstrained environments with ResNet models and rendered images; in addition to frontalizing the face, they also rendered face images at other poses. Masi et al. further addressed the question of whether we really need to collect millions of faces for training a face recognition system, arguing that synthesized images can be used instead of real images to train the model while still obtaining comparable results. Besides the triplet loss, many loss functions have been proposed recently: the center loss was added by Wen et al. alongside the cross-entropy loss to obtain more discriminative features for deep face recognition; the range loss was designed by Zhang et al. to train deep neural networks on long-tail distributions; and the same idea of enhancing discriminative ability by maximizing inter-class distances is used in SphereFace and in the marginal loss, the latter trained with large-scale data.

Systems

Table 1: Comparison of recent existing face recognition pipelines (name, core language, detection, alignment, representation, matching, and whether each is modern and active, employing the definitions made by Klontz et al.); entries include OpenBR, DeepFace, FaceNet, OpenFace (Torch), and UR2D.

OpenCV and OpenBR are well-known computer vision and pattern recognition libraries; however, the eigenface algorithm in OpenCV has not been updated for years, and these libraries support only nearly frontal face recognition, since their face detectors detect only frontal faces. OpenFace, an implementation of FaceNet by Amos et al. using Python and Torch, provides four demos of its usage; it applies the dlib face detector and landmark detector, which perform better than OpenBR's. There is also an official TensorFlow implementation of FaceNet whose authors use MTCNN to detect and align the face, which boosts both detection accuracy and speed. To the best of our knowledge, there is only a limited amount of system papers, as most papers focus on different research aspects of face representation. The comparison of recent face recognition systems presented in Table 1 includes both research-oriented face-representation work and system-design work. UR2D is designed as a 3D-aided face recognition system; moreover, it is suitable for research, as it is fast, processes images end-to-end, provides baselines, and supports the development of new modules on top of the system.

Requirements. UR2D is written in clean, efficient C++ and developed on the Linux platform (Ubuntu). The system requires GCC for compilation and leverages a list of libraries and tools: CMake, Boost, OpenCV, gflags, glog, pugixml, JSON for Modern C++, and Caffe. All dependencies are available in the Ubuntu repository except Caffe; therefore, installing the dependencies only additionally requires installing Caffe manually.
Architecture

Overview. Figure 2 depicts the architecture of UR2D and explicitly shows each module and its functionality: external shared libraries (OpenCV, glog, gflags, Caffe) appear as blue blocks; the three green components (file system, Caffe utility, logging, and dataset utility) belong to the system base software and provide basic functions; the algorithm modules (e.g., detection, matching) are constructed on top of them; and APIs, applications (apps), and GUIs are built by combining the APIs. Users can directly call the applications to obtain results. The advantage of this architecture is that it is simple to make full use of the development libraries in the system and to reuse features easily.

Data structures. The basic element is the file: disk operations and algorithms are based on files. The basic in-memory data structure is a hash table of key-value pairs, where both keys and values are of string type. Unlike OpenBR, we avoid keeping giant data blobs in memory: we keep the file path in memory and load the data when needed. Based on the external and base libraries (which process files, use CUDA, manage data files, etc.), high-level APIs are implemented by calling each module's functions; based on this SDK, it is easy to write various applications for different purposes, and we also created GUIs to demonstrate the pipeline.

Configuration. There are two approaches to run UR2D. The first is by defining a configuration file in JSON format which points to the datasets, input files, output directories, involved modules, model locations, and evaluation settings. The attribute "dataset" contains information about the input dataset, including its name and path; the attribute "input" contains the lists of galleries and probes; the attribute "output" defines the output directories; and the attribute "pipelines" defines the modules used in the pipeline. The command-line application (pip) accepts one argument, the configuration file; it parses the configuration file, loads the models, and runs the defined modules. The advantages of this approach are simplicity and flexibility: unlike the OpenBR framework, it does not require a detailed understanding of each option in long command-line arguments; users only need to change the values of the attributes "dataset" and "input" to set the dataset directory or file, and the enrollment program (pip) then generates the output defined in the configuration file.

The second approach is the command-line interface: to make full use of the SDK, we created corresponding applications to run each module. These applications accept a file list (a text or CSV file which by default includes a tag in the top line), a folder, or a single image; the system loads the data into memory and processes it according to the data list, with arguments specifying the locations where input is read and output is saved. When enrollment is executed, it generates signatures in the output directory, and the path of each signature is recorded in the data; by calling the API of the system, the list of data can be written to a file in the default format. An illustration of such a configuration file follows.
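The sketch below mirrors the configuration format described above. The four top-level attributes (dataset, input, output, pipelines) are named in the text; the inner keys, file names, and paths are hypothetical placeholders for illustration only, not the system's actual schema:

```json
{
  "dataset":   { "name": "UHDB31", "path": "/data/uhdb31" },
  "input":     { "galleries": ["gallery_frontal.csv"],
                 "probes":    ["probe_pose01.csv", "probe_pose21.csv"] },
  "output":    { "signatures": "out/signatures", "scores": "out/scores" },
  "pipelines": ["detection", "landmarks", "reconstruction",
                "pose", "lifting", "signature"]
}
```

With such a file, switching datasets amounts to editing the "dataset" and "input" values, which is the flexibility argued for above.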
3D-aided 2D face recognition pipeline

Figure 3 depicts the overview of the whole pipeline (rounded rectangles in the middle represent the different modules; dashed arrows represent the workflow). Enrollment encompasses the modules listed as follows: the face is first detected and transferred to the landmark localizer; the 3D model is constructed directly from the image given the bounding box and landmarks, and the 3D-to-2D projection matrix is estimated; a frontalized image and an occlusion map are generated according to the model and the projection matrix; and pose-robust features are extracted from the frontalized images along with the occlusion encoding. The matching step computes the features on the visible parts and outputs a similarity score.

Face detection. Face detection used to be a serious problem for OpenBR and OpenFace, and even for commercial face recognition software (COTS), in terms of detection rate. OpenBR supports the OpenCV frontal face detector, and OpenFace also supports the dlib face detector. In recent years, however, many face detection algorithms have been developed with deep learning technology that support in-the-wild images and detect faces in all poses. Modern detectors such as HeadHunter and the DDFD face detector are supported in our system: Mathias et al. trained HeadHunter using multi-scale templates, while the DDFD face detector, proposed by Farfade et al., fine-tunes AlexNet and uses non-maximum suppression. To support different face detectors, the downstream modules perform bounding-box regression on the detected bounding box to reduce the variations among bounding-box styles and generate a unified bounding box. The first advantage of this approach is that there is no need to retrain the models of the downstream modules when switching the face detector; the second advantage is that it provides a robust bounding box to the landmark localization module.

Landmark localization. To detect face landmarks, we use GoDP, which has been demonstrated to be robust to pose variations. The GoDP landmark detector relies on confidence maps generated by a fully convolutional network: the confidence map generated for each landmark indicates the possibility of the landmark appearing at a specific location in the original image, and a prediction is made by simply selecting the location with the maximum response in the confidence map. This strategy helps suppress false alarms generated by background regions and improves the robustness of the algorithm under large head-pose variations. Compared with other landmark detectors, the novel architecture of GoDP merges information from deep and shallow layers based on a new loss function, which increases the resolution and discrimination of the confidence maps; it achieves state-of-the-art results on multiple challenging face alignment databases.

3D facial shape reconstruction. To reconstruct the facial shape from the input image, we integrate into the pipeline the algorithm proposed by Dou et al., which uses a subspace model to represent the AFM as a parameter vector and employs a CNN to estimate the optimal parameter values from a single image. To train the deep neural network, a large set of synthetic data was created by rendering randomly generated AFMs; to improve robustness to illumination variation, the deep neural network was fine-tuned on real facial images in addition to the synthetic data. Compared with existing work, it is efficient due to its architecture, which requires only a single forward operation to predict the model parameters; moreover, it relies on face detection only to localize the facial region of interest in the image, so, compared with landmark-based approaches, it is robust to the pose variations that degrade landmark detection accuracy.

Pose estimation. Given the 2D landmarks W obtained from landmark detection and the corresponding 3D landmarks L obtained from the model, the 3D-to-2D transformation matrix P is estimated by solving the problem min over P of ||W - P L||^2; in our implementation, we use the direct least squares (DLS) algorithm to solve this equation.

Texture lifting. Facial texture lifting is a technique, first proposed by Kakadiaris et al., that lifts the pixel values of the original images to a UV map. Given the projection matrix P, the AFM model, and the original image, it first generates a geometry image, each pixel of which captures the information of an existing or interpolated vertex on the AFM surface; with it, a set of 2D coordinates referring to the pixels of the original facial image is computed, and in this way the facial appearance is lifted and represented in a new texture image in model space. A Z-buffer technique is used to estimate the occlusion status of each pixel, and the process generates an occlusion mask. This module has two advantages: it generates frontal, normalized face images, which are convenient for feature extraction and comparison; and it generates occlusion masks that identify which parts of the face images are self-occluded, providing evidence for excluding those face regions from the signatures and improving the performance of face recognition when matching facial images.

Signature generation. We integrate into the pipeline the algorithm proposed by Dou et al. for extracting the pose-robust face signature (PRFS), a part-based face representation with discriminative local facial features and explicit pose (self-occlusion) encoding. The facial texture and the self-occlusion mask are first divided into multiple local patches; on each local patch, discriminative features are extracted and a self-occlusion encoding is computed; the ensemble of local features, enhanced with the encoding, forms the face signature. We use two types of local features, namely the DFD feature proposed by Lei et al. and a deep feature trained following Wen et al. using the center loss. To train the DFD feature, we use a small subset of the FRGC database that consists of frontal facial images; the facial texture is divided into local patches and a DFD feature extractor is trained for each local patch separately. To train the deep feature, the CASIA-WebFace dataset is used as training data; the facial texture is again divided into local patches and a deep neural network is trained on each local patch separately. In this paper, we call the face signature with the DFD feature PRFS, and the face signature with the deep feature DPRFS. In matching, the cosine similarity of the two signature vectors is computed over the mutually visible parts; a minimal sketch of this matching step is given below.
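The sketch below illustrates occlusion-aware cosine matching as described above. It is a simplified illustration under stated assumptions: per-patch features stored as rows and per-patch boolean visibility flags are an assumed layout, not the system's internal representation:

```python
import numpy as np

def match_score(feat_a, occ_a, feat_b, occ_b):
    """Cosine similarity between two signatures with self-occlusion
    encoding. feat_*: (P, d) per-patch features; occ_*: (P,) boolean
    visibility flags from the texture-lifting occlusion masks. Only
    patches visible in both signatures contribute to the score."""
    visible = occ_a & occ_b
    a = feat_a[visible].ravel()
    b = feat_b[visible].ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```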
Experiments

In this section, we provide a systematic numerical analysis on two challenging datasets covering the constrained and in-the-wild scenarios. The datasets are first introduced; then, for a fair comparison, the VGG face descriptor, FaceNet, and a commercial face recognition software (COTS) are used as baselines. On the two datasets we conducted image-pair matching and, in the end, template matching experiments on the in-the-wild dataset.

Datasets. Table 2 compares the two datasets (environment, poses, illuminations, and intended usage: constrained pose/illumination research versus unconstrained face recognition). UHDB31 is challenging due to pose variations, illumination, and resolution; since it was created in a controlled lab environment, it allows face-related research into pose and illumination issues, and in addition to 2D images it also provides the corresponding 3D model of each subject. An interesting property of this dataset is that the pose follows a uniform distribution over three dimensions (pitch, yaw, roll): each subject has images from 21 different views, the data were collected at the same time, and the 3D models registered to the data from different poses were used to generate a subject-specific face model. In addition, it provides three illuminations and several resolutions down-sampled from the original size. The other challenging dataset, IJB-A, consists of in-the-wild images and video frames; it was proposed by IARPA and is managed by NIST. It merges images and frames together and provides evaluations at the template level, where a template contains one or several samples of a subject; according to its protocol, galleries and probes are split into 10 folds. In our first experiment we modify the protocol to perform image-pair face identification; the details are introduced below. The system provides dataset utilities to parse and load the data of both datasets.

Baselines. To perform a fair comparison with current face recognition systems, we chose VGG-Face, FaceNet, and COTS as baselines. The VGG face descriptor, developed by Parkhi et al., is released with a Caffe model and a MATLAB example, where the model is used with an embedding method and the features of the original and flipped images are fused. In our implementation, we tried different combinations of the descriptor and matching methods and found that embedding features with the cosine-similarity metric work best; in the experiments we therefore use VGG-Face to denote embedding features matched with cosine similarity, and our baseline module provides an API to obtain these features. For FaceNet, the algorithm proposed by Schroff et al., we use a publicly available implementation from GitHub trained on CASIA-WebFace: MTCNN is first used to align the face, and fixed-dimensional embedding features are extracted; the provided models achieve an LFW accuracy a little lower than the original paper but still very high. COTS is a commercial software developed for scalable face recognition; it provides an SDK with applications that can be used directly, and the version used in the experiments is considered a significant boost compared with previous versions. For UR2D we report performance using both PRFS and DPRFS features. A summary of the software configurations (system, features, dimensions, metric) is reported in Table 3: VGG-Face (embedding, cosine), FaceNet (cosine), COTS (proprietary matcher), UR2D-PRFS (cosine), and UR2D-DPRFS (cosine). We compute identification accuracy over the successfully enrolled signatures of each system.

Face identification on constrained images. In this experiment we chose the configuration of UHDB31 with neutral illumination and the highest resolution; this subset is chosen to demonstrate that the system is robust to different poses, so the configuration excludes variations of illumination, expression, and resolution and keeps only pose variations. Treating the frontal-face images as the gallery and the images of each of the remaining 20 poses as separate probe sets, we performed face identification using the corresponding pairs of sigsets; each gallery and probe contains one image per subject.

Table 4 depicts the comparison of identification accuracy (percent) among the poses (methods ordered as VGG-Face, FaceNet, COTS, UR2D-PRFS, UR2D-DPRFS; pose indices ordered left-right and top-bottom over the pitch-yaw grid, with the frontal pose used as the gallery and the other poses as probes). It indicates how robust each system is to different poses. It can be observed that the COTS algorithm does not generalize over the whole pose distribution, and that FaceNet works better than COTS on extreme poses; one possible answer is that its model was trained on an available in-the-wild dataset (CASIA-WebFace) which, however, provides few extreme-pose cases. For poses with extreme pitch and yaw, the performance of the 2D face recognition pipelines still leaves significant room for improvement. On the other hand, with the help of the 3D model, UR2D keeps consistent and symmetric performance among the different poses: even in the cases of extreme yaw, the system can tolerate the pose variations, achieving high identification accuracy with DPRFS features, and somewhat lower accuracy with PRFS features, on average. A sketch of the Rank-1 evaluation used throughout this section follows.
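The sketch below shows closed-set Rank-1 identification over cosine scores, as used in the tables of this section. The function name and the flat (whole-signature) feature layout are illustrative assumptions:

```python
import numpy as np

def rank1_accuracy(gallery, g_ids, probes, p_ids):
    """Closed-set Rank-1 identification: match every probe signature
    against every gallery signature with cosine similarity; a probe is
    correct when its top-scoring gallery entry shares its subject id."""
    G = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    P = probes / np.linalg.norm(probes, axis=1, keepdims=True)
    top = np.asarray(g_ids)[np.argmax(P @ G.T, axis=1)]
    return float(np.mean(top == np.asarray(p_ids)))
```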
Face identification in the wild. In the unconstrained case, however, a 2D face recognition system can still suffer from pose variations. In this experiment we explore whether UR2D can also be used in an in-the-wild environment. We designed a different protocol: face identification experiments based on the original IJB-A splits but, unlike the original template-level comparison, conducted as image-pair comparisons. We first removed the samples that could not be enrolled by all methods, to make the comparison on identical pairs, and cropped each face according to the annotations, keeping image thumbnails whose resolution is large enough. Herein we do not compare with FaceNet, since its training set has overlapping samples with the subjects of this dataset.

Table 5 depicts the Rank-1 identification rate (percent) of the different methods (VGG-Face, COTS, UR2D-PRFS, UR2D-DPRFS) on the IJB-A splits. UR2D with DPRFS reports better performance compared with VGG-Face and COTS; also, its results are consistent across the splits, which indicates that the system is robust. With PRFS features the system does not perform as well on this dataset: one possible answer is that the PRFS features were trained on the FRGC dataset, which has notably fewer variations in pose, illumination, and resolution, so the current PRFS features do not generalize to images with large variance; a corresponding solution is retraining the PRFS feature model on an in-the-wild dataset. Third, COTS performs well on this challenging dataset, since it is designed for real scenarios. Compared with the constrained experiment above, this leaves the question of why the system performs only slightly better than the baselines here: we argue that in-the-wild scenarios involve complicated combinations of pose variations, illumination, expression, and occlusions, and a robust face recognition system must take all these cases into consideration; in addition, COTS dropped hard samples and enrolled fewer signatures, which would boost its performance to some extent.

Template matching. UR2D was extended to enroll several images per subject and generate a template: a template signature is either the average of the per-image signatures or is computed by generating a unified 3D model from the several images; we use the former in the results comparison against OpenBR, Wang et al., DCNN, and PAM. Table 6 lists the average identification accuracy each method achieved; a minimal sketch of the averaging strategy is shown below.
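The following sketch shows the signature-averaging strategy for template matching described above (the re-normalization step is an assumption added so that the averaged template remains comparable under cosine similarity):

```python
import numpy as np

def template_signature(signatures):
    """Template-level signature: average the per-image signatures of a
    template and re-normalize. This is one of the two strategies above;
    the other builds a single unified 3D model from all the images."""
    s = np.mean(np.asarray(signatures), axis=0)
    return s / (np.linalg.norm(s) + 1e-12)
```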
contained document authors interpreted necessarily representing official policies either expressed implied department homeland security references references kakadiaris dataset better understanding face recognition across pose illumination variation proc ieee international conference computer vision workshops venice italy klare klein taborsky blanton cheney allen grother mah burge jain pushing frontiers unconstrained face detection recognition iarpa janus benchmark proc ieee conference computer vision pattern recognition boston massachusetts taigman yang ranzato wolf deepface closing gap performance face verification proc ieee conference computer vision pattern recognition columbus ohio huang mattar berg labeled faces wild database studying face recognition unconstrained environments proc workshop faces images detection alignment recognition marseille france schroff kalenichenko philbin facenet unified embedding face recognition clustering proc ieee conference computer vision pattern recognition boston massachusetts lei liao learning face representation scratch arxiv preprint guo zhang gao dataset benchmark face recognition proc european conference computer vision amsterdam netherlands parkhi vedaldi zisserman deep face recognition proc british machine vision conference swansea zhang ren sun identity mappings deep residual networks proc european conference computer vision amsterdam netherlands wen zhang qiao discriminative feature learning approach deep face recognition proc european conference computer vision amsterdam netherlands zhang fang wen qiao range loss deep face recognition arxiv preprint liu wen raj song sphereface deep hypersphere embedding face recognition proc ieee conference computer vision pattern recognition honolulu hawaii chen deng noisy softmax improving generalization ability dcnn via postponing early softmax saturation proc ieee conference computer vision pattern recognition honolulu hawaii kakadiaris toderici evangelopoulos passalis chu zhao shah theoharis face recognition normalization computer visiona image understanding yan chan deng christmas kittler robertson face recognition using unified morphable model proc european conference computer vision amsterdam netherlands ding tao comprehensive survey face recognition acm transactions intelligent systems technology opencv http glog https gflags https pugixml https json modern https jia shelhamer donahue karayev long girshick guadarrama darrell caffe convolutional architecture fast feature embedding proc international conference multimedia orlando florida usa dou kakadiaris evaluation pose invariant face recognition system proc international joint conference biometrics denver colorado zafeiriou zhang zhang survey face detection wild past present future computer vision image understanding girshick donahue darrell malik rich feature hierarchies accurate object detection semantic segmentation proc ieee conference computer vision pattern recognition columbus jiang face detection faster proc ieee international conference automatic face gesture recognition washington sun wang face detection integration convnet model proc european conference computer vision amsterdam netherlands ramanan finding tiny faces proc ieee conference computer vision pattern recognition honolulu hawaii zhang ren sun deep residual learning image recognition proc computer vision pattern recognition las vegas liu anguelov erhan szegedy reed berg ssd single shot multibox detector proc european conference computer vision amsterdam netherlands redmon 
farhadi better faster stronger proc ieee conference computer vision pattern recognition honolulu hawaii lin goyal girshick dollar focal loss dense object detection arxiv preprint najibi samangouei chellappa davis ssh single stage headless face detector arxiv preprint jin tan face alignment survey computer vision image understanding zhu loy tang face alignment shape searching proc ieee conference computer vision pattern recognition boston shah kakadiaris face alignment via ensemble random ferns proc ieee international conference identity security behavior analysis sendai japan kakadiaris joint head pose estimation face alignment framework using global local cnn features proc ieee conference automatic face gesture recognition washington kumar alavi chellappa kepler keypoint pose estimation unconstrained faces learning efficient regressors proc ieee conference automatic face gesture recognition washington shah kakadiaris godp globally optimized dual pathway system facial landmark localization image vision computing review huang zhang beyond face rotation global local perception gan photorealistic identity preserving frontal view synthesis arxiv preprint yin sohn liu chandraker towards face frontalization wild arxiv preprint tran yin liu disentangled representation learning gan face recognition proc ieee conference computer vision pattern recognition honolulu hawaii masi rawls medioni natarajan face recognition wild proc ieee conference computer vision pattern recognition las vegas masi trn hassner leksut medioni really need collect millions faces effective face recognition proc european conference computer vision amsterdam netherlands deng zhou zafeiriou marginal loss deep face recognition proc ieee conference computer vision pattern recognition honolulu hawaii klontz klare klum jain burge open source biometric recognition proc ieee conference biometrics theory applications systems washington sun liang wang tang face recognition deep neural networks arxiv preprint amos ludwiczuk mahadev openface face recognition library mobile applications tech cmu school computer science pittsburgh zhang zhang qiao joint face detection alignment using multitask cascaded convolutional networks ieee signal processing letters king machine learning toolkit journal machine learning research farfade saberian face detection using deep convolutional neural networks proc acm international conference multimedia retrieval shanghai china mathias benenson pedersoli gool face detection without bells whistles proc european conference computer vision zurich switzerland krizhevsky sutskever hinton imagenet classification deep convolutional neural networks proc neural information processing systems lake tahoe dou shah kakadiaris face reconstruction deep neural networks proc ieee conference computer vision pattern recognition honolulu hawaii dou zhang shah kakadiaris face signature face recognition proc international conference biometrics theory applications systems arlington lei pietikainen learning discriminant face descriptor ieee transactions pattern analysis machine intelligence wang otto jain face search scale ieee transactions pattern analysis machine intelligence chen zheng patel chellappa unconstrained face verification using deep cnn features proc winter conference applications computer vision wacv lake placid usa
| 1 |
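The row above (label 1) walks through rank-1 face identification: embedding features (VGG face descriptor, FaceNet) matched under a cosine-similarity metric, with templates formed by averaging several signatures per subject. A minimal sketch of that matching step, assuming embeddings are already extracted; the embedding dimension, function names, and toy data below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Unit-length embeddings make a plain dot product equal cosine similarity.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def build_templates(embeddings, subject_ids):
    # Average all signatures of a subject into one template, as the row
    # describes for template-level matching.
    subjects = sorted(set(subject_ids))
    templates = np.stack([
        embeddings[[i for i, s in enumerate(subject_ids) if s == subj]].mean(axis=0)
        for subj in subjects
    ])
    return l2_normalize(templates), subjects

def rank1_identify(gallery, gallery_ids, probes):
    # Rank-1 identification: report the gallery identity with the highest
    # cosine similarity for each probe embedding.
    sims = l2_normalize(probes) @ gallery.T          # (n_probes, n_gallery)
    return [gallery_ids[j] for j in sims.argmax(axis=1)]

# Toy usage with random 128-d embeddings (the dimension is an assumption).
rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 128))
ids = ["a", "a", "b", "b", "c", "c"]
gallery, subjects = build_templates(emb, ids)
print(rank1_identify(gallery, subjects, emb[:2]))    # expected: ['a', 'a']
```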
nov driss mohammed department mathematics laboratory analysis algebra decision support faculty sciences mohammed university rabat rabat morocco driss bennis abstract recentely anderson dumitrescu attracted interest several authors paper introduce notions sfinitely presented modules rings finitely presented modules coherent rings respectively among results give classical chase characterization coherent rings end paper brief discussion finitely presented modules coherent rings prove last characterized terms localization key words presented modules rings mathematics subject classification introduction throughout paper rings commutative identity particular denotes ring modules unitary multiplicative subset use ideal element denote quotient ideal according module called exists finitely generated submodule also bennis hajoui called submodule particular said ring every ideal clear every noetherian ring notions modules rings introduced anderson dumitrescu motivated works done succeeded generalize several results noetherian rings including classical cohen result hilbert basis theorem additional condition since attracted interest several authors see instance recentely motivated work anderson dumitrescu classical notions introduced see instance paper inerested finitely presented modules coherent rings actually two possibilities could considered finitely presented modules lead two coherent rings prove coherent rings defined one characterization similar classical one given chase coherent rings theorem adopt notion suitable finitely presented modules however seems evident characterize notion terms localization prove indeed briefly studied end paper characterization terms localization organization paper follows section introduce study finitely presented modules call presented module see definition study behavior short exact sequences see theorem end section change rings results see proposition corollary section devoted coherent rings called rings see definition main result represents chase result theorem see theorem also coherent modules introduced see definition proposition end paper short section presents sversion see definitions prove notions characterized terms localization see proposition theorem end paper results relate notion see propositions corollary presented modules section introduce investigate classical finitely presented modules version discuted section definition called presented exists exact sequence finitely generated free clearly every finitely presented module presented however converse hold general suffices note nonnoetherian ring ideal finitely generated presented finitely presented also evident every presented module finitely generated give example finitely generated module presented suffices consider ideal use proposition given hereinafter one could remark definition assume free module finitely generated rather fact following result notions coincide free modules proposition every free finitely generated proof let rei free basis index set exist finitely generated rmn integer every exists finite subset finitely generated let rej contains show deny exists impossible since basis remark similarly proof proposition one prove module decomposed infinite direct sum modules shows projective module countably bennis hajoui generated kaplansky theorem naturaly one would ask existence projective module finitely generated consider boolean ring field two elements every consider projective ideal element see example multiplicative subset since finitely generated desired example projective module 
finitely generated however determining rings every projective module finitely generated could interest worth noting rings every projective module direct sum finitely generated modules satisfy condition rings investigated next result shows classical case lemma presented module depend one specific short exact sequence form given definition proposition presented finitely generated every surjective homomorphism finitely generated free ker proof obvious since presented exists exact sequence finitely generated free schanuel lemma ker ker following result represnts behavior short exact sequences generalization theorem modules note one give classical see page however prefer focus notion presented modules discussion suitable could subject work theorem let exact sequence rmodules following assertions hold particular every finite direct sum modules presented presented particular every finite direct sum presented modules sfinitely presented particular direct summand module presented presented presented proof since exist finitely generated submodule let rei since surjective exists every let thus ker imf exist finitely generated submodule imf imf submodule rmi rmi finitely generated submodule therefore since presented exist two shorts exacts sequences finitely generated free horseshoe lemma get following diagram first assertion therefore presented obvious bennis hajoui since presented exists short exact sequence finitely generated free consider following pullback diagram therefore presented since presented exists short exact sequence finitely generated free consider following pullback diagram since free since therefore simple consequence get following result extends corollary corollary let two presented submodules rmodule presented proof use short exact sequence end section following change rings results following result extends theorem proposition let rings let ring homomorphism making finitely generated let multiplicative subset every presented presented proof let presented finitely generated finitely generated thus exact sequence integer sequence also exact sequence since presented finitely generated since finitely generated therefore presented following result extends theorem proposition let ideal let assume multiplicative subset presented presented converse holds ideal proof easy use canonical ring surjection proposition conversely presented exact sequence integer first assertion also since ideal presented therefore theorem presented bennis hajoui rings giving definition rings give following calssical case definition modules definition said finitely generated every finitely generated submodule presented clearly every coherent module however using proposition one show ideal finitely generated coherent reason consider finitely generated submodules rather submodules explained assertion remark following result studies behavior modules short exact sequences generalizes theorem proposition let exact sequence rmodules following assertions hold particular every finite direct sum modules finitely generated proof clear finitely generated let finitely generated submodule exist two shorts exacts sequences two positive integers horseshoe lemma get following diagram since finitely generated submodule module presented using theorem therefore presented clearly finitely generated let finitely generated submodule consider exact sequence ker finitely generated submodule module presented ker finitely generated theorem since ker presented therefore theorem presented evident since submodule seen submodule set definiton 
rings definition ring called every finitely generated ideal presented remark note every ring indeed follows fact every finitely generated free see discussion lemma next example give example ring clearly every coherent ring converse true general example ring coherent consider trivial extension multiplicative set bennis hajoui since finitely generated coherent every ideal finitely generated fact since ideal element shows easy show presented finitely presented thus ring coherent ring however seems evident give condition converse holds done rings see proposition section give another coherent rings characterized terms localization one would propose coherent rings following condition every ideal presented however satisfies condition particular every ideal finitely generated every ideal finitely presented particular coherent means notion rings condition considered classical coherence nevertheless rings could particular interest new class rings class coherent rings class noetherian rings give example coherent ring satisfy condition one could consider boolean ring field two elements every multiplicative subset indeed ideal finitely generated also note following condition every ideal finitely generated could interest indeed clearly one show following equivalences ring satisfies condition coherent satisfies condition ring coherent satisfies condition ring noetherian satisfies condition give example ring use following result proposition let direct product rings cartesian product multiplicative sets coherent every proof result proved using standard arguments example consider ring given remark let coherent ring multiplicative set noetherian proposition proposition give main result classical chase result theorem mimic proof theorem use following lemma lemma lemma let ring let finitely generated ideal let set let free module generators let exact sequence exists exact sequence rxi theorem following assertions equivalent every presented every finitely generated free presented ideal every finitely generated ideal ideal every intersection two finitely generated ideals ideal bennis hajoui proof proof similar theorem see also theorem however sake completeness give proof follows proposition obvious let finitely generated submodule free hence exists finitely generated free submodule containing therefore presented trivial let finitely generated ideal presented consider finitely generated presented thus exists exact sequence lemma exists surjective homomorphism shows proved induction number generators finitely generated ideal use assertion exact sequence use assertion lemma since proposition applied exact sequence shows ideal let two finitely generated ideals finitely generated sfinitely presented applying theorem short exact sequence get proved induction number generators finitely generated ideal using two short exact sequences used worth noting chase paper coherent rings characterized using notion flat modules naturaly one ask flatness characterizes rings similarly classical case leave interesting open question end section change rings results following results extends theorem proposition let ideal assume multiplicative subset particular following assertions hold ring ring ring ring proof use proposition next result generalizes theorem studies transfer localizations lemma let ring homomorphism flat amodule let multiplicative set presented presented proof follows using fact flatness preserves injectivity proposition ring every multiplicative set proof let finitely generated ideal finitely generated ideal since 
presented using lemma ideal presented desired finiteness short section present another prove notion characterized terms localization following definition gives another finitely presented modules definition module called presented exists finitely presented submodule remark clearly every finitely presented module presented however converse hold general suffices consider coherent ring module finitely generated example ring given remark inclusions definition complicate study behavior presented modules short exact sequences done theorem think presented modules mostly used commutative rings theorists rather researchers interested notions homological algebra reason behind use letter presented bennis hajoui seems relation two notions presented presented modules nevertheless deduce ring defined every presented ideal presented finitely presented finitely presented nevertheless doest make things work respect localization presented modules fact module satisfies necessarily submodule presented modules give following result proposition presented finitely presented finitely generated presented finitely presented submodule proof obvious clear since finitely desired define classical coherence rings definition ring called every ideal sfinitely presented clearly every coherent ring converse true general ring given example used example ring coherent also evident every ring done example use following result give example ring proposition let direct product rings cartesian product multiplicative sets every proof result proved using standard arguments example consider ring coherent let coherent ring multiplicative set noetherian proposition proposition follwoing result characterizes rings characterized terms localization theorem following assertions equivalent every finitely generated ideal presented every finitely generated ideal finitely presented ideal particular coherent ring proof straightforward let ideal exist finitely generated ideal assertion finitely presented ideal therefore tsi desired end paper result relates rings notion notion used characterize rings assume integral domain let sats denotes ideal sats irs proposition proved sats sats fact used prove ring noetherian every finitely generated ideal sats see proposition following result shows implication proposition fact equivalence general context consider inclusion let canonical homomorphism denote generated set sats proposition let sats sfinite sats bennis hajoui proof set sats since exist finitely generated thus write rxn exists set tsn tsk hand since conversely let desired since exist finitely generated hand since consequently tsk therefore following result proved similarly proof proposition however guarantee preservation finitely presented modules multiplying elements assume contain proposition assume every element regular let rsubmodule sats presented presented sats corollary assume every element regular following assertions equivalent every finitely generated ideal sats presented every finitely generated ideal sats acknowledgement part work presented second author scientific day algebra graaf held faculty sciences rabat may authors would like thank professor zine abidine abdelali helpful comments preparation paper references anderson dumitrescu rings comm algebra anderson kwak zafrullah agreeable domains comm algebra chase direct products modules trans amer math soc costa parameterizing families rings comm algebra glaz commutative coherent rings lecture notes berlin hamed hizem modules satisfying property comm algebra hamed hizem rings forms 
comm algebra hamann houston johnson properties uppers zero com alg kaplansky projective modules ann math kim kim lim mori domains algebra lim properties amalgamated algebras along ideal pure appl algebra lim properties composite ring extensions comm algebra mcgovern puninski rothmaler every projective module direct sum finitely generated modules algebra zhongkui rings arch math brno
| 0 |
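The row above (label 0) has lost its displayed formulas to the dataset's preprocessing. As a reading aid, here is a LaTeX restatement of the central definitions the row keeps invoking (S-finite, S-finitely presented, S-coherent), reconstructed from the surviving wording and the usage standard in the cited Anderson–Dumitrescu line of work; the exact quantifiers should be checked against the source paper:

```latex
Let $S$ be a multiplicative subset of a commutative ring $R$ and $M$ an $R$-module.
\begin{itemize}
  \item $M$ is \emph{$S$-finite} if $sM \subseteq F \subseteq M$ for some
        $s \in S$ and some finitely generated submodule $F$.
  \item $M$ is \emph{$S$-finitely presented} if there is an exact sequence
        \[ 0 \longrightarrow K \longrightarrow F \longrightarrow M \longrightarrow 0 \]
        with $F$ a finitely generated free $R$-module and $K$ an $S$-finite module.
  \item $R$ is \emph{$S$-coherent} if every finitely generated ideal of $R$
        is $S$-finitely presented.
\end{itemize}
The Chase-style characterization the row discusses then states, roughly, that
$R$ is $S$-coherent iff $(I : a)$ is $S$-finite for every finitely generated
ideal $I$ and every $a \in R$, iff the intersection of two finitely generated
ideals is $S$-finite.
```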
fast construction efficient composite likelihood sep equations zhendong huang davide ferrari school mathematics statistics university melbourne abstract growth size complexity modern data challenges applicability traditional inference composite likelihood methods address difficulties related model selection computational intractability full likelihood combining number likelihood objects single objective function used inference paper introduces procedure combine partial likelihood objects large set feasible candidates simultaneously carry parameter estimation new method constructs estimating equations balancing statistical efficiency computing cost minimizing approximate distance full likelihood score subject penalty representing available computing resources results truncated equations containing informative partial likelihood score terms asymptotic theory within framework sample size data dimension grow developed properties illustrated numerical examples keywords composite likelihood estimation likelihood truncation corresponding author davide ferrari school mathematics statistics university bourne parkville vic australia dferrari introduction since idea likelihood fully developed fisher inference played role paramount importance statistics complexity modern data however poses nontrivial challenges traditional likelihood methods one issue related model selection since full likelihood function difficult impossible specify complex multivariate problems another difficulty concerns computing necessity obtain inferences quickly challenges motivated development composite likelihood methods avoid intractable full likelihoods compounding set likelihood objects besag pioneered inference context spatial data lindsay developed inference generality due flexible framework established theory framework become popular tool many areas applied statistics see varin overview inference common applications consider independent observations random vector pdf parametric family denotes true parameter paper mainly concerned large data sets data dimension sample size large given observap tions write efn empirical mean function empirical cdf use denote expected value operator denotes differentiation respect setting maximum likelihood score log associated estimating equations efn intractable due difficulties computing specifying full density suppose however one obtain tractable pdfs dimension much smaller example could represent single element like variable pair like conditional like typically total number grows quickly instance taking variable pairs results candidate specific choice set pdfs sometimes referred design lindsay typically specified practitioner simplicity design treated given assume often case applications focus maximum composite likelihood estimator mcle defined solution estimating equations efn efn log jth partial score score associated jth subset given vector coefficients determined refer composition rule addition computational advantages compared mle flexible modeling mcle enjoys properties analogous maximum likelihood estimator mle since partial scores commonly define unbiased estimating equations euj score also unbiased property leading consistency unfortunately mcle properties mle since asymptotic variance generally different inverse fisher information two coinciding special families models choice composition rule crucial determining efficiency computb ing cost associated established theory unbiased estimating equations prescribes find minimize asymptotic variance heyde chapter given inverse 
godambe information matrix var although theoretically appealing notoriously difficult task due instability common estimators term var lindsay hand common practice retaining terms choosing fixed undesirable computational statistical efficiency perspectives especially partial scores exhibit pronounced correlation cox reid discuss detrimental effect caused presence many correlated scores variance small compared pairwise likelihood estimation serious case correlation scores overwhelming keeping terms may lead lack consistency implied mcle motivated considerations introduce new method called sparse composite likelihood estimation selection scle consisting two main steps truncation step estimation step composition rule obtained minimizing approximate distance unknown full likelihood score score subject constraint step may viewed maximizing statistical accuracy given afforded computing alternatively may interpreted minimizing computing cost given level statistical efficiency due geometry resulting composition rule say contains number elements see lemma useful terms improving mcle statistical accuracy retained noisy contributing little improvement dropped solve estimating equations find final estimator compared traditional estimation main advantage approach reduce computational burden retaining relatively high efficiency large data sets reduced number terms estimating equations translates fast computing enhanced stability final estimator relatively small cost terms statistical efficiency remainder paper organized follows section describe main methodology simultaneous likelihood truncation parameter estimation section study properties truncated composition rule implied estimator within framework sample size data dimension allowed diverge section illustrates properties methodology context estimation location scale multivariate normal models section study computational statistical efficiency finite samples numerical simulations section concludes final remarks technical lemmas used main results deferred appendix main methodology throughout paper consider unbiased partial scores satisfying euj assume unique solution equations approach described section applicable problems arbitrary sample size data dimension mainly concerned situation data dimension number available objects large compared sample size although focus partial scores concreteness methodology properties section remain essentially unchanged arbitrary unbiased equation instance location parameter appropriate choice presence outliers may partial score another suitable choice setting estimating equation ferrari yang defined logq logq log logq rest paper use denote matrix column vectors define matrix entry write columns sponding denotes containing remaining columns accordingly define matrix use denote elements represents vector containing elements sparse efficient estimating equations main objective solve estimating equations efn defined respect using coefficients obtained minimizing ideal criterion denotes euclidean norm given constant constants depending data clarity exposition set remainder paper optimal solution interpreted one maximizes statistical accuracy implied estimator subject given level computing alternatively may viewed minimize complexity equations subject given efficiency compared mle tuning constant balances statistical efficiency computational burden first term aims obtain efficient estimating equations finding score close score composition rule optimal sense score function closest mle score although choice gives estimators 
good statistical efficiency offers control score complexity since partial likelihood scores included final estimating equation second term penalty discouraging overly complex estimating equations section show typically form penalty implies number elements exactly zero relatively large many elements exactly zero thus simplifying considerably estimating equations efn large fraction elements zero say equations efn sparse sparsity key advantage approach reduce computational burden achievable without loosing much statistical efficiency hand large one risks miss information useful data subsets may otherwise improve statistical accuracy empirical criterion estimation obvious difficulties related direct minimization ideal criterion presence intractable likelihood score expectation depending unknown parameter address issues first note negligible term depending criterion written see recall partial scores unbiased differentiate sides euj appropriate regularity conditions result used eliminate explicit dependency score finally replacing expectations empirical averages leads following empirical objective efn efn appropriate regularity conditions empirical criterion estimates consistently population criterion irrelevant constant depending caveat must close considerations motivate following estimation strategy compute truncated given preliminary consistent estimator composition rule solving argmin update parameter estimator iteration efn efn theorem shows convex minimization problem unique solution particularly let subset partial scores efn defined write solution set sign diag efn seb efn seb sign vector seb uebt ueb ueb matrix column vectors sign function jth element taking values respectively diag denotes diagonal square matrix insight meaning may useful differentiating expanding around conditions section gives efn combining highlights jth partial likelihood score selected sufficiently correlated residual difference hence criterion retains maximally useful explain gap full likelihood score score drops remaining scores meaning corresponding composition rule contain zero elements required empirical violated covariance matrix partial scores efn may nearly singular due presence largely even however efn correlated partial scores hand setting always gives guarantees existence matrix efn seb proposed approach requires initial consistent estimator often easy obtain partial scores unbiased one simple option entails solving efn large one may choose stochastic strategy dillon lebanon elements may set either randomly according scheme although initial estimator could quite inefficient update improves upon situation moreover estimator coefficients refined iterating times computational aspects lars implementation selection empirical composition rule computed using apb address issue propose proaches due implementation based regression lars algorithm efron originally developed sparse parameter estimation context linear regression modb implementation lars minimizes including one score els given time composite likelihood score step score included largest correlation currently available residual difference followed adjustment step numerical examples section suggest implementation lars algorithm selection fast steps returns path estimated composition rules value tuning constant jth partial score enters estimating equation selection practical importance since balances statistical computational efficiency given budget afforded computing say include one partial score time example using lars approach stop max reach efn efn efn 
efn denotes empirical covariance matrix selected partial scores indexed set criterion viewed proportion score variability explained currently selected partial scores practice choose close computing budget reached set analogy principal component analysis selected combination scores accounts largest variability collection empirical scores properties section investigates asymptotic behavior sparse composition rule corresponding scle defined within setting number candidate partial likelihoods allowed grow sample size use ekum denote trace fisher information based full likelihood may interpreted maximum knowledge full likelihood score available although grow reflecting rather natural notion one learn true model overall data size increases allowed grow fast log rather common situation estimation occuring instance scores substantially correlated independent heterogeneous increasing variances see examples section sparsity optimality composition rule section give conditions ensuring uniqueness empirical composition rule weak convergence population counterpart end work within neighborhood assume following regularity conditions exist positive constants element continuous uniformly bounded first second order derivatives analysis begins deriving condition kktc kuhn population objective defined kktc characterizes amount sparsity computational complexity associated selected estimating equations depending value tuning constant let diag defined section lemma kktc condition minimizer defined satisfies jth element vector proof let note tayor expansion around fisher information matrix jth likelihood component condition otherwise choosing sign implies need show diction since minimizes assume take sign sign implies contradicted minimizer hence argument analogous used proof lemma leads kktc specifically minimizer empirical loss efn lemma important implications current setting since relates size covariance jth score residual difference particularly covariance sufficiently small correspondent coefficient thus tuning parameter controls level sparsity composite score forcing weights score components small exactly zero uniqueness simple condition partial scores replace require scores general position specifically say score components general position affine subspace dimension contains elements excluding antipodal pairs points partial scores continuous general position probability theorem conditions solution defined unique given moreover contains elements proof let index set elements defined lemma first note composite likelihood score defined due unique solutions minimize uniqueness strict convexity implies corresponding index set unique lemma first note square matrix next show uniqueness full rank otherwise row matrix written linear efn ueb efn efn combination rows set efn lemma implies also event efn set coefficients probability equals since continuous full rank meaning size satisfies random thus ueb implies strict convexity full rank fixed containing elements indexed hence unique arguments theorem essentially unchanged population composition rule showing full rank using condition lemma index set elements implies also uniqueness next turn convergence empirical composition rule thus showing suitable replacement intractable criterion objective since criterion efn efn diag used approximation population criterion defined clearly affects accuracy approximation let distance efn kefn supreme variation matrices efn matrix induced matrix rate goes depends mainly number partial scores behavior random elements vary 
considerably different models example elements one needs log cai general cases suffices ensure vershynin next investigate increase compared ensure suitable behavior obtain weak convergence introduce additional requirement covariance matrix partial scores shrink zero fast exists sequence condition analogous compatibility condition estimation regression van geer ensures good behavior observed design matrix regressors differently sparse regression setting condition applied set true nonzero regression coefficients sparsity assumption composition rule imposed theorem conditions proof lemma efn note efn efn efn second term last equality lemma thus implies condition corollary let sequence conditions sup efn result follows noting proof lemma efn difference efn efn according conditions theorem efn corollary states composite likelihood score reasonable approximation particularly even close zero composite score still uses fraction components time near optimal score composition rule yielding closest score maximum likelihood score moreover implied godambe information var expected close however mcle based choice may unavailable computationally intractable due common difficulties estimating var lindsay varin truncated composition rule implies stable estimation requiring fraction scores asymptotic behavior scle section show consistency give asymptotic distribution scle defined one advantage estimation consistency asymptotic normality treated separately estimator inherits standard rethe properties leading consistency preliminary estimator quirements normality additional conditions scores needed let matrix obtained stacking let maxj maximum variation emb euj pirical optimal hessian matrices let maxj kefn supreme variation empirical scores expected value around rest section use cov denote population variability sensitivity matrices respectively depending implicitly assume exist positive constants element matrix continuous uniformly bounded first second derivatives theorem suppose exist eigenvalues bounded away conditions cov denote population variability sensitivity matrices proof without loss generality prove case since fixed proof easily generalized case without additional conditions let empirical sensitivity matrix written efn consistent preliminary estimator note kefn kefn efn efn kefn kefn first term right hand side lemma second term efn converges efn maxj since kefn theorem assumptions lemma last term also law large numbers shows efn moreover lemma since eigenvalues bounded away large efn efn since shows part theorem efn show normality obtain efn efn efn efn value second equality first term follows expansion efn efn since central limit theorem applies lemma lemma first term second term efn efn efn smaller order compared first term efn efn since theorem last term max efn max efn last expression lemma theorem assumption implies last term smaller order compared according lemma slutsky theorem first term finally since implies desired result consistency asymptotic normality estimator follow mainly converging probability target composition rule since score unbiased asymptotically normal linear combination also normally tributed overall convergence rate given order actual order depends underlying correlation partial scores optimal rate achieved scores perfectly independent combining highly correlated scores final estimating equation give rates closer examples special families models section illustrate scle estimation location scale estimation special multivariate normal models estimation common location 
heterogeneous variates let covariance matrix elements diagonal elements computing mle requires efn singular mle practice replaced mle available practice whilst estimation still feasible jth partial score estimating equation based sample efn leading profiled mcle weighted average marginal sample means example one work directly optimal composition rule estimation required particularly useful inspect special case independent components corresponds model estimators independent studies combined improve accuracy independence explicit solution highlights overly noisy data subsets variance dropped thus influence final estimator number elements note uniform weights corresponding mcle usual optimal solution although implied estimator minimum variance offers control overall computational cost since selected hand choosing judiciously may lead low computational burden negligible loss resulting estimator instance assuming straightforward calculation shows since number scores first term mean squared difference optimal score bounded vanishing term thus composite score converges optimal composite score particularly decreases sufficiently slow rate truncated score still contain relatively small number terms approximately equal optimal estimator correspondent estimator terms statistical accuracy elements correlated partial scores contain overlapping information case tossing away highly correlated partial scores improves computing maintaining satisfactory statistical efficiency final estimator figure shows solution path asymptotic relative efficiency compared mle different values large corresponding scle asymptotic relative efficiency drops gradually scores left example illustrates relatively high efficiency achieved truncated equations partial scores already contains majority information cases final scle sparse composition rule expected achieve good computational cost statistical efficiency location estimation exchangeable normal variates second example consider exchangeable variables marginal scores identically distributed exchangeable equal correlation differently example number log log log log log log log number number log log figure top row solution paths minimizer criterion different values corresponding number bottom row asymptotic compared mle vertical dashed lines relative efficiency scle selected criterion results correspond bottom represent common location model jth diagonal element equal element equal solution criterion equal elements regardless value optimal parameter estimator first eigenvalue whilst remaining eigenvalues equal suggesting first score contains relatively large information compared scores much larger var statistical computational efficiency may measured ratio estimator variance compared ratio increases quickly smaller much slower larger thus although elements nonzero partial scores contain already majority information suggests practice taking sufficiently large value sparse empirical solution contains zero elements already ensures relatively high statistical efficiency corresponding mlce exponentially decaying covariances let jkth element exp quantity may regarded distance spatial locations evaluating score example computationally expensive large since requires computing inverse task involving operations hand score obtained inverting covariance matrices thus requiring operations given observations mcle solves equation wjk ujk wjk wjk ujk corresponds score bivariate normal distribution pair figure shows analytical solution path minimizer criterion compared different values 
asymptotic relative efficiency scle mle consider number pairs ranging various choices scle relatively high asymptotic efficiency interestingly efficiency remains steady around left suggests small proportion components contains already majority information cases scle reduces dramatically computing burden retaining satisfactory efficiency final estimator numerical examples section study performance scle terms assessing mean squared error computing cost data dimension increases preliminary estimator use mcle perhaps common choice applications varin example generate samples size specify following covariance structures diagonal kth diagonal elements log log log log log log log log number number number log figure top row solution paths minimizer criterion defined different values corresponding number reported botb compared mle tom row asymptotic relative efficiency scle selected criterion vertical dashed lines bottom row correspond results correspond model element equal exp unit diagonal elements first elements uncorrelated element elements pairwise correlations unit diagonal elements block diagonal structure independent blocks six elements correlation figure left shows relative mean squared error scle compared mle moderate data dimension points trajectories correspond inclusion new component according algorithm described section scle achieves efficiency compared mle covariance structures considered always candidate partial likelihoods included advantage scle becomes evident scores exhibit relatively strong correlation example independent blocks maximum efficiency achieved representative partial scores selected block figure right shows ratio mean squared error scle compared mle relatively large data dimension compared sample size although mle used theoretical benchmark practice estimator available larger sample size interestingly sample size fixed including eventually leads substantial loss efficiency examples selecting many wastes computing resources also implies estimators larger errors hand proper choice tuning constant corresponding selected balance computational statistical efficiency example second numerical example consider covariance estimation model exp covariance components random vector decreases rapidly distance msemle msescle msemle msescle number partial scores included number partial scores included figure estimate mean square error mle msemle divided scle msescle model trajectory based samples size point trajectories correspond inclusion new component based algorithm described section left different specifications detailed section right covariance ranging components increases figure shows estimates mean square error scle compared mcle uniform composition rule point trajectories correspond inclusion new component using algorithm described section scle already efficient uniform mcle handful partial scores selected example selecting ten already ensures times accuracy uniform mcle since uniform mcle uses pairs scle obtains accurate results much lower computing cost mseunif msescle mseunif msescle number partial scores included number partial scores included figure estimate mean square error mcle mseunif divided scle msescle point trajectories corresponds inclusion new component based algorithm described section results based monte carlo samples size model exp trajectories correspond left right different numbers ranging conclusion final remarks recent years inference complex large data sets become one active research areas statistics context inference played important role 
applications remedy drawbacks traditional likelihood approaches despite popularity methods address computational parsimony statistical efficiency inference methodological perspective remains largely unanswered question motivated gap literature introduced new likelihood selection methodology able truncate quickly overly complex equations potentially encompassing many terms attaining relatively low mean squared error implied estimator achieved selecting estimating equations satisfying complexity minimizing approximate score inference based statistical objective functions parameter new statistical literature see giraud exposition topic note however differently existing approaches main goal reduce computational complexity overall estimating equations regardless model parameter viewed fixed size accordingly involves composition rule model parameter future developing approaches simultaneous penalization may useful deal situations data dimension size parameter space increase two main perks proposed approach make effective alternative traditional estimation practitioner perspective first advantage scle methodology constructs equations returns inferences quickly theorem shows empirical composition rule retains elements important feature method reduces sometimes dramatically amount computing needed obtain implied mcle standard error lemma highlights elements correspond partial scores maximally correlated residual difference means approach constructs estimators relatively high efficiency dropping contributing least equations approximating second desirable feature method concerns model selection ability reduce complexity large data sets essence truncation step described dimensionreduction step starting observations possibly large vector method generates collection subsets individually selected data subsets size much smaller collectively contain information given level computing represented theoretical perspective little work devoted study properties estimators number diverges cox reid discuss estimators based equations terms taking pairwise marginal scores vector take rigid composition rules compared wjk pairs wjj marginals tuning constant used increase efficiency knowledge current paper first studying behavior flexible composition rules implied estimating equations setting grow theorem corollary provide guidance selected score meaningful approximation unknown score sense objective first requirement total information available full likelihood actually known kum overwhelming compared sample size require condition mild relatively elements contain strong signal whilst remaining elements noisy heterogeneous variances section illustrate taking diagonal increasing diagonal elements second requirement tuning constant dominates asymptotically represents convergence rate empirical covariance scores efn instance elements subgaussian log meaning asymptotically larger log finally show statistical optimality computationally parsimony within selection procedure judiciously selected rate described theorem truncated composition rule scores approximates optimal composition rule consisting nonzero terms accordingly corollary suggests implied truncated score function approximates optimal score uniformly neighborhood extending type result developing theoretical insight interplay type penalty mcle accuracy beyond current setting would represent another exciting future research direction example findings would particularly valuable spatial statistics often number components overwhelming poses serious challenges traditional 
methods appendix section show technical lemmas required main results section lemma decreasing proof denote first term criterion defined without penalty term suppose let minimizers respectively subtracting last two inequalities gives since analogous argument shows decreasing lemma conditions proof note ekum hence since eum efn diag efn euj diag efn diag efn eum diag efn maxj eum hence lemma let conditions efn preliminary consistent estimator used compute proof note implies therefore gives lemma moreover efn subtracting efn sides gives efn efn efn efn efn diag efn efn inequality implied lemma last expression since lemma matrix maximum norm bounded matrix lemma conditions proof direct result since according lemma assumption theorem lemma conditions eku proof note ekum ekum ekum expanding ekum gives eku ekum eku eku gives eku lemma assume conditions every log composite likelihood score corresponding ith observation proof without loss generality assume recall every constants inequality follows applying chebyshev inequalities assumption beginning section log lemma implies log hence converges proves desired result references besag spatial interaction statistical analysis lattice systems journal royal statistical society series methodological pages van geer statistics data methods theory applications springer science business media cai zhang zhou optimal rates convergence covariance matrix estimation annals statistics cox reid note pseudolikelihood constructed marginal densities biometrika dillon lebanon stochastic composite likelihood journal machine learning research oct efron hastie johnstone tibshirani least angle regression annals statistics ferrari yang maximum estimation annals statistics fisher mathematical foundations theoretical statistics philosophical transactions royal society london series containing papers mathematical physical character pages giraud introduction statistics volume crc press heyde application general approach optimal parameter estimation springer science business media kuhn nonlinear programming historical view traces emergence nonlinear programming pages springer lindsay composite likelihood methods contemporary mathematics lindsay sun issues strategies selection composite likelihoods statistica sinica varin reid firth overview composite likelihood methods statistica sinica vershynin close sample covariance matrix actual covariance matrix journal theoretical probability
| 10 |
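The row above (label 10) constructs truncated composite-likelihood equations by minimizing an l1-penalized distance between the composite score and the full-likelihood score; for unbiased partial scores the identity Cov(u, u_j) = Var(u_j) lets the intractable cross term be replaced by the diagonal of the empirical Gram matrix of partial scores. A sketch of that truncation step, using coordinate descent as a stand-in for the LARS-type solver the row describes; the toy data and the tuning value lam are illustrative:

```python
import numpy as np

def soft(z, t):
    # Soft-thresholding operator induced by the l1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_composition_rule(U, lam, n_sweeps=100):
    """Coordinate descent on  w^T C w - 2 c^T w + lam * ||w||_1,
    where column j of the (n x m) array U holds the j-th partial score
    evaluated at a preliminary estimate, C = U^T U / n, and c = diag(C)
    (the information-identity surrogate for Cov(full score, u_j))."""
    n, m = U.shape
    C = U.T @ U / n
    c = np.diag(C).copy()
    w = np.zeros(m)
    for _ in range(n_sweeps):
        for j in range(m):
            b = C[j] @ w - C[j, j] * w[j]      # cross term excluding w_j
            w[j] = soft(c[j] - b, lam / 2.0) / C[j, j]
    return w

# Toy usage: 3 informative partial scores plus 7 exact copies of the first;
# the redundant copies should end up with zero or negligible weight.
rng = np.random.default_rng(1)
base = rng.normal(size=(500, 3))
U = np.hstack([base, np.repeat(base[:, :1], 7, axis=1)])
print(np.round(sparse_composition_rule(U, lam=0.5), 2))
```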
noname manuscript inserted editor identifying hazardousness using classification methods comparative study may varun kumar ojha parmartha dutta atal chaudhuri received date accepted date abstract work formulated problem related sewerpipeline gas detection using approaches primary goal work identify hazardousness offer safe access workers human fatalities occurs due toxic exposure sewer gas components avoided dataset acquired laboratory tests experiments various organized design predictive model able hazardous situation design prediction model several classification algorithms used performances evaluated compared empirically statistically collected dataset addition performances several ensemble methods analyzed understand extent improvement offered methods result comprehensive study showed algorithm performed better many algorithms perceptron radial basis function network support vector machine reduced pruning tree etc similarly observed ensemble approach enhanced performance base predictors ojha technical university ostrava ostrava czech republic dept computer science engineering jadavpur university kolkata india dutta dept computer system sciences university india chaudhuri dept computer science engineering jadavpur university kolkata india neural computing applications doi varun kumar ojha keywords sewer gas detection neural network classification test introduction view providing solution problem using technology human fatalities need avoided hence technology simple possible work addressed complex realworld problem related gas detection safety detection terms environment required allow maintenance cleaning pipeline sewer gas detection highly complex problem presence several toxic gases mixture form single gas detector may offer reliable solution therefore studied complexity problem terms gas mixture primary goal offer simple solution high accuracy easy categorize hazardous situation straightforward way hazardous meet simplicity formulated gas detection problem classification problem contains mixture several toxic gases hydrogen sulphide ammonia methane carbon dioxide nitrogen oxides nox usually mixture generated due biodegradation waste sewage toxic fatal come gases following alarming number human fatalities reported year newspapers agencies authorities responsible maintaining cleaning sewer pipeline provides various electronic portable gas detectors available market employed persons determine safeness physically get involve maintenance work however available electronic portable gas detectors providing satisfactory results evident recent comments judiciary authorities judgment civil appeal number supreme court india stated state absolve responsibility put place effective mechanism ensuring safety workers employed maintaining cleaning sewage system similarly another judgment supreme court india stated entering sewer lines without safety gears made crime even emergency situations motivated carry research domain come simple solution without minimum knowledge technicalities gas composition safety limits person able understand environment sewer system entering ensure simplicity model collected preprocessed data realize sewer binary class classification problem however work apart objective constructing prediction model set secondary objective analyze performances classifiers identifying hazardousness empirically statistically meet objectives used base predictors four different categories neural network based classifiers tree based classifiers instance based classifiers rule based classifiers 
algorithms applied collected dataset performance algorithms collected terms accuracy collected results used analyzing performance superiority one algorithm another one category algorithms another observed performance algorithms independent category belong example performance instance based neighbor logistic model tree support vector machine came three different categories competitive performance however must consider theorem suggests algorithms perform better problem another therefore find predictor performs best case used base predictors nine ensemble methods rest article organized follows background study provided section leads setting ground describing contribution gas detection section provide detailed description data collection preprocessing mechanisms constitute core significant part formulating gas detection problem binary classification problem section deals brief descriptions methods used constructing prediction model design comprehensive experiment set evaluation classifiers reported section whereas section describes empirical statistical evaluation classifiers discussions conclusion reported sections respectively methodology section put together background study data collection mechanisms classification methods definitions background study describes significance gas detection problem data collection mechanism describes formulation classification problem background study literature review conducted perspective enose cover broad area research field gas detection modeling using intelligent computing although much work specifically sewer reported past notable contributions observed reported noticeable research work development design electronic nose gas detection system neural network varun kumar ojha mixed gas nox measurement system developed hand sirvastava proposed design intelligent enose system using backpropagation approach llobet presented pattern recognition approach based wallet transformation gas mixture analysis using single sensor liu addressed algorithm recognize patterns mixed gases mixture three component gases using infrared gas sensor lee illustrated uses micro gas sensor array gsa combined recognizing combustible leakage gases ambard demonstrated use gas discrimination using gsa gases authors illustrated technique developing gas sensory system sensing gases dynamic environment pan shown several applications wongchoosuka proposed detection system based carbon gas sensors detecting methanol zhang developed genetic algorithm detecting mixed gas mines proposed system estimation hazardous gas release rate using optical sensor technique following salient points came mentioned articles mainly approaches studied far detecting mostly systems reported past developed two three gases sensors gases used less gases mixtures sensing important factor gas detection system least reported literature yet however ojha offered methods annealing factor addressed extent however works primarily related regression modeling impact humidity temperature sensors remained ignored far gas detection system viewed framework regression problems classification problem classification based approach led determine hazardous nonhazardous situation addition collection organization preprocessing collected data enabled address issue firmly issue occurs sensitivity one towards multiple gases case gsa designed using five typically meant detecting respective target gas hence gsa used collecting data mixture gases crosssensitivity sensed values collected data became inevitable therefore rather considering pure 
results respective gases registered results part data since identifying hazardousness ally intelligent model learned data also maintained crosssensitivity patterns registered terms data values learned model accurately predicts unknown gas mixture equipment data collection mechanism eeprom explaining details data collection equipment need explain basic design purpose work offer intelligent gas detection system electronic portable gas detector result embedding electronic system data flow developed intelligent system shown fig describes entire process intelligent system design divided three phases data acquisition unit consists gas chamber gsa data block intelligent unit classifier unit receives data dataacquisition unit classifying acquired data patterns output unit prompts result terms colored light buzzer hence objective limited train classifier using collected data describe data collection process follows fig block diagram intelligent system design real time data flow process first collected data samples literature laboratories test collected gas mixture samples second designed metal oxide semiconductor mos gas sensors array gsa used verifying literature laboratory data generating data samples purpose experiments designed gsa consists five sensing five different gases include hydrogen sulphide ammonia methane carbon dioxide nitrogen oxides nox typically mos sensors electrical sensors responses change circuit resistance proportional gas concentration resistance type sensor responds change resistance due change concentration gases change resistance given change mos sensor resistance base resistance sensing resistance specifics gas concentration clean air sensors mics ppm ppm ppm ppm ppm respectively ppm unit measuring concentration gas air defined follows ppm equal volume gas volume air varun kumar ojha typical arrangement gas sensor array shown fig circuitry shown fig left developed laboratory fabricated installed sensors mics gases respectively gas sensors used sensitive target gases sensitive also gases hence crosssensitivity effect mos sensors confirmed moreover confirmed sensor responses noisy accordingly pattern noise considered recorded instance dataset hence use raw values sensor response hazardousness prediction may misleading operating environment therefore training electronic portable gas detector may used predict sewer hazardousness accurately effort work provide classifier data collection vital role training classifier data samples collected per following steps first several manhole samples collected kolkata india municipal area tested laboratory identify presence several toxic gases nitrogen dioxide carbon monoxide hydrogen sulphide ammonia methane carbon dioxide secondly gas sensors identified respective gases result came procurement gas sensor mics respectively collected data sheets form companies respective sensors third step laboratory setup verification collection sensor response respective gas sensors certain range concentration specifically concentration range ppm laid sensor manuals sensors mics gases respectively addition lab setup see fig right gas cylinders connected gas concentration measuring unit called mass flow controller mfc connected gas chamber gas allowed pass specific concentration array gas sensor specifically behavior gas sensors recorded fig gas sensor array gsa identifying hazardousness following steps used preparing data sample classifiers training first hazardous safety limits component gases manhole gas mixture collected secondly three different 
levels safetylimit iii manhole gas recognized thirdly gases mixed different combination prepare several mixture sample used pass gsa table indicates examples mixture gases different combinations example mix five gases three different recognized concentration levels get different combinations addition considered role humidity temperature influence sensor behavior accordingly data values recorded hence collected dataset contained seven input features output class sample labeled safe sample responses five sensors maximum safety limit unsafe sample responses among five sensors maximum safety limit safety limits manhole gases follows safety limit ppm ppm ppm ppm ppm ppm ppm ppm ppm ppm table illustrates fraction collected data samples classification based approach categorized classifiers four different groups classifiers category classifiers contains three classifiers table samples different concentration concentration gases ppm humidity temperature class status safe safe unsafe safe unsafe safe unsafe unsafe unsafe varun kumar ojha network based classifiers perceptron mlp computational model imitates human brain learn environment data work used threelayered mlp layers input layer hidden layer output layer radial basis function network rbf special class mlp inputs mapped onto hidden layer consists radial basis function mapping input hidden layer support vector machine svm supervised learning computational model maps input high dimension feature space using kernel trick hence separable patterns input space linearly classified high dimensional feature space tree based classifiers reduced pruning tree rep tree based classifier method treelike structure designed predicting target class based input variables specifically leaves tree offers decision class based conjunction input feature represented branches tree rep tree decision tree tree size reduced pruning inefficient branches naive bayes tree nbt special class decision tree leaf nodes decision tree offer decision class replaced naive bayes classifier decides class label based features learned threshold table samples calibrated sensor responses based knowledge gathered literature lab tests scaling process sensors response humidity temperature inco innh inch class identifying hazardousness logistic model trees lmt similar nbt transformation leaves decision tree logistic regression node logistic regression maps independent variables categorical dependent variables using logistic function hence lmt simple idea nodes tree replaced logistic regression model rule based classifiers decision table simple representation data table based system decision made based features matching searched decision table successful search majority class label returned otherwise majority class label entire dataset returned decision unlabeled data part rule based classification method based partial decision tree generates list rules used subsequently making prediction unknown data instance rules generated based partial decision tree splits dataset subsets entire dataset gets exhausted form nodes leaf nodes tree majority predictor zero simplest possible form classification method based majority class label dataset simple words always predicts majority class instance based classifiers learning ibk provides concept description primary output ibk algorithm function maps instance category class label concept description function updated based training procedure involves two functions similarity classification similarity function computes similarity training instances instances 
returns classification function provides class label instances based results similarity function accordingly concept description updated star learner uses similarity matching function test instances learned instances locally weighted learning lwl locally weighted learning prediction models allowed create local points dataset specific point interest rather creating model entire dataset hence linear regression naive bayes classifier classifier may used create local models case use decision stamp single level decision tree model prediction varun kumar ojha ensemble methods work tried exploit different method making ensemble ensemble perform well need take account two things accuracy predictors diversity among predictors example bagging maintains diversity bootstrapping dataset addboost combines several weak predictors random subspace maintains diversity splitting feature space random committee maintains diversity creating predictors using different random seeds rotation forest maintains diversity splitting extracting feature subspace using principal component analysis similarly voting scheme combine several predictors maintain diversity describe ensemble methods follows bagging bagging several copies predictor created copy predictor learns different replicate learning set created complete training set using bootstrapping finally predictor decision combined using plurality voting method adaptive boosting adaboost ensemble technique combines several weak predictors inaccurate rules create accurate predictor random subspace random sub random subspace ensemble method feature space divided several feature subset hence predictors constructed feature subset finally decision constructed predictors combined using voting method random committee random com random committee ensemble several predictors constructed similar dataset use different random seeds maintain diversity ensemble rotation forest rotation frst approach training set predictors created splitting feature set subsets principal component analysis applied extract principle components hence diversity among predictors maintained axis rotation form new feature set training ensemble selection ensemble sel ensemble selection approach ensemble starts empty bag predictors chosen library trained predictors maximizing performance ensemble added bag one one compute decision ensemble using voting method voting scheme vote voting scheme combines probability distribution several chosen predictors available bag making ensemble using majority voting combination method identifying hazardousness multi ensemble approach uses bag predictors selects output class selecting predictor bag predictors based performance predictors weighted predictor ensemble wpe scheme ensemble weight predictors determined subsequently ensemble output many predictors computed follows arg max number classes two function returns value one predicted class experimental framework results aim experiment design obtain highly accurate model predicting hazardousness environment sewer pipeline sewerpipeline environment represented collected dataset second objective experiment design obtain results analyzing classifiers predictors accordingly results classifiers collected table represents parameter setting chosen classifiers evaluation classifiers repeated experiments times finally results compared based empirical statistical smirnov test evaluation used weka matlab tools purpose experiments organized experimental results three parts reflected table first part table describes category wise 
performance classifier hence performance category classifiers evaluated represented performance classifiers per training test accuracy accuracy close indicates classification accuracy accordingly standard deviation std training test accuracies reported understanding consistency classifiers performance table performance classifiers arranged follows category arranged ascending order average accuracy test set better performing classifier less performing classifier dataset portioned equal sets time sets used training one set testing process repeated times time unique test set used second part organized results according rank classifiers performance test set may please noted classifier collected instances training test results hence results table reflect averaged training test accuracy classifiers however ranking classifiers based average results say much quality classifier hence varun kumar ojha table parameter setting different classifiers category classifiers classifiers classifiers classifiers ensemble classifiers ensemble classifiers classifiers parameters mlp learning rate momentum factor iteration nodes hidden layer kernel gaussian basis function kernel radial basis function minimum instance per leaf split proportion leaf node nave bayes classifier node logistic function number instance per node splitting similarity function linear nearest neighbor search neighbor size similarity function entropy distance measure similarity function linear nearest neighbor search weight function linear classifier decision stamp evaluation metric accuracy search method best first confidence threshold pruning ensemble size classifier rep tree ensemble size classifier decision stamp ensemble size classifier rep tree ensemble size classifier random tree ensemble size classifier random tree ensemble size classifier rep tree ensemble size classifiers ensemble size classifiers ensemble size classifiers rbf svm rep nbt lmt ibk star lwl part zero bagging adaboost random sel random com rotation frst ensemble sel vote multi scheme wpe third part results used pairwise comparison classifiers using test ascertains whether supremacy one classifier statistically significant comprehensive matrix pairwise test results presented table test statistical test determines difference cumulative frequency distribution cfd two samples words indicates whether empirical cfd one sample equal larger smaller tells whether two dataset statistically similar dissimilar statistically dominated dissimilar statistically dominant experiments test evaluated significance level confidence discussions since developed electronic portable gas detector shall used naive persons engaged maintaining looking binary answer hence objective search classification accuracy identifying hazardousness table experimental results classifiers fold cross validation error category classifiers classifiers classifiers classifiers ensemble classifiers classifiers training avg accuracy std test avg accuracy std svm mlp rbf lmt rep tree tree ibk star lwl part decision table zero multi rotation frst random com bagging wpe ensemble sel vote random sub table ranking algorithms according performance test set fold rank category classifiers training test multi ibk kstar rotation frst random com bagging wpe lmt ensemble sel svm reptree vote part nbtree mlp random sub rbf lwl zeror adaboost varun kumar ojha rbf reptree svm zeror vote adaboost bagging ensemble sel random com wpe part random sub nbtree rotation frst mlp lmt lwl ibk ibk kstar lmt lwl mlp nbtree part rbf reptree svm 
zeror vote adaboost bagging ensemble sel random com random sub rotation frst wpe kstar classifiers table ranking algorithms according performance test set fold model weights highest accuracy combination may implemented electronic portable gas detector form moreover also difficult task certain accuracy implemented electronic portable gas detector toxic exposure gas also proportional time safety limit however monitoring requisite maintenance involved accuracy detector may relaxed hence resorted choose accuracy accuracy developed detector classifier performance compared threshold setting accuracy first let discuss obtained results classifiers belonging category classifier svm performs better counterparts mlp rbf terms high accuracy test accuracy high consistency std test accuracy hand performance mlp reported next svm high consistency performance rbf found inconsistent poorer comparison counterparts tree based category performance lmt reptree comparable whereas nbtree shown poor performance compared counterparts identifying hazardousness category performance ibk star comparative high accuracy high consistency lwl performed poor low accuracy came category rule based classifier part outperformed others category consistency high consistency well performing classifiers ibk svm mlp etc classifier zeror consistently performed poor comparison classifiers ensemble category random com rotation frst bagging wpe ensemble sel performed high accuracies consistency however performance ensembles random forest vote adaboost satisfactory compared ensembles one reason behind poor performance random sub usage subset features therefore feature selection may help case dataset high correlation maintained features output feature similarly voting used probability measures combine predictors addboost combined weak predictors whereas entirely better performing ensemble exploited best predictors hence performed better scenario considering assumption accuracy good predictor implementation gas detector figure table classifiers belong category exception classifier lwl performed better classifiers categories however instance based classifier ibk suitable implementation electronic gas detector since required large memory computation saving instances training set ibk prediction computed based training samples hence takes long time compute output unacceptable real time next category whose performance found close ibk classifiers category tree based classifier two classifies lmt rep tree qualified accuracy threshold contrary two classifiers performed lower accuracy however svm performed significantly well high accuracy similarly classifier part category accuracy however since svm produced less number parameters tree based predictor robustly accommodates noisy attributes recommended experiments svm proper choice implementation proposed gas detector conclusion work explored real world problem context classification simplified approach offering binary decision problem explored problem related detection hazardousness sewer pipeline environment crucial problem since related safety persons work toxic environment varun kumar ojha usually environment contains mixture toxic gases hence collected samples sewer pipelines different locations examined samples identify data samples experiments prepared large dataset collecting gas sensor responses laboratory tests literature scaled collected gas sensor responses form dataset samples labeled hazardous samples labeled finally applied different classifiers identified dataset empirical 
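Two of the mechanisms described above, the safe/unsafe labeling rule from the data-preparation step and the weighted predictor ensemble (WPE) vote, can be made concrete. The sketch below is not the authors' code; the safety limits are hypothetical placeholders, and only the five gas names match the text.

```python
# A minimal sketch (not the authors' code) of two mechanisms described above.
# The safety limits are hypothetical placeholders, not the paper's values.
import numpy as np

SAFETY_LIMITS_PPM = {"h2s": 20.0, "nh3": 25.0, "ch4": 5000.0, "co2": 5000.0, "nox": 5.0}

def label_sample(responses_ppm: dict) -> str:
    """Labeling rule: a sample is 'safe' iff every sensor response stays
    below its maximum safety limit, 'unsafe' if any response exceeds it."""
    exceeded = any(responses_ppm[g] > SAFETY_LIMITS_PPM[g] for g in SAFETY_LIMITS_PPM)
    return "unsafe" if exceeded else "safe"

def wpe_predict(class_votes: list, weights: list, n_classes: int = 2) -> int:
    """Weighted predictor ensemble: argmax_c of sum_i w_i * [vote_i == c]."""
    scores = np.zeros(n_classes)
    for y_i, w_i in zip(class_votes, weights):
        scores[y_i] += w_i
    return int(np.argmax(scores))

print(label_sample({"h2s": 3.0, "nh3": 30.0, "ch4": 90.0, "co2": 400.0, "nox": 0.1}))  # unsafe
print(wpe_predict([1, 0, 1], [0.9, 0.6, 0.7]))                                          # 1
```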
statistical performance evaluated discovered problem instance based classifier performed best followed performance tree based classifiers however found performance classifiers dependent ability mechanism classifiers information regarding category belong acknowledgements work supported iprocom marie curie initial training network funded people programme marie curie actions european unions seventh framework programme references whorton insidious foe gas western journal medicine vol lewis sax dangerous properties industrial materials wiley gromicko sewer gases home http hindu deaths drains http accessed dec ndtv died diwali inside sewage pipe http accessed dec anand dying gutters tehelka magazine vol dec achttp cessed dec hindu provide safety gear sewer workers enter manholes says court http accessed dec sewer deaths http accessed dec supreme court orders states abolish manual scavenging http accessed dec wolpert macready free lunch theorems optimization ieee transactions evolutionary computation vol mixed gas sensor system based thin film saw sensor array neural network proceedings twelfth southern biomedical engineering conference srivastava srivastava shukla design issue intelligent electronic nose system proceedings ieee international conference industrial technology vol ieee search good computational paradigm proceedings ieee international conference industrial technology vol ieee identifying hazardousness llobet ionescu brezmes vilanova correig barsan gardner multicomponent gas mixture analysis using single tin oxide sensor dynamic pattern recognition ieee sensors journal vol lee ban lee lee micro gas sensor array neural network recognizing combustible leakage gases ieee sensors journal vol ambard guo martinez bermak spiking neural network gas discrimination using tin oxide sensor array ieee international symposium electronic design test applications ieee baha dibi novel neural technique smart gas sensors operating dynamic environment sensors vol pan liu application electronic nose gas mixture quantitative detection ieee international conference network infrastructure digital content ieee wongchoosuk wisitsoraat tuantranont kerdcharoen portable electronic nose based carbon gas sensors application detection methanol contamination whiskeys sensors actuators chemical vol zhang tang genetic algorithms data fusion application mine detection chinese control decision conference ccdc ieee koo shin yoon estimation hazardous gas release rate using optical sensor neural network computer aided chemical engineering vol ojha dutta saha performance analysis neuro genetic algorithm applied detecting proportion components manhole gas mixture international journal artificial intelligence applications vol ojha dutta performance analysis neuro swarm optimization algorithm applied detecting proportion components manhole gas mixture artificial intelligence research vol ojha dutta chaudhuri saha convergence analysis backpropagation algorithm designing intelligent system sensing manhole gases hybrid soft computing approaches springer india dutta ojha conjugate gradient trained neural network intelligent sensing manhole gases avoid human fatality advances secure computing internet services applications igi global ojha dutta chaudhuri saha understating continuous ant colony optimization neural network training case study intelligent sensing manhole gas components international journal hybrid intelligent systems vol concurrent neurosimulated annealing algorithm case study intelligent sensing manhole gases 
international journal hybrid intelligent systems vol ghosh roy singh saha ojha dutta sensor array manhole gas analysis international symposium physics technology sensors ispts ieee ghosh saha roychaudhuri ojha dutta portable sensor array system intelligent recognizer manhole gas sixth international conference sensing technology icst ieee cantalini valentini armentano lozzi kenny santucci sensitivity analysis ethanol humidity carbon nanotubes thin film prepared pecvd sensors actuators chemical vol mitzner sternhagen galipeau development micromachined hazardous gas sensor array sensors actuators chemical vol varun kumar ojha liu zhang zhang cheng cross sensitivity reduction gas sensors using genetic algorithm neural network optical methods industrial processes farquharson vol proceedings spie donham exposure limits related air quality risk assessment iowa concentrated animal feeding operations air quality study weaver carbon monoxide poisoning new england journal medicine vol simonton human health effects exposure concentrations hydrogen sulfide occupational health safety shilpa new insight panic attacks carbon dioxide culprit journal young investigators http fahey hegglin twenty questions answers ozone layer update scientific assessment ozone depletion weigend huberman rumelhart predicting future connectionist approach international journal neural systems vol lowe broomhead multivariable functional interpolation adaptive networks complex system vol cortes vapnik networks machine learning vol olshen stone classification regression trees wadsworth international group vol quinlan programs machine learning elsevier esposito malerba semeraro tamma effects pruning methods predictive accuracy induced decision trees applied stochastic models business industry vol mohamed salleh omar comparative study reduced error pruning method decision tree algorithms ieee international conference control system computing engineering iccsce ieee walker duncan estimation probability event function several independent variables biometrika vol cox regression analysis binary sequences journal royal statistical society series methodological landwehr hall frank logistic model trees machine learning vol kohavi power decision tables machine learning springer frank witten generating accurate rule sets without global optimization aha kibler albert learning algorithms machine learning vol cleary trigg learner using entropic distance measure proceedings international conference machine learning vol frank hall pfahringer locally weighted naive bayes proceedings nineteenth conference uncertainty artificial intelligence morgan kaufmann publishers atkeson moore schaal locally weighted learning artificial intelligence review vol polikar ensemble based systems decision making ieee circuits systems magazine vol breiman bagging predictors machine learning vol freund schapire generalization learning application boosting journal computer system sciences vol identifying hazardousness random subspace method constructing decision forests ieee transactions pattern analysis machine intelligence vol rodriguez kuncheva alonso rotation forest new classifier ensemble method ieee transactions pattern analysis machine intelligence vol caruana crew ksikes ensemble selection libraries models proceedings international conference machine learning acm kuncheva combining pattern classifiers methods algorithms john wiley sons weka data mining software java accessed online available http matlab statistics machine learning toolbox accessed online 
available http
| 9 |
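The row above compares its classifiers with repeated cross-validation and pairwise Kolmogorov-Smirnov tests at 95% confidence. A small sketch of that comparison using SciPy's ks_2samp (a real API); the two accuracy arrays are fabricated stand-ins for per-fold results, not the paper's numbers.

```python
# Hedged sketch of the pairwise KS comparison described in the row above;
# the accuracy samples are made-up stand-ins for repeated 10-fold CV results.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
acc_svm = rng.normal(0.97, 0.01, size=100)  # hypothetical per-fold test accuracies
acc_rbf = rng.normal(0.90, 0.03, size=100)

stat, p_value = ks_2samp(acc_svm, acc_rbf)
if p_value < 0.05:  # 95% confidence level, as in the row above
    winner = "svm" if acc_svm.mean() > acc_rbf.mean() else "rbf"
    print(f"distributions differ (D={stat:.3f}, p={p_value:.2g}); {winner} dominates")
else:
    print("statistically similar")
```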
generic model computation nachum dershowitz school computer science tel aviv university tel aviv israel past two decades yuri gurevich colleagues formulated axiomatic foundations notion algorithm classical interactive parallel formalized new generic framework abstract state machines approach recently extended suggest formalization notion effective computation arbitrary countable domains central notions summarized herein background abstract state machines asms invented yuri gurevich constitute general model computation one operate desired level abstraction data structures native operations ordinary models computation instances one generic paradigm give overview foundational considerations underlying model cobbled together primarily programs sequential variety formalism built three components generalized assignments function symbol vocabulary program arbitrary terms vocabulary statements may prefaced conditional test else propositional combination equalities terms program statements may composed parallel following keyword short parallel asm program describes single transition step statements executed repeatedly unit assignments conditions enabled additional constructs beyond needed interaction parallelism dealt simple example consider program shown algorithm describing version selection sort contain values sorted unary function symbol initially quantity values sorted set brackets indicate statements executed parallel program proceeds repeatedly modifying values well locations referring terms conditions fail values sorted relation program halts nothing left declarations initializations program constants variables shown sorting program partial particular representation natural numbers used index whether implementation uses natural language decimal numbers video lecture gurevich subject see http kashefi krivine van raamsdonk eds dcm eptcs dershowitz work licensed creative commons attribution license generic model computation algorithm program sorting else algorithm program bisection search sgn sgn sgn sgn binary strings immaterial long addition behaves expected equality disequality furthermore program work regardless domain values drawn integers reals strings long means provided evaluating inequality relation another simple asm program shown algorithm standard bisection search root function described algorithm point abstract formulation author wrote applicable continuous function ones programmed remarkable asms simple model computation suffices precisely capture behavior whole class ordinary algorithms domain reason virtue abstract state machine asm representation theorem theorem algorithm satisfies three natural sequential postulates emulated asm postulates articulated section formalize following intuitions algorithm system given algorithm state information determines future transitions captured logical structure iii state transitions governed values finite set terms significance sequential postulates lies comprehensiveness formalize features exactly characterize classical algorithm abstract generic manifestation programs models effective sequential computation satisfy postulates idealized algorithms computing real numbers algorithm geometric constructions compass straightedge see examples latter abstract state machines computational model wedded particular data representation way say turing machines manipulate strings using small set tape operations representation theorem restated section establishes asms express precisely emulate algorithms satisfying premises captured postulates algorithm 
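The two programs named above (Algorithm 1, a selection-style sort, and Algorithm 2, bisection search) do not survive this extraction, so the following Python sketch is a plausible reconstruction rather than the paper's exact rules. It mimics ASM step semantics: compute a finite update set from the current state, apply all updates atomically, and halt when the set is empty.

```python
# Plausible reconstruction (assumed rules, not verbatim) of the selection-sort
# ASM: one step = compute the update set from the current state, then apply
# every update simultaneously; an empty update set means a terminal state.

def step(state):
    """Return the update set as {location: new_value}, empty when halted."""
    F, i, j, n = state["F"], state["i"], state["j"], state["n"]
    updates = {}
    if j < n:                      # inner scan still running
        if F[i] > F[j]:            # parallel swap: both right-hand sides read the OLD state
            updates[("F", i)] = F[j]
            updates[("F", j)] = F[i]
        updates[("j",)] = j + 1    # fires in the same step as the swap
    elif i < n - 2:                # advance the outer index, restart the inner one
        updates[("i",)] = i + 1
        updates[("j",)] = i + 2
    return updates

def apply_updates(state, updates):
    # A clash would be two different values for one location; a dict keyed by
    # location cannot even express that, so this sketch assumes clash-freedom.
    for loc, val in updates.items():
        if loc[0] == "F":
            state["F"][loc[1]] = val
        else:
            state[loc[0]] = val

state = {"F": [3, 1, 2, 0], "i": 0, "j": 1, "n": 4}
while (updates := step(state)):
    apply_updates(state, updates)
print(state["F"])  # [0, 1, 2, 3]
```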
asm program describes precisely function state state algorithm sense asms subsume computational models may informative note similarity form asm namely single repeated loop set generalized assignments nested within conditionals folk theorem effect dershowitz flowchart program converted single loop composed conditionals sequencing assignments aid auxiliary variables see parallel composition gives asms ability perform multiple actions sans extra variables capture transpires single step algorithm versatility asms makes ideal specification prototyping indeed asms used model manner programming applications systems languages precise intended level abstraction see asm website http numerous exemplars asms provide complete means describing algorithms whether implemented effectively account abstractness one express generic algorithms like bisection search arbitrary continuous functions like gaussian elimination even field applied left unspecified asml executable specification language based asm framework used industry particular behavioral specification interfaces see example church thesis asserts recursive functions numeric functions effectively computed similarly turing thesis stakes claim function strings mechanically computed computed particular turing machine generally one additional natural hypothesis regarding describability initial states algorithms explained section characterizes effectiveness model computation operating countable data domain theorem account ability asms precisely capture single steps algorithm one infer absolute bounds complexity algorithms arbitrary effective models computation seen theorem end section sequential algorithms sequential postulates regarding algorithmic behavior based following key observations state contain relevant information apart algorithm needed determine next steps example instantaneous description turing machine computation needed pick machine computation left see similarly continuation lisp program contains state information needed resume computation structures suffice model salient features states compare values programming variables meaningless algorithm implementation independent rather relationships values matter algorithm follows algorithm work equally well isomorphic worlds compare algorithm relations values stored state via terms vocabulary equalities disequalities values algorithms expressed means finite texts making reference finitely many terms relations among see example three postulates given modified slightly assert classical algorithm system operating structures way invariant isomorphisms algorithm prescription updating states changing interpretations given symbols states essential idea fixed finite set terms generic model computation refer possibly indirectly locations within state suffice determine state changes transition sequential time begin algorithms deterministic systems postulate sequential time algorithm determines following nonempty states nonempty subset initial states partial transition function terminal states states transition defined transition depend state means states must store information needed determine subsequent behavior prior history unavailable algorithm unless stored current state deterministic classical algorithms fact never leave room choices involve sort interaction environment determine next step incorporate nondeterministic choice probabilistic choice interaction environment one would need modify notion transition postulate meant exclude formalisms result continuation depend limit infinite sequence 
preceding finite infinitesimal steps likewise processes states evolve continuously analog processes like position bouncing ball rather discretely eschewed though may appear first glance recursive function fit rubric system fact definition traditional recursive function comes together computation rule evaluating rogers writes obtain computation uniquely working inside left right abstract state algorithm states comprehensive incorporate relevant data including program counter coupled program completely determine future computation states may regarded structures finitely many functions relations constants simplify matters relations treated functions constants nullary functions state consists domain base set universe carrier interpretations symbols relevant information state given explicitly state means interpretation symbols appearing vocabulary structure specific details implementation data types used algorithm matter sense states abstract crucial consideration leads second postulate postulate abstract state states algorithm structures finite vocabulary following hold state algorithm structure isomorphic also state initial terminal initial terminal respectively transitions preserve domain dom dom every state class distinction irrelevant purposes dershowitz transitions respect isomorphisms isomorphism states also state structures endowed boolean truth values standard boolean operations vocabularies include symbols structure state interprets function symbols vocabulary every symbol vocabulary state values domain domain value assigned location write way assigns value dom ground terms vocabularies finite since algorithm must describable finite terms refer explicitly finitely many operations hence algorithm instance involve knuth arrow operations etc instead one could employ ternary operation postulate justified vast experience mathematicians scientists faithfully transparently presented every kind static mathematical scientific reality logical structure restricting structures limiting syntax precludes states infinitary operations like supremum infinitely many objects would make sense algorithmic point view however limit semantics algorithms notions domain states may sequences sets objects case state would also need provide operations dealing objects closure isomorphism ensures algorithm operate chosen level abstraction states internal representation data invisible immaterial program means behavior algorithm contradistinction implementation example depend memory address variable algorithm depend matters full description must also include specifics memory allocation possible liberalize postulate somewhat allow domain grow shrink vocabulary infinite extensible enhancements materially change notion algorithm extension structures partial operations given see section effective transitions actions taken transition describable terms updates form meaning new interpretation given next state function symbol values program update one use assignment view state collection graphs operations point pair also denoted thus define update set changed points terminal state undefined indicate setting point encapsulates relation algorithm providing information necessary update interpretation given current state produce particular state algorithm needs evaluate terms help information stored next postulate ensure finite representation updates determined performed means finite amount work simply stated fixed finite set ground terms determines stepwise behavior algorithm postulate iii effective transitions every algorithm 
finite set ground critical terms state vocabulary states agree values terms also share update sets two states particular one terminal bounded exploration generic model computation intuition algorithm must base actions values contained locations current state unless states undergo updates unconditionally algorithm must explore one values accessible locations current state determining proceed means algorithm reference locations via terms since values abstract entities every referenced location value two states behavior algorithm must states fixed finite set critical programs infinite size like infinite table lookup careful analysis notion algorithm examination intent founders field computability demonstrate sequential postulates fact true ordinary sequential algorithms kind envisioned pioneers field words classical algorithms satisfy postulates iii sense traditional notion algorithm precisely captured axioms definition classical algorithm object satisfying postulates iii shall called classical algorithm equivalent algorithms makes sense say two algorithms behavior behaviorally equivalent operate states transition function two algorithms syntactically equivalent states renaming symbols vocabularies transitions renaming discussion algorithm equivalence see abstract state machines abstract state machines asms description language classical algorithms characterizing programs semantics asm statements assignment parallel composition conditionals expected formalized program defines single step repeated forever next state convenience show simple form asms bear mind however much richer languages asms given used practice programs expressed terms vocabulary convention asm programs always include symbols boolean values true false undef default undefined value standard boolean operations equality vocabulary sorting program instance contains addition standard symbols suppose states integers three standard values domain nullary symbols fixed programming constants serve bounds nullary symbols programming variables used array indices states interpret symbols well standard symbols usual unlike static interpretation never changed program initial states integer values plus undef points dershowitz states update set table update sets sorting program program always terminates successfully first elements nondecreasing order hidden variables asms steps algorithm intended executed sequence say asm need keep explicit track sequence semantics unlike algorithms observed either change value location current state asm might update location trivial way giving value already also asm might designate two conflicting updates location called clash case standard asm semantics cause run fail programs might abort alternative semantics imagine nondeterministic choice competing values considered prefer ignore nondeterminism implicit failure tacitly presume asm never involves clashes albeit undecidable property take various possibilities account proposed update set asm may defined following manner else otherwise otherwise means course boolean condition holds true condition conditional statement evaluate true statement contribute updates asm execution halts success terminal state since confusion arise dropping subscript otherwise updates applied yield next state replacing values locations referred latter contains trivial updates loop forever terminal states update set signify next state set updates update sets sorting program algorithm shown table subscript omitted example state generic model computation per row next state one step 
per row unchanged algorithm reaches terminal state row representation theorem abstract state machines clearly satisfy three sequential postulates asms define function operate abstract states depend critically values finite set terms appearing program unchanging values parts state modified program example critical terms sorting asm terms appearing except sides assignments contribute proper subterms instead subterms values affect computation thus asm describes classical algorithm structures vocabulary similarity type converse greater significance theorem representation theorem every classical algorithm sense definition behaviorally equivalent asm exact states function proof representation theorem constructs asm contains conditions involving equalities disequalities critical terms closure isomorphisms essential ingredient making possible express algorithm language terms typical asm models partial functions like division tangent using special value undef denoting argument outside function domain definition arranging operations strict term involving undefined subterm likewise undefined state asm would return true asked evaluate expression undef therefore programmed work properly despite partiality division analysis representation theorem refined algorithms employing truly partial operations operations cause algorithm hang operation attempted outside domain definition rather return undef point behaviorally equivalent asm never attempts access locations state also accessed given algorithm partial operations required next section effective algorithms thesis thesis asserts standard models capture effective computation specifically effectively computable numeric partial functions partial recursive partial string functions computed turing machine say algorithm computes partial function input states particular locations input values running algorithm results correct output values specifically domain input state terms values input states cover tuples input states agree values terms dershowitz input values corresponding input state leads via sequence transitions terminal state value designated term vocabulary algorithm whenever latter defined leads infinite computation whenever capture makes sequential algorithm mechanically computable need input states finitely representable accordingly insist harbor information beyond means reach domain values plus anything derived therefrom say function symbols construct domain state assigns value exactly one term restricting gives free herbrand algebra example domain sorting algorithm consisting integers booleans constructed true false undef successor function call takes integers predecessor negation negative integers absolute value postulate iii ensures transition function describable finite text text asm algorithm effective states must also finitely describable definition effectiveness state effective includes constructors domain plus operations almost everywhere meaning locations hold input values default value undef classical algorithm effective initial states moreover effective algorithms bootstrapped state effective also vocabulary enriched constructs domain every total partial operation computed effective algorithm constructors model computation set algorithms shared domain effective algorithms via constructors effectiveness postulate excludes algorithms ineffective oracles halting function free constructors foundation precludes hiding potentially uncomputable information means equalities distinct representations domain element approach effectiveness 
advocated extended include partial functions states sorting algorithm effective sense since addition natural numbers comparisons integers operations reside initial states programmed constructors true false undef particular natural numbers turing machines strings form effective models furthermore shown three prima facie different definitions effectiveness arbitrary domains proposed respectively comprise exactly functions strengthening conviction essence underlying notion computability fact captured theorem thesis every effective model representation domain values strings algorithms simulated turing machine call effective computational model maximal adding function computes results set functions simulated effective model remarkably perhaps exactly one model theorem effectiveness theorem set partial recursive functions likewise set string functions unique maximal effective model isomorphism countable domain generic model computation recently extended proof thesis demonstrated validity widely believed extended thesis theorem extended thesis every effective algorithm polynomially simulated turing machine conclusion dealt herein classical type algorithms say meaning bounded parallelism deterministic interaction outside world case abstract state machines faithfully emulate algorithm class seen theorem furthermore characterized distinction effective algorithms abstract siblings theorem various declarative styles programming relation implicit rather explicit notion algorithm programs algorithms sense definition would equipped specific execution mechanism like one recursion mentioned prolog example mechanism unification mode search would need specified paradigm extended handle modern notions desired algorithm make explicit distinction successful failing terminal states storing particular values specific locations final state alternatively one may declare failure conflict two enabled assignments see difficulty allowing nondeterminism multivalued transition function semantics choice made clashing assignment statements transitions indeed nondeterministic see general forms nondeterminism obtained adding choice command sort language see nothing needs added syntax asms apply cases environment provides input incrementally one need imagine environment allowed modify values specified set locations state machine steps see analysis algorithms extended case algorithm interacts outside environment step execution waits queries environment responded forms interaction handled analysis extended massively parallel algorithms distributed algorithms handled fact asms emulate algorithms facilitates reasoning complexity algorithms theorem parallel asms used studying complexity algorithms unordered structures see quantum algorithms modeled asms current research includes extension framework hybrid systems combining discrete sequential steps analog evolving time behaviors dershowitz acknowledgements thank yuri gurevich nikolaj perspicacious suggestions referees questions evgenia falkovich help references mike barnett wolfram schulte abcs specification asml behavior components informatica slovenia available http theabcsofspecification viewed june andreas blass nachum dershowitz yuri gurevich two algorithms bulletin symbolic logic available http viewed mar andreas blass nachum dershowitz yuri gurevich exact exploration hanging algorithms proceedings eacsl annual conferences computer science logic brno czech republic lecture notes computer science springer berlin germany available http pdf viewed may longer version http viewed 
may andreas blass yuri gurevich ordinary interactive algorithms part acm transactions computational logic available http viewed may andreas blass yuri gurevich ordinary interactive algorithms part acm transactions computational logic article available http viewed may andreas blass yuri gurevich ordinary interactive algorithms part iii acm transactions computational logic article available http viewed may andreas blass yuri gurevich abstract state machines capture parallel algorithms correction extension acm transactions computation logic article available http viewed andreas blass yuri gurevich dean rosenzweig benjamin rossman interactive algorithms part axiomatization logical methods computer science paper available http viewed june andreas blass yuri gurevich dean rosenzweig benjamin rossman interactive algorithms part abstract state machines characterization theorem logical methods computer science paper available http viewed july andreas blass yuri gurevich saharon shelah polynomial time computation unordered structures journal symbolic logic available http viewed july udi boker nachum dershowitz thesis arbitrary domains arnon avron nachum dershowitz alexander rabinovich editors pillars computer science essays dedicated boris boaz trakhtenbrot occasion birthday lecture notes computer science generic model computation springer available http viewed udi boker nachum dershowitz three paths effectiveness andreas blass nachum dershowitz wolfgang reisig editors fields logic computation essays dedicated yuri gurevich occasion birthday lecture notes computer science springer berlin germany available http viewed egon origins development asm method high level system design analysis journal universal computer science available http viewed june egon dean rosenzweig mathematical definition full prolog science computer programming available ftp viewed july olivier bournez nachum dershowitz foundations analog algorithms proceedings third international workshop physics computation nile river egypt available http viewed may olivier bournez nachum dershowitz evgenia falkovich towards axiomatization simple analog algorithms manindra agrawal barry cooper angsheng editors proceedings annual conference theory applications models computation tamc beijing china lecture notes computer science springer verlag available http available http pdf viewed july nachum dershowitz evgenia falkovich formalization proof extended thesis proceedings seventh international workshop developments computational models dcm july zurich switzerland electronic proceedings theoretical computer science available http viewed july nachum dershowitz yuri gurevich natural axiomatization computability proof church thesis bulletin symbolic logic available http viewed apr robin gandy church thesis principles mechanisms kleene symposium studies logic foundations mathematics andreas glausch wolfgang reisig class distributed algorithms abrial uwe editors rigorous methods software construction analysis lecture notes computer science springer berlin available http viewed mark gold limiting recursion symbolic logic saul gorn algorithms bisection routine communications acm erich antje nowack quantum computing abstract state machines proceedings international conference abstract state machines advances theory practice asm taormina italy berlin available http viewed july yuri gurevich evolving algebras lipari guide egon editor specification validation methods oxford university press available http viewed apr dershowitz yuri gurevich sequential 
abstract state machines capture sequential algorithms acm transactions computational logic available http viewed apr yuri gurevich benjamin rossman wolfram schulte semantic essence asml theoretical computer science available http viewed june yuri gurevich wolfram schulte margus veanes toward industrial strength abstract state machines technical report microsoft research available http viewed yuri gurevich tatiana yavorskaya bounded exploration bounded nondeterminism technical report microsoft research available http viewed apr david harel folk theorems communications acm stephen kleene mathematical logic wiley new york stephen kleene reflections church thesis notre dame journal formal logic emil post absolutely unsolvable problems relatively undecidable propositions account anticipation davis editor solvability provability definability collected works emil post boston unpublished paper hilary putnam trial error predicates solution problem mostowski symbolic logic wolfgang reisig gurevich theorem sequential algorithms acta informatica available http viewed wolfgang reisig computable kernel abstract state machines theoretical computer science draft available http viewed hartley rogers theory recursive functions effective computability new york marc spielmann abstract state machines verification problems complexity thesis rwth aachen aachen germany available http viewed july alan turing computable numbers application entscheidungsproblem proceedings london mathematical society corrections vol reprinted davis undecidable raven press hewlett available http
| 6 |
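The row above also presented a bisection-search program (Algorithm 2) whose text is garbled. A hedged reconstruction, under the assumption that f is continuous with sgn(f(a)) != sgn(f(b)) on the initial interval:

```python
# Plausible reading of the bisection-search ASM (Algorithm 2 in the row
# above): one rule fires repeatedly, moving whichever endpoint shares the
# sign of the midpoint, until the bracketing interval is small enough.

def sgn(x: float) -> int:
    return (x > 0) - (x < 0)

def bisect(f, a: float, b: float, eps: float = 1e-9) -> float:
    while b - a > eps:
        m = (a + b) / 2
        if sgn(f(a)) == sgn(f(m)):
            a = m                  # the sign change, hence the root, is in (m, b)
        else:
            b = m                  # the sign change is in (a, m)
    return (a + b) / 2

print(bisect(lambda x: x * x - 2, 0.0, 2.0))  # ~1.414213..., a root of x^2 - 2
```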
commonsense ocated ear relation extraction nov frank bill kenny zhu frankxu yuchenlin kzhu department computer science engineering shanghai jiao tong university shanghai china introduction artificial intelligent systems benefit incorporating commonsense knowledge background ice cold roperty chewing eating ubevent chair table typically found near ocated ear etc kind commonsense facts utilized many downstream tasks textual entailment visual recognition tasks commonsense knowledge often represented relation triples commonsense knowledge bases conceptnet mit one largest commonsense knowledge graph available today however kind commonsense knowledge bases usually manually curated community efforts thus scale well paper aims automatically extracting commonsense ocated ear relation physical objects textual corpora defined two objects typically found near real life focus ocated ear relation reasons ocated ear facts helpful prior knowledge object detection complex image scenes figure illustrates two motivating examples commonsense knowledge potentially benefit general reasoning reading comprehension question answering well many tasks iii existing knowledge bases facts relation conceptnet triples ocated ear figure ocated ear relation facts assist detection vague objects dimly lit room settings shown left bright laptop present table one may guess lamp photo frame books maybe nearby similarly right set knife fork plate table one may believe could glass beside based commonsense even though objects hardly visible due low light propose two novel tasks extracting ocated ear relation textual corpora one binary relation classification problem judges whether sentence describing two objects physically close task produce ranked list ocated ear facts given classified results large number sentences believe two tasks help community automatically complete populate existing commonsense knowledge bases first two authors contribute equally conference neural information processing systems nips long beach usa additionally also create two benchmark datasets evaluating ocated ear relation extraction systems two tasks one sentences describing scene two physical objects label indicating two objects scene consists pairs objects scores indicating confidences certain pair objects commonly located near real life propose several methods solve tasks including neural architecture proposed neural architecture compares favorably current method relation classification problem relatively smaller proposed datasets extract total new ocated ear triples conceptnet ocated ear relation classification given sentence mentioning pair physical objects call instance section aim determine whether located near physical scene described sentence example suppose dog cat king puts dog cat true two objects located near sentence successful classification model expected label instance true dog older answer instance false talking general comparison following subsections present two different kinds baseline methods binary classification task methods neural architectures methods first baseline svm classifier based following features claim semantic syntactic features widely utilized among existing relation classification models note put special focus adverbs prepositions based assumption lexical units describing directions positions physical world help identify ocated ear relations proposed features bag words set words ever appeared sentence bag path words bpw set words appeared shortest dependency path objects dependency tree sentence plus words two 
subtrees rooted parse tree bag adverbs prepositions bap existence adverbs prepositions sentence binary features global features length sentence number nouns verbs adverbs adjectives determiners prepositions punctuations whole sentence shortest dependency path features sdp dependency parse tree sentence shortest path two objects semantic similarity features cosine similarity glove word embeddings two object words obtaining features every instances feed processed data svm classifier evaluate linear rbf kernels different parameter settings rbf kernel performs best overall neural architectures long short term memory based recurrent neural architectures lstms widely used relation classification observe existence ocated ear relation instance depends two major information sources one semantic syntactical features sentence object pair intuition design model two parts shown figure left part encoding syntactical semantic information sentence right part encoding semantic similarity word embeddings output confidence lstm dense layer token vector representation position normalized sequence original sentence lead lead king token word vectors position led dog nice garden dog garden figure proposed model sentence normalization using original word sequence sentence input two problems irrelevant words sentence take noise model large vocabulary original words induce many parameters may cause example given two sentences king led dog nice criminal led dog poor object pair dog garden sentences two words lead essential determining whether object pair located near given bias words also semantic differences irrelevant words king criminal beautiful poor useful relation dog garden thus tends act noise level objects lemma dependency role pos tag examples open lead open open table examples four types tokens sentence normalization represents subject given verb preposition represents object considering problems propose utilizing pos tags instead capture syntactical information reduce vocabulary size however solely loses much semantic dependency words thus propose normalized sentence representation method merging three important relevant kinds information instance lemma pos tags dependency role first replace two nouns object pair keep lemmatized form original words verbs adverbs prepositions highly relevant describing physical scenes replace subjects direct objects verbs prepositions nsubj dobj verbs case prepositions dependency parse tree special tokens indicating dependency roles remaining words simply use pos tags replace originals four kinds tokens illustrated table table real example normalized sentence representation object pair interest dog garden king open opened open door open led lead dog table sentence normalization example utilize stanford corenlp tool https nice garden model training shown figure bottom figure shows original sentence transformed normalized sequence described apart normalized tokens original sequence capture structural information also encode distance token word position embeddings features proposed intuition information needed determine relation two target nouns normally comes words close target nouns leverage lstm encode whole sequence tokens normalized representation plus position embedding meantime two pretrained glove word embeddings original two physical object words fed hidden dense layer finally concatenate outputs use sigmoid activation function obtain final prediction choose use standard binary loss function rmsprop used optimizer following add dropout lstm well embedding layer 
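To make the sentence-normalization scheme above concrete (its tables are garbled in this copy), the sketch below approximates it with spaCy rather than the Stanford CoreNLP pipeline the authors used. The placeholder tokens <E1>/<E2> and the role-token format are illustrative guesses, not the paper's exact encoding.

```python
# A minimal sketch (assuming spaCy; the original work used Stanford CoreNLP)
# of the normalization above: keep lemmas for verbs, adverbs and prepositions,
# emit dependency-role tokens for subjects/objects of verbs and prepositions,
# placeholders for the two target objects, and POS tags for everything else.
import spacy

nlp = spacy.load("en_core_web_sm")

def normalize(sentence: str, obj1: str, obj2: str) -> list:
    tokens = []
    for tok in nlp(sentence):
        lemma = tok.lemma_.lower()
        if lemma == obj1:
            tokens.append("<E1>")                    # first object placeholder
        elif lemma == obj2:
            tokens.append("<E2>")                    # second object placeholder
        elif tok.pos_ in ("VERB", "ADV", "ADP"):     # ADP covers prepositions
            tokens.append(lemma)                     # keep the spatial cue words
        elif tok.dep_ in ("nsubj", "dobj", "pobj"):  # role tokens for arguments
            tokens.append(f"#{tok.dep_}:{tok.head.lemma_.lower()}")
        else:
            tokens.append(tok.pos_)                  # back off to the POS tag
    return tokens

print(normalize("The king opened the door and led the dog to a nice garden.",
                "dog", "garden"))
```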
LocatedNear relation extraction figure shows overall workflow automatic framework mine LocatedNear relations raw text first construct vocabulary physical objects generate candidate instances sentence corpus pair physical objects appear nouns sentence apply LocatedNear relation classifier instance relation classifier yields probabilistic score indicating confidence existence LocatedNear relation finally scores instances corpus grouped object pairs aggregated object pair associated final score mined physical pairs scores easily integrated existing commonsense knowledge base specifically object pair find sentences corpus mentioning objects classify instances relation classifier get confidences instance feed function obtain final score object pair five variants scoring functions [formulas garbled: each variant aggregates the per-instance classification confidences conf into one pair-level score] [figure: computing LocatedNear scores of object pairs; pipeline labels: object pairs, classifier, corpus, classification confidence, LocatedNear relation scores] datasets proposed vocabulary physical objects constructed intersection entities belong physical object class wikidata conceptnet concepts manually filtered words meaning abstract concept results physical objects total afterwards utilize cleaned subset project gutenberg corpus contains english books written authors assumption sentences fictions likely describe real life scenes [table: performance of baselines on the classification task, acc columns; rows: random, majority, svm and feature ablations, drnn, lstm variants; ablation means without a certain feature] sample investigate density LocatedNear relations gutenberg widely used corpora namely wikipedia used mintz new york times corpus created riedel used lin hoffmann surdeanu english wikipedia dump sentences mentions least two physical objects turn positive new york times corpus percentage positive sentences contrast percentage gutenberg corpus much higher two corpora making good choice LocatedNear relation extraction corpus identify pairs sentences among pairs randomly select object pairs sentences respect pair annotators label commonsense LocatedNear instance labeled least three annotators college students proficient english final truth label sentence decided majority vote four annotators cohen kappa among three annotators suggests substantial agreement randomly choose instances training set test set evaluating first relation classification task second task ask annotators label whether pair objects likely locate near real world majority votes determine final truth labels agreement datasets made publicly evaluation LocatedNear relation classification evaluate proposed methods general domain relation classification model drnn results shown table svm feature ablation feature types section model experiment variants input sequence original sentence uses original words input tokens uses pos tag sequence input tokens uses tokens sequence sentence normalization results find svm model without global features performs best indicates features benefit shortest dependency paths whole sentence find drnn performs best precision significantly higher experiment also shows enjoys highest recall score terms overall performance best one one possible reason proposed normalization representation reduces input sequences token vocabulary size preserving important syntactical semantic information also reduces vocabulary size loses much information another reason LocatedNear relation described sentence mostly decorating descendants object word dependency tree words merely along shortest dependency path thus drnn capture information words belonging descendants two object words tree
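The aggregation stage just described, classify every sentence mentioning a pair and then reduce the per-instance confidences to one score per pair, can be sketched briefly. Since the paper's five scoring formulas did not survive extraction, the reducers below (count, sum, max, mean, and a frequency-damped sum) are plausible stand-ins rather than the published definitions.

```python
# Hedged sketch of pair-score aggregation over a classified corpus.
# The reducers approximate, not reproduce, the paper's five variants.
from collections import defaultdict
import math

def score_pairs(instances, classifier, reducer="sum"):
    """instances: iterable of (obj1, obj2, sentence) candidate triples;
    classifier returns a confidence in [0, 1] for each instance."""
    conf = defaultdict(list)
    for o1, o2, sent in instances:
        pair = tuple(sorted((o1, o2)))
        conf[pair].append(classifier(sent, o1, o2))

    reducers = {
        "count":  lambda c: float(len(c)),
        "sum":    lambda c: sum(c),
        "max":    lambda c: max(c),
        "mean":   lambda c: sum(c) / len(c),
        # dampen very frequent pairs so raw corpus frequency alone
        # cannot dominate the ranking
        "logsum": lambda c: sum(c) / math.log(len(c) + math.e),
    }
    reduce_fn = reducers[reducer]
    scores = {pair: reduce_fn(c) for pair, c in conf.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

The ranked output of `score_pairs` is exactly what the MAP-based ranking evaluation below consumes.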
https besides added two naive baselines random baseline classifies instances two classes equal probability majority baseline considers instances positive [table: ranking performances of the scoring methods, map and precision] information captured rest experiments use classifier choice LocatedNear relation extraction classified sentences using extract LocatedNear relation using four scoring functions section first present quantitative results use scoring functions rank commonsense LocatedNear object pairs described section table shows ranking results using mean average precision map precision metric accumulative scores generally better [table: top object pairs returned by the best performing scoring function: door room, ship sea, fire wood, fire smoke, book table, boy girl, house garden, house fire, door hall, fruit tree, cup tea, arm leg, horse saddle, door street, table chair] qualitatively show object pairs highest scores table setting threshold minimum score true object pairs LocatedNear object pairs data set pairs obtain total LocatedNear relations precision human inspection related work classifying relations entities certain sentence plays key role nlp applications thus hot research topic recently methods neural network techniques common introduce lstm model classify relations incorporating several different kinds information sentence improved performed best task one baseline methods related work extraction visual commonsense knowledge yatskar work learns textual representation seven types visual relations using textual caption image dataset another important related work enriches several popular relations conceptnet little textual information real large corpora however LocatedNear relation studied work relation extremely scarce conceptnet distinctiveness conclusion presented novel study enriching LocatedNear relationship textual corpora based two benchmark datasets proposed several methods solve relation classification problem showed existing methods work well task discovered model significant edge simpler model whereas sentence normalization turns useful future directions include better utilizing distant supervision incorporating knowledge graph embedding techniques applying LocatedNear knowledge downstream applications computer vision natural language processing references bowman angeli potts manning large annotated corpus learning natural language inference arxiv preprint bunescu mooney shortest path dependency kernel relation extraction cooijmans ballas laurent courville recurrent batch normalization arxiv preprint dagan dolan magnini roth recognizing textual entailment rational evaluation approaches natural language engineering ebrahimi dou chain based rnn relation classification pages hendrickx kim kozareva nakov pennacchiotti romano szpakowicz task classification semantic relations pairs nominals proceedings international workshop semantic evaluation semeval acl uppsala university uppsala sweden july pages hinton srivastava swersky neural networks machine learning lecture overview gradient descent lecture coursera hochreiter schmidhuber long memory neural computation hoffmann zhang ling zettlemoyer weld weak supervision information extraction overlapping relations proceedings annual meeting association computational linguistics human language technologies volume pages association computational linguistics ioffe szegedy batch normalization accelerating deep network training reducing internal covariate shift arxiv preprint lahiri complexity word collocation
networks preliminary structural analysis proceedings student research workshop eacl pages april taheri gimpel commonsense knowledge base completion proceedings annual meeting association computational linguistics acl berlin germany august association computational linguistics lin maire belongie hays perona ramanan zitnick microsoft coco common objects context european conference computer vision pages springer lin shen liu luan sun neural relation extraction selective attention instances acl mintz bills snow jurafsky distant supervision relation extraction without labeled data proceedings joint conference annual meeting acl international joint conference natural language processing afnlp volume pages association computational linguistics pennington socher manning glove global vectors word representation empirical methods natural language processing emnlp pages url http ren voss abdelzaher han cotype joint extraction typed entities relations knowledge bases www riedel yao mccallum modeling relations mentions without labeled text machine learning knowledge discovery databases pages socher pennington huang manning recursive autoencoders predicting sentiment distributions proceedings conference empirical methods natural language processing pages association computational linguistics speer havasi representing general relational knowledge conceptnet lrec pages surdeanu tibshirani nallapati manning learning relation extraction proceedings joint conference empirical methods natural language processing computational natural language learning pages association computational linguistics mou chen peng jin classifying relations via long short term memory networks along shortest dependency paths emnlp pages jia mou chen jin improved relation classification deep recurrent neural networks data augmentation coling jia mou chen jin improved relation classification deep recurrent neural networks data augmentation arxiv preprint yatskar ordonez farhadi stating obvious extracting visual common sense knowledge proceedings pages zaremba sutskever vinyals recurrent neural network regularization arxiv preprint zeng liu lai zhou zhao relation classification via convolutional deep neural network coling pages zhou zhang zhang exploring various knowledge relation extraction acl zhu fathi reasoning object affordances knowledge base representation european conference computer vision pages springer
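To close out this paper's record: the two-branch classifier its method section describes (an LSTM over the normalized token sequence plus position embeddings, a dense layer over the two objects' pretrained GloVe vectors, the two outputs concatenated into a sigmoid prediction) is easy to mis-read from the stripped text, so here is one hedged PyTorch reading of it. All layer sizes, the dropout rate, and the class name are illustrative assumptions, not values taken from the paper.

```python
# Sketch of the two-branch LocatedNear classifier; dimensions are guesses.
import torch
import torch.nn as nn

class LocatedNearClassifier(nn.Module):
    def __init__(self, vocab_size, n_positions, d_tok=64, d_pos=16,
                 d_hidden=128, d_glove=300):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_tok)
        self.pos_emb = nn.Embedding(n_positions, d_pos)
        self.lstm = nn.LSTM(d_tok + d_pos, d_hidden, batch_first=True)
        self.obj_dense = nn.Linear(2 * d_glove, d_hidden)
        self.out = nn.Linear(2 * d_hidden, 1)
        self.dropout = nn.Dropout(0.5)   # dropout on embeddings, per the text

    def forward(self, tok_ids, pos_ids, obj_vecs):
        # tok_ids, pos_ids: (batch, seq_len); obj_vecs: (batch, 2*d_glove),
        # i.e. the two objects' GloVe vectors concatenated.
        x = torch.cat([self.tok_emb(tok_ids), self.pos_emb(pos_ids)], dim=-1)
        x = self.dropout(x)
        _, (h, _) = self.lstm(x)              # h: (1, batch, d_hidden)
        seq_feat = h[-1]                      # sequence branch
        obj_feat = torch.relu(self.obj_dense(obj_vecs))  # similarity branch
        logits = self.out(torch.cat([seq_feat, obj_feat], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)
```

Trained with binary cross-entropy and RMSProp as the text specifies, this is the component the extraction pipeline calls once per candidate instance.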
| 2 |
garch process driven semi-Lévy process mohammadi march abstract paper study simple semi-Lévy driven generalized autoregressive conditionally heteroscedastic process statistical properties process characterized process potential approximate Lévy driven cogarch processes show state representation process described random recurrence equation periodic random coefficients almost sure absolute convergence state process proved periodically stationary solution state process shown cause volatility periodically stationary suitable conditions also shown increments constant length process periodically correlated process finally apply test investigate behavior increments constant length simulated samples proposed process keywords garch process semi-Lévy process periodically correlated periodically stationary introduction many financial data indices heteroscedastic structure examples kind stocks returns network traffic natural data see popular model data autoregressive conditionally heteroscedastic arch model proposed engle generalized arch garch bollerslev garch type processes become popular tools model heteroscedasticity discrete time [affiliations: faculty mathematics computer science amirkabir university technology hafez avenue tehran iran, mohammadi rezakhah; department mathematics computer science allameh tabatabai university tehran iran, modarresi] practice various reasons data many time series irregularly spaced created demand models first time kluppelberg introduced version garch cogarch process preserves essential features garch processes replaced noise garch process increments Lévy process volatility process satisfies stochastic differential equation proved stationarity property also second order properties regularity conditions corresponding process brockwell generalized Lévy driven cogarch process driven cogarch process volatility arma carma process showed state representation volatility expressed stochastic recurrence equation random coefficients periodic behavior common many time series power market prices car accident claims insurance company sales seasonal interest term periodically correlated introduced gladyshev property introduced bennett called cyclostationary properties processes studies hurd miamee bibi lescheb studied class bilinear processes periodic coefficients periodic arma periodic garch models Lévy processes introduced stationary independent increments right continuous paths left limits processes potential applied financial data following stochastic volatility structure generalization Lévy process semi-Lévy process periodically stationary increments studied maejima sato considered process underlying process carma cogarch processes applied evident underlying process increments observations processes significant dependency ones previous periods semi-Lévy process prominent processes cases paper introduce cogarch process driven simple semi-Lévy process call process ss-cogarch simple semi-Lévy process defined compound poisson process periodic intensity period process enables provide statistical properties process moreover find random recurrence equation periodic random coefficients state representation process regularity condition show absolute convergence state equation also show volatility process strictly periodically stationary increments process constant length integer process period process potential provide approximation every Lévy driven cogarch process finally investigate theoretical results concerning structure increment process simulation show increments process length period support squared coherence statistics consists lines parallel main diagonal spacing paper organized follows
section introduce simple driven cogarch processes present simple process obtain characteristic function section devoted sufficient conditions make volatility process strictly periodically stationary obtain mean covariance function state process volatility process section also investigate second order properties squared increments cogarch process section section illustrate results simulations proofs contained section simple driven cogarch processes section study preliminaries additive processes characteristic functions process subsection also describe structure simple process characteristics subsection introduce simple driven cogarch process subsection preliminaries let filtered probability space smallest rightcontinuous filtration contains sets process defined probability space called additive process stochastically continuous independent increments sample paths left limits stationary increments process characteristic function additive process following representation theorems theorem let additive process infinitely divisible distribution law uniquely determined spot characteristic triplet inner product euclidean vector norm spot measure satisfies integrability condition min remark spot characteristic triplet defined triplet called local characteristic triplet satisfy following conditions deterministic function finite variation symmetric continuous matrix valued function verifies family measures verifies min extension process present definition processes definition subclass additive processes called process period denotes equality distributions structure simple process describing structure simple process define general structure intensities function poisson process periodically stationary increments also characterize pure jump process representation characteristic function introduce corresponding measure definition poisson process periodically stationary increment process poisson process periodically stationary increment intensity periodic function period definition simple compound poisson process let partition positive real line also assume integer let poisson process periodically stationary increments period intensity function defined simple compound poisson process defined znj arrival time nth jump independent distribution also deterministic drift function period say one easily verify independent increment find characteristic function simple compound poisson process following lemma lemma let poisson process periodically stationary increment mean defined process defined following characteristic function eiwst eiwz iwz proof see appendix remark remark lemma spot characteristic triplet process local characteristic triplet following form dds follows definition remark family measures verify implies decomposition quadratic variation process corollary lemma stochastic process defined process period proof see appendix structure simple driven cogarch process let simple process period defined process parameters simple driven cogarch process defined dgt dst equivalently dsu volatility process defined state process unique solution stochastic differential equation dyt denotes differentiation respect initial value independent driving process periodic stationarity conditions section provide conditions prove volatility process defined strictly periodically stationary period result main theorem prove increments constant length process periodically correlated process mian aim paper also give sufficient necessary condition determine volatility following theorem norm defined kckr kcckr kckr sup theorem let 
state process process parameters defined suppose simple process defined family random random vector addition independent identically distributed let eigenvalues invertible matrix strictly negative real parts also suppose exists one log matrix diagonal measure defined converges distribution finite random vector fixed goes infinity distribution vector unique solution random equation independent let conditions hold strictly periodically stationary period hands borel sets borel sets ysn ysn vsn vsn proof see appendix following remark describe lyapunov exponent leads absolutely convergence state process theorem remark proof theorem based use general theory multivariate random recurrence equations discussed bougerol picard brandt vervaat one dimensional case state vector defined satisfies multivariate random recurrence equation condition provides stability model based existence vector norm satisfy conditions log log log max equivalent assertion lyapunov exponent strictly negative almost surely lim sup conditions theorem imply natural matrix norm matrix corresponds following natural vector norm matrix diagonal corollary strictly periodically stationary process period increments constant length process make process words cov cov dss proof see appendix theorem let state process process parameters suppose real constant following two conditions hold ebt ebt probability one conversely either fails holds fails exists simple process proof volatility process similar proof theorem process characterization state process aim section study expected value covariance function state process volatility process first prove sufficient conditions expected value covariance exist presenting first second moments random vector find expected value covariance function state process furthermore closed form square increments cogarch process characterized lemma let assumptios theorem hold cov cov simple process proof see appendix remark theorem find solution following random equation solution following equation vec vec kronecker product two matrices matrix vec column vector constructed stacking columns matrix vector following lemmas establish mean covariance function state process lemma suppose state process conditions theorem lemma hold exists cov proof see appendix corollary let volatility process expected value covariance function following forms cov cov proof see appendix financial time series returns negligible correlation squared returns significantly correlated therefore investigate behavior properties increments cogarch process assume volatility process strictly periodically stationary present first second orders increment process defined corollary proposition let zero mean simple driven cogarch process cov exist moreover exist cov cov cov cov proof see appendix remark assume cov cov cov cov dsb dss simulation section simulate simple process defined process compound poisson process arrival rate defined verify theoretical results concerning structure increments sscogarch process defined simulation simulate state process defined jump time points time points using random recurrence equation evaluate discretized version volatility process defined corresponding process finally verify structure increments process following method simple process defined underlying poisson process consider time first jump time intervals nth jumps arrival times therefore ftsn defined arrival times generated following algorithm generate independent identically distributed iid sequence uniform first arrival time distribution therefore denotes 
uniform generating considered generated sample first arrival time evaluated arrival time distribution therefore generating generated sample nth arravial time thus applying iid sample evaluate successively nth arrival time details see periodic intensity function one evaluate available software consider periodic drift function successive jump size generate independently distribution corresponding arrival time belongs evaluate simple process znj consider following steps simulation process defined consider integer choose real parameters eigenvalues matrix defined strictly negative real parts conditions satisfied evaluated arrival times algorithm generate state process following recurrence equation assuming initial value recurrence equation obtained replacing jump size simulated predefined distributions simple process jump therefore follows dyt byt dyt follows hence version process using process one jump time follows dsu dsu dsu evaluated values process generate process corresponding process finally using values provided previous step evaluate sampled processes vih gih followings suppose since simple process jump follows vih note follows step vih using process jump follows gih dsu dsu dsu hence gih test structure increments process detect structure process hurd miamee dudek showed proposed spectral coherence used test whether process method based fact support spectral coherence process period contained subset parallel lines squared coherence statistic series computed follows discrete fourier transform statistic satisfies null hypothesis complex gaussian uncorrelated real imaginary parts squared coherence statistic probability density type error squared coherence determined elog values statistic computed pair plotting values statistic exceed significant values statistic lie along parallel equally spaced diagonal lines graph significant values indicates presence subset parallel lines ensure periodic structure series consequence periodic mean recommended remove periodic mean series first example let simple process rate function furthermore length successive partitions period intervals moreover distribution jumps size subintervals assumed denotes normal distribution mean variance example consider process parameters thus matrix conditions satisfy process simulate duration period intervals parameters specified using step sample process equally space partition distance one get discretized samples period intervals follow verify increments sampled process process figure top increments simulated process size bottom left sample autocorrelation plot bottom right significant values sample spectral coherence figure graph increments sampled process size top sample autocorrelation graph process bottom left presented bottom right graph shows sample coherent statistics values specified collection pairs exceed threshold corresponding parallel lines sample spectral coherence confirm increments sampled process also graph significant verifies first peak shows second order periodic structure period table different values values different values sample coherence statistics test increments sampled process period presented table corresponding threshold shows test significant corresponding parallel lines figure appendix proof lemma exist thus using definition definition fact independence increments iwst eiwdt eiw eiwdt eiw iwdt since znr independent distribution follows definition conditional expected value eiw eiwzn iwz iwz therefore iwst iwdt iwz iwz proof corollary sufficient prove exist thus eiw similar method proof 
lemma iwz eiw eiw since follows method used computation characteristic function iwz definition definition partition thus proof theorem proof let simple process defined ith jump size furthermore denote time first jump occurs time intervals jumps follows satisfies order prove sequence independently identically distributed let define therefore function random vector using fact increments poisson process independent density function random vector computed follows lim follows give conditional density since increment process poisson process mean follows definition conditional density independence sequence clear since constructed segment process iterating obtain follows therefore iterating obtain since follows immediately independent identically distributed note infinite series partial sums thus using general theory random recurrence equations see bougerol picard brandt vervaat condition prove almost sure absolute convergence series let diagonal using condition show log log log log follows log log hence strong law large numbers yield lim sup lim sup cauchy root criterion follows series almost sure absolute convergence since state process cadlag paths follows almost surely finite therefore follows converges distribution fixed satisfies unique solution clear general theory random recurrence equations suffices show ysn ysn using recursion equation analysis used obtain relation give proof general case similar therefore random vector function similar argument also shows random vector function using assumption follows proof corollary since process dss independent follows corollary order prove covariance function periodic suffices show let denote conditional expectation respect since increments interval independent increment process measurable dsu dss since dsu function vector distribution follows cov cov proof lemma let state process semi levy driven cogarch process exp log exp log follows exp log exp log define cadlag process log negative simple pure jump semi levy procress follows definition remark exp log exp using similar analysis used proof proposition follows follows thus imply cov respectively proof theorem seen implies sequence converges distribution finite random vector vector unique solution random equation independent follows thus imply cov respectively proof lemma using independence obtain last equality follows assumption section theorem computing cov sufficient obtain therefore followed recursion equations used proof theorem relation follows independence sequence also independence sequence proof corollary since fixed almost surely expected value covariance function volatility process proof proposition imitate proof theorem brockwell chadraa lindner since martingale zero mean follows ito isometry square integrable martingales integrators hence follows follows partial integration dgs dss similar analysis used compensation formula remark relation follows proof since increments interval independent expectation follows dss thus follows compensation formula therefore cov remark calculate cov partial integration get cov cov calculate first term let dss know therefore cov dss partial integration substituting byt follows dis dss dms locally integrable martingale mean zero result assumption thus using fact dss almost surely fixed equality holds vector hence calculate second term covariance follows dsb cov cov cov cov cov dsb references bennett statistics regenerative digital transmission bell system technical journal bibi lescheb general periodic bilinear processes economics letters bollerslev 
generalized autoregressive conditional heteroskedasticity journal econometrics bollerslev patton wang daily house price indices construction modeling longerrun predictions journal applied econometrics bougerol picard stationarity garch processes nonnegative time series journal econometrics brandt stochastic equation stationary coefficients advances applied probability brockwell continuous time arma processes handbook financial time series brockwell chadraa lindner garch processes ann appl brockwell davis time series theory methods edition springer new york cinlar introduction stochastic processes prentice hall englewood cliffs new jersey cont tankov financial modelling jump processes chapman financial mathematics series dudek hurd wojtowicz parma models applications applied condition monitoring vol cyclostationarity theory springer switzerland engle autoregressive conditional heteroscedasticity estimates variance united kingdom inflation econometrica gladyshev periodically correlated random sequences soviet math hurd miamee periodically correlated random sequences spectral theory practice new york wiley jeon taylor density forecasting wave energy using models kernel density estimation international journal forecasting kluppelberg lindner maller continuous time garch process driven levy process stationarity second order behaviour krithikaivasan zeng deka medhi based traffic forecasting dynamic bandwidth provisioning periodically measured nonstationary traffic transactions networking maejima sato processes journal theoretical probability roger williams diffusions markov processes martingales volume ito calculus cambridge university press cambridge sato levy processes infinitely divisible distributions cambridge university press cambridge vervaat stochastic difference equation representation infinitely divisible random variables advances applied probability
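The defining equations of the SS-COGARCH process in the sections above are garbled in this extraction. Based on the Lévy-driven COGARCH(p,q) form the paper repeatedly cites (Klüppelberg et al.; Brockwell, Chadraa and Lindner), a plausible reconstruction of the model, with the driving Lévy noise replaced by the simple semi-Lévy process S, is

```latex
% Hedged reconstruction of the SS-COGARCH state-space form; the symbols
% follow Brockwell-Chadraa-Lindner, not necessarily the authors' notation.
\begin{aligned}
dG_t &= \sigma_t \, dS_t, \\
\sigma_t^2 &= \alpha_0 + \mathbf{a}^{\top} Y_{t-}, \\
dY_t &= B \, Y_{t-} \, dt + \mathbf{e}\,\sigma_t^2 \, d[S,S]_t^{(d)},
\end{aligned}
\qquad
Y_n = A_n Y_{n-1} + b_n \ \text{at the jump times},
```

and the simulation recipe of the experiments section (arrival times by inverting the integrated periodic intensity, jump sizes from the interval-specific law, the state advanced by the matrix exponential between jumps) can be sketched as follows. The intensity, jump law, and parameter values are placeholders, and the periodic drift term of the simple process is omitted for brevity.

```python
# Sketch of SS-COGARCH simulation under the reconstructed dynamics above.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def Lambda_inv(u, lam=lambda s: 1.5 + np.sin(2 * np.pi * s)):
    """Invert the integrated intensity by bisection (periodic lam > 0):
    returns t with integral_0^t lam(s) ds = u."""
    def integral(t):
        grid = np.linspace(0.0, t, 2000)
        return np.trapz(lam(grid), grid)
    lo, hi = 0.0, 1.0
    while integral(hi) < u:
        hi *= 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if integral(mid) < u else (lo, mid)
    return 0.5 * (lo + hi)

def simulate(T=10.0, a0=0.04, a=np.array([0.05]),
             B=np.array([[-1.0]]), e=np.array([1.0])):
    t, u, Y, G = 0.0, 0.0, np.zeros(1), 0.0
    times, vols, paths = [0.0], [a0], [0.0]
    while True:
        u += rng.exponential()      # next point of the unit Poisson clock
        tn = Lambda_inv(u)          # time-change to the periodic intensity
        if tn > T:
            break
        Y = expm(B * (tn - t)) @ Y  # drift the state between jumps
        sig2 = max(a0 + a @ Y, 0.0) # volatility just before the jump
        z = rng.normal(0.0, 0.5)    # jump size of the compound Poisson part
        Y = Y + e * sig2 * z**2     # discrete quadratic-variation feedback
        G += np.sqrt(sig2) * z      # increment of the COGARCH path
        t = tn
        times.append(t); vols.append(sig2); paths.append(G)
    return np.array(times), np.array(vols), np.array(paths)
```

Equally spaced increments of the returned path are what the squared-coherence test in the experiments section is applied to.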
| 10 |
adadnns adaptive ensemble deep neural networks scene text recognition chun zejun jianwei oct chunchao hongfa lei department computer science technology university science technology beijing beijing china teg tencent ltd shenzhen china corresponding author xuchengyin abstract recognizing text wild really challenging task complex backgrounds various illuminations diverse distortions even deep neural networks convolutional neural networks recurrent neural networks training procedure scene text recognition outputs deep neural networks different iterations always demonstrated diversity complementarity target object text simple effective deep learning method adaptive ensemble deep neural networks adadnns proposed simply select adaptively combine classifier components different iterations whole learning system furthermore ensemble formulated bayesian framework classifier weighting combination variety experiments several typical acknowledged benchmarks icdar robust reading competition challenge datasets verify surprised improvement baseline dnns effectiveness adadnns compared recent methods scene text widely used visual indicators navigation notification text recognition scene images videos one key factor variety practical applications reading wild doermann yin tian assisting visually impaired people goto tanaka sanketi shen coughlan translation shi fragoso user navigation minetto driving assistance systems chen yang autonomous mobile robots scene text cropped word recognition methods generally grouped word recognition holistic word recognition typical segmentationbased approaches word image small segments combine adjacent segments candidate characters classify using convolutional neural networks cnns gradient classifiers find approximately optimal word recognition result bissacco jaderberg vedaldi zisserman complex backgrounds diverse distortions character segmentation another challenging task thereby holistic word recognition approaches deep neural networks impressive text reading wild copyright rights reserved word spotting direct holistic approach usually calculates similarity measure candidate word image query word jaderberg gordo sequence matching indirect holistic approach recognizes whole word image embedding hidden segmentation strategies example shi constructed training deep neural network sequence recognition scene text recognition shi bai yao however variety grand challenges scene text recognition see samples fig even recent deep neural networks dnns additional characters probably identified text distortions complex backgrounds characters wrongly recognized changing illuminations complex noises characters sometimes missed low resolutions diverse distortions figure challenging examples icdar robust reading competition challenge dataset scene text images incorrectly recognized baseline dnns see related descriptions experiments captions show recognized text left versus ground truth right additional characters wrong characters missing characters target words stochastic gradient descent sgd bottou variants become defacto techniques optimizing dnns sgd always leads local minima even though popularity sgd attributed ability avoid spurious local minima dauphin plenty number million possible local minima dnns kawaguchi local minima flat basins supposed generalize better learning system keskar result although different local minima often similar error rates corresponding neural networks dnns tend make different mistakes diversity complementarity exploited via classifier ensemble huang two major 
ways ensemble deep neural networks one hand different learning systems dnns first trained independently final system trivial ensemble different deep learning architectures via majority voting averaging example high profile competitions imagenet kaggle ensemble techniques huge computation complexity ensemble becomes uneconomical impossible researchers universities even small companies hand one learning system dnns first trained final ensemble selects combines neural network components one system without incurring additional training cost huang proposed ensemble technique called snapshot ensembling specific optimization strategy designed train dnns model snapshots neural network components cycles combined final ensemble learning procedure huang however design specific effective optimization algorithms dnns also challenge paper propose new adaptive ensemble deep neural networks adadnns simplest way given trained neural networks iterations learned dnns system subset neural network components simply selected adaptively combined perform final predictions ensemble formally formulated bayesian framework classifier weighting combination argue diversity complementarity dnns sgd adadnns via ensembling diversity improve robust performance final learning system time high accuracy components dnns adadnns via combination accurate neural network components improve precision performance final classification system variety experiments several acknowledged benchmarks icdar robust reading competition challenge datasets shown simple effective adadnns improves largely baseline dnns moreover proposed approach neural network component means resulting dnn iteration whole training procedure dnns system trained conventional optimization algorithms bottou curtis nocedal even specific algorithms snapshot ensembling huang top performance compared latest methods related work recognizing text scene videos attracts interests fields document analysis recognition computer vision machine learning existing methods scene text cropped word recognition grouped word recognition holistic word recognition general word recognition methods integrate character segmentation character recognition language priors using optimization techniques markov models weinman crfs mishra alahari jawahar shi recent years mainstream segmentationbased word recognition techniques usually word image small segments combine adjacent segments candidate characters classify using cnns gradient classifiers find approximately optimal word recognition result using beam search bissacco hidden markov models alsharif pineau dynamic programming jaderberg vedaldi zisserman word spotting manmatha han riseman direct holistic word recognition approach identify specific words scene images without character segmentation given lexicon words wang belongie word spotting methods usually calculate similarity measure candidate word image query word impressively recent methods design proper cnn architecture train cnns directly holistic word images jaderberg jaderberg use label embedding techniques enrich relations word images text strings almazan gordo sequence matching indirect holistic word recognition approach recognizes whole word image embedding hidden segmentation strategies shi constructed train deep neural network sequence recognition scene text recognition convolutional recurrent neural networks framework crnn designed utilized shi bai yao paper similar crnn architecture used adadnns recognizing scene text sequently holistically classifier ensemble mainly divided two 
categories first one aims learning multiple classifiers feature level multiple classifiers trained combined learning process boosting freund schapire bagging breiman rotation forest rodriguez kuncheva alonso second tries combine classifiers output level results multiple available classifiers combined solve targeted problem multiple classifier systems classifier combination zhou yin adadnns paper follows second one namely given multiple classifiers neural network components sequently learned dnns adadnns constructed combining intelligently component classifiers within formulation framework adaptive ensemble deep neural networks known sgd batch optimization lead different local minima dnns neural network components always diversity complementarity conventionally tens thousands iterations also neural network components learning system dnns considering acceptable computation complexity testing procedure one thing quickly select small subset neural network components different training iterations time considering high accuracy requirement another thing adaptively combine subset neural network components construct final classification system following unified framework adadnns first formulated next detail procedure adadnns described two key issues optimizing first one calculation mentioned distribution describing correlation decision thus derived distance assumed computed functions returns otherwise scene text recognition task one hand given dictionary calculated dict dict unified framework formulate ensemble decision individual classifier decisions combine majority voting sums votes class selects class receives votes majority voting popular combination rule major limitation majority voting decision model taken account without considering distribution decisions particular possible models hypothesis space could exploited considering individual decisions correlations hypotheses use framework combine classifiers given sample set independent classifiers probability label estimated bayesian model hand correlation assumed function cost levenshtein distance cld traditional levenshtein distance cost two different characters always however spelling correction cost two characters similar shape tends smaller distance paper statistics frequencies different character pairs location label hypothesis validation set bootstrapped training set experiments calculate cost two different characters cost note given dictionary competitive relationship thus calculated distribution describing correlation decision denotes posterior probability model posterior computed prior probability classifier isp model likelihood training set assumed constant therefore assigns optimal label according following decision rule argmaxy argmaxy phi argmaxy argmaxy argmaxy phi argmaxy function multiplying scaling factor different range dict dict function cld heuristic approach values empirically assigned multiple integral points values points calculated piecewise linear interpolation example shown fig general small range fig obtained weights convenient linear combination classifiers second issue generating voting candidates probable labels hypotheses obviously ground truth always appear decisions made necessary find effective way generate good candidates decisions find probable label existed initial label hypothesis generally speaking good candidate means small edit distance hypotheses following idea propose algorithm semantically generate voting candidates see algorithm cld experiments word dictionary jaderberg used given dictionary batch 
orderings converge different solutions snapshots often similar error rates make different mistakes diversity exploited ensembling multiple snapnots average sampling combined majority voting focusing scene text recognition crnn model shi bai yao used generate base classifiers neural network components text recognizer crnn uses ctc graves output layer estimates sequence probability conditioned input image input image represents character sequence figure example describing relationship cost levenshtein distance algorithm generating voting candidates input base classifier set initial decisions made measurement function pairwise distance upper bound distance candidate hypothesis output voting candidate set parameter subset procedure maxh end end algorithm searching process implicit computational way experiments special simple case algorithm used voting candidates generation process initialized upper bound set inf assumed constant adadnns algorithm within framework procedure adadnns scene text recognition includes three major steps base classifiers generation classifier combination ensemble pruning base classifiers generation ensembles work best base models high accuracy overlap set examples misclassify deep neural networks dnns naturally used base classifier generator ensembles one hand dnns dramatically improved many domains speech recognition visual object recognition object detection composed multiple processing layers learn representations data multiple levels abstraction hand training phase one individual deep neural network two snapshots different classifier combination adadnns core adadnns calculate calculation function distance represented set values multiple integral points values assigned highest recognition rate validation set detail procedure adadnns ensemble shown algorithm algorithm adadnns classifier combination input base classifier set dict given dictionary function distance parameter voting candidates set generated algorithm output label prediction procedure initialize dict calculate end calculate adadnns pruning classifier ensemble pruning generally improve ensemble performance use genetic algorithm pruning ensemble meta heuristic inspired process natural selection belongs larger class evolutionary algorithms gas commonly used generate solutions optimization search problems relying bioinspired operators mutation crossover selection adadnns pruning firstly population binary weight vectors randomly generated means classifier remained secondly population iteratively evolve fitness vector measured validation set stands recognition rate finally ensemble correspondingly pruned evolved best weight vector experiments evaluate effectiveness proposed adadnns method variety experiments text cropped word recognition conducted acknowledged benchmark datasets first focused challenging task incidental scene text recognition icdar robust reading competition challenge trained adadnns learning system synthetic dataset jaderberg training set challenge performed comparative experiments also conducted experiments learned adadnns text recognition tasks focused scene text recognition borndigital text recognition icdar robust reading competition challenge checked generalization adadnns baseline dnns model crnn one shi bai yao official metrics icdar robust reading competition shahab shafait dengel karatzas karatzas used figure challenging samples scene text correctly recognized upper adadnns gems mgennisgal rgao railroad united kappa xmas zoom youtube york walk wprd year wisconsin experiments 
incidental scene text recognition icdar robust reading competition challenge database karatzas widely used highly competitive benchmark database scene text recognition within complex situations recent years public dataset includes training set images test set annotated text regions cropped words complex backgrounds various illuminations diverse distortions incidental scene text recognition topic challenging task experiments variety methods conducted compared baseline dnns adadnns adadnns pruning winning participation method official competition marked bold words latest top submissions robust reading competition rrc website marked italic words table comparative results icdar challenge dataset incidental scene text recognition comparative results rrc website date method baidu idl hik ocr maps baseline dnns adadnns adadnns pruning upper upper seen table proposed adadnns much better baseline dnns example measure upper adadnns surprised improvement say adaptive ensemble dnns simple effective strategy largely improved performance original baseline dnns moreover compared latest top submissions baidu idl hik ocr method adadnns pruning best performance upper also perform experiments dataset veit similar challenging largescale incidental scene text dataset images dataset coco dataset contain text images text regions robust reading challenge holding released http icdar comparative results adadnns adadnns pruning baseline dnns validation set respectively scene text recognition samples cocotext shown fig experiments focused scene text recognition text recognition order investigate generalization adadnns directly use trained adadnns system icdar challenge perform experiments icdar challenge cropped word recognition dataset challenge dataset contains ground truths cropped word images experiments variety methods conducted compared baseline dnns adadnns adadnns pruning winning participation method official competition marked bold words top three results published papers latest top submissions rrc website marked italic words table comparative results icdar challenge dataset focused scene text recognition comparative results without publications rrc website date method tencentailab tencent youtu hik ocr cnn jaderberg rare shi crnn shi bai yao photoocr baseline dnns adadnns adadnns pruning upper upper similarly adadnns much better baseline dnns measure upper increases surprisedly trained another task challenge adadnns adadnns pruning competitive performance new dataset challenge dataset compared recent published methods crnn shi bai yao even latest submission results apart experiments text recognition scene images icdar robust reading competition challenge also directly perform learned adadnns images track challenge though images scene images similar challenging issues text recognition complex backgrounds low resolution various colors also compare adadnns adadnns pruning baseline dnns winning participation method official competition marked bold words latest top submissions rrc website marked italic words similar conclusions drawn firstly adadnns improves largely compared baseline dnns upper secondly adadnns comparative performance latest submission results dahua ocr table comparative results icdar challenge dataset text recognition comparative results rrc website date method tecent youtu tecentailab dahua ocr photoocr baseline dnns adadnns adadnns pruning upper upper fully believe adadnns adadnns pruning performs icdar challenge challenge datasets performance correspondingly improved obtain impressive 
results compared latest submission systems also near issue future work conclusion discussion variety dnns based methods proposed still investigated literature scene text recognition grand challenges complex backgrounds various illuminations diverse distortions order fully take advantage complementary diversity high accuracy neural network components dnns adaptive ensemble deep neural networks adadnns proposed simply select adaptively combine neural networks whole training procedure comparative experiments scene text cropped word recognition showed adadnns achieves remarkable increase final performance compared baseline dnns note dnns methods dramatically improved object detection object recognition speech recognition many domains consequently near future issue evaluate efficacy adadnns dnns object recognition speech recognition example experiments object detection recognition adadnns snapshot ensembling huang resnet densenet huang liu weinberger performed compared next step references almazan almazan gordo fornes valveny word spotting recognition embedded attributes ieee trans pattern analysis machine intelligence alsharif pineau alsharif pineau text recognition hybrid hmm maxout models proceedings international conference learning representations iclr bissacco bissacco cummins netzer neven photoocr reading text uncontrolled conditions proceedings international conference computer vision iccv bottou curtis nocedal bottou curtis nocedal optimization methods machine learning corr bottou bottou machine learning stochastic gradient descent proceedings international conference computational statistics compstat breiman breiman bagging predictors machine learning dauphin dauphin pascanu cho ganguli bengio identifying attacking saddle point problem optimization advances neural information processing systems annual conference neural information processing systems nips fragoso fragoso gauglitz zamora kleban turk translatar mobile augmented reality translator proceedings ieee workshop applications computer vision wacv freund schapire freund schapire generalization learning application boosting journal computer system sciences gordo gordo supervised features word image representation proceedings ieee international conference computer vision pattern recognition cvpr goto tanaka goto tanaka wearable camera system blind proceedings international conference document analysis recognition icdar graves graves gomez schmidhuber connectionist temporal classification labelling unsegmented sequence data recurrent neural networks machine learning proceedings international conference icml pittsburgh pennsylvania usa june zhang ren sun deep residual learning image recognition proceedings ieee conference computer vision pattern recognition cvpr huang huang pleiss liu hopcroft weinberger snapshot ensembles train get free proceedings international conference learning representations iclr huang liu weinberger huang liu weinberger densely connected convolutional networks proceedings ieee conference computer vision pattern recognition cvpr jaderberg jaderberg simonyan vedaldi zisserman synthetic data artificial neural networks natural scene text recognition corr jaderberg jaderberg simonyan vedaldi zisserman reading text wild convolutional neural networks international journal computer vision jaderberg vedaldi zisserman jaderberg vedaldi zisserman deep features text spotting proceedings european conference computer vision eccv karatzas karatzas shafait uchida iwamura bigorda mestre mas mota las heras icdar robust reading 
competition proceedings international conference document analysis recognition icdar karatzas karatzas nicolaou ghosh bagdanov iwamura matas neumann chandrasekhar shafait uchida valveny icdar competition robust reading proceedings international conference document analysis recognition icdar kawaguchi kawaguchi deep learning without poor local minima advances neural information processing systems annual conference neural information processing systems nips keskar keskar mudigere nocedal smelyanskiy tang largebatch training deep learning generalization gap sharp minima proceedings international conference learning representations iclr michaud valin proulx textual message read mobile robot proceedings international conference intelligent robots systems iros volume manmatha han riseman manmatha han riseman word spotting new approach indexing handwriting proceedings conference computer vision pattern recognition cvpr minetto minetto thome cord leite stolfi snoopertrack text detection tracking outdoor videos proceedings ieee international conference image processing icip mishra alahari jawahar mishra alahari jawahar cues scene text recognition proceedings ieee conference computer vision pattern recognition cvpr rodriguez kuncheva alonso rodriguez kuncheva alonso rotation forest new classifier ensemble method ieee trans pattern analysis machine intelligence sanketi shen coughlan sanketi shen coughlan localizing blurry text natural images proceedings ieee workshop applications computer vision wacv shahab shafait dengel shahab shafait dengel icdar robust reading competition challenge reading text scene images proceedings international conference document analysis recognition icdar shi shi wearable translation robot proceedings ieee international conference robotics automation icra shi bai yao shi bai yao trainable neural network sequence recognition application scene text recognition ieee trans pattern analysis machine intelligence published online shi shi wang xiao zhang gao zhang scene text recognition using partbased character detection proceedings ieee conference computer vision pattern recognition cvpr shi shi wang lyu yao bai robust scene text recognition automatic rectification proceedings ieee conference computer vision pattern recognition cvpr tian tian yin hao unified framework tracking based text detection recognition web videos ieee trans pattern analysis machine intelligence published online veit veit matera neumann matas belongie dataset benchmark text detection recognition natural images corr wang belongie wang belongie word spotting wild proceedings european conference computer vision eccv weinman weinman butler knoll feild toward integrated scene text reading ieee trans pattern analysis machine intelligence chen yang chen yang detection text road signs video ieee trans intelligent transportation systems doermann doermann text detection recognition imagery survey ieee trans pattern analysis machine intelligence yin yin huang yang hao convex ensemble learning sparsity diversity information fusion yin yin zuo tian liu text detection tracking recognition video comprehensive survey ieee trans image processing zhou zhou ensemble methods foundations algorithms boca raton chamman
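The AdaDNNs combination rule above reaches us only as a garbled run of argmax terms, so the sketch below encodes one plausible reading of it: each snapshot classifier contributes a word hypothesis, and a voting candidate is scored by summing a piecewise-linear function phi of its cost-weighted Levenshtein distance to every hypothesis. The anchor values of phi (tuned on a validation set in the paper) and the character-confusion costs here are illustrative toys, not the learned values.

```python
# Hedged sketch of AdaDNNs-style weighted voting over word hypotheses.
import numpy as np

ANCHORS_X = np.array([0.0, 1.0, 2.0, 4.0])   # distances at the anchor points
ANCHORS_Y = np.array([1.0, 0.6, 0.2, 0.0])   # weights; learned on validation data

def phi(d):
    """Distance -> vote weight, by piecewise linear interpolation."""
    return float(np.interp(d, ANCHORS_X, ANCHORS_Y))

def cld(a, b,
        sub_cost=lambda x, y: 0.5 if (x, y) in {("o", "0"), ("l", "1")} else 1.0):
    """Cost-weighted Levenshtein distance; visually similar characters
    (two toy pairs here) substitute more cheaply than arbitrary ones."""
    m, n = len(a), len(b)
    D = np.zeros((m + 1, n + 1))
    D[:, 0] = np.arange(m + 1)
    D[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if a[i-1] == b[j-1] else sub_cost(a[i-1], b[j-1])
            D[i, j] = min(D[i-1, j] + 1, D[i, j-1] + 1, D[i-1, j-1] + sub)
    return D[m, n]

def ensemble_decision(hypotheses, candidates):
    """hypotheses: words predicted by the component DNN snapshots;
    candidates: voting candidates, e.g. the hypotheses plus dictionary
    words within a small edit distance (the candidate-generation step)."""
    return max(candidates,
               key=lambda y: sum(phi(cld(y, h)) for h in hypotheses))

print(ensemble_decision(["house", "hause", "horse"],
                        ["house", "horse", "mouse"]))   # -> "house"
```

Pruning then simply masks out some hypotheses before this vote, which is why a genetic search over binary weight vectors composes cleanly with the rule.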
| 1 |
microbial community pattern detection human body habitats via ensemble clustering framework peng xiaoquan kang asia pacific bioinformatics network apbionet thirteenth international conference bioinformatics sydney australia july august abstract background human habitat host microbial species evolve function continue evolve elucidating microbial communities respond human habitats fundamental critical task establishing baselines human microbiome essential understanding role human disease health recent studies healthy human microbiome focus particular body habitats assuming microbiome develop similar structural patterns perform similar ecosystem function environmental conditions however current studies usually overlook complex interconnected landscape human microbiome limit ability particular body habitats learning models specific criterion therefore methods could capture underlying microbial patterns effectively results obtain comprehensive view propose novel ensemble clustering framework mine structure microbial community pattern metagenomic data particularly first build microbial similarity network via integrating metagenomic samples three body habitats healthy adults novel symmetric nonnegative matrix factorization nmf based ensemble model proposed applied onto network detect clustering pattern extensive experiments conducted evaluate effectiveness model deriving microbial community respect body habitat host gender clustering results observed body habitat exhibits strong bound microbial structural pattern meanwhile human microbiome reveals different degree structural variations body habitat host gender conclusions summary ensemble clustering framework could efficiently explore integrated clustering results accurately identify microbial communities provide comprehensive view set microbial communities clustering results indicate structure human microbiome varied systematically across body habitats host genders trends depict integrated biography microbial communities offer new insight towards uncovering pathogenic model human microbiome background metagenomic background human body content complex microbial communities living inside microbiome occupies body habitats endows ecosystem functions nutrition pathogen resistance [correspondence: ningkang, computational biology group single cell center shandong key laboratory energy genetics cas key laboratory biofuels qingdao institute bioenergy bioprocess technology chinese academy science qingdao china; full list author information available end article] immune system development help maintain health hence systematically defining normal states human microbiome important step towards understanding role microbiota pathogenesis however majority microbiomes poorly investigated understand principle human microbiome prior research concentrated particular body habitats example turnbaugh investigated gut microbiome obese lean twins address host environmental condition diet influence microbial components grice targeted human skin microbiome characterize topological personal variations within multiple sites bik research indicated distinctness microbial
structure oral cavity tongue however human microbial habitats isolated one another instead reveal community structure correlation across body habitats case ensemble different habitat samples could bring global insights microbiome recent studies aggregated microbial samples different body habitats perform comprehensive study costello surveyed microbiomes gathered body habitats nine adults mitreva carried extensive sampling body habitats individuals order establish global insight human microbiome built microbial similarity network nodes consisted metagenomic samples multiple human body sites edges phylogenetic similarity samples measured terms shared evolutionary history clustering approaches applied similarity network group samples shared similar phylogenetic structures within clusters ones clusters researchers could infer microbial patterns affected body habitat host gender environmental condition time costello proposed hierarchical clustering algorithm microbial community network found personal microbiota relatively stable within habitats time turnbaugh identified two distinct functional modules gut microbiome via principal components analysis pca hierarchical clustering algorithm experimental results disclosed microbiome within clusters carried similar functions mitreva adopted clustering algorithm discovered covariation microbiome different habitats current limitations clustering approach aims group metagenomic samples similar phylogenetic patterns achieved various algorithms differ significantly terms computational principles measures generated clustering results viewed taking different look data shown table however prior studies employ one particular clustering approach clustering outputs tend specific towards criterion proposed approach example clustering algorithm groups samples densely connected similarity network however true microbial page communities limited densely connected structures samples sparsely microbial structure widely exist lake graph clustering mcl clustering explores best partition network algorithms allow overlaps clusters therefore unable discover shared microbe two communities species could adapt conditions like microbial mats biofilms hierarchical clustering algorithm learns hierarchical structure network used hierarchical structure determined local optimization criterion global objective function might lead small clusters part similar samples clustering approach like identifies clusters follow statistical condorcet criteria statistical model microbial community remains rarely known therefore difficult evaluate reliability results advantage proposed ensemble clustering framework ideally clustering algorithm able exploit clustering patterns comprehensive possible however mentioned algorithms capable taking consideration factors different clustering algorithms may produce different partitions network given multiple clustering results need explore information output robust results exploit complementary nature patterns ensemble clustering proposed recently successfully used solve many community detection problems thus use ensemble clustering framework integrate various kinds clusters call base clustering results output comprehensive results study first construct consensus matrix measures similarity samples based samples base clustering results next apply symmetric nonnegative matrix factorization nmf consensus matrix derive clusters symmetric nmf provides lower rank approximation nonnegative matrix could easily related clustering nonnegative data mentioned 
factorization consensus matrix generate clustering assignment matrix could capture cluster structure inherent network unlike prior researches applied single cluster algorithm particular habitat microbiome framework assembled clustering algorithms different human microbiome different body habitats carried experiments demonstrate capability capturing microbial community experimental yang bmc systems biology suppl http page table summary four particular clustering approaches clustering approaches clustering characteristics limitations microbial pattern clusters defined connected dense regions network true microbial community limited densely connected structures sparsely microbial structure still exists graph clusters generated via graph partitioning techniques based clustering partition based algorithms allow overlaps clusters therefore unable discover shared microbe among clusters species could adapt conditions like microbial mats biofilms hierarchical clustering clusters built based agglomerative clustering model shows relations members groups hierarchical structure determined local optimization criterion global objective function might lead small clusters part similar samples distributionbased clustering clusters modelled using statistical distributions statistical models microbial communities still unknown need explored results showed predicted clusters capable revealing spatial gender roles human microbiota eventually elaborated human microbiome biogeography provided new insights disease pathogenesis human microbiome material methods section first briefly introduced experimental data similarity measurements metagenomic samples gpu based fast similarity matrix computing described schema ensemble clustering framework phases structure microbial community experimental data work used metagenomic samples project moving pictures human microbiome build microbial matrix similarity network refer section similarity measurements metagenomic samples details sample metagenomic matrix network illustrated figure similarity matrices datasets shown additional file table performed measure structural similarity metagenomic samples efficiency shown additional file figure metagenomic samples annotated two habitat gut skin oral cavity defined human body habitat samples live gender male female defined gender host samples inhabit combining two sample partition one six male gut male skin male oral cavity female gut female skin female oral cavity table summarized distribution metagenomic samples three body habitats two host genders similarity measurements metagenomic samples scoring function compared two microbial samples structure calculating maximum common component common phylogenetic tree figure example similarity matrix similarity network matrix tile indicates similarity value samples colour gradient red high green low network node represents sample edges represent similarity values matrix yang bmc systems biology suppl http page table microbial samples six human body habitats gut skin oral male female total total considering phylogenetic distance abundance species formula scoring function first evaluated common abundance species leaf node considered smaller abundance value two samples abundance values propagated ancestors iteratively accumulative common abundance values root node reflected overall similarity two metagenomic samples could computed using similarity root defined formula common abundance similarity leaf node common abundance internal node constructed similarity matrix based similarity among 
sample pair figure exploiting architecture gpu formula could invoked parallel using large number threads compute similarity different pairs metagenomic samples compute similarity matrix samples spawned threads gpu similarity value matrix processed independent thread figure overview gpu based similarity matrix computing figure illustrated gpu computing workflow build common phylogenetic tree first loaded initialize abundant specie data file system main memory data reloaded gpu computing threads gpu kernel completed figure step key step values returned back ram populate similarity matrix stored file system ensemble clustering framework subsection proposed novel ensemble clustering framework namely perform microbial community pattern detection framework consisted two stages generation phase consensus matrix constructed based base clustering results identification phase symmetric clustering used detect reliable clusters consensus matrix schema algorithm presented figure terminology computing similarity matrix metagenomic samples used construct microbial similarity network reformatted simple undirected graph defined vertex set containeed vertices edge set vertex represented metagenomic sample weighted edge represented polygenetic structure similarity two samples figure cluster vci eic subnetwork vci eic set edges induced vci microbial community set predicted microbial clusters defined generation phase similarity network ready set base clustering results calculated yang bmc systems biology suppl http page figure schema algorithm applying four clustering algorithms base clustering algorithms similarity network different initializations shown figure base clustering algorithms included algorithm clustering hierarchical clustering clustering present table additional file section consensus matrix introduced measure samples clusters base clustering results wij indicated number base clustering results sample sample assigned cluster divided total number base clustering results therefore matrix took consideration generated clusters reflected similarity pair samples based different clustering criterions higher value wij likely sample sample belonged cluster identification phase consensus matrix constructed applied symmetric clustering algorithm matrix derive clusters flowchart algorithm shown figure main idea algorithm outlined follows symmetric nmf defined equation suitable network clustering based similarity matrix predefined cost function predefined number clusters cluster indicator matrix entry denoted membership sample belonging cluster could easily infer clustering assignment sample row study used divergence cost function could represented dkl wij wij log hht wij hht chose cost function since free noise parameter widely used nmf sample may belong one cluster seldom belonged clusters thus cluster indicator matrix sparse achieve sparsity yang bmc systems biology suppl http page solution regularization integrated neglecting constants adding regularization modified formulation follows min log hht hht controlled sparsity cluster indicator matrix solution ensemble clustering minimization cost function equation constraints formed constrained nonlinear optimization problem similar adopted multiplicative update rule estimate widely accepted useful algorithm solving nonnegative matrix factorization problem multiplicative update rule obtained following update rules iteratively updated according updating rule satisfied stopping criterion let cluster indicator matrix iteration time algorithm stopped whenever 
predefined tolerance parameter set default value tolerance parameter addition maximum iteration time limited iterations stopping criteria unsatisfied order avoid local minimum random initialization repeated algorithm times random initial conditions chose results lowest value cost function cluster indicator matrix microbial clusters similar obtained microbial clusters cluster indicator matrix taking threshold assign sample cluster weight cluster exceeded way samplecluster membership matrix mean sample assigned detected cluster mean final output completing steps obtained refined clusters satisfied following conditions summarized whole algorithm figure results section focused evaluating effectiveness algorithm presenting experimental results first introduced experiment design evaluation metrics experimental settings figure algorithm microbial community pattern detection study conducted experimental comparison base clustering approaches comparison constructed consensus network original metagenomic similarity network finally clustering results investigated human microbial community influenced body habitat host gender evaluation metrics work evaluated effectiveness clustering algorithms observing well detected clusters corresponded sampling information habitats genders six refer subsection terminology details since true number cluster patterns habitat gender unknown literature references clearly mention determine number cluster patterns either body habitat host gender empirically defined reference clusters based six assuming metagenomic samples identical likely similar microbial structures bring metagenomic samples identical one reference cluster typically quality predicted clusters could evaluated following three quantity measures metrics could measure well detected clusters corresponded reference clusters among three measures harmonic mean precision recall aimed assessing well detected clusters matched reference yang bmc systems biology suppl http page ones cluster level precision measured fraction detected clusters matched reference ones recall measured fraction reference clusters matched detected clusters metric took account overlap detected reference clusters focused measuring whether samples within identical habitats grouped together detected clusters value measure varied higher value indicated better match details metrics please refer additional file section parameter setting experiments introduced additional file section evaluation clustering results generated algorithm subsection evaluate performance metaec algorithm presented performance comparison proposed algorithm base clustering approaches comparison constructed consensus matrix original microbial similarity matrix comparison four base clustering approaches evaluate performance ensemble clustering approach accuracy clustering results derived proposed approach compared ones derived base clustering algorithms figure illustrated performance different clustering algorithms terms three metrics respect reference clusters figure could observe approach competitive performance compared base clustering algorithms regard three measures among base clustering algorithms cluster number set better performance terms clustering cluster number set better performance terms hierarchical clustering cluster number set comparable performance clustering cluster number set terms none could superior performance others regard three measures however approach obtained best performance terms three measures may owing fast approach could make use clusters derived 
different base clustering algorithms extract reliable results addition conducted sensitivity study phylogenetic structure similarity microbial network ran algorithm threshold value metagenomic similarity matrix tuning step size results additional file figure showed outperformed clustering techniques wide range edge threshold indicating figure performance comparison ensemble clustering framework base clustering algorithms respect fmeasure note approach random initialization denoted approach base clustering result initial input denoted result obtained algorithm robust insensitive similarity network noisy data coverage addition compared computational time base clustering approaches table results show yang bmc systems biology suppl http page table comparison bases clustering approaches computational time method time hierarchical exactly spend time hierarchical clustering less clustering total time cost metaec sum base clustering algorithms plus seconds rapid development computational capability could improve time efficiency large amount operations comparison constructed consensus network original similarity network demonstrate benefits combining different base clustering results applied symmetric nmf original metagenomic similarity network evaluated performance fair results symmetric nmf original metagenomic similarity network obtained best tuned parameter comparison two tested similarity network present figure regard results figure showed applying symmetric nmf consensus matrix achieved better performance original similarity network results demonstrated benefits combining different base clustering results similarity matrix well constructed element reflected cocluster similarity factorization similarity matrix would generate clustering assignment matrix figure performance comparison bayesian nmf based clustering algorithm applied ensemble clustering similarity network original microbial similarity network additive values three measures present data source random initialization case value set result corresponds also choose base clustering results presents best performance initial input symmetric nmf result corresponds could well capture cluster structure inherent network representation however original network weighted interaction via measuring phylogenetic structure samples way metagenomic samples higher phylogenetic similarity likely involved one cluster actual microbial pattern uncorrelated phylogenetic similarity community detected symmetric nmf may unreliable ensemble clustering framework generated consensus matrix integrated clustering results derived different clustering algorithms element consensus matrix indicated frequency corresponding sample pair clustered together base clustering results thus applying symmetric nmf consensus matrix could take consideration strength multiple clustering patterns output comprehensive robust result interpretation microbial community patterns human body habitats based clustering results recall metagenomic samples clustered terms frequency base clustering results hence final output clusters assembled samples represent unique microbial patterns consensus base clustering approaches next clustering results infer microbial pattern influenced body habitats host genders structural variation across body habitats analyzing enrichment body habitat host gender six predicted clusters results figure revealed stronger coherence body habitat host gender clusters dominated particular body habitats inferred body habitats harboured distinctive microbial patterns also 
observed base clustering results additional file figure although four base clustering algorithms generate clustering patterns different criterions clusters additional file table enriched particular habitats meanwhile observed microbial communities different body habitat exhibited different degree compositional structure variation figure showed microbial structure remained relatively stable oral cavity compared diverse microbial structures harboured skin biologically reasonable detect diverse patterns skin since quite different places skin microbial communities could sampled different extend habitat structural variation also observed base clustering results additional file figure gut oral cavity microbial community patterns fit one clustering criterion gut consistent oral cavity yang bmc systems biology suppl http page figure sample distribution predicted clusters respect body habitat host gender hierarchical clustering contrary gut oral cavity cluster could recognized four clustering criterions experimental settings inferring skin samples many cluster patterns diverse microbial structures note proposed generates comprehensive community patterns respect since result agreement consensus multiple base clustering approaches example compared hierarchical clustering results additional file figure capture cluster ensemble clustering able uncover femalegut specific clusters shown figure indicating could reveal degree structural variation body habitat comprehensively base clustering results structural variation across host gender assessed microbial structure variation respect host gender used measure similarity two metagenomic samples results figure indicated habitats variation significantly less within gender samples opposite gender samples however habitats perform different degree structural variation respect host gender oral cavity microbiome exhibited stable structure among opposite gender individuals phylogenetic structure similarity skin communities unique structural variation patterns regarding host gender gut community structure highly variable samples opposite gender hosts less similarity value opposite gender samples gut cluster exhibited strong coherence gender hosts hand enrichment study figure showed two gut clusters distinct host gender indicating opposite sexual individuals may exhibit distinct microbial composition gut microbial interconnection habitats although microbial communities reflected unique structures distributions body habitats interconnected microbial components among body habitats still observed clustering results example cluster figure contained skin samples shared similar microbial compositions oral cavity communities skin cluster harboured oral cavity samples respectively since skin microbial pattern closely associated external yang bmc systems biology suppl http figure structural variation host gender oral cavity gut clusters environment oral cavity open system microbiome external environment imported breathing eating food drinking water oral cavity skin would respond outside environmental conditions gradually evolve similar microbiomes conclusions discussions human microbiomes microbiomes hosted gut oral mucosa skin etc organisms perform functions useful human host maintain healthy yet detailed factors attribute microbial community structures human body habitats host gender remain poorly conceptualized fully understand roles human microbiome disease health prior studies focus particular body habitats health individuals specific clustering approaches based 
assumption metagenomic samples body habitats would develop similar microbial structure patterns however human habitats isolated interacted correlated form integrated complex system identified structures might unsuccessful due noisy sample similarity specific topological structure within metagenomic network hence single clustering algorithm rarely achieves optimal outcome uncover global comprehensive landscape human microbiome perform ensemble clustering framework page scale metagenomic samples study proposed algorithm four main advantages microbial pattern detection could effectively identify reliable microbial communities via integrating many base clustering results regard modularity microbial communities defined clustering microbial communities modularity according effects related environments treatments consensus clustering network much clearer showing modularity property environments shape microbial communities body habitat critical healthcare diognosis original metagenomic similarity network ensemble framework robust coverage metagenomic similarity network shown additional file figure compared base clustering results additional file figure algorithm could reveal spatial gender patterns microbiome shown figure comprehensively ensemble clustering result general agreement multiple base clustering approaches nevertheless acknowledged performance algorithm depends base clustering results quality original metagenomic similarity network base results generated poor clustering algorithms ensemble outputs would far real microbial community similarity patterns original similarity network unreliable capture modularity metagenomic samples none clustering approaches could work address problem integrate base clustering approaches diverse optimization criterions pattern assumptions reduce bias generated base approaches assume algorithms capture wide variety clustering patterns similarity network alleviate effect unreliable clustering results hand proposed nmf based mode could used association study bioinformatics domain complex method implement convergence could slow shown table rapid development computational capability could improve time efficiency large amount operations nonnegative constraints cluster indicator matrix may insufficient condition achieving sparseness cases one may set appropriate thresholds enforce sparseness summary ensemble clustering framework metagenomic data analysis microbial community pattern detection future nmf based model could exploited offer potential applications bipartite model association disease gene prediction yang bmc systems biology suppl http availability data sets supporting experimental results article available download http page additional material additional file experimental design file show experimental design paper including introductory four base clustering approaches evaluation microbial clusters parameter setting additional file supplementary material file presents several figures tables additional experimental results mentioned paper including efficiency algorithm evaluation four base clustering results sensitivity study phylogenetic structure similarity microbial network competing interests authors declare competing interests authors contributions conceptualized designed method drafted manuscript responsible implementation loy provided raw data participated discussion improved method well revised draft loy read approved manuscript loy hnc acknowledgements work supported part chinese academy sciences grant ministry science technology grant well 
national science foundation china grant declarations publication costs article partially funded chinese academy sciences grant ministry science technology grant well national science foundation china grant institute infocomm research agency science technology research star singapore article published part bmc systems biology volume supplement thirteenth international conference bioinformatics systems biology full contents supplement available online http authors details institute infocomm research agency science technology research star singapore singapore biology group single cell center shandong key laboratory energy genetics cas key laboratory biofuels qingdao institute bioenergy bioprocess technology chinese academy science qingdao china computer vision department mathematics sun university guangzhou china published december references wilson bacteriology humans ecological perspective john wiley sons dethlefsen relman ecological evolutionary perspective mutualism disease nature turnbaugh ley hamady knight gordon human microbiome project exploring microbial part changing world nature lederberg infectious history science eckburg bik bernstein purdom dethlefsen sargent gill nelson relman diversity human intestinal microbial flora science fierer hamady lauber knight influence sex handedness washing diversity hand surface bacteria proceedings national academy sciences aas paster stokes olsen dewhirst defining normal bacterial flora oral cavity journal clinical microbiology nasidze quinque tang stoneking comparative analysis human saliva microbiome diversity barcoded pyrosequencing cloning approaches analytical biochemistry turnbaugh hamady yatsunenko cantarel duncan ley sogin jones roe affourtit core gut microbiome obese lean twins nature grice kong conlan deming davis young nisc comparative sequencing program bouffard blakesley murray topographical temporal diversity human skin microbiome science bik long armitage loomer emerson mongodin nelson gill relman bacterial diversity oral cavity healthy individuals isme journal mitreva structure function diversity healthy human microbiome nature costello lauber hamady fierer gordon knight bacterial community variation human body habitats across space time science lozupone hamady knight online tool comparing microbial community diversity phylogenetic context bmc bioinformatics kent yannarell rusak triplett mcmahon synchrony aquatic microbial community dynamics isme journal zinger coissac choler geremia assessment microbial communities graph partitioning study soil fungi two alpine meadows applied environmental microbiology lloyd least squares quantization pcm ieee transactions information theory szekely rizzo hierarchical clustering via joint distances extending ward minimum variance method journal classification moon algorithm ieee signal processing magazine devarajan nonnegative matrix factorization analytical interpretive tool computational biology plos computational biology zhao simon matrix factorization gene expression profiles bioinformatics zhang liu zhou novel computational framework simultaneous integration multiple types genomic data identify regulatory modules bioinformatics dai zhang protein complex detection via weighted ensemble clustering based bayesian nonnegative matrix factorization plos one lancichinetti fortunato consensus clustering complex networks scientific reports kuang park ding symmetric nonnegative matrix factorization graph clustering sdm caporaso lauber costello gonzalez stombaugh knights gajer ravel fierer moving 
pictures human microbiome genome biol ning efficient search similar microbial communities based novel indexing scheme similarity score metagenomic data bioinformatics kullback letter editor distance american statistician psorakis roberts sheldon soft partitioning networks via bayesian matrix factorization adv neural inf process syst tan automatic relevance determination nonnegative matrix factorization spars processing adaptive sparse structured representations seung lee algorithms matrix factorization advances neural information processing systems greene cagney krogan cunningham ensemble matrix factorization methods clustering interactions bioinformatics manning raghavan introduction information retrieval cambridge university press mcguire colgrove whitney diaz bustillos versalovic ethical legal social considerations conducting human microbiome project genome research dewhirst chen izard paster tanner wade human oral microbiome journal bacteriology yang mei kwoh learning disease gene identification bioinformatics mei kwoh yang zheng interaction prediction learning local information neighbors bioinformatics yang kwoh inferring association via global protein complex network propagation plos one zheng ding mamitsuka zhu collaborative matrix factorization multiple similarities predicting interactions acm sigkdd international conference knowledge discovery data mining mei kwoh yang zheng globalized bipartite local model interaction prediction proceedings international workshop data mining bioinformatics yang chua kwoh ensemble positive unlabeled learning disease gene identification plos one
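For concreteness, the consensus matrix and the regularised symmetric-NMF objective that the Methods section above describes only in prose can plausibly be written as follows. The extraction has dropped the original symbols, so the generalised Kullback-Leibler form, the constant terms, and the sparsity weight \lambda are assumptions rather than the paper's exact notation:

    w_{ij} = \frac{1}{M} \left| \{ m : \text{samples } i, j \text{ co-assigned in base clustering } m \} \right|, \quad M = \text{number of base clustering results}

    \min_{H \ge 0} \; \sum_{i,j} \Big( w_{ij} \log \frac{w_{ij}}{(HH^{T})_{ij}} \;-\; w_{ij} \;+\; (HH^{T})_{ij} \Big) \;+\; \lambda \sum_{i,k} h_{ik}

Here H is the nonnegative cluster indicator matrix whose entry h_{ik} gives the membership of sample i in cluster k; as stated above, the constrained minimisation is carried out with iteratively applied multiplicative update rules, and the \lambda term keeps H sparse so that a sample belongs to few clusters.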
| 5 |
dynamic loop parallelisation adrian orestis epcc may university edinburgh kings buildings mayfield road edinburgh nested loops common feature high performance computing hpc codes shared memory programming models openmp structure common source parallelism parallelising structures requires programmers make static decision parallelism applied however depending parameters problem nature code static decisions loop parallelise may optimal especially enable exploitation runtime characteristics execution changes iterations loop chosen parallelised might limit amount processors utilised developed system allows code make dynamic choice runtime parallelism applied nested loops system works using source source compiler created perform transformations user code automatically directive based approach similar openmp approach requires programmer specify loops region parallelised runtime library responsible making decisions dynamically execution code method providing dynamic decisions loop parallelise significantly outperforms standard methods achieving openmp using clauses optimisations possible system addressing simulations number iterations loops change runtime program loops perfectly nested ntroduction high performance computing hpc codes particular scientific codes require parallel execution order achieve large amount performance increase depending underlying parallel platform used programmers use different programming models order achieve parallel execution distributed memory systems message passing programming model commonly used approach applying parallelism codes shared memory systems however attractive choice parallel programming openmp parallelisation codes openmp often achieved loop parallelisation long iterations loop independent distributed available processors system order execute parallel programmer required specify loop parallelised placing compiler directives loop resolving dependency issues iterations beforehand hpc codes often consist regions nested loops multiple levels order parallelise regions choice must made parallelism applied loops even though openmp supports variety strategies parallelising nested loops single one used parallelise code static choice however exploit runtime characteristics execution program changes input parameters executable affect iterations loops may render parallelisation decision suboptimal addition iterations loop change runtime due nature code common feature hpc codes organise data hierarchies example blocks arrays depending problem blocks different shapes sizes parameters affect loops responsible accessing data situations static decision potential impose limitation amount processors used parallel execution loops current trend chip manufactures increase number cores processors generation leading larger larger shared memory system readily available computational scientists desktop beyond dynamic approach must considered taking decisions report outlines investigations various strategies applied runtime order make dynamic decision parallelise region nested loops approach try automatically perform modifications users code compilation order enable code make decisions dynamically runtime specifically investigated possibility multiple versions loop within region nested loops order make dynamic choice whether loop execute sequentially parallel pen openmp arguably dominant parallel programming model currently used writing parallel programs used shared memory parallel systems version supported fortran openmp operates using compiler directives programmer annotates 
code specifying parallelised compiler transforms original code parallel version code compiled providing higher level abstraction openmp codes tend easier develop debug maintain moreover openmp easy table trategies parallelising nested loop regions name description outermost inner loop nested loop parallelisation outermost loop parallelisation one inner loops parallelisation multiple loops nested parallel regions collapsing loops single big loop loop collapsing loop selection runtime loop selection using clauses develop parallel version serial code without major modifications whilst number different mechanisms openmp provides adding parallel functionality programs one generally used often loop parallelisation involves taking independent iterations loops distributing group threads perform sets independent operations parallel since threads access shared data generally straightforward parallelise loop structural changes program iii ested oops hpc codes particularly scientific codes deal numerical computations based mathematical formulas formulas often expressed form nested loops set computations applied large amount data generally stored arrays parallelisation applied loop individually arrays often consist multiple dimensions access data achieved presence nested loops furthermore uncommon arrangement data done multiple hierarchies commonly blocks multidimensional arrays additional loops require order traverse data code presented choice must made loop level parallelise parallelisation occur summary available strategies presented table outermost loop commonly used approach parallelise outermost loop nested loop region shown listing using strategy iterations loop distributed members thread team threads operate parallel executing portion iterations assigned individually nested loops parallel region executed sequential manner pragma omp parallel private work listing outer loop parallelisation nested loop region parallelising outermost loop often good choice minimises parallel overheads openmp implementation initialisation parallel region scheduling loop iterations threads synchronisation takes place end parallel loops extensive work overheads various openmp directives found despite advantages outermost loop parallelisation strategy context drawbacks choice maximum amount available parallelism limited number iterations outerloop loop considering example code listing possible tasks executed parallel restricts number threads code utilise upon execution therefore number processors cores exploited inner loop variant outermost loop strategy difference one inner loops region chosen parallelised approach required beneficial outer loop enough iterations parallelise efficiently variant parallelisation strategy introduces parallelisation overheads requiring parallelisation performed loop outerloop rather loops shown listing nesting parallelisation deeper loop levels increase performance problems parallel overheads appear lot times whereas amount work iteration becomes finer pragma omp parallel shared work listing inner loop parallelisation nested loop region another issue strategy scenario loops perfectly nested situation computations loops shown listing parallelising loop deeper level result sequential execution work depending amount execution time serialised approach potential increase execution time code somework otherwork listing poorly nested loop region example nested nested parallelisation strategy exploits fact one loop executed parallel opening multiple nested parallel regions different levels loops 
presented listing threads utilised parallel execution code unlike outermost loop inner loop approaches utilise many threads iterations loop biggest number iterations strategy exploit parallelisation opportunities studies shown nested parallelism give good results systems large number processors pragma omp parallel private pragma omp parallel shared work listing nested loop parallelisation nested loop region loop collapsing loop collapsing strategy takes different approach exposing additional parallelism within nested loop regions performing code transformations multiple nested loops combined collapsed single loop newly created loop larger amount iterations distributed threads version openmp supports loop collapsing using collapse clause loop construct requiring programmer provide number loop levels collapse able use collapse clause loops perfectly nested code loops number loop iterations multiplied together need able regularly divided loop collapsing produce better results inner loop nested loop strategies since parallel overheads minimal however always available either compilers support openmp version conditions outlined met pragma omp parallel collapse work parallel region always created either case presence clause affects number threads get assigned parallel region sequential execution triggered code executed master thread parallel execution threads execute code furthermore clause programmers still required manually write code makes decision construct sensible evaluated manually parallelise loop potential target parallelisation dynamic loop one motivators work parallelisation undertaken structured code undertaking computational fluid dynamics cfd simulation structured mesh multigrid code works multiblock grids includes range cfd solvers including steady state dual harmonic balance timedomain general pattern computations within code shown listing whilst type computational pattern uncommon scientific codes one challenges parallelisation code use range different methods previously outlined range loops vary instance performing time domain simulation harmonic loop single iteration however performing harmonic balance simulation range values generally furthermore uncommon run large simulations single block small number blocks meaning block loop small number iterations finally block simulation different values dimensions theory loop collapsing strategy would ideal type simulation code would enable parallelisation without deal varying sizes nested loops however guaranteed input datasets loop iterations regularly divided also particular areas code loops perfectly nested listing parallelisation nested loop region loop collapsing loop selection openmp already provides way forcing parallel region execute sequentially use clause openmp directives clause following form scalar expression used determine runtime whether code enclosed parallel region execute sequentially parallel scalar expression clause evaluates region executed sequentially value result parallel execution however new parallelisation iter iter block block harmonics harmonics perform computations listing example scientific code loops given different techniques used parallelise nested loops occurrence nested loops many scientific simulation codes fact loop iterations nested loops change different input datasets code performing different functions code wanted system enabled selection different parallelisation choices available code runtime specific ranges nested loops known strategy providing functionality create code based provided user 
code perform parallelisation nested loops add decision making algorithms dynamically choose runtime parallelisation used specifically created tools create multiple versions loop within region nested loops order make dynamic choice whether loop execute sequentially parallel general code duplication considered bad programming practice amongst issues lead update anomalies instances functionality modified modifications occur thus damage maintainability code however duplicate code instance serial parallel versions loop nested loop structure generated automatically standard user code adversely affect maintainability user program created compiler recognises compiler directives within user source code uses source code generate program alternative parallelisation strategies encapsulated within exposing simple interface programmers compiler directives similar already familiar openmp compiler directives automatically provide dynamic parallelisation functionality users without requiring significant changes original source code furthermore approach provides users choice enabling disabling functionality minimum effort complement code duplication also implemented functionality small runtime library produces code responsible deciding parallelisation perform automatically decision functionality considers number iterations loop order chose parallelisation strategy makes best use processors cores available implementation currently limited parallelising single loop nested loop region taking advantage outermost inner loop strategies authors already taken similar approach modifying openmp runtime library order make decisions dynamically however applying logic openmp runtime library would limited implementation specific compiler using compile approach aiming transfer logic user code order maintain portability solution addition simple heuristics also explored idea approach runtime order detect best possible parallelisation strategy time measurements heuristics based approach alone capture information amount actual computations making decision parallelising loop whilst generally irrelevant perfectly nested loops work lowest loop may impact work different loops well may also fig compilation process using compiler situations different inner loop slightly iterations outer loop could chosen simple heuristic place parallelisation occurs overheads associated parallelising inner loop actually make suboptimal choice providing profiling based decision mechanism may help scenarios enable identify situations instance using less threads parallelise outer loop might provide better execution time idea auto tuning code already proposed researches producing optimised code apply similar logic ource source compiler compiler acts preprocessor code contain openmp directives well directives compiler parses code creates internal representation code form abstract syntax tree ast regions input code contain directives translated semantics programming language openmp directives parse phase appropriate nodes regions placed ast created ast translated back code openmp directives generated code compiled using standard openmp enabled compiler produce parallel executable process illustrated figure compiler implemented using lua programming language along lpeg parsing library recognises number bespoke compiler directives form pragma preomp loop preceded pragma preomp directive considered compiler suitable candidate applying parallelisation loop found compiler performs necessary code transformations decision made runtime whether loop run 
sequentially parallel ensure sequential parallel versions loop available executable runtime addition simple analysis loop performed order facilitate computation loops iterations making decision example code presented listing pragma preomp parallel private pragma preomp parallel shared work listing nested loop region preomp furthermore also extend grammar support additional clause parallel threshold expression clause optional present compiler assume default value clause used allow control loop parallelised discussed section code duplication main function compiler take original user code duplicate loops parallelised serial parallel versions loops selected runtime previously mentioned system allows one loop parallelised given time although loop parallelised change runtime program parameters loop change serial parallel versions loops parallelised must appear executable enable selection runtime take place loop preceded pragma preomp directive loop duplicated wrapped normal else statement evaluates decision function runtime library selects else branch based outcome evaluation openmp comparison code duplication approach also implemented functionality uses existing clause openmp parallel construct custom directive translated openmp parallel directive attached clause order decide whether execute loop parallel rather serial parallel version loop expression clause consists call decision function runtime library takes evaluated expressions loops information order make decision functionality included allow comparison approach standard method developers could currently use provide dynamic selection parallelism openmp however major drawback approach reason uses functionality parallel region created regardless whether loop parallelised considering example figure parallelising outer loop two nested loops two threads result three parallel regions thread outer region fig example using clause parallelise outer inner loop two nested loops two threads create new parallel region become master case inner loop parallelised two parallel regions created nested regions larger number loops method potential produce excessive parallel overheads ecision functions runtime library runtime library implements logic deciding version loop chosen execution code processed compiler must linked runtime library enable functionality used decision based heuristics use heuristics based information collected runtime decide whether loop execute sequentially parallel idea approach look first loop enough iterations utilise available threads based assumption parallelising outer loops efficient parallelising inner loops amount parallel overheads lower openmp parallel regions encountered less frequently execution loop decider checks whether loop outer level already running parallel condition met loop serialised case outer loop running parallel number iterations loop calculated divided available number threads results value greater equal specified threshold parallel version loop chosen otherwise loop serialised discussed section default value threshold must idle threads although controlled user calculations iterations based parameters loop extracted source source compiler provided arguments decision function case original code loop uses variables boundaries change value also captured decision function calculation design allows constant monitoring changes iterations loops also results dynamic adaptation parallelisation strategy execution program algorithm simple minimum overheads moreover need maintain state loops however logic used function 
program profiling overhead imposed first iterations program figure outlines example three nested loops vii erformance valuation fig example heuristics profiling decider three loops based optimism considers amount parallelism exposed loop regardless whether amount work loop big enough justify overheads parallelisation whether work loops evaluate performance new functionality aimed benchmark standard static openmp parallelisations range different configurations particular focussed varying number loop iterations amount work within loops number changes occur loop bounds execution evaluate whether approach beneficial compared static parallelisation undertake benchmarks used two different codes first synthetic configurable benchmark code shown listing constructed evaluation number iterations loop configured amount work simulated calling delay function second third loops within third loop decision based heuristics profiling address potential issue basic decision based heuristics previously discussed also implemented complex decision function based size loops evaluation work loops manner heuristics decider uses information extracted source source compiler order determine whether loop parallelised however loop meet conditions function reverts profiling mode order decide version loop serial parallel choose based timings first time loop executed heuristics decider determines loop parallelised conditions met sequential version loop chosen profiling enabled loop next execution loop evaluation heuristics still performed conditions still met example changes iterations loop loop parallelised since point timing information serial version consecutive executions loop first check heuristics conditions falling back profiling mode condition satisfied however function detect timings versions available utilise information gathered profiling decide loop parallelise providing number iterations loop changed fastest version chosen final decision contrast amount work number loop iterations changed timings get invalidated profiling implement functionality requires additional code compared basic heuristic decision function impose extra overhead produced program although loop iterations static throughout run delay delay listing synthetic benchmark code second benchmark code extract cfd code outlined listing code complex synthetic benchmark representative realistic scientific simulation codes code used explore performance solution loop iterations vary bounds loops dynamic course execution benchmark one loops change loop bound outer loops progressed benchmark environment platform used evaluate dynamic loop parallelisation functionality ness epcc system composed two parts development job submission job execution management two parts handled sun grid engine allows submission jobs must executed nodes isolation part system composed two sun shared memory nodes central processing unit cpu node amd opteron processor processing cores main memory core cache data cache instructions addition also available core combined data instructions used portland group pgi compiler majority benchmarks following compiler flags benchmarking involving openmp functionality used gnc compiler instead version pgi compiler used support thread team nested parallel region one threads outer region serialised clause seems contrary openmp specification clause affects number threads get assigned particular parallel region thread teams nested regions using gnu compiler used following compiler flags timing information collected using omp get wtime function 
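The benchmarks that follow exercise the code-duplication transformation together with the basic heuristic decision function described above. A minimal sketch of the kind of code the transformation yields is given below, assuming C with OpenMP; the name preomp_choose and the exact shape of the generated branches are illustrative, not the tool's actual output. The heuristic matches the one described earlier: serialise if an enclosing level is already parallel, otherwise parallelise only when each available thread would receive at least the threshold number of iterations (default one).

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    static int preomp_choose(int iters, int threshold)
    {
        if (omp_in_parallel())
            return 0;   /* an enclosing loop is already parallel: serialise */
        return iters / omp_get_max_threads() >= threshold;
    }

    void compute(int nblocks, int nharm, double **work)
    {
        if (preomp_choose(nblocks, 1)) {
            /* parallel copy of the outer loop; the inner loop is shown
               already serial, since preomp_choose() would return 0
               inside the enclosing parallel region anyway */
            #pragma omp parallel for
            for (int b = 0; b < nblocks; b++)
                for (int h = 0; h < nharm; h++)
                    work[b][h] += 1.0;
        } else {
            for (int b = 0; b < nblocks; b++) {   /* serial copy */
                if (preomp_choose(nharm, 1)) {
                    #pragma omp parallel for       /* parallel inner copy */
                    for (int h = 0; h < nharm; h++)
                        work[b][h] += 1.0;
                } else {
                    for (int h = 0; h < nharm; h++)
                        work[b][h] += 1.0;
                }
            }
        }
    }

    int main(void)
    {
        int nblocks = 2, nharm = 64;   /* few blocks, many harmonics */
        double **work = malloc(nblocks * sizeof *work);
        for (int b = 0; b < nblocks; b++)
            work[b] = calloc(nharm, sizeof **work);
        compute(nblocks, nharm, work);
        printf("%f\n", work[0][0]);
        return 0;
    }

Because the decision is re-evaluated on every execution of the region, a change in nblocks or nharm at runtime moves the parallelism to whichever loop can still occupy the thread team, which is exactly the adaptation the static OpenMP parallelisations in the following results cannot perform.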
benchmark executed three times worst time taken since limiting factor execution time outer loop work outer loop work outer loop work outer loop work synthetic benchmark results consider example code listing execution time code two internal nested loops outer loop parallelised certain amount threads outer threads calculated shown equation tpouter execution time parallelising outer loop touter work time needed work loops tinner work time needed amount work within innermost loop outer similar fashion parallelising inner loop using inner threads execution time loops shown equation inner want reduction overall execution time parallelising inner loop constraint tpinner tpouter must satisfied solving constraint terms touter work get maximum allowed threshold execution time work outer loop shown equation worth mentioning model ideal performance model work evenly distributed threads reality time touter work might affected presence parallel overheads order test hypothesis measured amount time required delay function various values results shown figure graphs figure show performance four different parallelisation strategies openm outer openm inner results manual static parallelisations individual loops benchmark heuristics results basic decision function using value one parallelise loop iterations threads available heuristic rof iler results system using profiling functionality appropriate fig synthetic benchmark results varying levels work loops results evident loops perfectly nested regular loop bounds changing benefit using profiling functionality basic heuristics choose optimal loop parallelise apart using threads variation outcomes threads consequence number loop iterations chosen benchmark iterations outer loop iterations inner loop distribution iterations threads results threads get assigned iteration outer loop threads get extra iteration total execution time case limited slowest threads time iterations iterations outer loop multiplied iterations inner one parallelising inner loop threads however thread get iterations whereas rest threads get iterations case total execution time parallel loops amount time required iterations iterations inner loop multiplied iterations outer loop since decision functions utilise heuristics decision number threads less number iterations exploit opportunity profiling actually performed case could altered setting decision heuristic value setting heuristic graphs observe threshold value calculations hold parameters used benchmark calculated threshold value approximately touter work seconds work outer work less calculated threshold figures parallelising inner loop threads still faster parallelising outer loop threads amount work increases impact execution time parallelising inner loop table oop parameters used cfd code benchmarking parameter value iters cell cell increased since work serialised cases heuristics decider makes wrong choice figures since decision concerns amount iterations loops available threads contrast profiling used decision function correctly detected fastest execution time achieved parallelising inner loop case amount work outer loop exceeds calculated threshold parallelising inner loop even threads increases total execution time benefit using threads parallelise inner loop enough justify work serialised cfd benchmarking results first benchmark performed using extract cfd code compare openmp clause basic heuristic functionality used reference timings manually parallelised blocks harmonics cell loops compare execution time heuristics decision 
function two code generation modes compiler order avoid cases iterations evenly distributed threads consider cases threads parameters used loop iterations shown table varying amount work inner loop also consider cases blocks shape altering values cell cell loops alterations indicate blocks grid shape cell cell alteration means first third blocks grid shape whereas second fourth blocks shape performance results shown figure highlight fact significant difference implemented functionality provided openmp clause clause slower basic openmp parallelisation also increases overall execution time code figure threads available loop outer level parallelised code generation modes however clause mode produces slower execution time code duplication mode threads used parallelisation applied cell loop contrast code duplication mode produces execution time similar case statically parallelising loop clause mode still slower similar performance pattern seen threads moreover presence alterations shape blocks shown figures clause mode produces even slower execution time hand small work alterations small work alterations large work alterations large work alterations fig cfd benchmark blocks harmonics varied alterations cell loops varied amount work inner loop code duplication mode exploit opportunity order utilise available threads applying parallelism cell loop increasing amount work core calculation positive effect clause code generation mode observe figure compared figure difference using clause static parallelisation large small numbers threads likely performance cost executing clause proportionally smaller compared overall execution time however performance degradation still observed increasing number threads execution times code using openmp clause raised concerns whether code operating correctly extensive testing verification ascertained versions code clause code duplication correct producing behaviour therefore investigated parallel overheads openmp runtime library gcc compiler authors already studied overheads nested parallelism various compilers including recent version gcc compiler one used work findings suggest implementation nested parallel regions gcc compiler significant overheads presented work whether use clause nested parallel regions produces overheads order ensure behaviour observed results cause nested parallel regions presence clause constructed simple micro benchmark table iii icro benchmark results gnu compiler implementation nested parallelism parallel loop execution time seconds outer inner nested clause nested num threads clause blocks harmonic blocks harmonic alterations alterations nested parallel micro benchmark created four versions benchmark code three nested loops delay function epcc microbenchmark suite block innermost loop first version benchmark creates parallel region loop second level second version performs operation innermost loop third version uses clause loops serialising outer loop value parallelising inner loop value finally last version creates parallel region loops however force number threads thread team outer loop using num threads clause manage reproduce behaviour clause code case inner loop parallelised number iterations parallel loops number available threads table iii presents execution times case see parallelising inner loop nested parallel regions takes seconds longer parallelising inner loop manually even small simple benchmark moreover two versions contain nested parallel regions achieve similar execution times test concluded likely behaviour observed 
Decision function benchmarking

Finally, we investigated the performance of the profiling decision functionality on the CFD extract code. This code is perfectly nested, so the basic heuristic decision function is optimal here: it chooses the best loop to parallelise with little overhead, whereas the profiling function carries extra functionality and therefore imposes extra overheads on the performance of the code. The results of these experiments are shown in the figure. We observe in the figures that both decision functions make the correct choice of parallelisation strategy at low thread counts. However, the overheads of the profiling functionality have a negative impact on the overall execution time: even when profiling is not actually performed, the functions inserted into the execution of the loop to count the amount of work performed at each loop level increase the overall time. Moreover, we observe that at higher thread counts the profiler actually chooses to parallelise the harmonics loop, whereas the heuristics decider produces the correct behaviour. When profiling the cell loop, the timings performed on each loop version in profiling mode are sensitive to the presence of the overheads, which ultimately affect both the decision of the function and the overhead of taking the timings.

[Figure: CFD benchmark with varied alterations of the cell loops and a large amount of work in the inner loop.]

When alterations are present in the shape of the loops, shown in the figures, the heuristics decider manages to adapt its behaviour by parallelising the innermost loop in order to utilise all threads, and significantly outperforms static parallelisation. In all test cases the decision function based on profiling provides slower execution times than the decision function based on heuristics. Moreover, the additional logic included in the decision function for profiling caused a suboptimal decision to be made in some situations.

VIII. Improved Profiling Decisions

The results of the previous benchmarks lead to considerations of the reasons behind the poor execution of the decision function that performs profiling. Comparing its functionality with the simple case of the heuristics decision function, there are two sources of additional overheads. The first one is the logic of profiling each version of the loop: in order to make a choice between the two versions of the loop, the slow version must also be executed. However, when the actual simulation code runs for a significant amount of time this overhead is negligible, provided that the loop bounds do not alter and trigger the profiling functionality many times, so it is incurred infrequently.

[Figure: CFD benchmark with varied alterations of the cell loops, for small and large work in the inner loop (blocks x harmonics).]

The second source of overheads is the inclusion of the additional function calls in the loop in order to measure the time of the execution and count the amount of work performed. Elimination of this functionality when taking the slow path is not possible, since in essence both profiling versions of the loop must be executed in order to make the comparison of execution time. However, we can relax the conditions for the validity of the timings: we consider only the number of iterations of the specific loop being profiled and eliminate the logic that performs the counting of the work of the internal loops. The decision function decides which version of the loop is to be profiled on failure of the heuristics conditions, and the number of iterations of the version of the loop that is going to be executed is saved as the state of the loop at that point. This way, the only code in the function calls placed in the loop remains the simple adjusting of the loop level counter of each thread, as well as marking the starting and ending times of the execution of the loop being profiled, rather than counting the iterations of the internal loops as in the initial profiling functionality. In order to test this theory we created a new version of the runtime library that includes these modifications, called the relaxed profiler.
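The relaxed profiling logic can be summarised in a small sketch. This is a hypothetical reconstruction of the behaviour described above, not the authors' runtime library, and all names are invented:

    import time

    class RelaxedProfiler:
        """Profile each candidate loop version once and pick the fastest.

        Only the iteration count of the profiled loop itself is saved as
        the loop's state; no counters are maintained inside the nested
        loops, which is the relaxation described above.
        """
        def __init__(self):
            self.samples = {}              # version -> (elapsed, n_iters)

        def run_and_time(self, version, loop_body, n_iters):
            start = time.perf_counter()
            loop_body()                    # execute this version of the loop
            self.samples[version] = (time.perf_counter() - start, n_iters)

        def decision(self, current_iters):
            # Timings are only considered valid if the loop bounds have
            # not changed since profiling; otherwise profiling triggers again.
            valid = {v: t for v, (t, n) in self.samples.items() if n == current_iters}
            return min(valid, key=valid.get) if valid else None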
In the graphs of the figure we can see that the removal of the additional logic that performs the counting benefits the decision function with profiling. When profiling is performed, the relaxed version of the decision function is faster and more accurate than the original version, and the same performance pattern holds when profiling is performed at higher thread counts. Comparing the execution time of the new version of the decision function with profiling against the execution time of the heuristics decision function, the latter still produces faster execution times; however, the difference is not as large. This behaviour is expected, since the presence of profiling introduces additional computations within the code of the functions placed in the loop. Moreover, in the cases where parallelisation is applied to a nested loop, the decision function must execute all versions of the loop, one of them being a slow version, in order to make its decision. Finally, we can see that the relaxed decision function rectifies the problem of the original profiling decision function of choosing the wrong option in some cases: in the figure we see that at higher thread counts the relaxed profiler makes the correct choice, and the performance of the relaxed profile decision function is comparable to the heuristics decision function.

IX. Conclusion

The main focus of this work was to investigate the possibility of dynamically choosing at runtime the best loop in a nested loop region, where the best loop is the one that best utilises the available threads. We successfully created a compiler and runtime library in order to automatically allow this dynamic choice to be made at runtime. Our solution uses a directives-based approach similar to OpenMP and requires minimum effort in code change from the user's point of view. We discovered that the current mechanism by which users can exploit such behaviour, the OpenMP if clause, does not perform efficiently, at least in the implementation tested. Despite the fact that this behaviour is the result of an inefficient implementation in the GCC compiler used in this work, our compiler in code duplication mode was able to provide additional speedup in the execution time of the code. We conclude that relying on the OpenMP runtime library to perform the loop nesting at execution time is limited by the compiler's implementation of nested parallel regions. Although code duplication is considered bad programming practice, when done automatically it can eliminate unnecessary parallel overheads. We have also shown at a proof-of-concept level that using profiling to select the loop to parallelise can provide performance benefits in certain circumstances, for instance when the loops are not perfectly nested.

OpenMP is currently generally used for small-scale parallelisation of code, primarily on large-scale HPC resources. However, the current trend in processors suggests that in the near future large-scale resources with high core counts are likely to be commonly available; therefore shared-memory parallelisations are likely to become more utilised and more interesting for large-scale scientific simulations.

References

- OpenMP. OpenMP Application Programming Interface (specification).
- A. Duran, R. Silvera, J. Corbalan, J. Labarta. Runtime adjustment of parallel nested loops. Proc. International Workshop on OpenMP Applications and Tools (WOMPAT).
- D. Chen, P. Yew. The impact of synchronization and granularity on parallel systems. Proceedings of the Annual International Symposium on Computer Architecture (ISCA), New York, NY, USA: ACM. Online.
- Y. Tanaka, K. Taura, M. Sato, A. Yonezawa. Performance evaluation of OpenMP applications with nested parallelism.
- E. Ayguade, M. Gonzalez, X. Martorell, G. Jost. Employing nested OpenMP parallelization for multi-zone computational fluid dynamics applications. J. Parallel Distrib. Comput. Online.
- M. Hall, J. Chame, C. Chen, J. Shin, G. Rudy, M. Khan. Loop transformation recipes for code generation and auto-tuning. Online.
- PLUTO: an automatic parallelizer and locality optimizer. Online.
- Lua programming language. Online.
- LPEG. Online.
- Ness HPC service. Online.
- V. V. Dimakopoulos, P. E. Hadjidoukas, G. C. Philos. A microbenchmark study of OpenMP overheads under nested parallelism. Proceedings of the International Conference on OpenMP in a New Era of Parallelism (IWOMP), Berlin, Heidelberg. Online.
Avoiding Your Teacher's Mistakes: Training Neural Networks with Controlled Weak Supervision

Mostafa Dehghani (University of Amsterdam), Aliaksei Severyn (Google Research), Sascha Rothe (Google Research), Jaap Kamps (University of Amsterdam)

Abstract. In this paper we propose a semi-supervised learning method where we train two neural networks in a multi-task fashion: a target network and a confidence network. The target network is optimized to perform a given task and is trained using a large set of unlabeled data that are weakly annotated. We propose to weight the gradient updates to the target network using the scores provided by the second confidence network, which is trained on a small amount of supervised data. Thus we avoid that the weight updates computed from noisy labels harm the quality of the target network model. We evaluate our learning strategy on two different tasks: document ranking and sentiment classification. The results demonstrate that our approach not only enhances the performance compared to the baselines but also speeds up the learning process from the weak labels.

1. Introduction

Deep neural networks have shown impressive results on a lot of tasks in computer vision, natural language processing and information retrieval. However, their success is conditioned on the availability of exhaustive amounts of labeled data, while for many tasks such data is not available. Hence unsupervised and weakly supervised methods are becoming increasingly attractive. Using weak or noisy supervision is a straightforward approach to increase the size of the training data. For instance, in the web search task, the ideal training data would be rankings of documents ordered by relevance for a large set of queries; in practice it is not possible to collect such data at large scale, and only a small set of judged query-document pairs is available. However, for this task, the output of heuristic methods (Dehghani et al.) or clickthrough logs (Joachims) can be used as weak or noisy signals, along with a small amount of labeled data, to train learning-to-rank models. This is usually done by pre-training the network on the weak data and then fine-tuning the model with the true labels (Dehghani et al.; Severyn and Moschitti). However, these two independent stages do not leverage the full capacity of the information from the true labels: for instance, the pre-training stage cannot handle or control the extent to which the data with weak labels contribute to the learning process, although these data are of differing quality.

In this paper we propose a method that leverages a small amount of data with true labels along with a large amount of data with weak labels. Our proposed method has three main components. The first is a weak annotator, which can be a heuristic model, a weak classifier, or even a human via crowdsourcing, employed to annotate a massive amount of unlabeled data. The second is a target network, which uses the large set of instances weakly annotated by the weak annotator to learn the main task. The third is a confidence network, which is trained on the small set with true labels to estimate confidence scores for the instances annotated by the weak annotator, and which is trained jointly with the target network in a multi-task fashion. In this joint learning process, the target network and the confidence network try to learn a suitable representation of the data, and a layer is shared between them as a communication channel. The target network tries to learn to predict the label of a given input under the supervision of the weak annotator. At the same time, the output of the confidence network, i.e. the confidence scores, defines the magnitude of the weight updates to the target network with respect to the loss computed based on labels from the weak annotator, during the back-propagation phase of the target network. In this way, the confidence network helps the target network to avoid the mistakes of its teacher, i.e. the weak annotator, by down-weighting the weight updates for which the weak labels do not look reliable from the confidence network's perspective.

The goal of the confidence network, which is trained jointly with the target network, is to calibrate the learning rate for each instance in the batch. The weights w of the target network at step t+1 are updated as follows:

    w_{t+1} = w_t - η_t ( (1/b) Σ_{i=1..b} ĉ(x_i, ỹ_i) ∇L(f(x_i), ỹ_i) + Γ(w_t) ),

where η_t is the global learning rate, b is the batch size, L(·) is the loss of predicting f(x_i) for an input x_i when the target label is ỹ_i, ĉ(·) is the scoring function learned by the confidence network taking as input an instance x_i and its noisy label ỹ_i, and Γ is a regularization term. Thus, we can effectively control the contribution to the parameter updates of the target network from weakly labeled instances based on how reliable their labels are according to the confidence network, which is learned on a small set of supervised data.

Our setup requires running the weak annotator to label a large amount of unlabeled data, which is done at pre-processing time. For many tasks it is possible to use a simple heuristic or implicit human feedback to generate weak labels. This set is used to train the target network; in contrast, the small set with true labels is used to train the confidence network, which estimates how good the weak annotations are and controls the effect of the weak labels when updating the parameters of the target network.
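A minimal sketch of this update rule follows (NumPy, with grad_loss and confidence as stand-ins for the target network's loss gradient and the trained confidence network's score; both are assumed names, not the authors' code):

    import numpy as np

    def controlled_weak_update(w, batch, grad_loss, confidence, lr=0.01, l2=1e-4):
        """One confidence-weighted SGD step on the target-network weights w.

        batch: iterable of (x, y_weak) pairs from the weakly labeled set.
        grad_loss(w, x, y): gradient of the task loss for one instance.
        confidence(x, y): the confidence network's score in [0, 1].
        """
        g = np.zeros_like(w)
        for x, y_weak in batch:
            # Unreliable weak labels get a small confidence score and
            # hence contribute little to the update.
            g += confidence(x, y_weak) * grad_loss(w, x, y_weak)
        g /= len(batch)
        return w - lr * (g + l2 * w)   # the l2 term plays the role of Γ(w_t)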
Our method allows learning different types of neural architectures for different tasks, wherever a meaningful weak annotator is available. In this paper we study the performance of our proposed model by focusing on two applications, one from information retrieval and one from natural language processing: document ranking and sentiment classification. Whilst these two applications differ considerably, as do the exact operationalizations of our model for each case, there are also clear similarities. First, in both cases the human gold-standard data is based on cognitively complex or subjective judgments, causing high inter-rater variation and increasing both the cost of obtaining labels and the need for larger sets of labels. Second, in both cases the weak supervision signal is systemic or objective, which facilitates learning the data representation.

Our experimental results suggest that the proposed method is effective in leveraging large amounts of weakly labeled data, compared to traditional pre-training and fine-tuning for both tasks. We also show that explicitly controlling the weight updates of the target network with the confidence network leads to faster convergence, since the filtered supervision signals are more solid and less noisy.

In the following section we introduce the general architecture of our model and explain the training process. We then describe the details of the applications to which we apply our model. In the subsequent sections we present our experimental setups for both tasks along with results and analysis. We then review related work and conclude the paper.

2. The Proposed Method

In the following we describe our recipe for semi-supervised learning of neural networks, in a scenario where along with a small training set a large set of weakly labeled instances is leveraged. Formally, given a set of unlabeled training instances, we run the weak annotator to generate weak labels. This gives the training set D_w, which consists of tuples of training instances x_i and their weak labels ỹ_i, i.e. D_w = {(x_i, ỹ_i), ...}. For a small set of training instances with true labels, we also apply the weak annotator to generate weak labels. This creates the training set D_s, consisting of triplets of training instances x_i, their weak labels ỹ_i, and their true labels y_i, i.e. D_s = {(x_i, ỹ_i, y_i), ...}. We can generate a large amount of training data D_w at almost no cost using the weak annotator; in contrast, we have only a limited amount of data with true labels, so |D_s| is far smaller than |D_w|.

Our general architecture, a multi-task neural network that jointly learns a confidence score for weak training instances and the main task using controlled supervised signals, is shown in the figure. It comprises the weak annotator and two neural networks, namely the confidence network and the target network.

[Figure: Learning from controlled weak supervision. The proposed multi-task network learns the target task using a large amount of weakly labeled data and a small amount of data with true labels. In full supervision mode, batches of data with true labels train the confidence network: the loss is the prediction of the confidence against the goodness of instances derived from the agreement of the weak annotator with the true labels. In weak supervision mode, batches of data with weak labels train the target network through the supervision layer, with the prediction loss taken with respect to the weak labels. Faded parts of the network are disabled during training in the corresponding mode; arrows show gradient propagation; parameters of the parts of the network in red frames get updated in the backward pass, while parameters of the parts in blue frames are fixed during that mode.]

The goal of the weak annotator is to provide weak labels ỹ_i for all instances x_i in D_w and D_s. We make the assumption that the labels provided by the weak annotator are imperfect estimates of the true labels, which are available for D_s but not for D_w.

The goal of the confidence network is to estimate a confidence score ĉ_i for each training instance. It is learned from the triplets in training set D_s: the input x_i, its weak label ỹ_i, and its true label y_i. The score ĉ_i is used to control the effect of the weakly annotated training instances on updating the parameters of the target network in the backward pass of back-propagation.

The target network is in charge of handling the main task we want to learn, in other words approximating the underlying function that predicts the correct labels. Given a data instance x_i and its weak label ỹ_i from the training set D_w, the target network aims to predict the label ŷ_i. The target network's parameter updates are based on the noisy labels assigned by the weak annotator, but the magnitude of the gradient
update is modulated based on the output of the confidence network. Both networks are trained in a multi-task fashion, alternating between the full supervision and the weak supervision mode. In the full supervision mode, the parameters of the confidence network get updated using batches of instances from training set D_s. As depicted in the figure, each training instance is passed through the representation layer, mapping inputs to vectors. These vectors are concatenated with their corresponding weak labels generated by the weak annotator. The confidence network then estimates the probability ĉ_i of taking data instance i into account for training the target network.

In the weak supervision mode, the parameters of the target network are updated using training set D_w. As shown in the figure, each training instance is passed through the same representation learning layer and is then processed by the supervision layer, which is the part of the target network predicting the label for the main task. We also pass the learned representation of each training instance, along with its corresponding label generated by the weak annotator, to the confidence network to estimate the confidence score of the training instance, i.e. ĉ_i. The confidence score is computed for each instance from set D_w, and this set of confidence scores is used to weight the gradient when updating the target network's parameters; in other words, it sets a per-instance step size.

It is noteworthy that the representation layer is shared between both networks. Besides the regularization effect of layer sharing, which leads to better generalization, sharing this layer lays the ground for the confidence network to benefit from the largeness of set D_w and for the target network to utilize the quality of set D_s.

Model training. Our optimization objective is composed of two terms: the confidence network loss L_c, which captures the quality of the output of the confidence network, and the target network loss L_t, which expresses the quality on the main task. Both networks are trained by alternating between the weak supervision and the full supervision mode. In the full supervision mode, the parameters of the confidence network are updated using training instances drawn from D_s. We use cross entropy as the loss function of the confidence network, capturing the difference between the predicted confidence score of instance j, i.e. ĉ_j, and the target score c_j:

    L_c = - Σ_{j in D_s} [ c_j log ĉ_j + (1 - c_j) log(1 - ĉ_j) ],

where the target score c_j is calculated based on the difference between the true and the weak labels with respect to the main task, as the true labels come from humans.

In the weak supervision mode, the parameters of the target network are updated using training instances from D_w. We use a weighted loss function to capture the difference between the predicted label ŷ_j of the target network and the target (weak) label ỹ_j:

    L_t = Σ_{j in D_w} ĉ_j · L(ŷ_j, ỹ_j),

where ĉ_j is the confidence score of the weakly annotated instance j, estimated by the confidence network. Note that ĉ_j is treated as a constant during the weak supervision mode: there is no gradient propagation to the confidence network in the backward pass, as depicted in the figure.

We minimize the two loss functions jointly by randomly alternating between the full and the weak supervision modes, with a fixed mixing ratio. In training, based on the chosen supervision mode, we sample a batch of training instances from D_w with replacement or from D_s without replacement; since we can generate a much larger training set D_w than D_s, as is usual in such setups, the training process in effect oversamples the instances of D_s.

A key point here is that the main task and the confidence scoring task are always defined to be closely related tasks, and sharing the representation will benefit the confidence network as a form of implicit data augmentation, compensating for the small amount of data with true labels. Besides, we noticed that updating the representation layer with respect to the losses of both networks acts as a regularization for each of them and helps generalization for both the target and the confidence network, since we try to capture both of these related tasks and there is less chance of overfitting.

We also investigated other possible setups and training scenarios: for instance, updating the parameters of the supervision layer of the target network using also the data with true labels, alternatives to the alternating sampling, and training the target network with the controlled weak supervision signals only after the confidence network is fully trained. As shown in our experiments, the architecture and training strategy described above provide the best performance.
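The alternation between the two modes can be summarised in a short training-loop skeleton (a simplification: the sampling helpers and per-mode update functions are passed in as assumed callables, and the mixing ratio is an assumption, since the exact value is not given here):

    import random

    def train(D_w, D_s, steps, sample_weak, sample_true,
              update_target, update_confidence, weak_ratio=0.9):
        """Randomly alternate between weak and full supervision modes."""
        for _ in range(steps):
            if random.random() < weak_ratio:
                # Weak supervision mode: batches from D_w (sampled with
                # replacement) update the target network; confidence
                # scores weight the loss and are treated as constants.
                update_target(sample_weak(D_w))
            else:
                # Full supervision mode: batches from D_s (sampled
                # without replacement, effectively oversampled) update
                # the confidence network against the agreement between
                # true and weak labels.
                update_confidence(sample_true(D_s))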
[Figure: the target network for document ranking, showing the embedding, weighting and compositionality components of the representation layer followed by the supervision layer.]

3. Applications

In this section we apply our method to two different tasks. For each task we start with an introduction of the task, followed by the setup of the target network, i.e. a description of the representation learning layer and the supervision layer.

3.1 Document Ranking

This task is the core information retrieval problem and is challenging, as it needs to capture the notion of relevance between a query and documents. We employ a pairwise neural ranker architecture as the target network (Dehghani et al.). In this setting, each training instance consists of a query q and two documents d1 and d2, and the label is a scalar value indicating the probability that d1 should be ranked higher than d2. The general schema of the target network is illustrated in the figure.

The representation learning layer follows the setup proposed by Dehghani et al. This layer is a function that learns a representation of the input data instances and consists of three components: an embedding function E: V -> R^m, where V denotes the vocabulary set and m the number of embedding dimensions; a weighting function W: V -> R; and a compositionality function Φ. Formally, the representation of an instance is

    ψ(q, d1, d2) = [ Φ({(E(t_i^q), W(t_i^q))}), Φ({(E(t_i^d1), W(t_i^d1))}), Φ({(E(t_i^d2), W(t_i^d2))}) ],

where t_i^q and t_i^d denote the i-th term of query q and of document d respectively. The embedding function E maps each term to a dense m-dimensional real-valued vector, which is learned during the training phase. The weighting function W assigns a weight to each term in the vocabulary. The compositionality function Φ projects a set of embedding-weight pairs to an m-dimensional representation, independent of the input length:

    Φ({(E(t_i), W(t_i))}) = Σ_i ( exp(W(t_i)) / Σ_j exp(W(t_j)) ) E(t_i),

which is in fact a normalized weighted element-wise summation of the terms' embedding vectors. It has been shown that having a global term weighting function along with the embedding function improves the performance of ranking, as it simulates the effect of inverse document frequency (IDF), an important feature in information retrieval (Dehghani et al.). In our experiments, we initialize the embedding function with word2vec embeddings (Mikolov et al.) pre-trained on Google News, and the weighting function with IDF.

The supervision layer receives the vector representation of the inputs processed by the representation learning layer and outputs a prediction. We opt for a simple fully connected feed-forward network with a number of hidden layers, where each hidden layer computes z_k = α(W_k z_{k-1} + b_k), with W_k and b_k denoting the weight matrix and the bias term of the corresponding hidden layer and α the nonlinearity; these layers are followed by a sigmoid on the output. We employ a weighted cross entropy loss:

    L_t = - Σ_{j in D_w} ĉ_j [ ỹ_j log ŷ_j + (1 - ỹ_j) log(1 - ŷ_j) ],

over batches of instances from D_w, where ĉ_j is the confidence score of the weakly annotated instance j, estimated by the confidence network.

The weak annotator is BM25 (Robertson and Zaragoza), an unsupervised retrieval method. In our pairwise document-ranking setup, for a given instance (q, d1, d2) the weak label ỹ is the probability of document d1 being ranked higher than d2, derived from the scores s(q, d1) and s(q, d2) obtained from the annotator, where s is the score obtained from the weak annotator. To train the confidence network, the target score c_j is calculated using the absolute difference of the true label and the weak label, where the true label is calculated in a similar way but comes from the true relevance labels created by humans.
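The compositionality function Φ above is a softmax-normalised weighted sum of the term embeddings; a minimal NumPy sketch (the array shapes are assumptions made for illustration):

    import numpy as np

    def compose(E, W):
        """Softmax-weighted sum of term embeddings (the function Φ above).

        E: (n_terms, m) matrix of term embeddings, one row per term.
        W: (n_terms,) vector of learned term weights (e.g. initialised
           to IDF values).
        Returns an m-dimensional representation independent of length.
        """
        a = np.exp(W - W.max())    # numerically stable softmax
        a /= a.sum()
        return a @ E

    # Example: a three-term query with 4-dimensional embeddings.
    rep = compose(np.random.randn(3, 4), np.array([2.1, 0.3, 1.2]))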
3.2 Sentiment Classification

This task aims to identify the sentiment (positive, negative, or neutral) underlying an individual sentence. Our target network is a convolutional model similar to Deriu et al. and Severyn and Moschitti. Each training instance consists of a sentence s and its sentiment label.

The representation learning layer learns a representation for the input sentence s and is shared between the target network and the confidence network. It consists of an embedding function E: V -> R^m, where V denotes the vocabulary set and m the number of embedding dimensions. This function maps the sentence to a matrix S, where each column represents the embedding of the word at the corresponding position in the sentence. The matrix S is passed through a convolution layer. In this layer, a set of f filters is applied to a sliding window of length h over S to generate a feature map matrix C. Each feature map c_i for a given filter F is generated by the element-wise product of F with the concatenation of the word vectors from position i to i + h - 1, summed over all entries; the feature values are aggregated over all f filters into the feature map matrix, and we also add a bias vector to the result of the convolution. The convolutional layer is followed by an activation function, for which we use ReLU (Nair and Hinton), applied element-wise. Afterwards the output is passed to a max pooling layer, which operates on the columns of the feature map matrix, returning the largest value per pool (see the figure). This architecture is similar to the state-of-the-art model for twitter sentiment classification from SemEval (Severyn and Moschitti; Deriu et al.). We initialize the embedding matrix with word2vec embeddings (Mikolov et al.) pretrained on a collection of tweets.

[Figure: the target network for sentiment classification, showing the embedding matrix, convolutional feature map, pooled representation, and the supervision layer (a neural classifier).]

The supervision layer is a neural classifier, similar to the one for the ranking task but with a different width and depth and with a softmax instead of a sigmoid output layer, which returns ŷ_j, a probability distribution over the three classes. We employ a weighted cross entropy loss:

    L_t = - Σ_{j in D_w} ĉ_j Σ_{k=1..K} ỹ_j^(k) log ŷ_j^(k),

over batches of instances from D_w, where ĉ_j is the confidence score of the weakly annotated instance j and K = 3 is the number of classes.

The weak annotator for the sentiment classification task is a simple unsupervised lexicon-based method (Hamdan et al.; Kiritchenko et al.): we use SentiWordNet (Baccianella et al.) to assign positive, negative and neutral probabilities to each token, and a sentence-level distribution is derived by simply averaging the distributions of the terms, yielding a noisy soft label ỹ_j. We empirically found that using these soft labels from the weak annotator works better than assigning a single hard label. The target score c_j for the confidence network is calculated using the mean absolute difference of the true label and the weak label over the classes, where the true label is the one-hot encoding of the sentence label over the classes.
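A minimal sketch of this lexicon-based weak annotator (the lexicon entries below are hypothetical placeholders; a real implementation would read per-token polarity distributions from SentiWordNet, and the exact form of the confidence target is an assumption based on the description above):

    import numpy as np

    # Hypothetical lexicon: token -> (P(pos), P(neg), P(neutral)).
    LEXICON = {"good": (0.8, 0.1, 0.1), "bad": (0.1, 0.8, 0.1)}
    DEFAULT = (1/3, 1/3, 1/3)

    def weak_label(tokens):
        """Average per-token polarity distributions into a soft
        sentence-level label, as the weak annotator described above."""
        dists = np.array([LEXICON.get(t, DEFAULT) for t in tokens])
        return dists.mean(axis=0)

    def confidence_target(y_true_onehot, y_weak, k=3):
        # Agreement between the one-hot true label and the soft weak
        # label: one minus the mean absolute difference over classes.
        return 1.0 - np.abs(y_true_onehot - y_weak).sum() / k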
4. Experiments and Results

We first describe our baselines; afterwards we present the experimental setups for each of the tasks along with results and analysis.

Baselines and general setups. For both tasks, we evaluate the performance of our method compared to the following baselines:
- Weak annotator (WA): the unsupervised method used for annotating the unlabeled data.
- Weak supervision only (WSO): the target network trained only on weakly labeled data.
- Full supervision only (FSO): the target network trained only on data with true labels.
- Weak supervision + fine tuning: the target network trained on weakly labeled data and fine-tuned on data with true labels.
- Weak supervision + supervision-layer fine tuning: the target network trained on weakly labeled data, after which only the supervision layer is fine-tuned on data with true labels while the representation learning layer is kept fixed.
- Weak supervision + representation fine tuning: as the previous baseline, except that the supervision layer is kept fixed during fine-tuning while the representation layer is updated.
- New label inference (NLI; Veit et al.): a setup similar to our proposed neural architecture, inspired by the teacher-student paradigm (Hinton et al.; Romero et al.), in which, instead of the confidence network predicting a confidence score for each training instance, a label-generator network is trained on set D_s to map the weak labels of instances to new labels; these new labels are then used as targets for training the target network.
- Controlled weak supervision with joint training (CWS-JT): our proposed neural architecture, in which we jointly train the target network and the confidence network by alternating between batches drawn from sets D_s and D_w, as explained in Section 2.
- Controlled weak supervision + full supervision with joint training (CWS+FS-JT): as CWS-JT, except that the parameters of the supervision layer of the target network are also updated using batches from D_s with regard to their true labels.

Additionally, we compare the performance of CWS-JT with other possible training setups:
- Separate training (CWS-ST): we consider the confidence network as a separate network, without sharing the representation learning layer, train it on set D_s, and then train the target network with the controlled weak supervision signals.
- Circular training (CWS-CT): we train the target network on set D_w; then the confidence network is trained on the data with true labels, and finally the target network is retrained with the controlled weak supervision signals.
- Progressive training (CWS-PT): a mixture of the two previous baselines inspired by Rusu et al.: we transfer the learned information from a converged target network to the confidence network using progressive training, and then train the target network with the controlled weak supervision signals.

The proposed architectures were implemented in TensorFlow (Tang; Abadi et al.). We use the Adam optimizer (Kingma and Ba) in all training algorithms. Furthermore, to prevent feature co-adaptation, we use dropout (Srivastava et al.) as a regularization technique in all models. In our setup, the confidence network predicts ĉ with a fully connected feed-forward network. Given that the confidence network is learned on the small set with true labels, to speed up training we initialize the representation learning layer parameters with pre-trained word embeddings, and we use ReLU (Nair and Hinton) as the activation function in both the target network and the confidence network. In the following we describe the setups and experimental results for each task.

4.1 Document Ranking

Collections. We use two standard TREC collections for the ad-hoc retrieval task. The first collection (Robust04) consists of news articles from different news agencies, a homogeneous collection. The second collection (ClueWeb, Category B) is a large-scale web collection of millions of English documents, which is considered a heterogeneous collection; spam documents were filtered out using the Waterloo spam scorer (Cormack et al.) with the default threshold.

Data with true labels. We take query sets that contain human judgments: a set of TREC topics for the Robust04 collection and a set of topics for the ClueWeb collection (the exact query counts were lost in extraction). For each query, we take all documents judged as relevant plus the same number of documents judged as non-relevant and form pairwise combinations among them.

Data with weak labels. We create a query set Q using the unique queries appearing in the AOL query logs (Pass et al.). This query set contains web queries initiated by real users of the AOL search engine, sampled over a period from March to May. We applied standard pre-processing (Dehghani et al.) on these queries: we filtered out the large volume of navigational queries containing URL substrings, and we also removed all non-alphanumeric characters from the queries. For each dataset, we took queries that have at least ten hits in the target corpus using our weak annotator method. Applying all these steps, we collect millions of queries for training on Robust04 and on ClueWeb (the exact counts were lost in extraction). To prepare the weakly labeled training set D_w, we take the top documents retrieved using BM25 for each query from the training query set, which in total leads to a large number of pairwise training instances.

Parameters and settings. We conducted nested cross-validation, splitting each fold so that the hyper-parameters of all models and baselines could be tuned individually on the validation set using batched GP bandits with an expected improvement acquisition function (Desautels et al.). The size and number of hidden layers for the ranker and the confidence network were selected separately, as were the initial learning rate and the dropout parameter; we considered a range of embedding sizes and used a fixed batch size in all experiments (the selected values were lost in extraction). The parameters of the network were optimized employing the Adam optimizer (Kingma and Ba), using the computed gradient of the loss to perform the back-propagation algorithm. At inference time, for each query we take the top documents retrieved using BM25 as candidate documents and re-rank them using the trained models. We use the standard implementations with their default parameters where not stated otherwise.

Results and discussion. We evaluate on the judged query sets and report two standard evaluation metrics: mean average precision (MAP) over the retrieved documents, and normalized discounted cumulative gain calculated for the top retrieved documents (nDCG@20). Statistically significant differences of MAP and nDCG values are determined using the paired t-test, with Bonferroni correction applied when improvement over all baselines is considered.

Table: performance of the proposed method and the baseline models (WA, WSO, NLI, FSO, the fine-tuning variants, CWS+FS-JT, and CWS-JT) on the Robust04 and ClueWeb datasets in terms of MAP and nDCG@20; statistically significant improvements and degradations with respect to the weak supervision baseline (WSO) are marked. (The numeric values were lost in extraction.)

Table: performance of the variants of the proposed method (CWS-ST, CWS-CT, CWS-PT, CWS-JT) on both datasets, with the same metrics and significance marking. (The numeric values were lost in extraction.)

The first table shows the performance on both datasets. Based on the results, CWS-JT provides a significant boost in performance on both datasets. There are two interesting points we want to highlight. First, among the fine-tuning experiments, updating all parameters of the target network is not the best fine-tuning strategy: updating only the parameters of the representation layer based on the true labels works better than updating only the supervision layer. This supports our design choice of a shared embedding layer that gets updated on set D_s. Second, although it seems reasonable to also make use of the true labels for updating the parameters of the target network, CWS+FS-JT does not achieve better results than CWS-JT; it even performs mostly worse. In training, the direction of parameter optimization is highly affected by the type of supervision signal, and controlling the magnitude of the gradients while changing directions by alternating between two sets with different label qualities and different supervision signal types (weak and strong) confuses the supervision layer of the target network. Fine-tuning does not suffer from this problem, since the parameters are optimized with respect to the supervision of the two sets in two separate stages. It is noteworthy that we also tried another objective function for the target network, taking both the weak and the true labels into account, which gives only slightly better results.

In the ranking task, the target network is designed in particular to be trained on weak annotations (Dehghani et al.); hence training the network with weak supervision only performs better than FSO. This is due to the fact that ranking is a complex task requiring many training instances, while relatively few true labels are available.
The performance of NLI is worse than CWS-JT: learning a mapping from imperfect labels to accurate labels and training the target network on these new labels is essentially harder than learning to filter out the noisy labels, and hence needs a lot of supervised data. The reason is that for ranking, given the number of training instances with true labels relative to the task complexity, NLI fails to generate better new labels and hence directly misleads the target network, completely failing to improve the performance.

The second table shows the performance of the different training strategies. As shown, CWS-JT and CWS-CT perform better than the other strategies. CWS-CT lets the confidence network be trained separately while still being able to enjoy the shared information learned by the target network; however, it is less efficient, as we need two rounds of training on the weakly labeled data. CWS-ST performs poorly, since the training data D_s is too small to train the confidence network without taking advantage of the vast amount of weakly annotated data; we also noticed that this strategy leads to slow convergence compared to WSO. Transferring the learned information from the target network to the confidence network via progressive training (CWS-PT) performs better than training with no sharing at all, but worse than full sharing of the representation learning layer.

Table: performance of the baseline models and the proposed method on the sentiment classification datasets; statistically significant improvements and degradations with respect to the weak supervision baseline (WSO) are marked, with Bonferroni correction applied when improvement over all baselines is considered. (The numeric values were lost in extraction.)

Table: performance of the variants of the proposed method (CWS-ST, CWS-CT, CWS-PT, CWS-JT) on the sentiment classification task, with the same significance marking. (The numeric values were lost in extraction.)

4.2 Sentiment Classification

Collections. We test our model on the twitter sentiment classification task of SemEval (Rosenthal et al.), using datasets that subsume the test sets from previous editions of SemEval. Each tweet was preprocessed so that URLs and usernames are masked.

Data with true labels. We use the SemEval train tweets plus development tweets for training and a held-out set of tweets for validation. To make our results comparable to the official runs of SemEval, we use the SemEval test sets from two editions (Rosenthal et al.; Nakov et al.).

Data with weak labels. We use a large corpus containing tweets collected over a period of two months, both for training the word embeddings and for creating the weakly annotated set D_w using the lexicon-based method explained in Section 3.

Parameters and settings. Similar to the document ranking task, we tuned the hyper-parameters of our model, including the baselines, separately with respect to the true labels of the validation set, using batched GP bandits with an expected improvement acquisition function (Desautels et al.). The size and number of hidden layers of the classifier and the confidence network were selected separately; we tested the model with different numbers of convolutional layers and selected the number of convolutional feature maps and the filter width; the initial learning rate and the dropout parameter were selected likewise, and we considered a range of embedding sizes with a fixed batch size in all experiments (the selected values were lost in extraction).

Results and discussion. We report the performance of our model and the baseline models in terms of the official SemEval metric, along with the statistical significance of the improvements using the paired t-test with Bonferroni correction. Among all methods, CWS-JT is the best performing. Unlike the ranking task, training the network on data with true labels only (FSO) performs rather well on the sentiment classification task. Learning the representation of an input sentence (a tweet) is simpler than in the ranking task, where we try to learn a representation for a query and long documents; consequently we need fewer data to learn a suitable representation, and the amount of available data with true labels already captures a rather good representation without the help of weak data, which would be impossible in the ranking task. However, the results suggest that we can still gain an improvement over this by using weak supervision. The general behavior of the different setups in the experiments is similar to the ranking task. Furthermore, updating the parameters of the supervision layer with respect to the true
labels does not make the model perform better than CWS-JT, which supports our choice of updating the representation learning layer, rather than the supervision layer, with respect to the signals from the data with true labels.

In the sentiment classification task, the performance of NLI is acceptable compared to the ranking task. First, generating new classification labels is essentially simpler; second, in this task we need to learn to represent a simpler input and to learn a simpler function to predict the labels, so the relatively bigger set of supervised data helps NLI to generate new labels. However, the performance of NLI is still lower than CWS-JT. We argue that CWS-JT takes a more conservative approach: it is in fact equipped with a soft filter that decreases the effect of noisy training examples of set D_w on the parameter updates, which makes training smoother, whereas the action of NLI on the gradient might change its direction entirely by generating a completely new label, and is consequently more prone to errors, especially when there is not enough training data to learn to generate better labels, as in the sentiment classification task.

Besides the general baselines, we also report the best performing SemEval systems, which are also CNN-based models (Rouvier and Favre; Deriu et al.). Our proposed model outperforms the best systems on both datasets.

The second table also presents the results of the different training strategies for the sentiment classification task. As shown, similar to the ranking task, CWS-JT and CWS-CT perform better than the other strategies. Although CWS-CT is slightly better, it is not statistically significantly more effective than CWS-JT, and it is less efficient than CWS-JT in training. Compared to the ranking task, sentiment classification is easier, and estimating the confidence score of instances is feasible with respect to the amount of available supervised data; therefore CWS-ST is able to improve the performance over WSO significantly. Moreover, CWS-PT fails compared to the strategies in which the representation learning layer is shared between the target network and the confidence network.

Faster learning pace. Controlling the effect of supervision when training neural networks not only improves the performance but also provides the network with more solid signals, which speeds up the learning process. The figure illustrates the loss of both networks compared to the loss of training the target network with weak supervision only, along with their performance on the test sets with respect to different amounts of training data, for sentiment classification. We observed a similar learning process for the ranking task, but we omit those plots due to the space limit (since we use nested cross-validation in the ranking task, there would be one set of plots per fold).

[Figure: loss of the target network and the confidence network compared to the loss of WSO (L_WSO) on set D_w, and performance of CWS and WSO on the test sets, with respect to different amounts of training data, for sentiment classification.]

As shown, the training loss of the target network in our model is higher than the loss of the network trained with weak supervision only, L_WSO. However, since these losses are calculated with respect to the weak labels rather than the true labels, a low training loss is not only an indication of possible overfitting but also of fitting the imperfection of the weak labels. In other words, regardless of the general problem of lack of generalization due to overfitting, in our setup of learning from weak labels, predicting labels that are too similar to the training (weak) labels, i.e. a very low training loss, is not even desirable. Looking at the loss on the validation set, however, our loss decreases faster than L_WSO, which supports the conclusion that WSO overfits to the imperfection of the weak labels, while our setup helps the target network escape this imperfection and do a good job on the validation set. In terms of performance on the test sets, compared to WSO the performance of CWS increases quickly, and CWS is able to pass the performance of the weak annotator having seen many fewer instances annotated by the weak annotator.

5. Related Work

Learning from weak or noisy labels has been studied in the literature (Frenay and Verleysen). Here we briefly review research most relevant to our work.

Semi-supervised learning. Semi-supervised learning algorithms (Zhu) have been developed to utilize weakly labeled or even unlabeled data. Self-training (Rosenberg et al.; Lee) tries to predict labels for the unlabeled data when unlabeled data are provided additionally. In particular for neural networks, some methods use greedy layer-wise pre-training of weights using unlabeled data alone, followed by supervised fine-tuning (Deriu et al.; Severyn and Moschitti; Go et al.). Other methods learn unsupervised encodings at multiple levels of the architecture jointly with a supervised signal (Ororbia et al.; Weston et al.).

Meta-learning perspective. From this perspective, our approach is similar to Andrychowicz et al., where a separate recurrent neural network, called the optimizer, learns to predict an optimal update rule for updating the
parameters of the target network. The optimizer receives the gradient of the target network and outputs an adjusted gradient matrix. As the number of parameters of modern neural networks is typically on the order of millions, the gradient matrix becomes too large to feed into the optimizer, so the approach of Andrychowicz et al. has been applied only to small models. In contrast, our approach leverages additional weakly labeled data, and we use the confidence network to predict scalar scores that calibrate the gradient updates of the target network.

Direct learning with noisy labels. Many studies have tried to address learning in the condition of imperfect labels. Noise cleansing methods have been proposed to remove or correct mislabeled instances (Brodley and Friedl). Other studies showed that weak or noisy labels can be leveraged by employing a particular architecture or by defining a proper loss function to avoid overfitting to the imperfection of the training data (Dehghani et al.; Patrini et al.; Beigman and Beigman Klebanov; Zeng et al.; Bunescu and Mooney).

Modeling the imperfection. There is also research trying to model the pattern of the noise or weakness in the labels. Some methods leverage generative models to denoise weak supervision sources that a discriminative model can learn from (Ratner et al.; Rekatsinas et al.; Varma et al.). Other methods aim to capture the pattern of the noise by inserting an extra layer or a separate module into the network (Sukhbaatar et al.; Veit et al.). Others infer better labels from the noisy labels and use them to supervise the training of the network: inspired by the teacher-student paradigm (Hinton et al.; Romero et al.; Xiao et al.), a teacher generates a new label given a training instance and its corresponding weak or noisy label. However, as we show in our experiments, this approach is not sufficient when the amount of supervised data is not enough to generate better labels.

6. Conclusion and Future Directions

Training neural networks using large amounts of weakly annotated data is an attractive approach in scenarios where an adequate amount of data with true labels is not available. In this paper, we propose a multi-task neural network architecture that unifies learning to estimate the confidence score of weak annotations and training neural networks to learn a target task with controlled weak supervision, i.e. using weak labels to update the parameters but taking the estimated confidence scores into account. This helps to alleviate updates from instances with unreliable labels that may harm the performance. We applied the model to two tasks, document ranking and sentiment classification, and empirically verified that the proposed model speeds up the training process and obtains more accurate results. As a promising future direction, we are going to investigate to what extent using weak annotations has the potential of training neural network models, and to understand the exact conditions under which our proposed method works.

References

- M. Abadi et al. TensorFlow: machine learning on heterogeneous systems. Software available from tensorflow.org.
- M. Andrychowicz, M. Denil, S. Gomez, M. Hoffman, D. Pfau, T. Schaul, N. de Freitas. Learning to learn by gradient descent by gradient descent. Advances in Neural Information Processing Systems.
- S. Baccianella, A. Esuli, F. Sebastiani. SentiWordNet: an enhanced lexical resource for sentiment analysis and opinion mining. LREC.
- E. Beigman, B. Beigman Klebanov. Learning with annotation noise. Proceedings of the Joint Conference of the Annual Meeting of the ACL and the International Joint Conference on Natural Language Processing of the AFNLP, Association for Computational Linguistics.
- C. Brodley, M. Friedl. Identifying mislabeled training data. Journal of Artificial Intelligence Research.
- R. Bunescu, R. Mooney. Learning to extract relations from the web using minimal supervision. ACL.
- G. Cormack, M. Smucker, C. Clarke. Efficient and effective spam filtering and re-ranking for large web datasets. Information Retrieval.
- M. Dehghani, S. Rothe, E. Alfonseca, P. Fleury. Learning to attend, copy, and generate for session-based query suggestion. Proceedings of the International Conference on Information and Knowledge Management (CIKM).
- M. Dehghani, A. Severyn, S. Rothe, J. Kamps. Learning to learn from weak supervision by full supervision. arXiv preprint.
- M. Dehghani, H. Zamani, A. Severyn, J. Kamps, W. B. Croft. Neural ranking models with weak supervision. Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval.
- J. Deriu, M. Gonzenbach, F. Uzdilli, A. Lucchi, V. De Luca, M. Jaggi. SwissCheese: sentiment classification using an ensemble of convolutional neural networks with distant supervision. Proceedings of SemEval.
- J. Deriu, A. Lucchi, V. De Luca, A. Severyn, S. Müller, M. Cieliebak, T. Hofmann, M. Jaggi. Leveraging large amounts of weakly supervised data for multi-language sentiment classification. Proceedings of the International World Wide Web Conference (WWW).
- A. Go, R. Bhayani, L. Huang. Twitter sentiment classification using distant supervision. Project report, Stanford.
- H. Hamdan, F. Béchet, P. Bellot. Experiments with DBpedia, WordNet and SentiWordNet as resources for sentiment analysis in micro-blogging. Second Joint Conference on Lexical and Computational Semantics (*SEM).
- G. Hinton, O. Vinyals, J. Dean. Distilling the knowledge in a neural network. arXiv preprint.
- T. Joachims. Optimizing search engines using clickthrough data. Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM.
- D. Kingma, J. Ba. Adam: a method for stochastic optimization. arXiv preprint.
- S. Kiritchenko, X. Zhu, S. Mohammad. Sentiment analysis of short informal texts. Journal of Artificial Intelligence Research.
- D.-H. Lee. Pseudo-label: the simple and efficient semi-supervised learning method for deep neural networks. Workshop on Challenges in Representation Learning, ICML.
- T. Mikolov, I. Sutskever, K. Chen, G. Corrado, J. Dean. Distributed representations of words and phrases and their compositionality. NIPS.
- V. Nair, G. Hinton. Rectified linear units improve restricted Boltzmann machines. Proceedings of the International Conference on Machine Learning.
- P. Nakov, A. Ritter, S. Rosenthal, F. Sebastiani, V. Stoyanov. SemEval task on sentiment analysis in Twitter. Proceedings of SemEval.
- A. Ororbia, C. L. Giles, D. Reitter. Learning a deep hybrid model for semi-supervised text classification. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
- T. Desautels, A. Krause, J. Burdick. Parallelizing exploration-exploitation tradeoffs in Gaussian process bandit optimization. Journal of Machine Learning Research.
- G. Pass, A. Chowdhury, C. Torgeson. A picture of search. InfoScale.
- B. Frénay, M. Verleysen. Classification in the presence of label noise: a survey. IEEE Transactions on Neural Networks and Learning Systems.
- G. Patrini, A. Rozza, A. Menon, R. Nock, L. Qu. Making deep neural networks robust to label noise: a loss correction approach. arXiv preprint.
- A. Ratner, C. De Sa, S. Wu, D. Selsam, C. Ré. Data programming: creating large training sets, quickly. Advances in Neural Information Processing Systems.
- T. Rekatsinas, X. Chu, I. Ilyas, C. Ré. HoloClean: holistic data repairs with probabilistic inference. arXiv preprint.
- S. Robertson, H. Zaragoza. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval.
- A. Romero, N. Ballas, S. Ebrahimi Kahou, A. Chassang, C. Gatta, Y. Bengio. FitNets: hints for thin deep nets. arXiv preprint.
- C. Rosenberg, M. Hebert, H. Schneiderman. Semi-supervised self-training of object detection models. Seventh IEEE Workshop on Applications of Computer Vision.
- S. Rosenthal, P. Nakov, S. Kiritchenko, S. Mohammad, A. Ritter, V. Stoyanov. SemEval task on sentiment analysis in Twitter. Proceedings of the International Workshop on Semantic Evaluation (SemEval).
- M. Rouvier, B. Favre. Polarity embedding fusion for robust sentiment analysis. Proceedings of SemEval.
- A. Rusu, N. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, R. Hadsell. Progressive neural networks. arXiv preprint.
- A. Severyn, A. Moschitti. Twitter sentiment analysis with deep convolutional neural networks. Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM.
- A. Severyn, A. Moschitti. UNITN: training deep convolutional neural network for Twitter sentiment classification. Proceedings of the International Workshop on Semantic Evaluation (SemEval), Association for Computational Linguistics, Denver, Colorado.
- N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research.
- S. Sukhbaatar, J. Bruna, M. Paluri, L. Bourdev, R. Fergus. Training convolutional networks with noisy labels. arXiv preprint.
- Y. Tang. TF.Learn: TensorFlow's high-level module for distributed machine learning. arXiv preprint.
- P. Varma, B. He, D. Iter, P. Xu, R. Yu, C. De Sa, C. Ré. Socratic learning: correcting misspecified generative models using discriminative models. arXiv preprint.
- A. Veit, N. Alldrin, G. Chechik, I. Krasin, A. Gupta, S. Belongie. Learning from noisy large-scale datasets with minimal supervision. Conference on Computer Vision and Pattern Recognition.
- J. Weston, F. Ratle, H. Mobahi, R. Collobert. Deep learning via semi-supervised embedding. Neural Networks: Tricks of the Trade, Springer.
- T. Xiao, T. Xia, Y. Yang, C. Huang, X. Wang. Learning from massive noisy labeled data for image classification. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
- D. Zeng, K. Liu, Y. Chen, J. Zhao. Distant supervision for relation extraction via piecewise convolutional neural networks. EMNLP.
- X. Zhu. Semi-supervised learning literature survey.
Machine Learning Applications in the Lifetime of Materials

Xiaojiao

Abstract. Materials design and development typically takes several decades from the initial discovery to commercialization. Beyond the traditional trial-and-error development approach, and with the accumulation of data from experimental and computational results, data-based machine learning is becoming an emerging field for materials discovery, design, and property prediction. This manuscript reviews the history of the materials science discipline, the common machine learning methods used in materials science, and specifically how they are used in materials discovery, design, synthesis, and even failure detection and analysis after materials are deployed in real applications. Finally, the limitations of machine learning applications in materials science and the challenges of this emerging field are discussed.

Keywords: machine learning; materials discovery and design; materials synthesis; failure detection

1. Introduction

Materials science has a long history that dates back to the Bronze Age. However, it was not until the 16th century that the first book on metallurgy was published, marking the beginning of systematic studies in materials science. Researches in materials science were purely empirical until theoretical models were developed. With the advent of computers in the last century, numerical methods to solve these theoretical models became available, ranging from DFT (density functional theory) based quantum mechanical modeling of the electronic structure for optoelectronic property calculations, to continuum-based finite element modeling of mechanical properties. Multiscale modeling that bridges various time and spatial scales was also developed in materials science to better simulate real complex systems. Even so, it takes several decades from materials discovery through development to commercialization. Even though physical modeling can reduce the amount of time by guiding experimental work, its limitations are also obvious: DFT, used for calculating the optoelectronic properties of functional materials, is limited to materials without defects, an assumption far from reality, and the newer concept of multiscale modeling is still far away from large-scale real industrial application. The traditional way of materials development is impeding the progress of the field and of the relevant technological industries.

A large amount of complex data is being generated by experiments and especially by simulations, and the results have been published and archived. Analyzing this data, which includes materials property values, processing conditions, and microstructural images, is becoming increasingly challenging for researchers. Inspired by the Human Genome Initiative, the Obama administration launched the Materials Genome Initiative, hoping to reduce the current materials development time by half. With increasing computing power and the development of machine learning algorithms, materials informatics is increasingly becoming another paradigm in the field.

Researchers are already using machine learning methods for materials property prediction and discovery. Machine learning forward models are used for materials property prediction after being trained on data from experiments and physical simulations. Bhadeshia et al. applied neural network techniques to model the creep properties and phase structure of steel. Crystal structure prediction is another area of study for machine learning: thanks to the large amount of structural data in crystallographic databases, nearest-neighbor methods can be used to identify a material's structure type based on its neighbors' structure types. Machine learning has also been applied to materials discovery, searching the compositional and structural space for desired properties, essentially solving a constrained optimization problem. Baerns et al. were able to find an effective multicomponent catalyst for the oxidation of low-concentration propane using a genetic algorithm together with a neural network. Reviews of machine learning applications in materials science already exist: Dane Morgan and Gerbrand Ceder reviewed data mining methods in materials development, and Tim Mueller, Aaron Gilad Kusne, and Rampi Ramprasad reviewed the progress of machine learning applications in materials science, specifically in phase diagram and crystal structure and property prediction. However, these reviews are mostly
based on applications in fundamental materials science. This review instead takes a practical approach, surveying machine learning applications across the material design, development, and deployment stages. We first discuss the data problems specific to materials science. Machine learning concepts and widely used methods are then introduced, followed by reviews of machine learning applications in materials discovery, design, development, and deployment. The relation of data-driven research to traditional experimental and physical modeling approaches is discussed afterwards. Finally, the limitations of machine learning applications in materials science and the challenges of this emerging field are pointed out for researchers in this niche area.

2. The Data Problem in Materials Science

The successful application of informatics in biology, astronomy, and business has inspired similar applications in materials science. However, materials science differs from those subjects due to its unique characteristics, and researchers have been debating whether there is a big data problem in materials science. The size of materials data is nothing comparable to biology data: the largest existing databases based on experimental results hold comparatively few materials data records. However, rapid progress in computational science and microscopy techniques is resulting in enormous amounts of output data. Furthermore, materials science data tends to be complex and heterogeneous in terms of sources and types, ranging from discrete numerical values to qualitative descriptions of materials behavior and imaging data. Data in materials science also exhibits the veracity characteristic of big data problems, acknowledging the practical reality that data can be missing and that there are uncertainties in the data. According to the volume, variety, velocity, and veracity characterization of big data, materials science does have a big data problem. With the emergence of big data in materials science, extracting hidden information from complex data and interpreting the resulting information is becoming increasingly important for materials design and development.

3. Machine Learning Methods

Machine learning is a branch of artificial intelligence in which computers learn from existing data, without being explicitly programmed, to make predictions on new data by building a model from input samples. Depending on the assigned task, machine learning can be classified into three categories. In supervised learning, the algorithm is trained on a set of input values with labeled output values and is then used to predict the output values corresponding to unseen input values. In unsupervised learning, there are no labelled output values in the training data, and the algorithm is used to discover patterns in the input values. In reinforcement learning, a program interacts with its environment dynamically to maximize accumulated rewards; reinforcement learning has not been much used in the materials science field and hence is not introduced in detail in this manuscript. A supervised learning task is either a classification problem or a regression problem, depending on whether the output value is discrete or continuous.

Method workflow. A typical machine learning workflow comprises several steps: raw data collection; data preprocessing (filling in missing data, handling outliers, data transformation); feature engineering, including feature selection and extraction (e.g. principal component analysis); and model selection, training, validation, and testing. The detailed workflow is presented in the figure.

[Figure: flowchart of a typical machine learning method.]

To select the best algorithm for a particular task, model evaluation is important, and different algorithms are evaluated with different metrics. For a classifier, evaluation metrics include the confusion matrix, AUC (area under the curve), precision, recall, the F-measure, and the Kolmogorov-Smirnov chart. The confusion matrix is a matrix of four elements: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). Accuracy measures the fraction of correct predictions, (TP + TN) / (TP + TN + FP + FN); sensitivity (the true positive rate) is TP / (TP + FN), and specificity (the true negative rate) is TN / (TN + FP). AUC is the area under the ROC curve, which considers the relation between sensitivity and specificity; the greater the area under the curve, the more accurate the model. Precision, TP / (TP + FP), shows what fraction of the positive predictions are in fact false positives, while recall, TP / (TP + FN), is the true positive rate. The F-measure, another measure of model accuracy, is defined as the weighted harmonic mean of precision and recall, F = 2 * precision * recall / (precision + recall), and tests the balance between precision and recall. The Kolmogorov-Smirnov statistic evaluates how well the model separates the positive and negative distributions; a higher value means better separation.
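These classification metrics follow directly from the confusion-matrix counts; a minimal sketch using the standard formulas (the example counts are arbitrary, not from any materials dataset):

    def classification_metrics(tp, tn, fp, fn):
        """Standard evaluation metrics from confusion-matrix counts."""
        accuracy    = (tp + tn) / (tp + tn + fp + fn)
        sensitivity = tp / (tp + fn)       # recall / true positive rate
        specificity = tn / (tn + fp)       # true negative rate
        precision   = tp / (tp + fp)
        f_measure   = 2 * precision * sensitivity / (precision + sensitivity)
        return {"accuracy": accuracy, "sensitivity": sensitivity,
                "specificity": specificity, "precision": precision,
                "recall": sensitivity, "F": f_measure}

    # Example: 80 true positives, 90 true negatives, 10 false positives,
    # 20 false negatives.
    print(classification_metrics(80, 90, 10, 20))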
For regression algorithms, the evaluation metrics include the mean absolute error (MAE), the root mean squared error (RMSE), and the coefficient of determination R², which measures the percentage of the total variability explained by the regression model.

Method comparison. Common machine learning algorithms include SVM (support vector machine), ANN (artificial neural network), logistic regression, and decision trees. Support vector machine algorithms are used to find the hyperplane that separates different classes with the highest margin. The advantages of SVM are that its solution is global and unique and that its computational complexity does not depend on the dimension of the input space; it is also less prone to overfitting. However, SVM does not work well with unbalanced data. Artificial neural networks are inspired by the biological brain: artificial neurons are connected to mimic the connections of neurons in the brain, and multiple hidden layers of neurons add complexity to the network architecture. The strength of the ANN is its flexibility in representing both nonlinear and linear functions; however, it needs a large amount of training data, it is prone to overfitting, and hyperparameter tuning is tedious and troublesome. The decision tree, another commonly used algorithm and the basis of ensemble classifiers, comprises a root node, internal nodes, branches, and leaf nodes. Along the depth of the decision tree, the tested data is progressively split based on input feature values; the decision process follows the branches, each internal node descending from its parent node, until it reaches a leaf node. Ensemble methods such as random forest and AdaBoost are based on constructing a large number of trees from bootstrap samples and iteratively building an ensemble of weak learners, in an attempt to generate a strong overall model. Ensemble methods usually perform better than the basic machine learning algorithms in terms of reducing variance and bias.

4. Machine Learning Applications in Materials Discovery and Design

An important concept in the materials science field is the processing-structure-property relationship: developing materials that meet required performance and property targets goes back to controlling the processing conditions and the structural composition of the materials. Hence, understanding how processing conditions and structural composition affect materials properties and performance is the first step towards materials design. Traditionally, controlled experiments were conducted to isolate the effect of one variable; however, variables are often correlated, and it is infeasible to isolate each variable by experimental testing. Data mining can help reveal hidden relations between the large number of materials parameters and processing conditions on the one hand and the dependent materials properties on the other. The traditional way of materials development is thus being disrupted and reshaped by making use of the available data.

4.1 Materials Property Prediction

Materials design first requires an understanding of how the desired properties of materials (yield strength, toughness, ultimate tensile strength, fatigue life, etc.) are affected by the intrinsic microstructure, chemical composition, and crystal structure, and by external processing and loading conditions such as temperature. A machine learning algorithm can derive the quantitative relation between the independent and dependent variables and hence make predictions when enough training data is available and a physical model either does not exist or is too complicated to apply. A neural network algorithm was used for the toughness prediction of ferritic steel welds due to its ability to handle complex models. Toughness was studied as a function of chemical composition, microstructure, welding process, and testing temperature. The influence of each variable on toughness is shown in the figure. The interactions between different variables can also be predicted by the neural network algorithm, as shown in the figure: the crossing of two toughness curves plotted as functions of temperature at different manganese concentrations indicates that at higher temperatures the influence of manganese on toughness is reduced and can even become negative.

[Figure: bar chart showing a measure of the significance of each input variable in influencing toughness.]
[Figure: variation of normalized toughness as a function of manganese concentration and test temperature.]

ANN has also been used to predict constitutive relations.
For instance, the constitutive flow behavior of steel was predicted with strain, log strain rate, and temperature as inputs and flow stress as the output. The predicted results show good correlation with the experimental values, indicating the excellent capability of the developed model for predicting flow stress (see the figure). For austenitic stainless steel grades, the ultimate tensile strength, yield strength, tensile elongation, strain-hardening exponent, and strength coefficient were likewise predicted by an ANN as functions of temperature, strain, and strain rate, using a feed-forward back-propagation learning model; the model accuracy was verified by the correlation coefficient, the average absolute error, and its standard deviation.

Fatigue properties are always among the most difficult ones to predict, due to the high cost and long duration of fatigue testing and the prevalence of structural failures caused by fatigue, and existing physical models are either lacking in generality or fail to give quantitative indications. Agrawal et al. predicted the fatigue strength of steels using data from the Japan National Institute for Materials Science (NIMS) MatNavi database. The predictive models used, among them neural networks, decision trees, and multivariate polynomial regression, were able to achieve a high R² value.

[Figure: comparison of experimental values and the flow stress predicted by the ANN, for the training data and the testing data.]

4.2 Inverse Design of Materials

Understanding how mechanical properties are influenced by a material's internal and external factors can help reduce the search space for inverse materials design tasks. The inverse problem is nevertheless challenging because of the possibility of multiple solutions and the enormous structural dimension. Machine learning applications have shown promise in inverse materials discovery and design by reducing the search path and the search region. Ruoqian Liu et al. developed a machine learning method for the inverse design of alloy microstructures with enhanced elastic, plastic, and magnetostrictive properties. A systematic approach consisting of random data generation, feature selection, and classification was developed. First, features that quantitatively describe microstructures and properties were developed, and randomly generated structure-property pairs were simulated to form the desired and least-desired classes. Two crucial steps, search path refinement and search space reduction, were conducted prior to the actual searching, to find efficient orders of features to search and promising search regions for the features. The method was validated on five design problems that involve the identification of microstructures satisfying both linear and nonlinear property constraints. The framework shows supremacy over traditional optimization methods, cutting much of the running time while achieving optimality that would otherwise not be attained.

5. Machine Learning Applications in Materials Processing and Synthesis

The design of materials is facilitated by data-driven machine learning approaches; however, the commercialization of new materials is still impeded by the difficulty of working out how to synthesize them. To disrupt trial-and-error synthesis methods, the Olivetti group at MIT is working on creating a predictive synthesis system for advanced materials processing, building a curated database of solid-state materials synthesis methods compiled from thousands of materials synthesis journal articles. The database is accompanied by algorithms, developed with machine learning approaches, that are capable of predicting synthesis routes for novel materials based on chemical formulae and known physical input data.

Even failed experiments can be used in machine learning algorithms for materials discovery and synthesis, which truly shows the power of data mining: only a small amount of the information generated in research work is published, and the archived data is rarely used to its full potential. Paul Raccuglia et al. trained a machine learning model on data from failed hydrothermal syntheses to predict reaction outcomes under different conditions of temperature, concentration, reactant quantity, and acidity. The model was validated and tested on previously untested data and showed better performance than human researchers with years of experience, being able to predict the conditions for new organically templated inorganic product formation with a high success rate.
predict conditions new organically templated inorganic product formation success rate machine learning application microstructure recognition failure analysis microstructure damage failure another area machine learning find applications traditionally materials scientist examines sem opm images samples failure analysis similar medical doctors analyze images patients increasing penetration machine learning methods medical imaging analysis kind application materials imaging expect happen well fact already reports machine learning computer vision researches materials microstructure automatic recognition aritra applied computer vision methods identify images contain dendritic morphology classify whether dendrites ong longitudinal direction traverse direction exist image extract features reduce feature dimensions used visual bag words texture shape statistics pre convolutional neural network classification conducted using support vector machine nearest neighbors random forest models shown convolutional neural network performs best terms micrograph recognition feature extraction confirmed reports classification methods able reach great accuracy task another example automatic measurement ferrite volume fraction binary phase structures based gpf graph processing framework algorithm developed hafiz muhammad tanveer machine learning algorithm also used failure detection examining microstructure images matthias demant introduced enhanced machine learning algorithm crack detection photoluminescence images wafers detection algorithm based classification cracks due comparison crack descriptions previous trained crack data crack centers identified detecting features appearing star structure grain boundary information extracted additional images visible range avoid false detections support vector machine used train labelled data crack structures classification algorithm able achieve high precision sensitivity crack length greater elaheh rabiei developed dynamic bayesian network dbn based variation modulus elasticity estimate damages prognostic approach crack observable yet various sources information taken account reduce uncertainties dbn applied relate variables causal correlation relationship degradation model parameters learned joint particle filtering technique support vector regression models applied define unknown nonparametric nonlinear correlation input variables precise damage estimation crack initiation prediction metallic alloy fatigue confirmed experimental observations method different traditional empirical damage models paris law since direct damage indicators crack required predict damage stage thus underling damages monitored earlier stage easy imagine manufacturing companies monitor jet engine data predict whether needs inspection maintenance fig overview crack detection algorithm limitations machine learning materials science applications although machine learning widely used lot fields increasingly used materials science machine learning means panacea without understanding limitations blindly apply every possible area lead wrongful predictions waste time effort first machine learning system opaque making hard debug machine learning prediction heavily relies training data machine learning often overfitting overfitting problems needs concerned taking prediction results consideration input data quality needs ensured interpolation extrapolation lead problems training data sufficient interpolated extrapolated regime training data noisy hence error bar prediction needed evaluating prediction 
accuracy machine learning explain results physics point view materials scientists often interested understanding mechanism certain phenomena machine learnin elucidate mechanism since works data driven model training prediction interpretation machine learning results needs domain knowledge without understanding underline physics nonsense predictions recognized even process feature selection good understanding causal relationship variable dependent properties helpful selecting effective features build less complicated models machine learning also inseparable experiment physical simulation typically used supplemental tool materials discovery design property prediction machine learning training data either experimental results physical simulation results machine learning models also rely experiments simulations validation advance field people different discipline experimentalist computational scientist collaborate data collection storage curation interdisciplinary researchers need trained understand materials science machine learning literature gnesin origin metallurgical technologies bronze age powder metall met karl alfred von zittel history geology palaeontology hafner christopher wolverton gerbrand ceder mrs bulletin volume september ashkan vaziri arvind gopinath vikram deshpande journal mechanics materials structures vol merryn tawhai jeff bischoff daniel einstein ahmet erdemir trent guess jeff reinbolt ieee eng med biol mag lesar richard alan bryden multiscale design materials ames laboratory conference papers posters presentations whittingham june electrical energy storage intercalation chemistry science neugebauer tilmann hickel wiley interdiscip rev comput mol sci sep holdren material genome initiative strategic plan technical report december https national science technology council design ferritic steels isij sourmail bhadeshia mackay neural network model creep strength austenitic stainless steels bergerhoff hundt sievers brown inorganic compu white rodgers lepage crystmet database structures powder patterns metals acta cryst rodemerck wolf buyevskaya baerns synthesis screening catalytic study search catalyst oxidation chem eng dane morgan gerbrand ceder handbook materials modeling tim mueller aaron gilad kusne rampi ramprasad abby parrill kenny lipkowitz reviews computational chemistry volume doi villars iwata pauling file verifies reveals principles materials science supporting four cornerstones given nature chem met alloys belianinov vasudevan strelcov big data deep data scanning electron microscopies deriving functionality multidimensional data sets adv str chem imaging krishna rajan materialstoday volume issue pages xinjian guo yilong yin cailing dong gongping yang guangtong zhou fourth international conference natural computation volume jesse davis mark goadrich proceeding icml proceedings international conference machine learning pages pittsburgh pennsylvania usa june matthew boutell jiebo luo xipeng shen christopher brown pattern recognition volume issue september pages luengo herrera soft comput chen wang tourism management volume issue february pages boser guyon vapnik algorithm optimal margin classifiers fifth annual workshop computational learning theory pages pittsburgh computational materials science theodoridis koutroumbas pattern recognition fourth academic press massachusetts jianchang mao mohiuddin computer volume issue pages mar diertrich heller yang data science big data analytics indianapolis wiley suneetha ijcse international journal computer science 
engineering vol abraham wyner matthew olson justin bleich apr chih hang tung journal semiconductor technology science september bhadeshia mackay svensson materials science technology lin jun zhang jue zhong computational materials science raghuram karthik desu hansoge nitin krishnamurthy aditya balu amit kumar gupta swadesh kumar singh mater res technol kumar materials science engineering april issn kumar materials science engineering doi schooling modelling fatigue nickel base alloys thesis university cambridge agrawal deshpande cecen kalidindi exploration data science techniques predict strength steel composition processing parameters integr mater manuf innovation ankit agrawal alok choudhary apl materials ruoqian liu abhishek kumar zhengzhang chen ankit agrawal veera sundararaghavan alok choudhary scientific reports doi paul raccuglia katherine elbert philip adler casey falk malia wenny aurelio mollo matthias zeller sorelle friedler joshua schrier alexander norquist nature may miles wernick yongyi yang jovan brankov grigori yourganov stephen strother ieee signal process mag jul aritra chowdhury elizabeth kautz bulent yener daniel lewis computational materials science gatys ecker bethge exture synthesis controlled generation natural stimuli using convolutional neural networks gatys ecker bethge neural algorithm artistic style hafiz muhammad tanveer hafiz muhammad tahir mustafa waleed asif munir ahmad ijacsa international journal advanced computer science applications vol matthias demant marcus oswald tim welschehold sebastian nold sebastian bartsch stephan schoenfelder stefan rein presented european solar energy conference exhibition september amsterdam netherlands elaheh rabiei enrique lopez droguett mohammad modarres advances mechanical engineering vol mohsen ostad shabani ali mazahery metallurgical materials transactions volume june ashley white mrs bulletin volume august
| 5 |
Detect and segment cysts in lung CT images without manual annotation

Ling …, Vissagan …, Ronald …, Joel …, Jianhua … (Radiology and Imaging Sciences Department, and Cardiovascular and Pulmonary Branch, NHLBI, National Institutes of Health (NIH), Bethesda)

ABSTRACT. Image segmentation is a fundamental problem in medical image analysis. In recent years, deep neural networks have achieved impressive performance on many medical image segmentation tasks through supervised learning on large manually annotated datasets. However, expert annotation of big medical datasets is tedious and expensive, and sometimes unavailable. Weakly supervised learning can reduce the effort of annotation, but a certain amount of expertise is still required. Recently, deep learning has shown the potential to produce predictions more accurate than the original, erroneous labels it is trained on. Inspired by this, we introduce a weakly supervised self-learning method for cystic lesion detection and segmentation in lung CT images that requires no manual annotation. The method works in a recursive manner: the segmentation generated in previous steps, first by an unsupervised segmentation and then by neural networks, is used as the ground truth for the next level of network learning. Experiments on a cystic lung lesion dataset show that deep learning can perform better than its initial unsupervised annotation and progressively improve itself.

Index terms: convolutional neural networks, weakly supervised learning, medical image segmentation, graph cuts.

1. INTRODUCTION

Image segmentation is a fundamental problem in medical image analysis, and classic segmentation algorithms are usually formulated as optimization problems relying on cues from image features. In recent years, deep learning has made much progress on image segmentation tasks: FCN and HED have achieved dominant performance on many segmentation benchmarks, and U-Net is competitive enough for many medical applications. The success of deep learning based segmentation, however, requires supervised learning on large manually annotated data, and expert annotations for big medical datasets are expensive to obtain or even unavailable. For example, manual annotation of the hundreds of cysts per CT volume in our dataset is not feasible in a recent clinical study of lymphangioleiomyomatosis (LAM). [Fig. 1: examples of cystic lung lesions at different severity levels (mild, moderate, severe); manual annotation shown in red.] (This research was supported in part by the Intramural Research Program of the National Institutes of Health Clinical Center. The authors thank Zhang (Beijing Institute of Big Data Research) for inspiring discussion and NVIDIA for the Titan X Pascal GPU donation.)

To alleviate the annotation burden, researchers have exploited weakly supervised methods for deep learning based segmentation. One direction is to reduce the effort, time, and expertise of annotation: combining FCN with active learning needs only part of the training data to train a model with performance comparable to training on all the data. Another direction applies image-level annotation, incorporating FCN into a multiple instance learning framework; however, the expertise of physicians is still needed for assigning the image-level annotations and estimating lesion size.

Recently, deep learning has shown the potential to beat its teacher, that is, to perform better than the training data labels produced by experts, or even to learn without human knowledge, as in AlphaGo Zero. Specifically, for classification and semantic segmentation tasks, when provided with data labels containing a certain amount of errors, deep learning can produce lower errors than the original erroneous labels. In addition, with an assisting algorithm (Monte Carlo tree search for the game of Go, GrabCut for image segmentation), training labels can be generated iteratively and recursively to update the neural network parameters and achieve better performance. [Fig. 2: learning to segment medical images without manual annotation; the data stream and annotation stream flow from an unsupervised segmentation into segmentation nets at levels 1, 2, ..., each recursively trained with the previous network's segmentation as training labels.]

In this paper, we propose a weakly supervised self-learning approach to LAM cyst detection and segmentation.
As shown in Fig. 1, detection and segmentation of cysts is a challenging task due to the large number of cysts, the great variation in cyst sizes, severely touching cysts, and inconsistent image quality (image noise, motion artifacts, etc.). Moreover, it is infeasible to obtain manual segmentation in large LAM studies. Our method differs from other weakly supervised methods in that it automatically learns from the medical images without any manual annotation and without a segmentation network pre-trained on labeled datasets. Starting from classic segmentation techniques, specifically unsupervised k-means clustering with spatial information followed by graph cuts refinement, an initial annotation is generated and serves as the labels for a segmentation network (U-Net in this paper). In self-learning, new networks are recursively trained with the previous network's predictions as training labels. Improved segmentation networks can thus be trained, based on two hypotheses: (1) deep learning may generate better predictions than its training data labels, and (2) better training data labels produce better predictions. Note that the value k of the clustering is the only value provided to the framework by a human.

2. METHODS

Given a medical image dataset without manual annotation, our method works in a recursive manner (Fig. 2): the previously generated annotations, first from the unsupervised segmentation and then from the segmentation networks, serve as the inputs for the next level of network learning.

2.1. Unsupervised segmentation by clustering. The unsupervised segmentation involves k-means clustering on a feature space formed by the pixel intensity and the average and median pixel intensities in a local window, combined with spatial information; it classifies the image by grouping similar pixels in the feature space into clusters. The number of clusters k needs to be manually set for different applications; for cyst segmentation in CT images we set k = 3 to obtain three clusters, with cluster centers indicating cyst, lung tissue, and others, respectively. We then construct a graph energy function consisting of a data term and a pixel continuity term. The data term is assigned the squared intensity differences between pixels and cluster centers; the pixel continuity term is zero when two neighboring pixels have the same label, and a constant value (set by empirical evaluation on our data) otherwise. The max-flow algorithm is used to optimize the energy function, and globally optimal pixel labels are obtained.

2.2. Segmentation network. After obtaining the initial annotation of all images in the dataset using k-means with spatial information and graph cuts, U-Net is used as the network architecture to learn a better segmentor, given its efficiency and accuracy in medical image segmentation. The U-Net is constituted by four layers of contraction (pooling) and four layers of expansion, with skip connections between the contracting path and the expansive path to strengthen the context information in higher resolution layers. For U-Net training, the inputs are the raw images at their original resolution and the outputs are the annotations; the loss utilized in training focuses on distinguishing cysts from lung tissue, ignoring the background labels. One critical problem in training a U-Net on medical images is that the label distribution is highly imbalanced: there are many fewer positive samples than negative, or vice versa. In our experiments, we use the distribution of cysts and lung tissue in each image to balance the positive and negative classes in the loss function, and we also avoid sampling slices without any cyst during training.

2.3. Recursive learning. The trained U-Net becomes the teacher: it is applied to segment the images in the training set and generate a new set of cyst labels, which are used as the new ground truth to train the next-level (student) U-Net. The network parameters of the previous U-Net are transferred to initialize the next network, and a lower learning rate is used to train it. Self-learning terminates when the similarity between successive segmentations is larger than a threshold.
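A minimal sketch of the unsupervised initialization just described appears below. It is an illustration rather than the authors' code: it assumes the PyMaxflow library for the min-cut step, simplifies the three-label problem to a binary cyst-versus-rest cut, and uses placeholder values for the window size, the smoothness constant, and the feature set.

```python
# Illustrative sketch of the initial annotation step: k-means on intensity
# features, refined by a binary graph cut. Assumes PyMaxflow is installed
# (pip install PyMaxflow); beta and window are placeholder values, and the
# paper's 3-label optimization is simplified to a binary cyst-vs-rest cut.
import numpy as np
import maxflow
from scipy.ndimage import uniform_filter, median_filter
from sklearn.cluster import KMeans

def initial_annotation(img, n_clusters=3, window=5, beta=0.5):
    # Feature space as in Sec. 2.1: intensity plus local mean and median.
    feats = np.stack([img,
                      uniform_filter(img, size=window),
                      median_filter(img, size=window)], axis=-1)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    km.fit(feats.reshape(-1, feats.shape[-1]))
    # Cysts are the darkest structures on CT: take the lowest-intensity
    # center as "cyst" and the highest as "rest" (a simplification).
    cyst_c = km.cluster_centers_[:, 0].min()
    rest_c = km.cluster_centers_[:, 0].max()

    # Graph cut: data term = squared intensity difference to cluster
    # centers; continuity term = beta for neighbors with different labels.
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(img.shape)
    g.add_grid_edges(nodes, beta)                  # pairwise smoothness
    g.add_grid_tedges(nodes, (img - cyst_c) ** 2,  # unary (data) terms
                             (img - rest_c) ** 2)
    g.maxflow()
    return g.get_grid_segments(nodes)              # True = cyst

# Toy usage: a dark blob ("cyst") on brighter "lung tissue".
img = np.random.normal(0.7, 0.05, (64, 64))
img[20:30, 20:30] = np.random.normal(0.2, 0.05, (10, 10))
print("cyst pixels:", int(initial_annotation(img).sum()))
```

In the paper's full formulation the cut optimizes all three labels jointly; the binary cut above keeps the sketch short while preserving the data-term plus continuity-term structure of the energy function.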
3. EXPERIMENTAL METHODS AND RESULTS

We evaluated the method on a LAM dataset of CT volumes from patients studied under a LAM natural history protocol. High-resolution CT scans of the chest were obtained; the scans contained varying numbers of slices, with a range of slice thicknesses and reconstruction intervals. U-Net was implemented using Caffe, and we train the U-Net models from scratch. Three U-Net models were trained progressively in the recursive framework, one per level; the initial learning rate decreases by a constant factor at every next level, thanks to the transfer learning from the previous level, and each level is trained for a fixed number of iterations per image, since this provided better performance. In this preliminary experiment, the proposed method was tested on a Dell Tower workstation with a Xeon CPU and an NVIDIA Titan X Pascal GPU.

The model was trained on a subset of the volumes; the remaining volumes, including mild, moderate, and severe cases, were left as unseen testing data. To evaluate segmentation performance, a medical student manually detected and segmented one slice per testing volume; manual segmentation is tedious and took several working days. The quantification metrics include the Dice coefficient and the absolute difference of cyst scores (ADCS), where the cyst score is defined as the percentage of the lung region occupied by cysts, a critical clinical factor in LAM assessment. It is worth mentioning that, differing from the traditional concept of a training set, our model does not learn from manual annotation of the training data (none is available); therefore, the training data can also be seen as testing data for performance evaluation. Six images from volumes with large ADCS between the unsupervised segmentation results and the U-Net results were additionally selected from this dataset, and manual segmentation was conducted on selected slices for evaluating the progressive improvement of our framework. In addition, we compare our method with a cyst segmentation method based on thresholding followed by post-processing techniques.

[Table 1: performance comparison on unseen images; teacher = k-means with spatial information + graph cuts, students = successive U-Nets; metrics are Dice and ADCS (absolute difference of cyst scores); bold indicates best results.] [Table 2: performance comparison on images of the learning set with large ADCS between the unsupervised and U-Net segmentations; same metrics.] Table 1 shows the performance on unseen images, both of good image quality and noisy: U-Net self-learning achieves higher segmentation accuracy than its teacher, and the improvement seems to stop at a certain level. Similar trends can be observed in Table 2 for the images in the learning set with large ADCS. Compared with the manual annotation, U-Net self-learning performs substantially better than the unsupervised teacher; the lower Dice of the U-Nets in Table 1 compared with Table 2 is mainly caused by lower Dice values on the mild cases. The proposed method is also more accurate than the thresholding-based method. Three examples in Fig. 3 show that the proposed self-learning strategy recursively improves the segmentation performance: given an inaccurate segmentation provided by one level, U-Net learning can already correct some over-segmentation and under-segmentation of cysts, thus achieving higher sensitivity and higher specificity, and higher levels of U-Net tend to obtain more accurate cyst boundaries, especially for overlapping cysts. The whole training process takes hours, while testing is fast.

4. CONCLUSIONS

We report the first results of weakly supervised self-learning to detect and segment cysts in lung CT images without manual annotation. Starting from a classic unsupervised segmentation, deep learning shows the potential to perform even better through levels of self-learning. In future work, we will extend the method to segment other medical images for which slice-level manual annotation is infeasible. [Fig. 3: three examples (good image quality and noisy) showing the segmentation results obtained at each level, with the given manual annotation as reference; some results are not shown due to space constraints.]
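The recursion and the two evaluation metrics described above lend themselves to a compact sketch. The fragment below is schematic rather than the authors' pipeline: a scikit-learn MLP on per-pixel features stands in for the U-Net, and the stopping threshold is a placeholder. It shows the teacher-student loop, in which each model is trained on the previous model's predictions until successive segmentations stabilize, together with the Dice and cyst-score computations behind ADCS.

```python
# Schematic self-learning loop: each "student" is trained on the labels
# predicted by the previous "teacher"; training stops once two successive
# segmentations are nearly identical. An MLP on per-pixel features stands
# in for the U-Net; the 0.99 stopping threshold is a placeholder.
import numpy as np
from sklearn.neural_network import MLPClassifier

def dice(a, b, eps=1e-8):
    return (2.0 * np.logical_and(a, b).sum() + eps) / (a.sum() + b.sum() + eps)

def cyst_score(cyst_mask, lung_mask):
    # Percentage of the lung region occupied by cysts: the clinical factor
    # behind ADCS = |cyst_score(prediction) - cyst_score(reference)|.
    return 100.0 * np.logical_and(cyst_mask, lung_mask).sum() / lung_mask.sum()

def self_learning(features, init_labels, lung_mask, max_levels=3, tol=0.99):
    labels = init_labels                      # level 0: unsupervised teacher
    for level in range(1, max_levels + 1):
        student = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                                random_state=level)
        student.fit(features.reshape(-1, features.shape[-1]),
                    labels.reshape(-1))
        pred = student.predict(
            features.reshape(-1, features.shape[-1])).reshape(labels.shape)
        print(f"level {level}: dice vs previous = {dice(pred, labels):.3f}, "
              f"cyst score = {cyst_score(pred, lung_mask):.1f}%")
        if dice(pred, labels) > tol:          # successive segmentations agree
            return pred
        labels = pred                         # predictions become new labels
    return labels
```

The paper additionally transfers the previous network's weights and lowers the learning rate at each level, which the stand-in classifier does not model.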
REFERENCES
[1] Sonka, Hlavac, Boyle. Image Processing, Analysis, and Machine Vision. Cengage Learning.
[2] Long, Shelhamer, Darrell. Fully convolutional networks for semantic segmentation. CVPR.
[3] Xie et al. Holistically-nested edge detection. ICCV.
[4] Ronneberger, Fischer, Brox. U-Net: convolutional networks for biomedical image segmentation. MICCAI.
[5] Yao, Jones, Julien-Williams, Stylianou, Moss. Sustained effects of sirolimus on lung function and cystic lung lesions in lymphangioleiomyomatosis. Respir. Crit. Care Med., vol.
[6] Yang, Zhang, Chen, Zhang, Chen. Suggestive annotation: a deep active learning framework for biomedical image segmentation. MICCAI.
[7] Jia, Huang, Chang et al. Constrained deep weak supervision for histopathology image segmentation. IEEE TMI.
[8] Guan, Gulshan, Dai, Hinton. Who said what: modeling individual labelers improves classification. arXiv preprint.
[9] Khoreva, Benenson, Hosang, Hein, Schiele. Simple does it: weakly supervised instance and semantic segmentation. CVPR.
[10] Silver, Schrittwieser, Simonyan, Antonoglou, Huang, Guez, Hubert, Baker, Lai, Bolton, Chen, Lillicrap, Hui, Sifre, van den Driessche, Graepel, Hassabis. Mastering the game of Go without human knowledge. Nature, vol.
[11] Boykov, Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE TPAMI, vol.
[12] Liu, Yin et al. Cytoplasm and nucleus segmentation in cervical smear images using radiating GVF snake. Pattern Recognition, vol.
[13] Jia et al. Caffe: an open source convolutional architecture for fast feature embedding. http
| 1 |
jan entity retrieval text mining online reputation monitoring pedro dos santos saleiro cruz departamento engenharia faculdade engenharia universidade porto partial fulfillment requirements degree doctor informatics engineering feup supervisor carlos soares eduarda mendes rodrigues faculdade engenharia universidade porto rua roberto frias porto portugal copyright pedro saleiro doctoral committee oliveira full professor feup university porto mark carman senior lecturer monash university bruno martins assistant professor ist university lisbon torgo associate professor fcup university porto carlos soares associate professor feup university porto esta tese dedicada minha maria lurdes pelo seu amor constantes acknowledgements first would like thank everybody contributed somehow work reviewers colleagues faculty staff probably forget mention someone particular sincerely apologize work funded sapo labs fct microsoft research without financial support would able conclude thesis deeply grateful supervisor carlos soares although working carlos always showed genuine enthusiasm work thank giving freedom grow independently researcher pursuit ideas even moments delivering rhythm expected believe made pragmatic really interesting thorough discussions last years tried cool ideas hope someday last would like thank support protection well believing expand horizons send postcard chicago also thankful eduarda mendes rodrigues work started thank receiving feup back october still remember day drew first draft framework orm always encouraged positive feedback source inspiration motivation even distance always available needed must also mention decisive role helping pursuing summer internship top notch place microsoft research graduate student opportunity collaborate new inspirational people happened chance start working natasa microsoft research around natasa believe make things happen thank patience support motivation hope keep collaboration many years show gratitude oliveira helped smooth transition liacc always door open discuss issues furthermore enthusiastic work even supervisor sincerely appreciate thank advices miss conversations past future wish thank sarmento introducing world data science text mining particular regret chance collaborate often must thank two special friends also graduate students jorge teixeira rodrigues jorge brother arms throughout journey always thankful friend colleague years friend searched sharing ups downs graduate student thank support motivation would like thank cristina ribeiro administrative support regarding funding throughout years must also address rosaldo rossetti believing abilities starting productive collaboration course really enjoyable funny colleague arian pasquali also deserves personal mention support well gomes rei amir tiago cunha gustavo laboreiro would like also thank members popstar project specially pedro nesta hora poderia deixar estender agradecimentos aos amigos especial queria referir grupo rumo penta onde todos proporcionam grandes momentos boa sem quais seria dos problemas dia dia grande para miguel jorge queria deixar aqui uma palavra para minha prima xana pela amizade desde sempre como tenho imenso agradecer minha quem dedico esta tese obrigado pelo amor pela liberdade que sempre deste todas minhas escolhas como deixaria ser sempre foste uma entusiasta deste meu desafio que agora chega fim deixo beijinho minha guida querido diogo que ainda conheci mas vida emigrante tem destas coisas por fim deixo meu sentido agradecimento minha namorada maria foste 
crucial nesta caminhada meu lado todos dias para mais pouco sei que este trabalho comprometeu muito nosso tempo dois mas incentivos constantes para levar isto fim prometo compensar futuro claro tenho que agradecer bobby pela companhia que fez durante escrita por conseguir fazer sorrir mesmo quando vida madrasta abstract online reputation monitoring orm concerned use computational tools measure reputation entities online politicians companies practice current orm methods constrained generation data analytics reports aggregate statistics popularity sentiment social media argue format restrictive end users often like flexibility search information available predefined charts propose inclusion entity retrieval capabilities first step towards extension current orm capabilities however entity reputation also influenced entity relationships entities therefore address problem retrieval goal search multiple connected entities challenging problem traditional entity search systems cope besides retrieval also believe orm would benefit prediction capabilities predicting entity popularity social media based news events outcome political surveys however none tasks provide useful results effective entity disambiguation sentiment analysis tailored context orm consequently thesis address two computational problems online reputation monitoring entity retrieval text mining researched developed methods extract retrieve predict information spread across web proposed new probabilistic modeling problem retrieval together two design patterns creating representations entities relationships furthermore propose dependence model novel supervised model based markov random field framework retrieval together new method create test collections retrieval released new test collection purpose foster research area performed experiments scale results showing possible perform retrieval without using fix entity relationship types enabling wide range queries addressed tackled entity filtering financial sentiment analysis using supervised learning approach studied several possible features purpose participated two well known external competitions tasks obtaining performance moreover performed analysis predictive power wide set signals extracted online news predict popularity entities twitter also studied several sentiment aggregate functions twitter study feasibility using sentiment social media predict political opinion polls finally created released adaptable entity retrieval text mining framework puts together building blocks necessary perform orm reused multiple application scenarios computational journalism politics finance framework able collect texts online media identify entities interest perform entity retrieval well classify sentiment polarity intensity supports multiple data aggregation methods together visualization modeling techniques used descriptive predictive analytics resumo online mro consiste ferramentas computacionais para medir entidades online como por exemplo empresas actuais mro restringidos por dados tais como agregadas popularidade sentimento nos media sociais consideramos que esta demasiado restritiva uma vez que utilizadores finais das plataformas mro desejam frequentemente ter flexibilidade que lhes permita pesquisar por centrada nas entidades que vai disponibilizada nos por conseguinte propomos capacidade entidades como primeiro passo sentido estender estado atual das ferramentas mro entanto uma dada entidade influenciada pelas desta com outras entidades neste sentido tratar problema onde objectivo 
consiste pesquisa por entidades relacionadas entre desafio que sitemas tradicionais entidades ainda capazes lidar para acreditamos que mro iria beneficiar capacidade efectuar baseadas texto centradas nas entidades como por exemplo popularidade entidades nos media sociais utilizando eventos retratados nas resultado sondagens entanto nenhuma destas tarefas sucesso utilidade houver capacidade efetiva desambiguar entidades mencionadas nos textos assim como uma sentimento para contexto mro consequentemente esta tese trata dois problemas computacionais online entidades texto desenvolvemos para extrair recuperar prever centrada entidades espalhada pela internet propomos novo modelo problema conjuntamente com dois desenho baseados texto para criar entidades propomos modelo viii mder novo modelo supervisionado antecipada baseado campo markov para conjutamente com novo teste para uma nova teste com esse que fomentar nesta efetuamos grande escala resultados mostram que realizar sem utilizar tipos fixos entidades que permite atuar sobre conjunto alargado pesquisas tratamos das tarefas filtragem entidades sentimento financeiro utilizando uma abordagem aprendizagem supervisionada que estudamos para esse fim duas exterrnas ambas tarefas atingindo resultados estado arte disso uma poder preditivo grande conjunto sinais das online para parever popularidade entidades twitter assim como estudo sentimento twitter para estudar praticabilidade utilizar sentimento nos media sociais para prever sondagens eleitorais finalmente uma plataforma entidades texto que conjuga todos blocos para mro pode ser reutilizada diversos desde jornalismo computacional esta plataforma capaz recolher textos dos media online identificar entidades alvo efectuar entidades assim como classificar sentimento intensidade associada suporta dados juntamente com pode ser utilizada tanto para descritivas como preditivas table contents list figures xiii list tables introduction thesis statement objectives research methodology contributions applications foundations thesis outline background related work online reputation monitoring related frameworks entity retrieval semantic search markov random field sequential dependence model mrf entity retrieval named entity disambiguation sentiment analysis word embeddings predicting collective attention political data science entity retrieval online reputation retrieval queries modeling retrieval monitoring table contents design patterns early fusion association weights early fusion example late fusion late fusion example implementation dependence model graph structures feature functions ranking discussion summary contributions retrieval retrieval web corpus relink query collection tabular data entity relationships selection tables formulation queries collection statistics experimental setup data indexing retrieval method parameter tuning test collections results analysis summary contributions entity filtering financial sentiment entity filtering task overview features experimental setup results financial sentiment analysis task overview financial word embeddings analysis table contents approach experimental setup results analysis concluding remarks summary contributions prediction exploring online news reputation monitoring approach experimental setup results discussion predicting political polls using twitter sentiment methodology data experimental setup results discussion feature importance outlook summary contributions framework online reputation monitoring framework overview relink texrep relink use 
case news processing pipeline demonstration texrep use case data aggregation visualization learning word embeddings orm neural word embedding model experimental setup results analysis concluding remarks summary contributions twitter xii table contents conclusions summary main contributions limitations future work references list figures entity retrieval text mining computational problems orm markov random field document term dependencies bayesian networks retrieval queries different lengths markov random field dependencies retrieval markov random field dependencies retrieval example wikipedia table row example metadata provided editors illustration indexing web corpus values erdm obtained using sum normalization results grouped entity category using run daily popularity twitter entities study training testing sliding window first iterations individual feature type score negatives share berminghamsovn political leaders twitter representation monthly poll results political candidate error predictions polls results error predictions polls results variation mean absolute error buzz sentiment aggregate functions importance random forests models overview orm framework relink framework architecture overview architecture data flows texrep framework news processing pipeline xiv list figures cristiano ronaldo egocentric network twitter buzz share political leaders continuous line represents loss training data dashed line represents loss validation data left side effect increasing using training data right side effect varying amount training data used list tables retrieval definitions illustrative example entity index early fusion illustrative example relationship index early fusion illustrative example document index late fusion clique sets associated feature functions type input nodes examples query annotations relink collection statistics extractions statistics description query sets used evaluation early fusion erdm comparison using results erdm compared three baselines replab filtering task dataset description entity filtering versions description official results version plus validation set accuracy training set examples microblog results features validation test sets features performance breakdown test set using news headlines results features validation test sets features performance breakdown test set using mlp summary four type features consider score popularity high function equal respectively distribution positive negative neutral mentions per political number available training different sizes target vocabulary xvi list tables overall statistics combinations models learned varying volume training data results observed training epochs evaluation resulting embeddings using class membership class distinction word equivalence tests different thresholds cosine similarity chapter introduction nowadays people pervasive access connected devices applications services enable obtain share information almost instantly basis social media growing astonishing speed user opinions people companies products quickly spread large communities consequently companies personalities thorough scrutiny every event every statement potentially observed evaluated global audience reflects one perceived reputation van riel fombrun define reputation overall assessment organizations authors use term organization definition may well apply individuals politicians products mobile phone brands stakeholder someone relationship organization employees customers shareholders definition similar ones focus perspective reputation 
represents perceptions others target entity however rise social media online news publishing brought wider public awareness entities activities influencing people perceptions reputation traditional reputation analysis mostly manual focused particular entities online media possible automate much process collecting preparing understanding large streams content identify facts opinions much wider set entities online reputation monitoring orm addresses challenge use computational tools measure reputation entities online media content early orm started counting occurrences brand name social media channel estimate brand introduction several challenges collect process mine online media data purposes social media texts short informal many abbreviations slang jargon idioms often users care correct use grammar therefore text tends misspellings incomplete unstructured sentences furthermore lack context poses difficult problem tasks relevant context text mining named entity disambiguation sentiment analysis classify sentiment polarity given document tweet news title necessary aggregate several document scores create meaningful indicators tasks technically complex people interested tracking entities web reason research focused investigating parts problem leading development tools address endeavor text data usually includes large number entities relationships broadly define entity thing concept exists world person company organization event film entities exist mentions across documents external knowledge resources recent years entities gained increased importance basic unit information answer particular information needs instead entire documents text snippets volume data rapidly increasing web including rdf linked data facebook open graph google knowledge graph describing entities footballers coaches relationships manages developments great impact online reputation monitoring mainly focused entities specifically orm process consists searching tracking entity interest personality company organization analysis hand news stories topics events discussed news social media usually contain mentions entities concepts represented knowledge base thus say entities gravitational force drives online reputation monitoring process thesis statement ultimate goal orm track everything said web given target entity consequently impact reputation perspective goal hard achieve two reasons first reason difficulty computationally processing interpreting accessing huge amount information published online everyday second thesis statement reason inherent definition reputation intangible tangible outcomes specifically fombrun van riel later stacks found correlation several indicators reputation trust financial indicators sales profits however finding imply causality financial indicators influenced many factors besides stakeholders perceived reputation conclusion consensus measure reputation neither intrinsically extrinsically best knowledge current orm still limited naive standard approach consists counting mentions entity names applying sentiment analysis produce descriptive reports aggregated entity popularity overall sentiment propose make progress orm tackling two computational problems entity retrieval text mining figure online reputation monitoring text entities entity retrieval text mining fig entity retrieval text mining computational problems orm believe orm platform besides providing aggregated statistics trends entity popularity sentiment news social media would benefit providing entity retrieval capabilities end users 
often like flexibility search specific information available predefined charts however orm specificities traditional entity search systems cope specifically entity reputation also influenced entity relationships entities introduction instance reputation apple severely damaged called apple foxconn scandal foxconn one several contractor companies apple supply chain accused exploiting chinese workers although facts directly concerned apple relationship foxconn triggered bad public opinion apple happened recently weinstein sex scandal accusations sexual harassment aimed harvey weinstein created wave damage companies personalities associated disgraced hollywood producer therefore orm platform provide search capabilities retrieval complex case entity retrieval goal search multiple unknown entities relationships connecting contrary traditional entity queries queries expect tuples connected entities answers instance technology companies contracts chinese electronics manufacturers answered tuples apple foxconn companies founded disgraced hollywood producer expecting tuples miramax harvey weinstein essence query decomposed set specify types entities types relationships entities hand orm requires accurate robust text processing data analysis methods text mining plays essential enabling role developing better orm several challenges collecting extracting relevant information raw text data necessary filter noisy data otherwise downstream processing tasks sentiment analysis compromised specifically essential develop named entity disambiguation approaches distinguish relevant text passages named entities often ambiguous example word bush surface form two former presidents music band shrub ambiguity named entities particularly problematic social media texts users often mention entities using single term orm platforms would even useful would able predict social media users talk lot target entities instance april david cameron mentioned news regarding panama papers story acknowledge story detail day however news cycle kept mentioning topic following days mentions social media kept high publicly address issue april reputation already severely damaged blaming providing details earlier thus also want study feasibility objectives using knowledge extracted social media online news predict real world surveys results political polls objectives work reported dissertation aimed understand formalize explore scientific challenges inherent problem using unstructured text data different web sources online reputation monitoring describe specific research challenges proposed overcome retrieval existing strategies entity search divided approaches former usually rely statistical language models match rank terms proximity target entity latter consists creating sparql query using structured knowledge base retrieve relevant rdf triples neither paradigms provide good support retrieval recent work search tackled retrieval extending sparql support joins multiple query results creating extended knowledge graph extracted entities relationships typically stored knowledge graph however always convenient rely structured knowledge graph predefined constraining entity types particular orm interested transient information sources online news social media general purpose knowledge graphs usually fed stable reliable data sources wikipedia furthermore predefining constraining entity relationship types semantic approaches reduces range queries answered therefore limits usefulness entity search particularly one wants leverage best knowledge 
retrieval using approaches new unexplored research problem within information retrieval research community one objectives research explore degree leverage textual context entities relationships terminology relax notion entity relationship type instead characterized fixed type person country place entity would characterized contextual term applies relationships traditional knowledge graphs fixed schema relationships child created works approach relies contextual terms text proximity introduction every two entities raw document relationships descriptions criticizes hits back meets interested would possible search expected significantly reduce limitations structured approaches suffer enabling wider range queries addressed entity filtering sentiment analysis entity filtering named entity disambiguation ned named entity mention want classify related related given target entity relatively easy problem well formed texts news articles however social media texts pose several problems task particularly interested entity filtering tweets aim study large set features generated describe relationship given target entity tweet well exploring different learning algorithms create supervised models task sentiment analysis thoroughly studied last decade several phd thesis entirely dedicated subject broad problem several ramifications depending text source specific application within context orm focus particular domain finance sentiment analysis financial texts received increased attention recent years neverthless challenges yet overcome financial texts microblogs newswire usually contain highly technical specific vocabulary jargon making development specific lexical machine learning approaches necessary prediction hypothesize entities frequently mentioned news politicians possible establish predictive link online news popularity social media cast problem supervised learning classification approach decide whether popularity high low based features extracted news cycle aim assess online news valuable source information effectively predict entity popularity twitter specifically want find online news carry different predictive power based nature entity study predictive performance varies different times prediction propose explore different features particular ones affect overall predictive power specific entities particular hand study possible use knowledge extracted social media texts predict outcome public opinion surveys automatic content analysis mass media social sciences become necessary possible research methodology rise social media computational power one particularly promising avenue research concerns use sentiment analysis microblog streams however one main challenges consists aggregating sentiment polarity timely fashion fed prediction method framework orm majority work orm consists studies researchers collect data given social network produce specific analysis predictions often unreproducible availability open source platforms area scarse researchers typically use specific apis software modules produce studies however effort among research community address issues open source research platforms therefore aim create adaptable text mining framework specifically tailored orm reused multiple application scenarios politics finance framework able collect texts online media twitter identify entities interest classify sentiment polarity intensity framework supports multiple data aggregation methods well visualization modeling techniques used descriptive analytics analyze political polls evolve time 
predictive analytics predict elections research methodology adopted distinct research methodologies process developing research work described thesis origin work popstar project popstar public opinion sentiment tracking analysis research project developed methods collection measurement aggregation political opinions voiced microblogs twitter blogs online news first prototype framework orm implemented served backend popstar website http ground work concerned development framework orm carried scope project therefore popstar website served use case validating effectiveness adaptability framework entity filtering sentiment analysis modules framework evaluated using well known external benchmarks resulting performance participated replab filtering task evaluated entity filtering method using dataset created competition one submissions obtained first place competition also participated semeval task introduction sentiment analysis financial microblogs news ranked using one metrics microblogs performed two experiments regarding entity centric predictions predicting entity popularity twitter based news cycle collected tweets news articles portugal using socialbus twitter collector online news different news outlets collected sapo used number entity mentions twitter target variable extracted features news datasets datasets aligned time used twitter dataset studying different sentiment aggregate functions serve features predicting political polls private opinion studies company eurosondagem improvements retrieval techniques hampered lack test collections particularly complex queries involving multiple entities relationships created method generating test queries support comprehensive search experiments queries relevance judgments created content exists tabular form columns represent entity types table structure implies one relationships among entities editorial work involved creating natural language queries based relationships represented entries table publicly released relink test collection comprising queries relevance judgments obtained sample wikipedia tables evaluated new methods proposed retrieval using relink query collection together two smaller query collections created research work semantic retrieval used large web corpus containing million web pages creating retrieval tailored indexes running experiments moreover implemented demo using large news collection million portuguese news articles resulting best demo award ecir contributions applications work resulted following contributions text mining framework puts together building blocks required perform orm framework adaptable reused different application scenarios finance politics framework provides entityspecific text mining functionalities enable collection disambiguation sentiment analysis aggregation prediction visualization information heterogeneous web data sources furthermore given contributions applications built using modular architecture providing abstraction layers well defined interfaces new functionalities easily integrated generalization problem search cover entity types relationships represented attribute predicate respectively rather set general probabilistic model retrieval using bayesian networks proposal two design patterns support retrieval approaches using model proposal dependence model builds basic sequential dependence model sdm provide extensible representations dependencies suitable complex queries indexing retrieval approach including learning fusion methods handle entity relationships ranking merging results 
proposal method strategy automatically obtaining relevance judgments queries make publicly available queries relevance judgments previous task entity filtering financial sentiment analysis methods tailored twitter able cope short informal texts constraints analysis predictive power online news regarding metrics twitter popularity sentiment analysis combine knowledge obtained heterogeneous sources prediction tasks believe work useful wide range applications highlight six reputation management concerned influencing controlling company individual reputation consequently tracking said entities online one main concerns area instance knowing given news article negative impact entity reputation would crucial damage control introduction digital libraries special libraries comprising collection digital objects text images stored electronic media format ubiquitous nowadays academic repositories biomedical databases law enforcement repositories etc believe contributions make retrieval research problem applied digital library enabling new wide range search capabilities fraud detection inside trading detection area information entities individuals companies relationships entities useful discover hidden relationships contexts entities might represent conflicts interests even fraud journalism specifically computational journalism would benefit powerful search tool journalists could investigate entities previously mentioned web including online news time well relationships among entities semantics political science given lot attention social media recent years due sheer amount people reactions opinions regarding politically relevant events able analyze interplay online news social media political entity perspective interesting political scientists hand becoming increasingly difficult obtain pollsresponses via telephone necessary start testing alternative approaches social media marketing focuses communicating social networks company potential effective customers evaluating success given campaign key aspect area therefore assessing volume polarity mentions given company campaign would useful foundations material thesis previously published journal conference workshop publications rodrigues soares oliveira texrep text mining framework online reputation monitoring new generation computing volume number foundations saleiro rodrigues soares relink research framework test collection retrieval international acm sigir conference research development information retrieval sigir saleiro rodrigues soares early fusion strategy retrieval first workshop knowledge graphs semantics text retrieval analysis sigir saleiro sarmento rodrigues soares oliveira learning word embeddings portuguese twitter stream study practical aspects progress artificial intelligence epia saleiro rodrigues soares oliveira feup task predicting sentiment polarity intensity financial word embeddings international workshop semantic evaluation semeval acl saleiro soares learning news predicting entity popularity twitter advances intelligent data analysis ida saleiro teixeira soares oliveira timemachine search visualization news archives advances information retrieval european conference research ecir saleiro gomes soares sentiment aggregate functions political opinion polling using microblog streams international conference computer science software engineering saleiro amir silva soares popmine tracking political opinion web ieee international conference computer information technology ubiquitous computing communications dependable autonomic secure 
computing pervasive intelligence computing iucc saleiro rei pasquali soares popstar replab name ambiguity resolution twitter fourth international conference clef initiative clef introduction thesis outline chapter discuss related work thesis chapter present formalization problem retrieval using approach provide two design patterns retrieval early fusion late fusion end chapter introducing new supervised early entity relationship dependence model erdm seen extension mrf framework retrieval adapted retrieval chapter describe set experiments retrieval web corpus first introduce new query collection relink specifically tailored problem developed approach collect relevance judgments tabular data editorial work consisted creating queries answered relevance judgments run experiments using dataset provide evaluation results new proposed methods retrieval chapter dedicated entity filtering financial sentiment analysis evaluate approaches using well known external benchmarks namely replab semeval chapter present two experiments predictions first experiment try predict popularity entities social media using solely features extracted news cycle second experiment try assess sentiment aggregate functions useful predicting political polls results chapter present unified framework orm framework divided two major containers relink entity retrieval texrep text mining present data flow within framework used reference open source framework researching orm also present case studies using framework end thesis chapter dedicated conclusions chapter background related work chapter introduces overview background concepts previous research work tasks addressed dissertation start presenting brief description task online reputation monitoring orm including related frameworks orm survey previous research work entity retrieval semantic search including detailed explanation markov random field model retrieval variations describe tasks named entity disambiguation sentiment analysis previous work training word embeddings end chapter providing overview related work predictions including predicting social media attention outcome political elections online reputation monitoring reputation company important company well stakeholders specifically stakeholders make decisions company products faster aware image company company perspective reputation asset attracts stakeholders represent economic profit end newell goldsmith used questionnaire survey methodologies introduce first standardized reliable measure credibility companies consumer perspective also studies find correlation company indicators reputation trust credibility financial indicators sales profits studies found although reputations intangible influence tangible assets following reasoning fombrum created successful measurement framework named reptrak background related work different methodology compared questionnaires media analysis news radio broadcasts typically analysis involves consuming categorizing media according stakeholder polarity positive negative towards company recently social media analysis becoming important proxy people opinion originating field online reputation monitoring traditional reputation monitoring mostly manual online media pose opportunity process understand aggregate large streams facts company individual orm requires level continuous monitoring crucial detect early changes perception company personality conveyed social media online buzz may good bad consequently companies must react address negative trends also creates opportunity monitor 
reputation competitors context text mining plays key enabling role offers methods deriving information textual content instance gonzalo identifies different text mining research areas relevant orm entity filtering topic tracking reputation priority detection user profiling automatic social media new way communication collaboration influence every stakeholder society personalities companies individuals social media users share every aspect lives includes information events news stories politicians brands organizations companies access sharing opens new horizons obtaining insights valuable online reputation companies also invest big share public relations social media building strong reputation take long time effort destroying take place overnight therefore importance social media increased importance powerful tools deal enormous amount data related frameworks great majority work orm consists studies platforms orm usually developed private companies share internal information however open source research projects considered related frameworks work trendminer one platforms enables real time analysis twitter data simple sentiment analysis using word counts lacks flexibility order support data processing framework orm entity retrieval semantic search collect process aggregate texts information extracted texts relation entities monitored context addresses adaptability reusability allowing modular interface allowing plugin components extend framework specially perspective data sources text analysis modules instance support sentiment analysis module default could plugged neverthless context support plugin aggregation prediction modules makes suitable orm fora framework specifically tailored orm creates ontology based fuzzy clustering texts concerned extracting relevant linguistic units regarding target entities include automatic sentiment analysis allow plugin new modules popmine first version text mining framework orm developed specifically context project political data science comprises richer set modules including cross media data collection twitter blog posts online news trend analysis based entity filtering sentiment analysis modules fact current version texrep text mining framework orm seen extension popmine architecture creating general purpose framework orm restricted political analysis would possible adapt popmine entity disambiguation sentiment analysis modules aggregations specific political scenarios hand texrep supports users define plug aggregate functions moreover popmine limited user configurations lacks support word embeddings include predictive capabilities entity retrieval semantic search information retrieval deals search information defined activity finding relevant information resources usually documents meet information need usually query within large collection resources unstructured nature usually text early boolean retrieval systems documents retrieved exact query term present represented list terms introduction vector space model term represents dimension space consequently document query represented vectors values dimension document vector correspond term frequency background related work term document therefore ranking list documents produced based spatial distance query vector concept inverse document frequency idf later introduced limit effect common terms collection term occurs many documents collection lower idf terms occur less often combination variants became commonly used weighting statistics vector space model recently observed people focused information 
needs for which entities better satisfy the query than a list of documents with large text snippets. This type of retrieval is called entity retrieval. Entity retrieval includes extra information extraction tasks when processing documents, such as Named Entity Recognition (NER) and Named Entity Disambiguation (NED). Entity retrieval is also closely connected to question answering, though question answering systems focus on understanding the semantic intent of a natural language query and on deciding which sentences represent the answer to the user. Considering the query "British politicians in the Panama Papers", the expected result would be a list of names, rather than documents related to British politics and the Panama Papers news story. There are two search patterns related to entity retrieval: in the first, the user knows of the existence of a certain entity and aims to find related information, for example a user searching for a product and related information; in the second, the user defines a predicate that constrains the search to a certain type of entities, such as searching for movies of a certain genre.

Online reputation monitoring systems usually focus on reporting statistical insights based on information extracted from social media and online news mentioning a target entity. However, this kind of interaction limits the possibility for users to explore the knowledge extracted about the target entity. We believe entity retrieval could enhance online reputation monitoring by allowing free-text search over the mentions of the target entity, and consequently allow users to discover information beyond descriptive statistical insights that they might not otherwise be able to identify.

Entity retrieval differs from traditional document retrieval in the retrieval unit. While document retrieval considers a document as the atomic response to a query, in entity retrieval document boundaries are not so important: entities need to be identified based on their occurrences across documents, and the focus is on a more granular level, as the objective of the search is to rank entities found among documents. However, traditional entity retrieval systems do not exploit semantic relationships between terms in the query and in the collection of documents: if there is no match between the query terms and the terms describing an entity, relevant entities tend to be missed. Entity retrieval has been an active research topic in the last decade, including various specialized tracks, such as the Expert Finding track, the INEX Entity Ranking track, the TREC Entity track, and the SIGIR EOS workshop. Previous research faced two major challenges: entity representation and entity ranking. Entities are complex objects, composed of a varying number of properties and mentioned in a variety of contexts over time; consequently, there is no single definition of the atomic unit (the entity) to be retrieved. Additionally, the challenge is to devise entity rankings that make use of the various entity representations, as different approaches tackle different information needs. There are two main approaches to tackling entity retrieval: the profile-based approach and the voting approach. The profile-based approach starts by applying NER and NED to the collection in order to extract entity occurrences. For each entity identified, a meta-document is created by concatenating every passage in which the entity occurs, and an index of these entity meta-documents is created; a standard document ranking method is then applied to rank the entity meta-documents with respect to a given query. One of the main challenges of this approach is the transformation of the original text documents into an entity-centric index, which requires processing the whole collection in order to extract the entities and their contexts. In the voting approach, the query is processed as in typical document retrieval to obtain an initial list of documents. Entities are then extracted from these documents using NER and NED techniques, and score functions are calculated to estimate the relation of the entities to the initial query: for instance, counting the frequency of occurrence of an entity in the top documents, combined with each document's relevance score for the query. Another approach consists of taking into account the distance between the entity mention and the query terms within the documents. Recently, there has been increasing research interest in entity search over linked data, also referred to as semantic search, due to the availability of structured information about entities and relations in the form of knowledge bases. Semantic search exploits rich, structured, entity-related data in machine-readable RDF format, expressed as triples (entity, predicate, object). There are two types of search: keyword-based and natural-language-
based search. Regardless of the search type, the objective is to interpret the semantic structure of the query and translate it into the underlying schema of the target knowledge base. Some research focuses on interpreting the query intent, while other work focuses on devising a ranking framework that deals with the similarities between the query terms and the different attributes of an entity entry.

Relationship Queries

The first study of relationship queries addressed structured querying of entities over Wikipedia text with multiple predicates. That work used a query language with typed variables, for both entities and entity pairs, that integrates text conditions. It first computes individual predicates and then aggregates multiple predicate scores into a result score; the proposed method to score predicates relies on redundant contexts. Yahya et al. defined relationship queries as subject-predicate-object (SPO) queries joined by one or more relationships. The authors cast the problem into the structured query language SPARQL, extended to support textual phrases for each of the SPO arguments; it therefore allows combining structured SPO triples and text simultaneously. They extended the YAGO knowledge base with triples extracted from ClueWeb using an Open Information Extraction approach. In the scope of relational databases, graph search has been widely studied, including its ranking aspects; however, those approaches do not consider full documents as graph nodes and are limited to structured data. Searching over structured data is precise but limited in various respects. In order to increase recall when no results are returned, and to enable prioritization of results when there are too many, Elbassuoni et al. propose ranking the results. Similarly, models like EntityRank by Cheng et al. and shallow semantic queries relax the predicate definitions of structured queries and instead implement proximity operators to bind instances across entity types. Yahya et al. propose algorithms for the application of a set of relaxation rules that yield higher recall.

Entity retrieval can also rely on proximity. When web documents do not contain enough term information to apply pattern heuristics, statistical analysis is often used to infer entities, as investigated by Conrad and Utt, Petkova and Croft, and Rennie and Jaakkola. In fact, the early work of Conrad and Utt demonstrates a method that retrieves entities located in the proximity of a given keyword, showing that using a window around entity mentions is effective for supporting search for people and for finding relationships among entities. Similar statistics are used to identify salient terminology, i.e., keywords to include in the document index.

Markov Random Field for Retrieval

In this section we detail the generic Markov Random Field (MRF) model for retrieval and its variation, the Sequential Dependence Model (SDM). Later we show how this model serves as the basis of our E-R retrieval model.

Markov Random Field. The MRF model for retrieval was first proposed by Metzler and Croft to model query term and document dependencies. In the context of retrieval, the objective is to rank documents by computing the posterior P(D|Q) of a document D given a query Q. For this purpose, an MRF is constructed from a graph G, which follows the local Markov property: every random variable in the graph is independent of its non-neighbors given observed values for its neighbors. Therefore, different edge configurations imply different independence assumptions.

[Figure: Markov random field modeling document and query term dependencies (Metzler and Croft).]

Metzler and Croft defined G as consisting of query term nodes q_i and a document node D, as depicted in the figure above. The joint probability mass function over the random variables in G is defined as

P_{G,\Lambda}(Q, D) = \frac{1}{Z_\Lambda} \prod_{c \in C(G)} \psi(c; \Lambda)

where Q = q_1, ..., q_n are the query term nodes, D is the document node, C(G) is the set of maximal cliques in G, and \psi(c; \Lambda) is a non-negative potential function over clique configurations, parameterized by \Lambda. The partition function Z_\Lambda normalizes the distribution; it is generally unfeasible to compute, due to the exponential number of terms in the summation, but it is ignored as it does not influence ranking. The potential functions can be seen as compatibility functions between the nodes in a clique: for instance, a score measured to reflect the "aboutness" of a query term and a document. Metzler and Croft propose to associate one or more real-valued feature functions with each clique in the graph. The potential functions are defined using the exponential form \psi(c; \Lambda) = \exp[\lambda_c f(c)], where \lambda_c is a feature weight, which is a free parameter of the model associated with the feature function f(c). The model allows parameter and feature-function sharing across cliques of the same configuration, i.e., with the same size and type of nodes, e.g., one query term node and one
document node.

To rank documents for a query, one constructs the graph G representing the query term dependencies, defines the set of potential functions over the cliques of the graph, and ranks documents in descending order of the posterior P_\Lambda(D|Q):

P_\Lambda(D|Q) \stackrel{rank}{=} \log P_{G,\Lambda}(Q, D) - \log P_{G,\Lambda}(Q)
\stackrel{rank}{=} \log \prod_{c \in C(G)} \psi(c; \Lambda)
= \sum_{c \in C(G)} \log \psi(c; \Lambda)
= \sum_{c \in C(G)} \lambda_c f(c)

Metzler and Croft concluded that, given its general form, the MRF can emulate most retrieval and dependence models, including language models.

Sequential Dependence Model. The Sequential Dependence Model (SDM) is the most popular variant of the MRF retrieval model. It defines two clique configurations, represented by the following potential functions; basically, it considers a sequential dependency between adjacent query terms and the document node. The potential function for cliques containing a query term node and the document node is represented as

\psi_T(q_i, D; \Lambda) = \exp[\lambda_T f_T(q_i, D)]

For the clique configuration containing two contiguous query terms and the document node, two real-valued functions are used: the first considers exact ordered matches of the two query terms in the document, while the second aims to capture unordered matches within fixed window sizes. Consequently, the second potential function is

\psi_{O,U}(q_i, q_{i+1}, D; \Lambda) = \exp[\lambda_O f_O(q_i, q_{i+1}, D) + \lambda_U f_U(q_i, q_{i+1}, D)]

Replacing the potential functions in the ranking equation and factoring out the parameters, SDM can be represented as a mixture model computed over term, phrase, and proximity feature classes:

P_\Lambda(D|Q) \stackrel{rank}{=} \lambda_T \sum_{q_i \in Q} f_T(q_i, D) + \lambda_O \sum_{i=1}^{|Q|-1} f_O(q_i, q_{i+1}, D) + \lambda_U \sum_{i=1}^{|Q|-1} f_U(q_i, q_{i+1}, D)

The free parameters must follow the constraint \lambda_T + \lambda_O + \lambda_U = 1, and coordinate ascent was chosen to learn the optimal values that maximize mean average precision using training data. Considering tf_{q_i,D}, the frequency of the term q_i in the document D, and cf_{q_i}, the frequency of the term in the entire collection C, the feature functions of SDM are set as

f_T(q_i, D) = \log \left[ \frac{tf_{q_i,D} + \mu \frac{cf_{q_i}}{|C|}}{|D| + \mu} \right]
f_O(q_i, q_{i+1}, D) = \log \left[ \frac{tf_{\#1(q_i, q_{i+1}),D} + \mu \frac{cf_{\#1(q_i, q_{i+1})}}{|C|}}{|D| + \mu} \right]
f_U(q_i, q_{i+1}, D) = \log \left[ \frac{tf_{\#uwN(q_i, q_{i+1}),D} + \mu \frac{cf_{\#uwN(q_i, q_{i+1})}}{|C|}}{|D| + \mu} \right]

where \mu is the Dirichlet prior for smoothing, the \#1 function searches for exact matches of the phrase "q_i q_{i+1}", and the \#uwN function searches for matches of both terms within a window of N terms (usually 8 terms), in any order, across the document. SDM has shown strong performance in document retrieval compared to several bigram dependence models and standard retrieval models, across both short and long queries.

MRF for Entity Retrieval. Current methods for entity retrieval from knowledge graphs based on the MRF framework include the Fielded Sequential Dependence Model (FSDM), which extends SDM to structured document retrieval and has been applied to entity retrieval from knowledge graphs. In this context, entity documents are composed of fields representing metadata about the entity; each entity document has five fields: names, attributes, categories, similar entity names, and related entity names. FSDM builds individual language models for each field in the knowledge base, which corresponds to replacing SDM's feature functions with mixtures of language models over fields. The unigram feature function of FSDM is

\tilde{f}_T(q_i, D) = \log \sum_j w_j^T \frac{tf_{q_i,D_j} + \mu_j \frac{cf_{q_i,j}}{|C_j|}}{|D_j| + \mu_j}

with per-field Dirichlet priors \mu_j and field weights w_j \geq 0 for each field, which must obey the constraint \sum_j w_j = 1; the bigram feature functions are defined analogously. Coordinate ascent is used in two stages to learn the parameter values. The Parameterized Fielded Sequential Dependence Model (PFSDM) extends FSDM by dynamically calculating the field weights for different query terms. Features are applied to capture the relevance of query terms to specific fields of the entity documents: for instance, an NNP feature is positive when query terms are proper nouns and should therefore be mapped to the names field. The field weight contribution of a given query term or query bigram to a given field is a linear weighted combination of such features,

w(q_i, F) = \sum_k \alpha_k^U \, \phi_k(q_i, F)

where \phi_k(q_i, F) is the k-th feature function of the query unigram q_i for the field F and \alpha_k^U its respective weight, with analogous feature functions and weights for bigrams. Consequently, PFSDM has F \cdot U + F \cdot B + 3 total parameters, where F is the number of fields, U the number of field-mapping features for unigrams, and B the number of field-mapping features for bigrams, plus the three SDM parameters. Their estimation is performed in a two-stage optimization: first, the field-mapping parameters are learned separately for unigrams and for bigrams, which is achieved by setting the corresponding parameters of the other group to zero; in the second stage, the \lambda parameters are learned. Coordinate ascent is used in both stages. Finally, the ELR model exploits entity mentions in queries, defining a dependency between entity documents and the entity links of the query.
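As an illustration of how the SDM scoring function above can be computed, here is a minimal Python sketch over a toy tokenized collection. It is not the thesis implementation (real systems use inverted indexes with positional postings); the λ values are the commonly used defaults, and features with zero collection frequency are simply skipped to avoid log(0):

import math

def ordered_count(tokens, w1, w2):
    # #1(w1 w2): exact ordered matches of the bigram
    return sum(1 for i in range(len(tokens) - 1)
               if tokens[i] == w1 and tokens[i + 1] == w2)

def unordered_count(tokens, w1, w2, window=8):
    # #uw8(w1 w2): co-occurrences of both terms within a window, any order
    p1 = [i for i, t in enumerate(tokens) if t == w1]
    p2 = [i for i, t in enumerate(tokens) if t == w2]
    return sum(1 for i in p1 for j in p2 if i != j and abs(i - j) < window)

def dirichlet(count_d, count_c, dlen, clen, mu=2500.0):
    # Dirichlet-smoothed log feature shared by f_T, f_O and f_U
    return math.log((count_d + mu * count_c / clen) / (dlen + mu))

def sdm_score(query, doc, collection, lambdas=(0.85, 0.10, 0.05), mu=2500.0):
    l_t, l_o, l_u = lambdas
    clen = sum(len(d) for d in collection)
    score = 0.0
    for q in query:                                   # term features f_T
        cf = sum(d.count(q) for d in collection)
        if cf:
            score += l_t * dirichlet(doc.count(q), cf, len(doc), clen, mu)
    for q1, q2 in zip(query, query[1:]):              # bigram features f_O, f_U
        cf_o = sum(ordered_count(d, q1, q2) for d in collection)
        cf_u = sum(unordered_count(d, q1, q2) for d in collection)
        if cf_o:
            score += l_o * dirichlet(ordered_count(doc, q1, q2), cf_o, len(doc), clen, mu)
        if cf_u:
            score += l_u * dirichlet(unordered_count(doc, q1, q2), cf_u, len(doc), clen, mu)
    return score

docs = [["panama", "papers", "scandal"], ["british", "politicians", "panama", "papers"]]
print(sorted(docs, key=lambda d: sdm_score(["panama", "papers"], d, docs), reverse=True)[0])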
Named Entity Disambiguation

Given a mention of an entity in a document, Named Entity Disambiguation (NED), also known as entity linking, aims to predict the entity in a reference knowledge base that the string refers to, or NIL if no such entity is available. Usually, the reference knowledge base includes a set of documents in which each document describes one specific entity; Wikipedia is by far the most popular reference. Previous research typically performs three steps to link an entity: (1) mention representation, where the entity mention is extended with relevant knowledge from background documents; (2) candidate generation, i.e., finding the possible entries the mention might refer to; and (3) disambiguation, computing the similarity between the represented mention and each candidate entity.

Entity filtering, or targeted entity disambiguation, is a special case of NED in which there is only one candidate entity, the entity being monitored. There has been increasing interest in developing entity filtering methods for social media texts, considering their specificities and limitations. Some approaches focus on finding relevant keywords for the positive and negative cases using web-collection-based features. Another line of work creates entity extraction systems for entities belonging to a certain topic, which is then used as evidence to disambiguate a short message given its topic; similarly, Hangya et al. create features representing the topic distributions of tweets using Latent Dirichlet Allocation (LDA).

The majority of research work on NED is applied to disambiguate entities in reasonably long texts, such as news and blog posts. In recent years, there has been increasing interest in developing NED methods for social media texts, given their specificities and limitations. A survey and evaluation of NER and NED on tweets concluded that current approaches do not perform robustly on the terse, linguistically compressed style of microblog texts: the F-measures those methods reach are still behind the results obtained on news texts. Social media texts are short and do not provide sufficient information to calculate context similarity accurately. In addition, some approaches leverage neighboring entities in the documents, but since tweets are short, often only one or two entities are mentioned, leaving little to extract. Other work uses information obtained from other tweets to disambiguate entity mentions in tweets collectively, under the assumption that Twitter users are content generators who tend to scatter their interests over many different messages they broadcast, which is not necessarily true.

Entity filtering has also been studied in the context of classification. Davis et al. propose a pipeline containing three stages: clearly positive examples are exploited to create filtering rules comprising collocations, users, and hashtags, and the remaining examples are classified using a model trained on the clearly positive examples. More recently, Habib et al. proposed a hybrid approach: the authors first query Google to retrieve a set of possible candidate homepages and enrich the candidate list with text from Wikipedia; they then extract a set of features for each candidate, namely a language model, the overlapping terms between the tweet and the document, as well as URL length and string similarity. In addition, the prior probability of a mention corresponding to a certain entity in the YAGO knowledge base is also used. Recent work on NED and entity linking includes graph-based algorithms for collective entity disambiguation, such as TAGME, Babelfy, and WAT. Word and entity embeddings have also been used for entity disambiguation: specifically, Fang et al. and Moreno et al. propose to learn an embedding space of entities and words and to compute similarity features based on the combined representations.

Sentiment Analysis

In the last decade, the automatic processing of subjective and emotive text, commonly known as sentiment analysis, has triggered huge interest in the text mining research community. The typical task of sentiment analysis is text polarity classification, which in the context of this work can be formalized as follows: given a text span that mentions a target entity, decide whether it conveys a positive, negative, or neutral sentiment towards the target. With the rise of social media, research in sentiment analysis shifted towards Twitter, and new challenges arose, including slang, misspellings, emoticons, and poor grammatical structure. A number of competitions have been organized, such as SemEval, leading to the creation of resources for research. There are two main approaches to sentiment polarity classification: using a dictionary of terms and phrases annotated with their polarity, or supervised learning, i.e., building a model of the differences in language associated with each polarity based on training examples. In the supervised learning approach, a
classifier is specifically trained for a particular type of text, e.g., tweets about politics. Consequently, it is possible to capture the peculiarities of the language used in that context, which is expected to reduce the generality of the model, biasing it towards the specific domain. Supervised learning approaches require training data. For Twitter, previous work obtained training data by assuming that emoticons represent the tweet polarity (positive, negative, or neutral) or by using third-party software, such as the Stanford sentiment analyzer. These approaches have been shown to work effectively on conventional text, but tend to be ill-suited to Twitter data. With the purpose of overcoming this limitation, an algorithm using a lexicon specifically tailored to social media text was introduced: SentiStrength, which has become a reference in recent years due to its relatively good and consistent performance in polarity classification of social media texts. Nevertheless, it is confined to a fixed set of words and is context independent. The recent interest in deep learning has led to approaches that use deep-learned word embeddings as features in a variety of text mining tasks, including sentiment analysis. Recent work has integrated polarity information of text into word embeddings by extending the probabilistic document model obtained with Latent Dirichlet Allocation; others learned embeddings from existing embeddings using sentences annotated with polarity, learned polarity-specific word embeddings from tweets collected using emoticons, or directly incorporated the supervision from sentiment polarity into the loss functions of neural networks.

Word Embeddings

A popular and simple way to model and represent text data is the vector space model: a vector of features in which each feature of the space represents a lexical item (e.g., a word of a document), and each item is independent of the other items in the document. This allows computing geometric operations over vectors of lexical items using well-established algebraic methods. However, the vector space model faces limitations: for instance, a word can express different meanings in different contexts (the polysemy problem), and different words may be used to describe the same meaning (the synonymy problem). Since then, a variety of different methods, from LDA to resources such as DBpedia, have been developed to try to assign semantics, i.e., meaning and concepts, to parts of text. Word embedding methods aim to represent words as real-valued continuous vectors in a much lower-dimensional space compared to traditional bag-of-words models. Moreover, this low-dimensional space is able to capture lexical and semantic properties of words; co-occurrence statistics are the fundamental information that allows creating such representations. Two approaches exist for building word embeddings: one creates a low-rank approximation of the word co-occurrence matrix, as is the case of Latent Semantic Analysis and GloVe; the other consists of extracting internal representations from neural network models of text. Levy and Goldberg showed that the two approaches are closely related. Although word embedding research goes back several decades, it was the recent developments in the deep learning framework that captured the attention of the NLP community. Moreover, Mikolov et al. showed that embeddings trained using their models (e.g., CBOW) exhibit a linear structure, allowing analogy questions of the form "man is to woman as king is to queen", and boost the performance of several text classification tasks. In this context, the objective is to maximize the likelihood that words are predicted given their context. There are two models for learning word embeddings: the skip-gram model and the continuous bag-of-words model (CBOW); here we focus on CBOW. Formally, every word is mapped to a unique vector, represented as a column of a projection matrix W \in \mathbb{R}^{d \times V}, where d is the embedding dimension and V the total number of words in the vocabulary. Given a sequence of words w_1, w_2, ..., w_T, the objective is to maximize the average log probability

\frac{1}{T} \sum_{t=k}^{T-k} \log p(w_t \mid w_{t-k}, ..., w_{t+k})

where k is the size of the context window around the center word w_t. The context vector h is obtained by averaging the embeddings of the words in the context window, and the prediction of the center word is performed using a softmax multiclass classifier over the vocabulary:

p(w_t \mid w_{t-k}, ..., w_{t+k}) = \frac{e^{y_{w_t}}}{\sum_{i=1}^{V} e^{y_i}}

where y_i is the unnormalized output score of word i. After training, the low-dimensionality embedding matrix encapsulates information about each word in the vocabulary and its surrounding contexts, learned by transforming the sparse representation of words into a compact real-valued embedding vector. The matrix can then be used as input to learning algorithms tailored to specific tasks to enhance their performance.
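The following numpy sketch illustrates the CBOW forward computation just described, with a toy vocabulary and randomly initialized matrices; the training loop (gradient descent on this log probability) is omitted:

import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]     # toy vocabulary
V, d = len(vocab), 8                           # vocabulary size, embedding dimension
W_in = rng.normal(0, 0.1, (V, d))              # input (context) embeddings
W_out = rng.normal(0, 0.1, (V, d))             # output (center-word) embeddings

def cbow_log_prob(context_ids, center_id):
    """log p(w_t | context) with a full softmax over the vocabulary."""
    h = W_in[context_ids].mean(axis=0)         # average of the context vectors
    scores = W_out @ h                         # y_i for every word i in the vocabulary
    scores -= scores.max()                     # numerical stability
    log_z = np.log(np.exp(scores).sum())       # log of the partition function
    return scores[center_id] - log_z

# context "the cat _ on" predicting the center word "sat"
print(cbow_log_prob([0, 1, 3], 2))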
For large vocabularies it is unfeasible to compute the partition function (the normalizer) of the softmax. Therefore, Mikolov et al. propose the use of the hierarchical softmax objective function, or approximating the partition function using a technique called negative sampling. Stochastic gradient descent is usually applied for training the softmax, with the gradient obtained via backpropagation. There are several approaches to generating word embeddings: one can build models that explicitly aim at generating word embeddings, such as word2vec and GloVe, or one can extract the embeddings from more general models that implicitly compute word embeddings in the process of solving other language tasks.

One of the issues with recent work on training word embeddings is the variability of the experimental setups reported. For instance, in the paper describing GloVe the authors trained the model on five corpora of different sizes and built a vocabulary of the most frequent words, while Mikolov et al. trained models with vocabularies of widely different sizes across works. Recently, Arora et al. proposed a generative model for learning embeddings that tries to provide a theoretical justification for nonlinear models such as word2vec and GloVe and for some hyper-parameter choices; the authors evaluated their model using a fixed vocabulary. In the SemEval sentiment analysis in Twitter task, the organizers report that participants either used general-purpose word embeddings or embeddings trained on a tweet dataset of some sort; however, participants neither report the size of the vocabulary used nor the possible effect it might have on the task-specific results. Recently, Rodrigues et al. created and distributed the first general-purpose word embeddings for Portuguese; the gensim implementation was used and the authors report results for different values of the parameters of the framework. Furthermore, the authors used experts to translate well-established word embedding test sets to the Portuguese language, which they also made publicly available and which we use in this work.

Predicting Collective Attention

Online reputation monitoring systems would be even more useful if they were able to know in advance whether social media users will talk a lot about the target entities or not. In recent years, a number of research works have studied the relationship between predictive behaviors and the user response to the publication of online media items, such as commenting on news articles, playing YouTube videos, sharing URLs, and retweeting patterns. The first attempt to predict the volume of user comments on online news articles used both metadata of the news articles and linguistic features. Prediction was divided into two binary classification problems: whether an article would get any comments at all, and whether it would get a high or low number of comments. Similar studies found that shallow linguistic features, sentiment, and named entities have good predictive power. Research work in this line tries to predict the popularity of news articles, measured as the number of shares of their URLs on Twitter, based on content features: the authors considered the news source, the article's category, the article's author, the subjectivity of the language in the article, and the number of named entities in the article as features. Recently, a large study of the life cycle of news articles, in terms of the distribution of visits, tweets, and shares over time across the different sections of a publisher, was able to improve the prediction of web visits per content type using data collected from social media ten to twenty minutes after publication.

Other lines of work have focused on the temporal patterns of user activities, and have consistently identified broad classes of temporal patterns based on the presence of a clear peak of activity; the classes differentiate on the specific amount and duration of activity around the peak. Crane and Sornette define the endogenous or exogenous origin of events based on whether they are triggered by internal aspects of the social network or by external ones, respectively. Later work found that hashtag popularity is mostly influenced by exogenous factors instead of epidemic spreading. Further work extends these classes by creating more distinct clusters of activity based on its distribution over the different periods around the peak, which can be interpreted based on the semantics of the hashtags; consequently, the authors applied text mining techniques to semantically describe the hashtag classes. Yang and Leskovec propose a new measure of time series similarity for clustering.
Using that measure, they obtain six classes of temporal shapes of popularity of a given phrase (meme) associated with a recent event, as well as an ordering of which media sources contribute to its popularity. More recently, Tsytsarau et al. studied the time series of news events and their relation to changes in the sentiment time series of related topics on social media. The authors proposed a novel framework using time series convolution between the importance of events and a media response function, which is specific to each media and event type. The framework is able to predict the time and duration of events, as well as the shape of the media response.

Political Data Science

Content analysis of mass media has an established tradition in the social sciences, particularly in the study of the effects of media messages, encompassing topics as diverse as those addressed in seminal studies of newspaper editorials, media uses in politics, and political rhetoric, among many others. Riffe and Freitag reported an increase in the use of content analysis in communication research and suggested that digital text and computerized means of extraction and analysis would reinforce such a trend. That expectation has been fulfilled: the use of automated content analysis has surpassed the use of hand coding. The increase of digital sources of text, on the one hand, and current advances in computation power and research design, on the other, are making such developments both necessary and possible, while also raising awareness of the inferential pitfalls involved.

One avenue of research explored in recent years concerns the use of social media to predict present and future political events, namely electoral results. Although there is no consensus on the methods or on their consistency, the literature summarizes the differences between the studies conducted so far, stating that they vary in the period and method of data collection, the data cleansing techniques, the prediction approach, and the performance evaluation. One particular challenge is how to use sentiment to aggregate opinions in a timely fashion to be fed into a prediction method. Two main strategies have been used to predict elections: buzz, i.e., the number of tweets mentioning a given candidate or party, and the use of sentiment polarity. Different computational approaches have been explored to process sentiment in text, namely machine learning and linguistic-based methods; in practice, algorithms often combine both strategies. Johnson et al. concluded that, rather than predicting elections, social media can be used to gauge sentiment about specific events, such as political news or speeches. Diakopoulos et al. studied the global sentiment variation based on Twitter messages about Obama and McCain while a political debate was still happening. Tumasjan et al. used Twitter data to predict the federal election in Germany and stated that the mere number of party mentions accurately reflects the election result. Bermingham et al. correctly predicted the Irish general elections, also using Twitter data, and share of volume was also tested as a predictor in the Senate special election in Massachusetts.

On the other hand, several studies use sentiment as an indicator of poll results. O'Connor et al. used a sentiment aggregate function to study the relationship between the sentiment extracted from Twitter messages and poll results. They defined the sentiment aggregate function as the ratio of positive and negative messages referring to a specific political target, and used it as a predictive feature in a regression model, achieving significant correlation with poll results and capturing the important trends. Bermingham et al. also included sentiment features in a regression model and introduced two novel sentiment aggregate functions: a sentiment-modified share of volume function, to represent the share of positive and negative volume, and a log ratio of the number of positive and negative mentions of a given party. Moreover, they concluded that the inclusion of sentiment features augmented the effectiveness of the model. Other work introduced a different aggregate function based on the idea of a two-party race, in which negative messages about one party are interpreted as positive for the other party.

In summary, the suggestions for potentially predictive, independent-variable metrics appear in a wide variety of forms: the mention share a party received within all party mentions in a given time span; the mention share of political candidates; the share of positive mentions a party received; the positive mention share of candidates; the share of users commenting on a candidate or party; the share of mentions of a candidate followed by a word indicative of electoral success or failure; the relative increase in positive mentions of a candidate; or simply a collection of various potentially politically relevant words identified by a statistical relationship with the polls of political actors in the past.
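The following Python sketch illustrates the kinds of sentiment aggregate functions discussed above: share of volume, the positive/negative ratio in the spirit of O'Connor et al., and a log ratio in the spirit of Bermingham et al. The party names, counts, and add-one smoothing are illustrative assumptions:

import math

def buzz_share(counts, party):
    """Share of the total mention volume received by one party."""
    total = sum(sum(c.values()) for c in counts.values())
    return sum(counts[party].values()) / total

def polarity_ratio(counts, party):
    """Ratio of positive to negative mentions of a target."""
    c = counts[party]
    return c["pos"] / max(c["neg"], 1)

def log_ratio(counts, party):
    """Log ratio of positive to negative mentions."""
    c = counts[party]
    return math.log((c["pos"] + 1) / (c["neg"] + 1))   # add-one smoothing (our choice)

# hypothetical daily mention counts per party
counts = {"party_a": {"pos": 120, "neg": 40, "neu": 300},
          "party_b": {"pos": 80, "neg": 90, "neu": 260}}
for party in counts:
    print(party, buzz_share(counts, party), polarity_ratio(counts, party), log_ratio(counts, party))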
The suggestions for dependent-variable metrics of political success show a similar variety and include: the vote share a party received on election day; the vote share of a party adjusted to include only the votes of the parties included in the analysis; the vote share of candidates on election day; campaign tracking polls; politicians' job approval ratings; and the number of seats in parliament a party received in the election.

Chapter 3: Entity-Relationship Retrieval for Online Reputation Monitoring

We start by presenting a formal definition of E-R queries, and we model the E-R retrieval problem from a probabilistic perspective. We assume that a query can be formulated as a sequence of individual sub-queries, each targeting a specific entity or relationship. We create specific representations of entities (their context terms) as well as of pairs of entities, i.e., relationships, and then create a graph of probabilistic dependencies between sub-queries and those representations. We show that these dependencies can be depicted in a probabilistic graphical model, a Bayesian network. Therefore, answering an E-R query can be reduced to the computation of factorized conditional probabilities over a graph of sub-queries and documents.

However, it is not possible to compute these conditional probabilities directly from the raw documents in a collection. As in traditional entity retrieval, documents serve as proxies for the entity and relationship representations, and it is necessary to fuse information spread across multiple documents. We propose two design patterns, inspired by Model 1 and Model 2 of Balog et al., that create object-centric and document-centric representations. The first design pattern, early fusion, consists of aggregating the context terms of entity and relationship occurrences to create two dedicated indexes, the entity index and the relationship index; it is then possible to use any retrieval method to compute the relevance score of entity and relationship documents given the sub-queries. The second design pattern, late fusion, can be applied on top of a standard document index kept alongside the set of entity occurrences per document: we first compute the relevance score of the documents given each sub-query, and, based on the entity occurrences in the top results, we compute individual entity and relationship scores. Any retrieval method can be used to score the documents, combined with traditional retrieval methods such as language models. These design patterns can be used to create unsupervised baselines for E-R retrieval. Finally, we follow the recent research line in entity retrieval that exploits term dependencies using the Markov Random Field framework for retrieval, and introduce the Entity-Relationship Dependence Model (ERDM), a novel supervised early fusion model for E-R retrieval, which creates an MRF to compute term dependencies between queries and documents.

Entity-Relationship Retrieval

E-R retrieval is a complex case of entity retrieval: E-R queries expect tuples of related entities as results, instead of the single ranked list of entities produced for general entity queries. For instance, the query "ethnic groups by country" expects a ranked list of tuples (ethnic group, country) as results. The goal is to search for multiple unknown entities and the relationships connecting them.

Table: E-R retrieval definitions, illustrated with the query Q = "congresswoman hits back at president":
- Q: an E-R query.
- Q^{E_i}: an entity sub-query, e.g., "congresswoman" or "president".
- Q^{R_{i-1,i}}: a relationship sub-query, e.g., "hits back".
- E_i: an entity, e.g., Frederica Wilson is a representative (congresswoman).
- D^{E_i}: the representation of an entity (we use the terms representation and document interchangeably).
- R_{i-1,i}: a relationship, e.g., (Frederica Wilson, Donald Trump) for "hits back".
- D^{R_{i-1,i}}: the representation of a relationship (again, representation and document are used interchangeably).
- \hat{D}^{E_i}: the set of entity documents retrieved for the sub-query Q^{E_i}.
- \hat{D}^{R_{i-1,i}}: the set of relationship documents retrieved for the sub-query Q^{R_{i-1,i}}.
- |Q|: the query length, corresponding to the number of entity and relationship sub-queries.
- T: the entity tuple retrieved, e.g., (Frederica Wilson, Donald Trump).

E-R Queries

In this section we present the definition of E-R queries and a probabilistic formulation of the E-R retrieval problem from an information retrieval perspective; the table above presents several definitions used throughout this chapter. E-R queries aim to obtain an ordered list of entity tuples as a result.
Contrary to entity search queries, where the expected result is a ranked list of single entities, the results of E-R queries should contain two or more entities. For instance, the complex information need "Silicon Valley companies founded by Harvard graduates" expects (company, founder) pairs as results, while "European football clubs in which a Brazilian player won a trophy" expects triples (club, player, trophy) as results. Each pair of entities in an entity tuple is connected by a relationship. A complex information need can be expressed in a relational format that is decomposed into a set of sub-queries, each specifying a type of entity or a type of relationship between entities. For each relationship sub-query there must be two entity sub-queries, one for each of the entities involved in the relationship. Thus, a query Q that expects a pair of entities is mapped into the triple Q = {Q^{E_1}, Q^{R_{1,2}}, Q^{E_2}}, where Q^{E_1} and Q^{E_2} are the entity attributes queried for E_1 and E_2, respectively, and Q^{R_{1,2}} is the relationship attribute describing R_{1,2}. In general, we consider an E-R query to be a chain of interleaved entity and relationship sub-queries, Q = {Q^{E_1}, Q^{R_{1,2}}, Q^{E_2}, ..., Q^{R_{n-1,n}}, Q^{E_n}}. We define the length of an E-R query as the number of its sub-queries; the number of entity sub-queries must always be the number of relationship sub-queries plus one. Consequently, the size of each retrieved entity tuple must be equal to the number of entity sub-queries. For instance, the query "soccer players who dated a top model", with answers such as (Cristiano Ronaldo, Irina Shayk), is represented by three sub-queries: {soccer players, dated, top model}. The automatic mapping of the terms of a natural language query to the sub-queries Q^{E_i} and Q^{R_{i-1,i}} is out of the scope of this work; it can be seen as a problem of query understanding. We assume that information needs are decomposed into constituent entity and relationship sub-queries, either by natural language processing techniques or by a user input interface that enforces the structure Q = {Q^{E_1}, Q^{R_{1,2}}, ..., Q^{R_{n-1,n}}, Q^{E_n}}.

Modeling E-R Retrieval

Our approach to E-R retrieval assumes a raw document collection (e.g., news articles) in which each document is associated with one or more entities; in other words, documents contain mentions of one or more entities that may be related. Since the goal is to retrieve tuples of related entities given a query that expresses entity attributes and relationship attributes, we need to create representations of both entities and relationships. We denote the representation of an entity by D^{E_i}. In E-R retrieval we are interested in retrieving tuples of entities as results; the number of entities in a tuple can be two, three, or more, depending on the structure of the particular query. When a query aims to get tuples of more than two entities, we assume it is possible to combine tuples of length two: for instance, we can associate two tuples of length two that share an entity, retrieving a tuple of length three. We therefore create representations of relationships as pairs of entities, and denote the representation of a relationship by D^{R_{i-1,i}}. Considering the example query "spiritual leader who received an award handed over by a vice president", formulated in the relational format {spiritual leader, received, award, handed over by, vice president}, associating the tuples of length two (Dalai Lama, Nobel Peace Prize) and (Nobel Peace Prize, Al Gore) would result in the expected answer (Dalai Lama, Nobel Peace Prize, Al Gore).

For the sake of clarity, let us consider an example E-R query of length three. Such a query aims to retrieve a tuple of length two, i.e., a pair of entities connected by a relationship. Based on the definition of an E-R query, each entity in the resulting tuple must be relevant to the corresponding entity sub-query; moreover, the relationship between the two entities must also be relevant to the relationship sub-query. Instead of calculating a simple posterior P(D|Q) as in traditional information retrieval, in E-R retrieval the objective is to rank tuples based on a joint posterior of multiple entity and relationship representations given a query. Since E-R queries can be seen as chains of interleaved entity and relationship sub-queries, we take advantage of the chain rule to formulate the joint probability as a product of conditional probabilities. Formally, we want to rank entity and relationship candidates in descending order of the joint posterior:

P(D^{E_1}, D^{R_{1,2}}, D^{E_2} \mid Q) \stackrel{rank}{=} P(D^{E_1} \mid D^{R_{1,2}}, Q^{E_1}) \; P(D^{E_2} \mid D^{R_{1,2}}, Q^{E_2}) \; P(D^{R_{1,2}} \mid Q^{R_{1,2}})

Here we consider conditional independence between the entity representations within the joint posterior: given the query, the probability of an entity representation D^{E_1} being relevant is independent of knowing that the entity E_2 is relevant as well. As an example, consider the query "action movies starring a British actor": retrieving relevant entity representations for "action movies" is independent of knowing that Tom Hardy is relevant to the sub-query "British actor". However, it is not independent of knowing the set of relevant relationships for "starring": given an action movie, if it does not belong to a relevant relationship for "starring", it does not make sense to consider it relevant. Consequently, since E-R queries can be decomposed into constituent entity and relationship sub-queries, ranking candidate tuples using the joint posterior is rank-proportional to the product of the conditional probabilities associated with each sub-query.
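The decomposition just described maps naturally onto a small data structure. The following Python sketch is our own illustration, not part of the model definition; it encodes the constraint that an E-R query is a chain with one more entity sub-query than relationship sub-queries:

from dataclasses import dataclass
from typing import List

@dataclass
class ERQuery:
    """An E-R query as a chain of interleaved entity and relationship sub-queries."""
    entity_subqueries: List[str]        # Q^{E_1}, ..., Q^{E_n}
    relationship_subqueries: List[str]  # Q^{R_{1,2}}, ..., Q^{R_{n-1,n}}

    def __post_init__(self):
        # the number of entity sub-queries is the number of relationship sub-queries plus one
        assert len(self.entity_subqueries) == len(self.relationship_subqueries) + 1

    def tuple_size(self):
        # the retrieved entity tuple has one entity per entity sub-query
        return len(self.entity_subqueries)

q = ERQuery(["soccer players", "top model"], ["dated"])
print(q.tuple_size())  # 2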
Let us now consider a longer query, aiming to retrieve a triple of connected entities; such a query has three entity sub-queries and two relationship sub-queries. As previously explained, when there is more than one relationship we need to join each pair of relevant relationships that have one entity in common. From a probabilistic point of view, this can be seen as a conditional dependence on the previously retrieved relationship. To rank the entity and relationship candidates we need to calculate the following joint posterior:

P(D^{E_1}, D^{R_{1,2}}, D^{E_2}, D^{R_{2,3}}, D^{E_3} \mid Q) \stackrel{rank}{=} P(D^{E_1} \mid D^{R_{1,2}}, Q^{E_1}) \; P(D^{E_2} \mid D^{R_{1,2}}, D^{R_{2,3}}, Q^{E_2}) \; P(D^{E_3} \mid D^{R_{2,3}}, Q^{E_3}) \; P(D^{R_{1,2}} \mid Q^{R_{1,2}}) \; P(D^{R_{2,3}} \mid D^{R_{1,2}}, Q^{R_{2,3}})

Compared to the previous example, this joint posterior shows that some entity candidates are conditionally dependent on two relationships; in other words, those entity candidates must belong to candidate relationships of both relationship sub-queries, so that the relationship representations are connected. We are now able to generalize E-R retrieval as a factorization of conditional probabilities, i.e., a joint probability over the entity representations D^{E_i} and relationship representations D^{R_{i-1,i}}, with the entity sub-queries Q^{E_i} and relationship sub-queries Q^{R_{i-1,i}}, as a set of random variables. These conditional dependencies can easily be represented using a probabilistic directed acyclic graph, i.e., a Bayesian network. In Bayesian networks, nodes represent random variables and edges represent conditional dependencies; every node pointing to a given node is considered a parent of it. Bayesian networks define the joint probability of a set of random variables as the factorization of the conditional probability of each random variable conditioned on its parents. Formally,

P(X_1, ..., X_n) = \prod_{i=1}^{n} P(X_i \mid pa(X_i))

where pa(X_i) represents the parent nodes of X_i.

[Figure: Bayesian networks for E-R retrieval with queries of different lengths.]

The figure depicts the representation of E-R retrieval using Bayesian networks for different query lengths. From the graphical representation we can easily conclude guidelines for modeling E-R retrieval: first, each sub-query node points to its respective document node; second, relationship document nodes always point to the contiguous entity representations; and last, when there is more than one relationship, relationship documents also point to the subsequent relationship document. Once we draw the graph structure for a given query length, we are able to compute the product of the conditional probabilities of each node given its parents. Adapting the general joint probability formulation of Bayesian networks to E-R retrieval, we arrive at the following generalization:

P(D^{E_1}, ..., D^{E_n}, D^{R_{1,2}}, ..., D^{R_{n-1,n}} \mid Q) \stackrel{rank}{=} \prod_{i=1}^{n} P(D^{E_i} \mid pa(D^{E_i}), Q^{E_i}) \prod_{i=2}^{n} P(D^{R_{i-1,i}} \mid pa(D^{R_{i-1,i}}), Q^{R_{i-1,i}})

where pa(D^{E_i}) denotes the set of candidate relationship documents in the graph that are parents of a given entity document, and pa(D^{R_{i-1,i}}) the set of candidate relationship documents that are parents of a given relationship document. In information retrieval it is often convenient to work in log-space, which does not affect ranking and transforms the product of conditional probabilities into a summation:

P(D^{E_1}, ..., D^{R_{n-1,n}} \mid Q) \stackrel{rank}{=} \sum_{i=1}^{n} \log P(D^{E_i} \mid pa(D^{E_i}), Q^{E_i}) + \sum_{i=2}^{n} \log P(D^{R_{i-1,i}} \mid pa(D^{R_{i-1,i}}), Q^{R_{i-1,i}})

We now present two design patterns to compute the conditional probability of every entity and relationship candidate document.

Design Patterns for E-R Retrieval

Traditional document retrieval approaches create direct representations of the raw documents, and a retrieval model (e.g., language models) is used to match information needs expressed as keyword queries against those representations. However, E-R retrieval requires collecting evidence about entities and relationships that is spread across multiple documents; it is not possible to create direct representations from single raw documents that would serve as a proxy connecting queries to entities and relationships. Abstractly speaking, entity retrieval can be seen as a problem of object retrieval, in which the search process requires fusing information about a given object, as in the case of verticals such as Google Finance. Recently, Zhang and Balog presented two design patterns for object retrieval. In the first design pattern, early fusion, a term-based representation of objects is created early in the retrieval process: one first creates object documents by aggregating term counts across the documents associated with each object, and later matches queries against them using standard retrieval methods. In the second design pattern, late fusion, the relevant documents for a query are retrieved first, and later in the retrieval process the objects associated with the top documents are ranked. These design patterns represent a generalization of Balog's Model 1 and Model 2 for expertise retrieval. In essence, E-R retrieval can be seen as an extension of object retrieval to a more
complex case: besides ranking objects, we need to rank tuples of objects that satisfy the relationship expressed in the query. This requires creating representations of both entities and relationships by fusing information spread across multiple raw documents. We propose novel design patterns for E-R retrieval inspired by the design patterns presented by Zhang and Balog for single objects, extending them to accommodate the specificities of E-R retrieval. We hypothesize that it is possible to generalize term dependence models to achieve effective E-R retrieval without entity or relationship type restrictions (e.g., categories), as happens in Semantic-Web-based approaches.

Early Fusion

The early fusion strategy presented by Zhang and Balog consists of creating a representation of each object for retrieval, containing the terms in the proximity of every mention of the object across the document collection. As described in the previous section, E-R queries are formulated as a sequence of multiple entity sub-queries and relationship sub-queries. In an early fusion approach, sub-queries are matched against previously created representations. Since there are two types of sub-queries, we propose to create two types of representations, one for entities and one for relationships. The early fusion design pattern for E-R retrieval is similar to Model 1 of Balog and can be thought of as creating two types of meta-documents: an entity meta-document D^{E} is created by aggregating the context terms of the occurrences of entity E across the raw document collection; on the other hand, for each pair of entities occurring close together across the raw document collection, we aggregate the context terms that describe the relationship into a relationship meta-document D^{R}. In our approach we focus on sentence-level information about entities and relationships, although the design pattern can be applied to more complex segmentations of text (e.g., using dependency parsing). We rely on entity linking methods for disambiguating and assigning unique identifiers to the entity mentions in the raw documents. We collect entity contexts across the raw document collection and index them in the entity index; the same is done by collecting and indexing entity-pair contexts in the relationship index. We define the pseudo-frequency of a term t for an entity document D^{E} as follows:

\tilde{f}(t, D^{E}) = \sum_{j=1}^{n} f(t, c_j(E)) \; w(E, d_j)

where n is the total number of raw documents in the collection, f(t, c_j(E)) is the term frequency of t in the context of entity E in the raw document d_j, and the association weight w(E, d_j) corresponds to the weight of the document d_j among the mentions of entity E across the raw document collection. Similarly, the pseudo-frequency of a term t for a relationship document D^{R} is defined as follows:

\tilde{f}(t, D^{R}) = \sum_{j=1}^{n} f(t, c_j(E_1, E_2)) \; w(\{E_1, E_2\}, d_j)

where f(t, c_j(E_1, E_2)) is the term frequency of t in the context of the pair of entity mentions corresponding to the relationship in the raw document d_j, and w(\{E_1, E_2\}, d_j) is the corresponding association weight. In this work we use binary association weights, indicating the presence of an entity mention, or of a relationship, in a raw document; however, other weighting methods can be used. The relevance score of an entity tuple is then calculated using the posterior defined in the previous section: we calculate each individual conditional probability in the product as the product of a retrieval score with an association weight. Formally, we consider

\log P(D^{E_i} \mid D^{R_{i-1,i}}, Q^{E_i}) := score(D^{E_i}, Q^{E_i}) \; w(E_i, D^{R_{i-1,i}})
\log P(D^{R_{i-1,i}} \mid Q^{R_{i-1,i}}) := score(D^{R_{i-1,i}}, Q^{R_{i-1,i}})

where score(D^{R_{i-1,i}}, Q^{R_{i-1,i}}) represents the retrieval score resulting from the match of the query terms of the relationship sub-query against a relationship document; the same applies to score(D^{E_i}, Q^{E_i}), which corresponds to the result of the match of an entity sub-query against an entity document. For computing these scores, any retrieval model can be used; different scoring functions are introduced below. We use a binary association weight w(E_i, D^{R}) that represents the presence of an entity relevant to Q^{E_i} in the contiguous relationships of the Bayesian network, which must be relevant to Q^{R}. This association weight is a building block of our approach: it guarantees that tuples in which the two entities are relevant, and are also part of a relevant relationship, are ranked higher than tuples in which only one, or none, of the entities is relevant. On the other hand, the association weight between an entity and two consecutive relationships guarantees that consecutive relationships share one entity, in order to create triples of entities for longer queries. The relevance score of an entity tuple T given a query Q is calculated by summing the individual relationship and entity relevance scores over the sub-queries:

score(T, Q) \stackrel{rank}{=} \sum_{i=1}^{n} score(D^{E_i}, Q^{E_i}) \; w(E_i, D^{R}) + \sum_{i=2}^{n} score(D^{R_{i-1,i}}, Q^{R_{i-1,i}})

where D^{R} denotes the relationship document(s) adjacent to E_i in the graph. Considering Dirichlet smoothing unigram language models, the constituent retrieval scores can be computed as follows:

score_{LM}(D^{R}, Q^{R}) = \sum_{t \in Q^{R}} \log \left[ \frac{\tilde{f}(t, D^{R}) + \mu^{R} \frac{cf_t^{R}}{|C^{R}|}}{|D^{R}| + \mu^{R}} \right]

score_{LM}(D^{E}, Q^{E}) = \sum_{t \in Q^{E}} \log \left[ \frac{\tilde{f}(t, D^{E}) + \mu^{E} \frac{cf_t^{E}}{|C^{E}|}}{|D^{E}| + \mu^{E}} \right]
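A minimal Python sketch of the early fusion indexing step, assuming sentence-level contexts and binary association weights; the input is the output of an entity linker, and the sentences and entity identifiers are invented for illustration:

from collections import Counter, defaultdict
from itertools import combinations

def build_indexes(sentences):
    """sentences: list of (tokens, entities) pairs produced by entity linking."""
    entity_index = defaultdict(Counter)   # entity id -> pseudo term frequencies
    rel_index = defaultdict(Counter)      # (entity, entity) -> pseudo term frequencies
    for tokens, entities in sentences:
        for e in entities:                           # w(E, d_j) = 1 when E is mentioned
            entity_index[e].update(tokens)
        for e1, e2 in combinations(sorted(entities), 2):
            rel_index[(e1, e2)].update(tokens)       # sentence-level co-occurrence
    return entity_index, rel_index

sentences = [
    (["cristiano", "ronaldo", "dated", "irina", "shayk"],
     {"Cristiano_Ronaldo", "Irina_Shayk"}),
    (["ronaldo", "scored", "again"], {"Cristiano_Ronaldo"}),
]
E, R = build_indexes(sentences)
print(R[("Cristiano_Ronaldo", "Irina_Shayk")])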
For each term t of a sub-query Q^{E} or Q^{R}, the pseudo-frequencies \tilde{f}(t, D^{E}) and \tilde{f}(t, D^{R}) are defined in the equations above. The collection frequencies cf_t^{E} and cf_t^{R} represent the frequency of the term t in the entity index and the relationship index, respectively; |D^{E}| and |D^{R}| represent the total number of terms in a meta-document, while |C^{E}| and |C^{R}| represent the total number of terms in each collection (index). Finally, \mu^{E} and \mu^{R} are the Dirichlet priors for smoothing, which generally correspond to the average document length in each collection.

Association Weights

Early fusion and late fusion share three association components. The first two represent document associations, which determine the weight a given raw document contributes to the relevance score of a particular entity tuple; the last one is the entity-relationship association, which indicates the strength of the connection of a given entity within a relationship. In this work we consider binary association weights, although other methods could be used. According to the binary method, we define the weights as follows:

w(E, d_j) = 1 if E is mentioned in d_j, and 0 otherwise;
w(\{E_1, E_2\}, d_j) = 1 if the pair of entities E_1, E_2 is mentioned in d_j, and 0 otherwise;
w(E_i, D^{R_{i-1,i}}) = 1 if E_i belongs to the relationship represented by D^{R_{i-1,i}}, and 0 otherwise.

In this approach, the weight given to each association is independent of the number of times an entity or a relationship occurs in a document. A more general approach would assign real-valued association weights depending on the strength of the association: for instance, a uniform weighting would be proportional to the inverse of the number of documents in which a given entity or relationship occurs; a frequency-based weighting could be another option.

Early Fusion Example

Let us consider an illustrative example of the early fusion design pattern for E-R retrieval, using unigram language models and the query "soccer players who dated a top model". The query is decomposed into three sub-queries: Q^{E_1} = "soccer players", Q^{E_2} = "top model", and Q^{R_{1,2}} = "dated". The first two target the entity index, while the last targets the relationship index. A toy entity index for this example contains entity meta-documents for Tom Brady, Cristiano Ronaldo, Lionel Messi, Figo, Gisele Bundchen, Irina Shayik, and Helen Svedin, with the pseudo term frequencies \tilde{f}(t, D^{E}) of each sub-query term in each meta-document (the concrete values of the original table are omitted here), plus the remaining variables required to calculate score_{LM}(D^{E}, Q^{E}). Calculating score_{LM}(D^{E_i}, Q^{E_i}) for the respective entities, the ranked list of relevant entities for the first entity sub-query, "soccer players", would be: Lionel Messi, Cristiano Ronaldo, Figo, Tom Brady. For the second entity sub-query, "top model": Gisele Bundchen, Irina Shayik, Helen Svedin. The toy relationship index contains the entity pairs relevant to "dated", with the respective pseudo term frequencies \tilde{f}(t, D^{R}): (Gisele Bundchen, Tom Brady), (Irina Shayik, Cristiano Ronaldo), and (Helen Svedin, Figo). Given the remaining variables required to calculate score_{LM}(D^{R}, Q^{R}) for "dated", we obtain the following ranked list of relationships: (Gisele Bundchen, Tom Brady), (Irina Shayik, Cristiano Ronaldo), (Helen Svedin, Figo). We then sum the individual scores to calculate the final early fusion score score(T, Q) using the tuple-score equation above. The final ranked list of entity tuples is: (Irina Shayik, Cristiano Ronaldo), (Helen Svedin, Figo), (Gisele Bundchen, Tom Brady). The entity tuple (Irina Shayik, Cristiano Ronaldo) is the most relevant to the query "soccer players who dated a top model": although Gisele Bundchen and Tom Brady have higher individual scores for "top model" and "dated", that tuple ranks last due to the poor relevance of Tom Brady to the entity sub-query "soccer player". The entity Lionel Messi, while the most relevant for "soccer player", is not relevant to any relationship for "dated", and is therefore excluded from the final ranked list of entity tuples.
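The Dirichlet-smoothed language model score used in this example can be computed over an early fusion pseudo-document as in the following simplified Python sketch; the toy index values are invented, and unseen terms are skipped rather than smoothed against a larger background collection:

from collections import Counter
import math

def score_lm(query_terms, pseudo_doc, index, mu=2500.0):
    """Dirichlet-smoothed query likelihood over an early fusion pseudo-document.
    pseudo_doc: Counter of pseudo term frequencies for one entity or relationship;
    index: dict mapping ids to Counters, used for collection statistics."""
    coll_len = sum(sum(d.values()) for d in index.values())
    dlen = sum(pseudo_doc.values())
    score = 0.0
    for t in query_terms:
        cf = sum(d[t] for d in index.values())   # collection frequency of t
        if cf == 0:
            continue                             # unseen term: skipped in this sketch
        score += math.log((pseudo_doc[t] + mu * cf / coll_len) / (dlen + mu))
    return score

entity_index = {"Cristiano_Ronaldo": Counter({"soccer": 12, "player": 9}),
                "Gisele_Bundchen": Counter({"top": 7, "model": 11})}
print(score_lm(["soccer", "player"], entity_index["Cristiano_Ronaldo"], entity_index))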
Late Fusion

The late fusion design pattern presented by Zhang and Balog is a document-centric strategy: one first queries the raw individual documents and then aggregates the objects associated with the relevant documents. Instead of creating representations of entities and relationships (pairs of entities), late fusion uses the raw documents as hidden variables, separating the query from the relevant entity tuples to be retrieved. Our vision for ORM implies processing raw documents to detect entity occurrences and to extract sentence-level information that is then used in downstream entity retrieval and text mining tasks; therefore, we are more interested in applying an early fusion strategy in this work. However, we believe it makes sense to present the theoretical formulation of the late fusion design pattern for E-R retrieval, and we leave practical experiments with late fusion for future work.

In the context of a generic E-R retrieval process, retrieving entity tuples using a late fusion strategy consists of processing each sub-query independently. In contrast to the early fusion strategy, in this case we use a single index, comprising term-based representations of the collection of raw documents. Any retrieval model can be used to calculate the relevance score of an individual raw document given a sub-query. For the relevant documents, we use entity linking to extract the entities mentioned in each relevant raw document. Following that strategy, we calculate aggregated counts of entity occurrences, weighted by the individual relevance scores of the individual raw documents. In the end, we join the results to calculate the overall relevance score of each entity tuple. Formally, we define the relevance score of an entity tuple T given a query Q as follows:

score(T, Q) \stackrel{rank}{=} \sum_{i=1}^{n} \sum_{j} score(d_j, Q^{E_i}) \; w(E_i, d_j) + \sum_{i=2}^{n} \sum_{j} score(d_j, Q^{R_{i-1,i}}) \; w(R_{i-1,i}, d_j)

where score(d_j, Q^{R_{i-1,i}}) represents the retrieval score resulting from the match of the query terms of the relationship sub-query Q^{R_{i-1,i}} against the raw document d_j; the same applies to score(d_j, Q^{E_i}), which corresponds to the result of the match of the entity sub-query Q^{E_i} against the raw document d_j. The weights w(R_{i-1,i}, d_j) and w(E_i, d_j) represent the association weights of relationships with raw documents and of entities with raw documents, respectively; as in the early fusion case, we use binary association weights in this work, and we also use a binary entity-relationship association weight. For computing score(d_j, Q^{R_{i-1,i}}) and score(d_j, Q^{E_i}), any retrieval model can be used. Considering BM25, the scores are computed as follows:

score_{BM25}(d_j, Q^{R}) = \sum_{t \in Q^{R}} idf(t) \; \frac{f(t, d_j) \, (k_1 + 1)}{f(t, d_j) + k_1 \left(1 - b + b \frac{|d_j|}{avgdl}\right)}

idf(t) = \log \frac{N - n_t + 0.5}{n_t + 0.5}

where t is a query term, f(t, d_j) is its term frequency in the raw document d_j, the inverse document frequency idf(t) is computed from N, the number of documents in the collection, and n_t, the number of documents in which the term occurs, |d_j| is the total number of terms in the raw document, and avgdl is the average document length in the collection. k_1 and b are free parameters, usually set to the common defaults (e.g., k_1 = 1.2 and b = 0.75) in the absence of specific optimization; score_{BM25}(d_j, Q^{E}) is computed in the same way.
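A compact Python implementation of the BM25 scoring function above, with the usual default parameters; the +1 inside the idf logarithm is a common variant we add here to keep the weight positive for very frequent terms:

import math

def bm25(query_terms, doc, docs, k1=1.2, b=0.75):
    """Okapi BM25 over tokenized documents."""
    n_docs = len(docs)
    avgdl = sum(len(d) for d in docs) / n_docs
    score = 0.0
    for t in set(query_terms):
        n_t = sum(1 for d in docs if t in d)       # documents containing t
        if n_t == 0:
            continue
        idf = math.log((n_docs - n_t + 0.5) / (n_t + 0.5) + 1)
        tf = doc.count(t)
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

docs = [["ronaldo", "dated", "irina", "shayk"], ["messi", "scored", "again"]]
print(bm25(["dated"], docs[0], docs))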
Late Fusion Example

Consider the toy example of the query introduced previously, "soccer players who dated a top model", now over a single index, the document index, in which each raw document is listed together with the entities linked in it: for instance, one document is associated with Cristiano Ronaldo and Lionel Messi, another with Cristiano Ronaldo and Figo, others with Gisele Bundchen, with (Gisele Bundchen, Tom Brady), with (Irina Shayik, Cristiano Ronaldo), with (Gisele Bundchen, Adriana Lima, Tom Brady), and with (Helen Svedin, Figo); the remaining parameters required to calculate score_{BM25}(d_j, Q^{E_i}) and score_{BM25}(d_j, Q^{R_{1,2}}) are given. For the first entity sub-query, "soccer players", the relevant documents, ranked by score_{BM25}(d_j, Q^{E_1}), are those mentioning Cristiano Ronaldo, Lionel Messi, Figo, and Helen Svedin together with Figo; for the second entity sub-query, "top model", the relevant documents mention Irina Shayik, Gisele Bundchen, Helen Svedin, Adriana Lima, and Tom Brady; for the relationship sub-query "dated", the relevant documents are those associated with (Gisele Bundchen, Tom Brady), (Irina Shayik, Cristiano Ronaldo), (Gisele Bundchen, Adriana Lima, Tom Brady), and (Helen Svedin, Figo).

Since in late fusion the relationship results cannot be used directly as entity tuples, we need to extract candidate tuples from the raw documents retrieved for the relationship sub-query. When there are more than two entity associations in a relevant document, we combine the entities to create tuples: for instance, the document with the three entity associations (Gisele Bundchen, Adriana Lima, Tom Brady) yields three candidate tuples, (Gisele Bundchen, Tom Brady), (Gisele Bundchen, Adriana Lima), and (Adriana Lima, Tom Brady). For each candidate tuple, we sum score_{BM25}(d_j, Q^{R_{1,2}}) over every document relevant to the relationship sub-query that is associated with the entity tuple; the same applies to the individual entities of the candidate tuples that are associated with documents relevant to each entity sub-query. For instance, for the entity sub-query "soccer players" we sum score(d_j, Q^{E_1}) over the relevant documents that mention an entity belonging to a candidate tuple. When both entities of a candidate tuple are mentioned in documents relevant to an entity sub-query, as happens with Helen Svedin and Figo, we assign the entity that maximizes the final score score(T, Q): here we use the scores of Figo for the entity sub-query "soccer player" and of Helen Svedin for "top model". The final ranked list of entity tuples is: (Irina Shayik, Cristiano Ronaldo), (Helen Svedin, Figo), (Gisele Bundchen, Tom Brady), (Gisele Bundchen, Adriana Lima), (Adriana Lima, Tom Brady). Lionel Messi is excluded from the final ranked list of entity tuples because he is not associated with any document relevant to the relationship "dated". On the other hand, Adriana Lima is included in the final ranking, although it is not true that she dated either Tom Brady or Gisele Bundchen. In this example, the top three entity tuples are ranked in the same order as in the early fusion example.

Implementation

In this section we proposed two design patterns for E-R retrieval, early fusion and late fusion, which can be seen as a flexible framework for ranking tuples of entities given an E-R query expressed as a sequence of entity and relationship sub-queries. The framework is flexible enough to allow using any retrieval method to compute the individual retrieval scores of the document and query nodes in the graph structures, e.g., language models or other scoring functions. The design patterns can be used to create unsupervised baseline methods for E-R retrieval. In the case of early fusion, there is an overhead compared to traditional document search, since we need to create two dedicated indexes to store the entity and relationship meta-documents. The entity index is created by harvesting the context terms in the proximity of every occurrence of a given entity across the raw document collection; this process must be carried out for every entity in the raw document collection. A similar process is applied to create the relationship index: for every two entities occurring close together in a raw document, we extract the text of their co-occurrences, and the representation of a relationship between the two is built from those contexts; this process must be carried out for every pair of entities co-occurring in sentences across the raw document collection. Late fusion requires less overhead, as it can be implemented on top of a web search engine with reduced effort: we only need the list of entity occurrences alongside each document, and therefore there is no need to create a separate index. On the other hand, it requires more processing at query time, since we need to first rank the raw documents and then aggregate the entity occurrences from the top documents retrieved. Moreover, it does not contain sentence-level information about entity occurrences: two entities occurring far apart in the text might be considered relationship candidates, which might be prone to a higher false positive rate. One advantage of early fusion lies in its flexibility: since we need to create two separate indexes for E-R retrieval, it is possible to combine data from multiple sources in a seamless way. For instance, one could use a well-established knowledge base, such as DBpedia, for the entity index, and use a specific collection, e.g., a news collection or a social media stream, for harvesting relationships, which are transient in nature.

A challenge common to both design patterns is inherent to the problem of E-R retrieval: the size of the search space. Although the problem is formulated as a sequence of independent sub-queries, the results must be joined together; consequently, the search requires joining sub-query results based on shared entities. The problem becomes particularly hard when short sub-queries contain popular terms. Consider, for example, the sub-query Q^{E_i} = "actor": it has many results, probably thousands, and with high probability we would need to process thousands of sub-query results before finding one entity that is also relevant to the corresponding relationship sub-query. When time and computational power are constrained, one would probably apply a strategy of considering only the top results of each sub-query, which can lead to reduced recall in the case of short sub-queries with popular terms.

Entity-Relationship Dependence Model

In this section we present the Entity-Relationship Dependence Model (ERDM), a novel supervised early fusion model for E-R retrieval. Recent approaches to entity retrieval have demonstrated that using models based on the MRF framework for retrieval to incorporate term dependencies can improve entity search performance. This suggests that the MRF framework could also be used to model query term dependencies among entity and relationship documents. One of the advantages of the MRF framework for retrieval is its flexibility: we only need to construct the graph G representing the dependencies to model, define the set of potential functions over the cliques of G, and learn the parameter vector \Lambda in order to score each document by its unique unnormalized joint probability with Q under the MRF. The potential functions are defined using the exponential form \psi(c; \Lambda) = \exp[\lambda_c f(c)], where \lambda_c is a feature weight, a free parameter of the model associated with the feature function f(c), and learning to rank is used to learn the
feature weights that minimize a loss function. The model allows parameter and feature-function sharing across cliques of the same configuration, i.e., with the same size and type of nodes (e.g., one query term node and one document node).

Graph Structures

ERDM creates an MRF for modeling implicit dependencies between sub-query terms, entities, and relationships. Each entity and each relationship is modeled as a document node within the graph, and edges reflect term dependencies. Contrary to traditional retrieval using the MRF framework (e.g., SDM), where the objective is to compute the posterior of a single document given a query, ERDM allows the computation of a joint posterior of multiple documents, entities and relationships, given a query that itself consists of multiple sub-queries.

[Figure: Markov random field dependencies of E-R retrieval, |Q| = 3.]

The graph structures of ERDM for two E-R queries, one with |Q| = 3 and one with |Q| = 5, are depicted in the two figures. The graph structures contain two different types of query nodes and two types of document nodes: entity sub-query and relationship sub-query term nodes, plus entity and relationship document nodes. Within the MRF framework these are considered "documents", although they are not actual real documents but rather objects representing an entity or a relationship between two entities. Unlike real documents, these objects have no direct and explicit representations: it is usually necessary to gather evidence across multiple real documents that mention a given object in order to be able to match it against keyword sub-queries. Therefore, ERDM can be seen as an early fusion retrieval model, and the existence of two different types of documents implies two different indexes: the entity index and the relationship index.

[Figure: Markov random field dependencies of E-R retrieval, |Q| = 5.]

The E-R dependencies of ERDM are found in the cliques formed by one entity document and one relationship document, e.g., (D^{E_1}, D^{R_{1,2}}) and (D^{E_2}, D^{R_{1,2}}). The graph structure does not need to assume an explicit dependence between entity documents given a relationship document: there is an implicit connection through the dependencies with the relationship document, and the likelihood of observing an entity document D^{E_1} given a relationship document D^{R_{1,2}} is not affected by the observation of the other entity document. An explicit dependence between two entity documents could be used to represent the direction of the relationship between two entities; to support such a dependence, the relationship documents would need to account for the following constraint: the relationship index would have to represent ordered relationships, and we would compute an ordered feature function between the entities of a relationship, similar to the ordered bigram feature function of SDM. In this work we do not explicitly model asymmetric relationships: for instance, if a user searches for the relationship "entity A criticized entity B" while in fact it was entity B that criticized entity A, we assume the entity tuple (A, B) is still relevant to the information need expressed in the query. ERDM follows SDM for the dependencies between sub-query terms and documents, due to its proved effectiveness in multiple contexts; therefore, ERDM assumes a dependence between neighboring sub-query terms, q_j^{E_i} and q_{j+1}^{E_i} for entity sub-queries, and likewise for relationship sub-queries.

The MRF framework for
retrieval requires the definition of the sets of cliques (maximal or non-maximal) within the graph that have one or more feature functions associated with them. The clique sets of ERDM containing at least one document node are the following:

- T^{E}: the set of cliques containing an entity document node and exactly one term of an entity sub-query;
- O^{E}: the set of cliques containing an entity document node and two ordered terms of an entity sub-query;
- T^{R}: the set of cliques containing a relationship document node and exactly one term of a relationship sub-query;
- O^{R}: the set of cliques containing a relationship document node and two ordered terms of a relationship sub-query;
- S^{ER}: the set of cliques containing one entity document node and one relationship document node;
- S^{RER}: the set of cliques containing one entity document node and two consecutive relationship document nodes.

The joint probability mass function of the MRF is computed using the set of potential functions over the configurations of the maximal cliques in the graph. The potential functions are constructed from one or more real-valued feature functions, associated with the respective feature weights, using the exponential form. ERDM has two types of feature functions: textual and non-textual. Textual feature functions measure the textual similarity between one or more sub-query terms and a document node, while non-textual feature functions measure the compatibility of entity and relationship documents, i.e., whether they share a given entity. The following table presents an overview of the feature functions, with their associated clique set, type, and input nodes (the layout is an approximate reconstruction):

Clique set | Feature functions | Type        | Input nodes
T^{E}      | f_{T_E}           | textual     | q_j^{E_i}, D^{E_i}
O^{E}      | f_{O_E}, f_{U_E}  | textual     | q_j^{E_i}, q_{j+1}^{E_i}, D^{E_i}
T^{R}      | f_{T_R}           | textual     | q_j^{R_{i-1,i}}, D^{R_{i-1,i}}
O^{R}      | f_{O_R}, f_{U_R}  | textual     | q_j^{R_{i-1,i}}, q_{j+1}^{R_{i-1,i}}, D^{R_{i-1,i}}
S^{ER}     | f_{S_{ER}}        | non-textual | D^{E_i}, D^{R_{i-1,i}}
S^{RER}    | f_{S_{RER}}       | non-textual | D^{E_i}, D^{R_{i-1,i}}, D^{R_{i,i+1}}

Although we could define a wide set of different feature functions, we decided to adapt the SDM textual feature functions to the ERDM clique configurations. Therefore, we define unigram-based feature functions f_{T_E} and f_{T_R} for the cliques containing a single sub-query term and an entity or relationship document node. For the cliques containing two consecutive sub-query terms and a document node, we define two feature functions: the first considers matches of ordered bigrams of consecutive terms in the entity or relationship documents, and is denoted f_{O_E} or f_{O_R} depending on the clique; the second matches bigrams in documents using an unordered window of N terms, i.e., it matches bigrams in which the two terms occur within a maximum of N terms of each other, and is denoted f_{U_E} or f_{U_R} depending on the clique. For each textual feature function, we decided to use variants of Dirichlet smoothing language models. We now present a summary of the textual feature functions used in this work:

f_{T_E}(q_j^{E_i}, D^{E_i}) = \log \left[ \frac{tf_{q_j, D^{E_i}} + \mu^{E} \frac{cf_{q_j}^{E}}{|C^{E}|}}{|D^{E_i}| + \mu^{E}} \right]

f_{O_E}(q_j^{E_i}, q_{j+1}^{E_i}, D^{E_i}) = \log \left[ \frac{tf_{\#1(q_j, q_{j+1}), D^{E_i}} + \mu^{E} \frac{cf_{\#1(q_j, q_{j+1})}^{E}}{|C^{E}|}}{|D^{E_i}| + \mu^{E}} \right]

f_{U_E}(q_j^{E_i}, q_{j+1}^{E_i}, D^{E_i}) = \log \left[ \frac{tf_{\#uwN(q_j, q_{j+1}), D^{E_i}} + \mu^{E} \frac{cf_{\#uwN(q_j, q_{j+1})}^{E}}{|C^{E}|}}{|D^{E_i}| + \mu^{E}} \right]

with the analogous f_{T_R}, f_{O_R}, and f_{U_R} computed over relationship documents using \mu^{R}, cf^{R}, and |C^{R}|. Here, tf denotes term frequencies in entity or relationship documents; cf^{E} and cf^{R} represent the frequency of a term (or bigram) in the entity index and the relationship index, respectively; the \#1 variants represent ordered bigram matching frequencies and the \#uwN variants unordered matching within a window of N terms; |D^{E_i}| and |D^{R_{i-1,i}}| represent the total number of terms in an entity or relationship document, while |C^{E}| and |C^{R}| represent the total number of terms in each collection; N^{E} and N^{R} denote the total number of documents in the entity and relationship indexes, used for the document frequencies of unigrams and bigrams; avg(|D^{E}|) and avg(|D^{R}|) denote the average entity and relationship document lengths. The Dirichlet priors \mu^{E} and \mu^{R} are free parameters that generally correspond to the average document length in each collection, usually chosen by default in the absence of specific optimization.

We define two non-textual features for ERDM. The first one, f_{S_{ER}}, is assigned to cliques composed of one entity document and one relationship document. Inspired by the feature function of Hasibi and Balog's ELR model, it is defined as a linear interpolation that implements a smoothing method between a membership component, which measures whether the entity represented by D^{E_i} belongs to the relationship represented by D^{R_{i-1,i}}, and a background model that employs the notion of entity popularity within the collection of relationship documents:

f_{S_{ER}}(D^{E_i}, D^{R_{i-1,i}}) = \log \left[ (1 - \alpha) \; \mathbb{1}(E_i \in D^{R_{i-1,i}}) + \alpha \; \frac{n(E_i)}{N^{R}} \right]

where n(E_i) represents the number of relationship documents that contain the entity E_i and N^{R} represents the total number of relationship documents in the relationship index.
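A sketch of how f_{S_ER} could be computed; since the exact formula above is itself a reconstruction, the code should be read as illustrating the interpolation idea rather than as the thesis implementation. Relationship documents are represented simply by the pair of entities they cover:

import math

def f_ser(entity, rel_pair, rel_index, alpha=0.1):
    """Entity-relationship compatibility: interpolates the membership of the entity
    in the relationship document with a popularity-based background model."""
    n_e = sum(1 for pair in rel_index if entity in pair)  # rel. docs containing E
    background = n_e / max(len(rel_index), 1)
    member = 1.0 if entity in rel_pair else 0.0
    return math.log((1 - alpha) * member + alpha * background + 1e-12)  # 1e-12 avoids log(0); our choice

rel_index = {("Cristiano_Ronaldo", "Irina_Shayk"): None,
             ("Gisele_Bundchen", "Tom_Brady"): None}
print(f_ser("Cristiano_Ronaldo", ("Cristiano_Ronaldo", "Irina_Shayk"), rel_index))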
feature functions cliques graph therefore apply linear learning rank algorithm optimize ranking respect vector feature weights given training set composed relevance judgments ranking entity tuples evaluation function produces real valued output objective find values vector maximizes explained require consider ranking produced individual scores standard characteristic among information retrieval evaluation metrics map ndcg discussion section introduced dependence model erdm novel supervised early model retrieval inspired recent work entity retrieval believe modeling term dependencies documents increase search performance entity retrieval online reputation monitoring erdm seen extension sdm model document retrieval way besides modeling query term dependencies create graph structures depict dependencies entity relationship documents consequently instead computing single posterior propose use mrf retrieval computing joint posterior multiple entity relationship documents given query moreover since erdm supervised model believe tuning weights feature functions besides optimizing search performance also help explain terms respective documents also entity documents relationship documents contribute overall relevance entity tuples given query summary contributions chapter present several contributions problem retrieval perspective generalization problem search cover entity types relationships represented attribute predicate respectively rather predefined set general probabilistic model retrieval using bayesian networks proposal two design patterns support retrieval approaches using model proposal dependence model builds basic sequential dependence model sdm provide extensible representations dependencies suitable complex queries chapter retrieval web corpus start chapter presenting new method generating test collections together new test collection relink query collection comprising queries leverage web tabular data containing entities relationships among share row table exploit wikipedia tree articles containing lists entities form tables developed table parser extracts tuples entities tables together associated metadata information provided editors create queries fulfilled extracted tuples report set evaluations erdm model using four different query sets order leverage information entities relations corpus necessary create representation entity related information amenable search approach focus sentence level information entities although method applied complex segmentation text experiments based data set text annotation refer entities found text including variances surface forms entity designated unique unique entity instance created entity documents comprising collection sentences contain entity context documents indexed comprising entity index done creating entity pair documents entity pair index two indexes enable execute queries using different retrieval models including erdm models dependence entities retrieval web corpus relink query collection improvements search techniques hampered lack test collections particularly complex queries involving multiple entities relationships section describe method generating test queries support comprehensive search experiments queries relevance judgments created content exists tabular form columns represent entity types table structure implies one relationships among entities editorial work involves creating natural language queries based relationships represented entries table publicly released relink test collection comprising queries relevance 
The RELink Query Collection

Improvements in E-R search techniques are hampered by the lack of test collections, particularly for complex queries involving multiple entities and relationships. In this section we describe a method for generating E-R test queries to support comprehensive search experiments. Queries and relevance judgments are created from content that exists in tabular form, where columns represent entity types and the table structure implies one or more relationships among the entities. Editorial work involves creating natural-language queries based on the relationships represented by the entries of the table. We publicly released the RELink test collection comprising 600 queries and relevance judgments obtained from a sample of Wikipedia tables; the latter comprise tuples of entities extracted from columns labelled with the corresponding entity types and the relationships they represent.

Improvement of methods for both extraction and search is hampered by the lack of query sets and relevance judgments, i.e., gold standards that could be used to compare the effectiveness of different methods. Here we introduce a method for acquiring instances of entities and entity relationships from tabular data, along with the RELink Query Collection of queries and corresponding relevance judgments. Essential to our approach is the observation that tabular data typically includes entity types as columns and entity instances as rows. The table structure implies a relationship among the table columns, and enables the creation of E-R queries that are answered by the entity tuples across the columns. Following this approach, we prepared and released RELink, comprising queries and relevance judgments based on a sample of Wikipedia tables. The query collection and the research framework are publicly available, enabling the community to expand the RELink framework with additional document collections and alternative indexing and search methods. It is important to maintain and enhance RELink by providing updates to the existing entity types and creating new queries and relevant instances from additional tabular data. The material contained in this section was published in Saleiro et al., "RELink: A Research Framework and Test Collection for Entity-Relationship Retrieval".

Tabular Data and Entity Relationships

Information that satisfies complex E-R queries is likely to involve instances of entities and relationships dispersed across web documents. Sometimes, however, such information is collected and published within a single document, such as a Wikipedia page. In such cases, traditional search engines can provide excellent search results without applying special E-R techniques or considering entity and relationship types. Indeed, the data collection, aggregation and tabularization has already been done by a Wikipedia editor. That also means that tabular Wikipedia content, comprising various entities, can be considered as representing a specific information need, i.e., the need that motivated the editors to create the page in the first place; such content can in fact satisfy many different information needs. We focus on exploiting tabular data for exhaustive search over E-R types. In order to specify E-R queries, we use the column headings as entity types and the column entries as relevance judgments for the entity query. Similarly, given a pair of columns that correspond to distinct entities, we formulate the relationship implied by the table; for example, the pair (car, manufacturing plant) could refer to the "is made in" or "is manufactured by" relationships. The instances of entity pairs in the table then serve as evidence for the specific relationship. This can be generalized to complex information needs that involve multiple entity types and relationships. Automated creation of queries from tabular content is an interesting research problem; for now, we asked human editors to provide natural-language and structured queries for specific entity types. Once we collect sufficient amounts of data from the human editors, we will be able to automate the query creation process with machine learning techniques. For RELink we compiled a set of queries and relevance judgments from Wikipedia lists covering a broad set of topic areas.

Selection of Tables

Wikipedia contains a dynamic index, "Lists of lists of lists", which represents the root of a tree that spans curated lists of entities in various domains. We used an October snapshot of Wikipedia and traversed the "Lists of lists of lists" tree, starting from the root page and following every hyperlink of type "list of" and their children. This resulted in a collection of list pages, of which only a subset contains tabular data. We include only tables with a consistent column and row structure, and restrict content extraction to the "wikitable" HTML class, which typically denotes data tables in Wikipedia, ignoring other types of tables such as infoboxes. In the first instance, we focus on relational tables, i.e., tables that have a key column referring to the main entity of the table. For instance, the list of books about skepticism contains a table of books with the columns Author, Category and Title, among others; in this case, the key column Title contains the titles of books about skepticism. We require that any relationship specified over the entity types of the table must involve the Title column.

In order to detect key columns, we created a table parser that uses the set of heuristics adopted by Lehmberg et al., e.g., the ratio of unique cells in the column and the text length of cell content. Once the key column is identified, the parser creates entity pairs consisting of the key column and each of the remaining columns of the table; the content of the column cells constitutes the set of relevance judgments for the relationship specified by the pair of entities.
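A minimal sketch of such key-column detection follows. The two heuristics (cell uniqueness and a mostly-textual test) are in the spirit of Lehmberg et al., but the threshold values are illustrative assumptions, not the parser's exact settings.

```python
def key_column_index(rows, min_unique=0.8, max_digit_frac=0.3):
    # rows: list of equal-length lists of cell strings (one list per row).
    # Returns the index of the leftmost column whose cells are mostly
    # unique and mostly textual, or None if no column qualifies.
    n_cols = len(rows[0])
    for j in range(n_cols):
        cells = [r[j].strip() for r in rows if r[j].strip()]
        if not cells:
            continue
        uniqueness = len(set(cells)) / len(cells)
        digit_frac = sum(c.replace(",", "").isdigit() for c in cells) / len(cells)
        if uniqueness >= min_unique and digit_frac <= max_digit_frac:
            return j
    return None

table = [["The Demon-Haunted World", "Carl Sagan", "1995"],
         ["Why People Believe Weird Things", "Michael Shermer", "1997"]]
print(key_column_index(table))  # -> 0, the Title column
```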
For the sake of simplicity, we consider only Wikipedia lists that contain a single relational table. Furthermore, since our goal is to create queries with verifiable entity and entity-pair instances, we selected only those relational tables for which the key column and at least one more column have cell content linked to Wikipedia articles. With these requirements we collected a pool of candidate tables. In a final step, we selected tables by performing stratified sampling across the semantic domains covered by the Wikipedia lists. For each new table we calculated the Jaccard similarity scores between its title and the titles of the Wikipedia pages associated with the tables already in the pool; by setting a maximum similarity threshold we obtained a diverse final set of tables.

Formulation of Queries

The process of creating RELink queries involves two steps: (1) automatic selection of tables and of columns within tables, and (2) manual specification of the information needs. For example, in a table about the Grammy Award for Album of the Year, the columns Winner and Work are automatically selected to serve as entity types in a query (see the figures below), while the relationship among the entities is suggested by the table title; we then let a human annotator formulate the query. The RELink query set was created by annotators who were provided with access to the full table, including its metadata (the table title and the first paragraph of the page), and the entity pairs or triples to be used to specify the query. For each entity pair or triple, the annotators created a natural-language information need and a query in the relational format {Q^{E_i}, Q^{R_i}}, as shown in the examples below.

Fig.: Example of a Wikipedia table row.
Fig.: Example of the metadata provided to the editors for the formulation of queries.

The relational query format was introduced to support a variety of experiments with E-R queries. In essence, a complex information need is decomposed into a set of sub-queries that specify the types of the entities and the types of the relationships between entities. For each relationship query there is one sub-query per entity involved in the relationship. Thus, a query that expects a pair of entities for a given relationship is mapped into three sub-queries, Q^{E_1}, Q^{E_2} and Q^{R}, corresponding to the two entity types and the relationship type, respectively.
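A minimal illustration of this decomposition is shown below. The dictionary structure and field names are assumptions made for the sketch; the pair query reuses the car/manufacturing-plant example from the discussion above, and the triple query is hypothetical.

```python
# A pair query is decomposed into two entity sub-queries and one
# relationship sub-query.
pair_query = {
    "NL": "car models and the manufacturing plants where they are made",
    "E1": "car model",           # Q^{E1}: entity type of the key column
    "R":  "is made in",          # Q^{R}:  relationship implied by the table
    "E2": "manufacturing plant", # Q^{E2}: entity type of the second column
}

# A triple query adds a second relationship between consecutive entities.
triple_query = {
    "NL": "writers, the novels they wrote, and the films adapting them",
    "E1": "writer", "R1": "wrote", "E2": "novel",
    "R2": "was adapted into", "E3": "film",
}
```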
Collection Statistics

RELink covers a broad set of thematic areas of Wikipedia: mathematics and logic, religion and belief systems, technology and applied sciences, miscellaneous, people, geography and places, natural and physical sciences, general reference, and culture and the arts. The most common thematic areas are culture and the arts, followed by geography and places. The table below shows the characteristics of the natural-language and relational queries; among the 600 queries, some refer to entity pairs and the rest to entity triples.

Table: Examples of query annotations, with the natural-language query and its relational format.
  "regiments held by the Indian Army"  ->  {regiment} {held by} {Indian Army}
  "seasons in which NHL players scored goals and the team they represented"  ->  {NHL season} {scored goals in} {NHL player} {played for} {NHL team}

Table: RELink collection statistics, reporting the total number of queries, the average length of the natural-language queries, the average lengths of the Q^{E} and Q^{R} sub-queries, the number of unique entity attributes, the number of unique relationships, and the average number of relevance judgments per query.

As expected, the natural-language descriptions of the queries are, on average, longer than the relational queries. We analyzed the structure of the relational queries in terms of their components: entity sub-queries specify an entity type, while relationship sub-queries specify a relationship type. Across the queries, entity types are mostly unique rather than shared; most entity types occur in only one query, and few occur in more than one. The most commonly shared entity type is "country". In the case of relationships, there are fewer unique relationship types, and the dominant type, "located in", occurs in many queries. This is not surprising, since in many domains the key entity is tied to a location included in one of the columns. Nevertheless, the distribution of relationship types implies that RELink is a diverse set of queries, including many relationship types that occur only once.

Experimental Setup

In this section we detail how we conducted the E-R retrieval experiments. Since we do not have access to test collections comprising documents and general-purpose E-R queries, we decided to use the ClueWeb-09-B web corpus. More precisely, the ClueWeb-09 dataset was created to support research on information retrieval and related human language technologies and contains one billion web pages; the part B we use is a subset of the most popular English web pages, including Wikipedia, created as a resource for research groups without the processing power to process the full collection. We used this web collection with text span annotations linked to Wikipedia entities, and we show how RELink can be used for E-R retrieval over web content. We developed an E-R retrieval prototype using Apache Lucene for indexing and search, through the Python library PyLucene, which allowed a customized implementation tailored to E-R retrieval.

Data and Indexing

Fig.: Illustration of the indexing of the web corpus into the entity index and the relationship index.

As text corpus we use ClueWeb-09-B combined with FACC1 text span annotations with links to Wikipedia entities (via Freebase); the precision and recall of the entity linking are estimated at around 80-85% and 70-85%, respectively. For our experiments we created two main indexes: one for entity extractions and one for entity-pair (relationship) extractions. We extract entity and entity-pair occurrences using an Open Information Extraction method, in the style of OLLIE, over the annotated corpus, as follows. For each entity annotation, we extract the sentence where it occurred as the entity context. For pairs of entities, we look for entities occurring in the same sentence and extract the separating string as the context of the relationship connecting them. The figure illustrates the indexing process adopted in this work. We obtained a very large number of entity extractions and entity-pair extractions, as described in the table below. In order to compute the document lengths |d| incrementally, we updated two auxiliary indices containing the number of terms per entity and per entity pair, respectively. We ran our experiments using Apache Lucene and made use of GroupingSearch for grouping extractions by entity and by entity pair at query time; to get the statistics of ordered and unordered bigrams, we made use of SpanNearQuery.

Table: Extraction statistics, reporting the total and unique numbers of entity and entity-pair extractions and the average document lengths.

Retrieval Method and Parameter Tuning

In the experiments using ERDM we adopted a three-stage retrieval method. First, the sub-queries Q^{E_i} and Q^{R_i} are submitted to the entity index and the relationship index, and initial sets of top results, grouped by entity and by entity pair respectively, are retrieved using Lucene's default search settings. Second, the feature functions of the specific retrieval model are calculated for the candidate set; this process is easily parallelized in our implementation. Finally, the ranking score is computed using the learned weights, and evaluation scores are reported over the top results.

For parameter tuning of ERDM and the baselines, we directly optimized with respect to mean average precision (MAP). We made use of the RankLib implementation of the coordinate ascent algorithm under sum-normalization constraints, with random restarts. Coordinate ascent is a commonly used optimization technique that iteratively optimizes a single parameter while holding all other parameters fixed. Parameters were estimated using cross-validation for each of the query sets separately. To be able to use the same train and test folds throughout the experiments, we first randomly created fixed train/test folds from the initial result set of each query set, and the reported evaluation metrics were averaged across folds. We optimized the Dirichlet priors of the language models by setting them equal to the traditional average document length, i.e., the average entity and entity-pair extraction lengths, respectively, and set the unordered window size of f_{U_E} and f_{U_R} to the value suggested in the SDM literature.
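Since we rely on RankLib for the actual optimization, the following is only a schematic re-implementation of coordinate ascent with sum normalization and random restarts. The objective function, step size and iteration counts are illustrative; in the experiments the objective is the MAP of the ranking induced by the weights.

```python
import random

def coordinate_ascent(evaluate, n_feats, restarts=5, iters=20, step=0.05):
    # Maximizes evaluate(weights) under a sum-to-one constraint by
    # optimizing one weight at a time while holding the others fixed.
    def normalize(w):
        s = sum(w)
        return [x / s for x in w] if s > 0 else [1.0 / n_feats] * n_feats
    best_w, best_score = None, float("-inf")
    for _ in range(restarts):
        w = normalize([random.random() for _ in range(n_feats)])
        for _ in range(iters):
            for i in range(n_feats):
                for delta in (step, -step):
                    cand = list(w)
                    cand[i] = max(0.0, cand[i] + delta)
                    cand = normalize(cand)
                    if evaluate(cand) > evaluate(w):
                        w = cand
        score = evaluate(w)
        if score > best_score:
            best_w, best_score = w, score
    return best_w, best_score

# Toy objective with a known optimum at (0.7, 0.2, 0.1).
target = [0.7, 0.2, 0.1]
obj = lambda w: -sum((a - b) ** 2 for a, b in zip(w, target))
print(coordinate_ascent(obj, 3))
```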
Test Collections

We ran experiments with four query sets. We decided to perform the experiments using only queries aiming at pairs of entities, leaving the evaluation of queries aiming at triples for future work. Besides RELink, we used relationship-centric query sets with pairs of Wikipedia entities as answers and relevance judgments. The query sets cover a wide range of domains, as described in the table below; query sets for E-R retrieval are scarce, in comparison with generic entity retrieval query sets.

Table: Description of the query sets used for evaluation (number of queries and domains).
  QALD-2  : geography and places, politics and society, culture and arts, technology and science
  ERQ     : award, city, club, company, film, novel, person, player, song, university
  COMPLEX : cinema, music, books, sports, computing, military conflicts
  RELink  : general reference, culture and the arts, geography and places, mathematics and logic, natural and physical sciences, people, religion and belief systems, society and social sciences, technology and applied science

One exception is QALD-2, a query set designed for a question-answering collection, of which we use the subset of relational queries. Relational queries with a fixed relevant entity (e.g., "Brooklyn Bridge") were easily transformed into single-given-entity relevance judgments for pairs. For relational queries where we identified a fixed relevant entity in the query itself, such as "Give me the capitals of all countries in Africa", the provided relevance judgments contain a single entity, so we needed to annotate the missing entity manually to create a pair; for instance, given a capital city in Africa, we identified the corresponding African country. In addition, we used ERQ and COMPLEX, two benchmarks created in previous work on E-R approaches. Neither ERQ nor COMPLEX provides complete relevance judgments; consequently, we manually evaluated each answer in our experiments. ERQ consists of queries adapted from earlier collections; however, most queries are given a fixed entity in the query (e.g., "Find Eagles songs"), and only some ask for pairs of unknown entities (e.g., "Find films starring Robert De Niro and please tell me their directors"). COMPLEX queries were created with an automatic approach; we removed those that do not expect pairs of entities, so that the resulting query set consists of pure E-R queries with unknown pairs of entities, such as "currency of the country whose president is James Mancham", "kings of the city that led the Peloponnesian War", and "who starred in a movie directed by Hal Ashby".

We used four different retrieval metrics: mean average precision (MAP), precision (P), mean reciprocal rank (MRR) and normalized discounted cumulative gain (NDCG).

Results and Analysis

We start by performing a simple experiment comparing early fusion (EF) and ERDM, using both language models and BM25 as retrieval functions. Since we are interested in comparing relative performance, we opted to scale down the experimental setup: instead of computing term frequencies over every extraction of a given entity or relationship, we cap the number of documents per group retrieved in the first passage. We tried several different values; below a certain number of extractions per group, performance is reduced significantly, while above it the gains are not dramatic. This setup reduces the experimental runtime considerably and, since we had limited computational resources, it proved useful.

Table: Early fusion vs. ERDM comparison, reporting MAP, P, MRR and NDCG on ERQ, COMPLEX and RELink queries for the LM and BM25 variants.

The table depicts the results of this comparative evaluation, for which we decided to use the three test collections specifically tailored to relationship retrieval. As can be seen, the results are similar between EF and the ERDM variants across the three test collections. ERDM presents slightly better performance than the corresponding EF variant; however, on performing statistical significance tests over the results obtained when comparing ERDM with EF, the differences are not significant. Interestingly, this shows that, in a general-purpose evaluation, the overhead of computing sequential dependencies does not carry significant improvements. On the other hand, we detect sensitivity to the retrieval function used: on ERQ one retrieval function outperforms the other, while the opposite happens on COMPLEX and RELink. This sensitivity means we cannot generalize the assumption that one of the retrieval functions is more adequate for E-R retrieval than the other. Another important observation is the overall lower results on the RELink test collection in comparison with ERQ and COMPLEX, which, contrary to our expectations, is explained by the low coverage in ClueWeb of the entity tuples relevant to RELink.

We then compare ERDM against three baselines using sequential dependence, to evaluate the impact of modeling dependencies between query terms. The first baseline method, BASE-E-E, consists of submitting the two sub-queries Q^{E_1} and Q^{E_2} to the entity index; candidate pairs are created by the cross product of the two entity result sets retrieved for each sub-query, and Sequential Dependence Model (SDM) scores are computed for each result. The second baseline method, BASE-E, consists of submitting a single query towards the entity index, with candidate pairs created by the cross product of the entity result set with itself. The third baseline method, BASE-R, consists of submitting a single query towards a relationship index that was created using the full sentence instead of only the separating string used in the ERDM approach; BASE-R thus aims to capture entity context that might be present in the sentence, while ERDM relies on the entity index for that purpose.
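The candidate generation of the BASE-E-E baseline can be sketched as follows. Combining the two SDM scores by summation is an illustrative choice for the sketch, not necessarily the exact combination used in the experiments.

```python
from itertools import product

def base_ee_candidates(results_e1, results_e2, k=100):
    # results_e1 / results_e2: ranked lists of (entity, sdm_score) returned
    # for the two entity sub-queries. Candidate pairs are the cross product
    # of the two result sets, ranked by the sum of the two SDM scores.
    pairs = [((e1, e2), s1 + s2)
             for (e1, s1), (e2, s2) in product(results_e1, results_e2)
             if e1 != e2]
    return sorted(pairs, key=lambda p: p[1], reverse=True)[:k]

r1 = [("Robert De Niro", -4.2), ("Al Pacino", -5.0)]
r2 = [("Martin Scorsese", -3.9), ("Brian De Palma", -4.8)]
print(base_ee_candidates(r1, r2, k=2))
```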
For the purpose of this evaluation, we cap the number of extractions used to compute term frequencies inside each group of results returned in the first passage by Lucene's GroupingSearch. Due to the low coverage of ClueWeb for the entire RELink collection, we decided to perform this evaluation using the top RELink queries with the highest number of relevance judgments present in the indexes. We also include results for the adapted QALD-2 test collection.

Table: Results of ERDM compared with the three baselines, BASE-E-E, BASE-E and BASE-R, in terms of MAP, P, MRR and NDCG on QALD-2, ERQ, COMPLEX and RELink queries.

The table presents the results of these experiments for each query set. We start by comparing the three baselines among themselves. As follows from the table, the BASE-R baseline outperforms BASE-E-E and BASE-E on all query sets, and BASE-E-E is the worst-performing baseline. BASE-R is the closest of the three baselines to a traditional document retrieval approach, as its document collection comprises full sentences of the corpus, while BASE-E-E and BASE-E retrieve entity pairs created in an artificial cross-product step, which reduces the probability of retrieving relevant results. These results show the need for a document collection specifically aimed at answering E-R queries. ERDM significantly outperforms the baselines on all query sets: we performed statistical significance testing on MAP of ERDM against the best baseline, obtaining significant differences on all query sets. The results show that the early-fusion approach using two indexes, one for entities and one for relationships, is adequate and promising, and we believe this approach can become a reference for future research in E-R retrieval.

Nevertheless, based on the absolute results obtained for every evaluation metric and query set, we conclude that E-R retrieval is still far from being a solved problem, and there is room to explore new feature functions and new retrieval approaches to this difficult problem. The methods proposed here are still far from optimal performance for some queries; for example, queries about World War flying aces and the services they belonged to, or about the mountain range whose highest peak is Annapurna, are examples of queries for which zero relevant judgments were returned. On the other hand, ERDM exhibits interesting performance for queries of high complexity, such as "computer scientists who are professors at the university of Frederick Terman". We can speculate about the aspects that might influence performance. One aspect is the lack of query relaxation in our experimental setup: relevant entity tuples might exist in the indexes, but the query terms used to search for them may not match the terms harvested at extraction time, making it impossible to retrieve those relevant judgments. Query relaxation approaches will be tried in future work; specifically, with recent advances in word embeddings it is possible to expand queries with alternative query terms that match the indexes. On the other hand, we adopted a simple approach for extracting entities and relationships; the use of dependency parsing or more complex relation extraction methods would allow the filtering of noisy terms, which we also leave to future work. Moreover, to assess the influence of the extraction method, we propose to use selective text passages containing the target entity pairs and the query terms associated with them, so that different extraction methods could be tried with a straightforward evaluation of their impact.

Fig.: Lambda values of ERDM, obtained using sum normalization.

To understand how much importance is attributed to the different types of clique sets, we plot the values of the lambda parameters. The \lambda_E parameters represent the importance of the set of feature functions targeting the dependence between entity query terms and entity documents in the overall ranking score; \lambda_R represents the importance of the feature functions for the relationship sub-queries; and, finally, \lambda_S is the value assigned to the feature functions that evaluate whether an entity retrieved for an entity sub-query belongs to the relationship retrieved for the relationship sub-query. Plotting the feature weights learned for each query set, we see that the weights of the unigram language model for entity sub-queries dominate the ranking function. We also evaluated the relative weights of each one of the three groups of functions using sum normalization. Comparing the weights assigned to entity documents and relationship documents, we observe that the former dominates in every query set; however, the same does not happen within the relationship sub-queries, where bigram features have higher values on COMPLEX and RELink.

Summary of Contributions

In this chapter we presented the following contributions:
- A new E-R retrieval indexing method that supports the generalization of the problem, with entity types and relationships represented by attributes and predicates, respectively.
- A method for generating test collections, which resulted in the RELink Query Collection comprising 600 queries.
- Results of E-R retrieval experiments at a realistic scale, over a comprehensive set of queries and corpora.

Entity Filtering and Financial Sentiment Analysis

In this chapter we present the work developed to tackle two fundamental text mining problems of ORM: entity filtering and sentiment analysis. We start by describing our participation in the filtering task of RepLab 2013, for which we developed a supervised method to classify tweets as relevant or not to a given target entity; the method obtained the first place in the competition. Entity filtering can be seen as target-based named entity disambiguation (NED): given a target entity under study, we need to develop a binary classifier that filters out tweets not talking about that target entity. This task is fundamental for ORM, since downstream tasks such as sentiment analysis would produce misleading results if noisy signals were used.

Sentiment analysis has been widely studied in the last decade, and the research area has several ramifications depending on the type of texts and the objective of the analysis. We decided to focus our efforts on a less well-explored sub-problem: the SemEval-2017 Task 5 on fine-grained sentiment analysis of financial news and microblogs. One of the use cases of ORM is to track the online reputation of companies and try to assess its impact on the stock market, so we decided this was a specific task within sentiment analysis to which we could make a contribution; we obtained the fourth place in the microblogs sub-task using one of the evaluation metrics. The task consisted of predicting a real continuous variable representing the polarity and intensity of the sentiment concerning companies mentioned in short texts, which we modeled as a regression analysis problem.

Entity Filtering

The relationship between people and public entities changed with the rise of social media. Online users of social networks and blogs are able to directly express and spread opinions about public entities such as politicians, artists, companies or products. Online Reputation Monitoring (ORM) aims to automatically process online information about public entities; common tasks within ORM consist of collecting, processing and aggregating social network messages to extract opinion trends about those entities. Twitter is one of the most used online social networks and provides a search system that allows users to query for tweets containing a set of keywords. ORM systems often use Twitter as a source of information for monitoring a given entity; however, the search results are not necessarily relevant to the entity, since keywords can be ambiguous. For instance, a tweet containing the word "Columbia" can be related to several entities: a federal state, a city or a university. Furthermore, tweets are short, which results in a reduced context for entity disambiguation. When monitoring the reputation of a given entity on Twitter, it is therefore first necessary to guarantee that the tweets are relevant to the entity; consequently, downstream processing tasks such as sentiment analysis benefit from filtering this noise from the data stream.

In this work we tackle the aforementioned problem by applying a supervised learning approach. Given a set of entities and a stream of texts (tweets), we are interested in monitoring the mentions of each entity in the stream, which can be seen as a discrete function; we cast the prediction as a supervised learning classification problem in which we want to infer the target variable related/unrelated. We implemented a large set of features that describe the relationship between the representation of an entity and a text mentioning it: we use metadata about the entity (names, category and other user-provided configurations), the similarity between the text and the Wikipedia and Freebase pages of the entities, and disambiguation features. We apply feature selection of terms based on frequency, and a transformation of the feature matrix using SVD. The learning algorithms of the Python library we tested for entity filtering include Naive Bayes, SVM, Random Forests, Logistic Regression and the Multilayer Perceptron. The material contained in this section was published in Saleiro, Rodrigues, Soares and Oliveira, "TexRep: A Text Mining Framework for Online Reputation Monitoring".
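One plausible instantiation of such a classification pipeline with scikit-learn is sketched below; the miniature training set is invented for illustration and the pipeline is a simplification of the feature set described above.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestClassifier

# Hypothetical miniature training set for one target entity:
# 1 = related, 0 = unrelated.
tweets = ["new bmw 3 series review", "bmw stands for big money waster lol",
          "test drove the bmw x5 today", "my friend bmw says hi"]
labels = [1, 1, 1, 0]

model = Pipeline([
    # Unigrams to trigrams, as in the language-model feature description.
    ("tfidf", TfidfVectorizer(ngram_range=(1, 3), min_df=1)),
    # SVD transformation of the feature matrix.
    ("svd", TruncatedSVD(n_components=2, random_state=0)),
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
model.fit(tweets, labels)
print(model.predict(["is the new bmw worth it?"]))
```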
Task Overview

RepLab 2013 focused on monitoring the online reputation of entities on Twitter, and its filtering task consisted of determining which tweets are relevant to each target entity. The corpus consists of a collection of tweets obtained by querying the Twitter Search API with the entity names over a period from June to December. The corpus contains tweets in English and Spanish, and the balance between languages varies from entity to entity; tweets were manually annotated as related or unrelated to the respective target entity. The data provided to participants consists of the list of tweet identifiers and, for each tweet of the corpus, the target entity, the language of the tweet and its timestamp. The tweet content itself was not provided: due to the Twitter terms of service, participants were responsible for downloading the tweets using the respective identifiers. The data related to the entities contains the query used to collect the tweets (e.g., "bmw"), the official name of the entity (e.g., "Bayerische Motoren Werke"), the category of the entity (e.g., "automotive"), and content from its homepage and from its Wikipedia articles in English and Spanish.

Pre-processing

The entity filtering module includes methods to normalize texts: removing punctuation, converting text to lower case, removing accents and converting characters to their ASCII equivalent. Lists of stop words for several languages are also available and can be used to filter non-relevant words; we rely on the Natural Language Toolkit (NLTK) to provide such lists. Contrary to other types of online texts, such as news or blog posts, tweets contain very informal language, including emoticons, spelling errors, wrong letter casing, unusual punctuation and abbreviations. Therefore, when dealing with tweets, the entity filtering module uses a tokenizer optimized for segmenting words in tweets; during tokenization, we extract user mentions, URLs and hashtags from the textual content.

Features

Many different types of features can be used to optimize the relevance classification, including language models, keyword similarities between tweets and entities, as well as projections onto external resources. We implemented a large number of them, and we assume that future users of the framework for ORM provide the corresponding data and content prior to training and configuring the entity filtering module.

Language model: the text is encapsulated into a single feature, to avoid the high-dimensionality issues of adding all terms to the feature representation. Unigrams, bigrams and trigrams are used for training a text classifier that calculates the probability of a text being related to the expected entity; the output probabilities of this classifier are used as a feature.

Keyword similarity: similarity scores between the metadata and the texts are obtained by calculating the ratio of the number of common terms between the texts and the metadata terms (the query and the entity name). Similarities at the character level are also available, in order to accommodate possible spelling errors in the text.

Web similarity: the similarity of the text with the normalized content of the entity homepage and with the normalized Wikipedia articles is also available. The similarity value is the number of common terms multiplied by the logarithm of the number of terms in the tweet.

Freebase: if the entity query keyword exists in the text, two bigrams are created containing the keyword and an adjacent word, and each is submitted to the Freebase search API. The list of retrieved entities is compared with the target entity, and a Freebase score is computed using the inverse position of the target entity in the list of results: if the target entity is retrieved as the first result the score is 1, as the second result 1/2, and if the target entity is not in the results list the score is zero. The feature corresponds to the maximum score among the extracted bigrams of the text.

Category classifier: a sentence category classifier was created using the Wikipedia articles of all entities. Each sentence of the Wikipedia articles is annotated with the category corresponding to its entity; unigrams, bigrams and trigrams are calculated, and an SVM is trained to classify texts. The feature is the probability of the text being relevant to the target class.
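The two similarity features above can be sketched as follows; the web-similarity formula implements one plausible reading of the description ("number of common terms multiplied by the logarithm of the number of terms in the tweet"), and the example texts are invented.

```python
import math

def keyword_similarity(tweet_terms, entity_terms):
    # Ratio of common terms between the tweet and the entity metadata
    # (query keywords, official name).
    common = set(tweet_terms) & set(entity_terms)
    return len(common) / max(len(set(entity_terms)), 1)

def web_similarity(tweet_terms, page_terms):
    # Number of common terms with the entity homepage / Wikipedia article,
    # weighted by the logarithm of the tweet length.
    common = set(tweet_terms) & set(page_terms)
    return len(common) * math.log(len(tweet_terms) + 1)

tweet = "test drove the new bmw x5 today".split()
print(keyword_similarity(tweet, "bayerische motoren werke bmw".split()))
print(web_similarity(tweet, "bmw is a german automobile manufacturer".split()))
```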
Experimental Setup

The dataset used in the competition consists of a collection of tweets in English and Spanish that are possibly relevant to entities from four domains: automotive, banking, universities and music.

Table: RepLab 2013 filtering task dataset description, with the numbers of related and unrelated tweets in the training, development, validation and test partitions.

As described above, the corpus was obtained by querying the Twitter Search API with the entity names from June to December, the balance between languages varies per entity, and the complementary data for each target entity includes the collection query, the official name, the category, and the homepage and Wikipedia content. Tweets were manually annotated as related or unrelated to the respective target entity, and the dataset is divided into training and test partitions. We were able to download the large majority of the training tweets, most of which are labeled as related. We split the training dataset into a development set and a validation set, adopting a randomly stratified split per entity: we group the tweets by target entity and randomly split them while preserving the balance of related and unrelated tweets. The test dataset consists of the remaining tweets we were able to download. We used the development set for trying new features and testing different algorithms, dividing it into folds generated with the same randomly stratified approach, and we used the validation set to validate the results obtained on the development set. The purpose of this validation step is to evaluate how well the entity filtering classifier generalizes from the training data to unseen validation data, and thus estimate how well it will generalize to the test set, allowing us to spot overfitting. After validation, we trained the classifier using all the data of the training dataset and evaluated it on the test set.

Results

We created different classifier runs using different learners and features, and we also created entity-specific models, as explained in the table below. We applied selection of features based on frequency, and a transformation of the content representation using SVD. The learners tested include Naive Bayes, SVM, Random Forests, Logistic Regression and the Multilayer Perceptron (MLP). The evaluation measures used are accuracy and the official metric of the competition, the harmonic mean of reliability and sensitivity, F(R, S). We present the results of the top models, and we also replicated the best system of RepLab 2013.

Table: Description of the entity filtering versions (runs), including the learner, the features and whether the models are global or per entity.

Table: Official results (accuracy and F(R, S)) plus validation set accuracy for the top-performing runs, the official baseline and the best RepLab 2013 system. The official baseline of the competition classifies each tweet with the label of the most similar tweet of the target entity in the training set, using the Jaccard similarity coefficient; baseline results were obtained using the test set.

Based on the results achieved, we are able to conclude that our models generalize successfully: the results obtained on the validation set are similar to the ones obtained on the test set. During development, solutions based on one model per entity consistently outperformed solutions based on global models. We also noticed during development that language-specific models (English and Spanish) did not exhibit improvements in global accuracy, so we opted to use the language as a feature instead. The results show that our best model uses Random Forests trained as a global model, even though the language-model feature encapsulates the text using an entity-specific model trained on tweets.

We performed a breakdown analysis by each one of the four categories of RepLab 2013 using our best run. We observe that the universities, banking and automotive categories exhibit similar average results. In contrast, the results for music show that it is a rather difficult category of entities to disambiguate; in fact, many entity names in this category contain highly ambiguous tokens, e.g., "Alicia Keys", "The Wanted", "The Script".

Fig.: Results grouped by entity category, using the best run.

The main goal of this task was to classify tweets as relevant or not to a given target entity. We explored several types of features, namely keyword similarities and language models, and we also explored external resources such as Freebase and Wikipedia. The results show that it is possible to achieve high accuracy on a test set containing tweets from many entities. In future work we expect to include the possibility of using embeddings to learn a joint embedding space of entities and words, or a similar approach.

Financial Sentiment Analysis

Sentiment analysis of financial texts has received increased attention in recent years; nevertheless, there are challenges yet to overcome. Financial texts, such as microblogs or newswire, usually contain highly technical and specific vocabulary and jargon, making the development of specific lexical and machine learning approaches necessary.
Most of the research in sentiment analysis in the financial domain has focused on analyzing subjective text, explicitly labeled with sentiment. However, it is also common to express financial sentiment in an implicit way. Business news stories often refer to events that might indicate a positive or negative impact, such as the news title "company X cut jobs". Economic indicators and their change over time, such as "unemployment drops/increases", can also provide clues about implicit sentiment. Contrary to explicit expressions (subjective utterances), these factual text types often contain objective statements that convey a desirable or undesirable fact.

Recent work proposes to consider both types of implicit sentiment expressions. The authors created a fine-grained sentiment annotation procedure to identify polar expressions, i.e., implicit and explicit expressions of positive and negative sentiment towards a target company of interest. Once a polar expression is identified, the annotators identify whether the sentiment expressions are relevant to the target; the annotation procedure also collects information about the polarity and the intensity of the sentiment expressed towards the target. However, there is still no automatic approach, either lexicon-based or machine-learning-based, that tries to model this annotation scheme.

In this work we propose to tackle the aforementioned problem by taking advantage of unsupervised learning of word embeddings on financial tweets and financial news headlines, in order to construct a syntactic and semantic representation of words. We combine traditional bag-of-words and lexicon-based approaches with these financial features to train a regressor of sentiment polarity and intensity. We study how different regression algorithms perform, using all features, in the two different sub-tasks (microblogs and news headlines mentioning companies) and, moreover, we compare how different combinations of features perform. The system source code and the word embeddings developed for the competition are publicly available. The material contained in this section was published in Saleiro, Rodrigues, Soares and Oliveira, "FEUP at SemEval-2017 Task 5: Predicting Sentiment Polarity and Intensity with Financial Word Embeddings".

Task Overview

SemEval-2017 Task 5 consisted of fine-grained sentiment analysis of financial short texts, divided into two sub-tasks based on the type of text. The microblogs sub-task consisted of StockTwits and tweets focusing on stock market events and assessments from investors and traders, with stocks identified by stock symbols, the so-called cashtags (e.g., "$AMZN"). The news headlines sub-task consisted of sentences extracted from Yahoo Finance and other financial news sources on the internet; in this case, companies are identified by their canonical names, previously annotated by the task organizers.

Table: Training set examples, with sub-task, company, text span and sentiment score (e.g., the microblog "time to sell banks?" about JPMorgan, and the headline "Glencore annual results beat forecasts" about Glencore).

The goal of the task is the following: predict the sentiment polarity and intensity for a company mentioned in a short text instance, i.e., a microblog message or a news sentence. The sentiment score is a real continuous variable in the range [-1, 1], with 0 designating neutral sentiment. The table presents two examples of the training set. The task organizers provided separate training and test sets of microblog messages and of news sentences, and submissions were evaluated using cosine similarity.

Financial Word Embeddings

Mikolov et al. created word2vec, a computationally efficient method to learn a distributed representation of words, where each word is represented by a distribution of weights (embeddings) across a fixed set of dimensions. Furthermore, Mikolov et al. showed that this representation is able to encode syntactic and semantic similarities in the embedding space. The training objective of the skip-gram model, as defined by Mikolov et al., is to learn target word representations (embeddings) that maximize the prediction of the surrounding words in a context window. Given a word w_t and the vocabulary of the training corpus, the objective is to maximize the average log probability

\frac{1}{T} \sum_{t=1}^{T} \sum_{-c \le j \le c,\, j \ne 0} \log p(w_{t+j} \mid w_t)

where c is the size of the context window, T is the total number of words in the training corpus, and w_{t+j} are the words in the context window of w_t. After training, a low-dimensionality embedding matrix encapsulates information about each word in the vocabulary and the contexts in which it is used. We used this model to learn word embeddings in the context of financial texts.
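A minimal sketch of training such a skip-gram model with gensim follows (the gensim 4 API is assumed). The tiny corpus and the hyperparameter values are placeholders, not the collections or settings used in the experiments.

```python
from gensim.models import Word2Vec

# Tiny illustrative corpus; in practice the model is trained on unlabeled
# financial tweets and news headlines. Repetition keeps the toy vocabulary
# above min_count.
sentences = [
    ["stock", "bearish", "after", "big", "loss"],
    ["stock", "bullish", "after", "big", "gain"],
    ["glencore", "profit", "beats", "forecasts"],
] * 50

model = Word2Vec(sentences, vector_size=50, window=5, min_count=5,
                 sg=1, negative=5, epochs=10, seed=0)

# Analogy in the spirit of "bearish - loss + gain ~ bullish"; on real
# financial data this kind of query tends to retrieve "bullish".
print(model.wv.most_similar(positive=["bearish", "gain"],
                            negative=["loss"], topn=3))
```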
We trained the embeddings using unlabeled tweets and news headlines mentioning companies. Tweets were collected using the Twitter Streaming API, with the cashtags of a list of stock titles serving as request parameters; the Yahoo Finance API was used for requesting financial news feeds by querying the canonical names of the companies. The two datasets comprise a large number of tweets and news titles in total. We learned separate word embeddings for tweets and for news headlines and tried several configurations of the hyperparameters (dimensionality, minimum word count, context window size, and number of negative samples per positive example), keeping the setup resulting in the best performance. Even though the text collections used for training the embeddings are relatively small for this purpose, the resulting embedding space exhibited the ability to capture semantic word similarities in the financial context. We performed simple algebraic operations to capture semantic relations between words, as described by Mikolov et al.: for instance, in the model trained on tweets, the vector of "bearish" minus the vector of "loss" plus the vector of "gain" results in a vector similar to the vector of "bullish".

Approach

In this section we describe the implementation details of the proposed approach. The following set of pre-processing operations is applied to every microblog message and news sentence of the training and test sets, as well as to the external collections used for training the word embeddings.

Character encoding and stopwords: every message and headline is converted to a standard character encoding, and standard English stopword removal is applied.

Cash obfuscation: both cashtags and canonical company names are replaced with a common placeholder string; dollar or euro signs followed by numbers are replaced with a dedicated tag string.

Mapping numbers and signs: numbers are mapped to strings using bins; minus and plus signs are converted to "minus" and "plus", and large magnitudes to "billions" and "millions", respectively; the "%" symbol is converted to "percent"; question and exclamation marks are also converted to strings.

Tokenization, punctuation and lowercasing: tokenization is performed using Twokenizer, the remaining punctuation is removed, and all characters are converted to lowercase.

Features

We combined three different groups of features.

Bag-of-words (BoW): we apply standard tf-weighted features; we tried different n-gram combinations, with unigrams proving to obtain the highest cosine similarity.

Sentiment lexicon features: we incorporate knowledge from manually curated sentiment lexicons for generic sentiment analysis, as well as lexicons tailored to the financial domain. The Loughran-McDonald financial sentiment dictionary has several types of word classes: positive, negative, constraining, litigious, uncertain and modal. For each word class, we create a binary feature for the match of a word of the text span with the class, and a polarity score feature (the number of positive minus negative words, normalized by the length of the text span). As a generic sentiment lexicon we use MPQA, for which we created binary features for positive, negative and neutral words, as well as the polarity score feature.

Bag-of-embeddings (BoE): we create bags-of-embeddings by taking the average of the word vectors of each word of the text span, using the corresponding embedding matrix trained on the external Twitter or Yahoo Finance collections, respectively.

Experimental Setup

In order to avoid overfitting, we created a validation set from the original training datasets provided by the organizers. We sampled the validation set following the distribution of the original training set: we sorted the examples of the training set by the target variable values and then skipped every k-th example. Results are evaluated using cosine similarity and mean absolute error (MAE); the former gives more importance to differences in the polarity of the predicted sentiment, while the latter is concerned with how well the system predicts the intensity of the sentiment. We opted to model each sub-task as a single regression problem, and three different regressors were applied: Random Forests (RF), Support Vector Machines (SVR) and the Multilayer Perceptron (MLP). Parameter tuning was carried out using k-fold cross-validation on the training sets.
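A sketch of the BoE feature combined with one of the regressors is given below; the four-dimensional embeddings and the labeled examples are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

def bag_of_embeddings(tokens, emb, dim):
    # Average of the word vectors of the tokens covered by the embedding
    # matrix; zero vector if no token is covered.
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

emb = {"bullish": np.array([0.9, 0.1, 0.0, 0.2]),
       "bearish": np.array([-0.8, 0.2, 0.1, 0.0]),
       "beats":   np.array([0.7, 0.3, 0.1, 0.1]),
       "misses":  np.array([-0.6, 0.2, 0.2, 0.1])}
train = [(["bullish", "breakout"], 0.7), (["bearish", "selloff"], -0.6),
         (["profit", "beats", "forecasts"], 0.5), (["earnings", "misses"], -0.4)]

X = np.array([bag_of_embeddings(t, emb, 4) for t, _ in train])
y = np.array([s for _, s in train])
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(mean_absolute_error(y, reg.predict(X)))
```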
Results and Analysis

In this section we present the experimental results obtained. We provide a comparison of the different learning algorithms using all features, as well as a comparison of different subsets of features, in order to understand the information contained in each of them and how they complement each other. We start with the microblogs sub-task.

Table: Microblog results, reporting cosine similarity and MAE with all features on the validation and test sets, for RF, SVR and MLP.

The table presents the results obtained using all features; results on the test set are worse than on the validation set, with the exception of MLP. Our official score was obtained using the Random Forests regressor, which achieves the highest cosine similarity and the lowest MAE on the training and validation sets. We then compared the results obtained with different subsets of features, using the best regressor. Interestingly, BoW and BoE complement each other, obtaining better cosine similarity together than the system using all features. Financial word embeddings (BoE) capture relevant information regarding the target variables: as a single group of features, BoE achieves a solid cosine similarity and MAE, and it is also able to boost the overall performance of BoW, with gains in cosine similarity and a reduction of MAE over the individual groups of features. The worst performance is obtained by the system trained using only the Lex features: Lex alone exhibits poor predictive power, and its marginal value when combined with another group of features is small, although in the latter case both BoE + Lex and BoW + Lex improve over the corresponding single groups.

Table: Feature performance breakdown on the microblogs test set (cosine similarity and MAE) for Lex, BoE, BoW, BoE + Lex, BoW + Lex, BoW + BoE and all features.

News Headlines

The results obtained for news headlines are quite different from the previous ones, proving that predicting sentiment polarity and intensity in news headlines is a completely different problem compared to microblogs. The table shows that MLP obtains the best results on the test set using both metrics, while SVR obtains the best performance on the validation set; the best regressor for microblogs is here outperformed by both SVR and MLP. Our official result was the cosine similarity obtained using MLP. The breakdown of the results by groups of features with the MLP regressor makes one observation evident: word embeddings are not as effective in this scenario. On the other hand, lexicon-based features show significantly better performance on news headlines than on microblogs, even though the best results are obtained using all features.

Table: News headlines results, reporting cosine similarity and MAE with all features on the validation and test sets, for RF, SVR and MLP.

Table: Feature performance breakdown on the news headlines test set (cosine similarity and MAE) for Lex, BoE, BoW, BoE + Lex, BoW + Lex, BoW + BoE and all features, using MLP.

Analysis

Financial word embeddings were able to encapsulate valuable information for microblogs, but not as much in the case of news headlines. We hypothesize that having access to a much smaller dataset for training the financial word embeddings of news headlines resulted in a reduced ability to capture semantic similarities in the financial domain; related work in sentiment analysis usually takes advantage of much larger datasets for training word embeddings. On the other hand, lexical features showed poor performance on microblog texts but seem to be useful for news headlines. The fact that microblogs have poor grammar, slang and informal language reveals that financial lexicons, created from well-written and formal financial reports, result in better features for news headlines than for microblog texts.

Inspecting the microblog texts and headlines where our models showed poor performance, we believe it would be important to also encapsulate syntactic and semantic dependencies in the models. For instance, the model predicted a sentiment score of the wrong polarity for a microblog message about a company being "right to reject the offer"; similarly hard examples include "Glencore shares record crash as profit fears grow" and "would rather be a buyer at these levels than trying to sell". Moreover, most of the absolute errors are errors of intensity of sentiment: the model correctly predicts the polarity but still commits a large error in magnitude.

Concluding Remarks

The work reported here was concerned with the problem of predicting the sentiment polarity and intensity of financial short texts. Previous work showed that, in this domain, sentiment is often depicted in an implicit way. We created continuous word representations in order to obtain domain-specific syntactic and semantic relations between words, and combined them with traditional lexicon-based features to train a regressor of sentiment polarity and intensity. The results show that different combinations of features attained different performances in each sub-task. Future work will consist of collecting larger external datasets for training financial word embeddings of both microblogs and news headlines; we also plan to perform the regression analysis using deep neural networks.
Summary of Contributions

In this chapter we presented contributions to two fundamental text mining problems of ORM:
- A supervised learning approach to entity filtering of tweets, achieving top performance using a relatively small training set.
- The creation and public release of word embeddings trained on financial texts.
- A supervised learning approach to fine-grained sentiment analysis of financial texts.

Prediction

In this chapter we explore the predictive power of information from online news and social media in the context of ORM. We address two different predictive tasks. The first is concerned with predicting entity popularity on Twitter based on signals extracted from the news cycle. The aim is to study whether different sets of signals extracted from online news mentioning specific entities can influence, or at least are correlated with, the future popularity of those entities on Twitter. We know that entity popularity on social media is influenced by several factors, but we are interested in exploring the interplay between online news and social media for entities frequently mentioned in the news cycle, such as politicians or footballers. This could be particularly interesting for anticipating public relations damage control when a polemic news article is published, or even for editorial purposes, to maximize buzz on social media.

The second predictive task consists of using the sentiment polarity extracted from tweets to predict political polls. There are several research works trying to assess the predictive power of social media to predict the outcome of political opinion surveys and elections; however, each study proposes its own method of aggregating polarity scores over time, and there is no consensus on which sentiment aggregate function is adequate for this problem. We propose to use and contrast several sentiment aggregate functions reported in the literature, assessing their predictive power in a specific case study comprising data collected during the Portuguese bailout.

Exploring Online News for Reputation Monitoring on Twitter

The online publication of news articles has become standard behavior of news outlets, and the public has joined this movement, either using desktop or mobile terminals. The resulting setup consists of a cooperative dialog between news outlets and the public at large, in which the latest events are covered and commented on by both parties on a continuous basis. On social media such as Twitter, when sharing and commenting on news, social media users tend to mention the predominant entities of the news story. Therefore, entities, such as public figures, organizations, companies and geographic locations, act as latent connections between online news and social media.

Online Reputation Monitoring (ORM) focuses on continuously tracking such entities on social media and in online news. The automatic collection and processing of comments and opinions on social media is crucial to understand the reputation of individuals and organizations and, therefore, to manage public relations. However, ORM systems would be even more useful if they were able to know in advance whether social media users will talk a lot about the target entities or not. We hypothesize that, for entities frequently mentioned in the news, such as politicians, it is possible to establish a predictive link between online news and popularity on social media. We cast the problem as a supervised learning classification approach: decide whether popularity will be high or low, based on features extracted from the news cycle. We define four sets of features: signal, textual, sentiment and semantic. We aim to respond to the following research questions:

- Are online news a valuable source of information to effectively predict entity popularity on Twitter?
- Do online news carry different predictive power based on the nature of the entity under study?
- How do different thresholds for defining high and low popularity affect the effectiveness of the approach?
- Does performance remain stable for different prediction times?
- Which is the most important feature set for predicting entity popularity on Twitter based on the news cycle?
- Do individual sets of features exhibit different importance for different entities?

The material contained in this section was published in Saleiro and Soares, "Learning from the News: Predicting Entity Popularity on Twitter".
Approach

The starting point of our approach is the hypothesis that, for entities frequently mentioned in the news, such as politicians, it is possible to predict popularity on social media using signals extracted from the news cycle. The first step towards a solution requires a definition of entity popularity on social media. There are different ways of expressing the notion of popularity on social media: for example, a classical way of defining it is the number of followers of a Twitter account or the number of likes of a Facebook page; another notion of popularity associated with entities consists of the number of retweets and replies of a Twitter post, or the likes and comments of a Facebook post. We define entity popularity based on the number of named entity mentions in social media messages. Mentions consist of specific surface forms of the entity name; for example, "Cristiano Ronaldo" might be mentioned also using just "Ronaldo".

Given a set of entities, a daily stream of social media messages and a daily stream of online news articles, we are interested in monitoring the mentions of each entity e in the social media stream as a discrete function of time. Let T be the daily time frame, t_p the time of prediction and h the prediction horizon. We want to learn a target popularity function of the social media stream for a given entity e from the online news stream: the time frame T corresponds to integrating mentions over a given day, and at the time of prediction t_p we extract features from the news stream in order to predict the popularity in the prediction horizon h. We measure popularity on a daily basis and, consequently, adopt this procedure every day. For example, if t_p equals a given hour of the day, we extract features up to that hour and predict the popularity in the remaining interval of the day; in the case where t_p equals midnight, we extract the prediction features from the 24 hours of the previous day and predict the following 24 hours.

We cast the prediction as a supervised learning classification problem in which we want to infer the binary target variable y, defined as low or high using the inverse cumulative distribution function of the popularity measured on the training set, a similar approach to Tsagkias et al. For instance, a split point p = 0.5 corresponds to the median of the training set; higher values of p mean a higher threshold and, consequently, a reduced number of training examples of the positive class ("high").
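A minimal sketch of this thresholding follows; the daily mention counts are made up for illustration.

```python
import numpy as np

def label_high_low(train_counts, counts, p=0.5):
    # Binary target via the inverse CDF of popularity on the training set:
    # a day is 'high' (1) when its mention count exceeds the p-quantile of
    # the training distribution; p = 0.5 uses the median, and larger p
    # yields fewer 'high' examples.
    threshold = np.quantile(train_counts, p)
    return (np.asarray(counts) > threshold).astype(int)

train = [120, 90, 400, 60, 150, 80, 3000, 110]   # made-up daily counts
print(label_high_low(train, [70, 500, 95], p=0.5))  # -> [0 1 0]
```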
word distribution news title vector test day using learned training data calculates probability belonging one topics learned objective extracting set features create characterization news stream mentions namely salient terms phrases day well latent topics associated learning classifier hope obtain correlations certain terms topics sentiment features include several types word level sentiment features assumption subjective words news result reactions social media exposed extract features titles news mentioning daily time frame use sentiment lexicon sentiwordnet extract subjective terms titles daily profile label positive neutral negative polarity compute count features number positive negative neutral terms well difference ratio positive negatives terms similar textual features create tfidf weighted matrix using subjective terms title apply svd compute real valued sentiment latent features semantic features use number different named entities recognized day well number distinct news category tags extracted news feeds metadata tags common news articles consist author annotated terms phrases describe sort semantic hierarchy news categories topics news stories european debt crisis create weighted matrices applied svd reduce dimensionality idea capture interesting entity prediction table summary four type features consider number signal textual sentiment semantic feature description news news news total news titles avg content sources weekday weekend number news mentions number news mentions number news mentions number title mentions news average content length news number different news sources day week true weekend false otherwise tfidf titles lda titles news titles news titles pos neg neu ratio diff subjectivity tfidf subj number positive words news titles number negative words news titles number neutral words news titles positive negative positive negative neutral words subjective words pos neg neu entities tags tfidf entities tfidf tags number entities news number tags news entities news news tags well news stories less transient time might able trigger popularity twitter learning framework let feature vector extracted online news stream day want learn probability done using inner product weighting parameter vector using logistic regression binary classification one unify definition exploring online news reputation monitoring twitter given set pairs solve binary class penalized logistic regression optimization problem min log apply approach following entity specific basis train individual model entity given set entities want apply approach training set example days extract feature vector entity training day therefore able learn model assumption popularity social media dependent entity consequently extract entity specific features news stream instance top words news titles mentioning experimental setup work uses portuguese news feeds tweets collected january january consisting million tweets million online news collect process raw twitter data use crawler recognizes disambiguates named entities twitter news data provided portuguese online news service handles online news portuguese news outlets able recognize entities mentioned news choose two common news categories politics football select entities highest number mentions news categories politicians two former pedro passos coelho incumbent costa football entities two coaches jorge jesus mourinho famous portuguese football player cristiano ronaldo figure depicts behavior daily popularity six entities selected community stream twitter users 
day july july expected easily observable days popularity twitter exhibits dataset available research purposes access requests via http prediction pedro passos coelho costa jorge jesus cristiano ronaldo mourinho aug sep oct nov dec jan feb mar apr may jun fig daily popularity twitter entities study training iteration jan feb test dec jan training iteration jan feb mar feb test jan feb fig training testing sliding window first iterations bursty patterns instance arrested november cristiano ronaldo fifa ballon january defined years training set whole year test set applied monthly sliding window setting start predicting entity popularity every day january test set using model trained previous months days training set use february test set using new model trained previous months march depicted figure perform evaluation process rolling training test set december resulting days evaluation exploring online news reputation monitoring twitter process applied one six entities different time predictions different values decision boundary test therefore report results section different experimental settings one six entities goal understand useful news cycle predicting entity popularity twitter different entities different hours hours cycle different thresholds considering popularity high low results discussion results depicted table report positive class since online reputation monitoring valuable able predict high popularity low nevertheless also calculated overall accuracy results better reported consequently means system fairly capable predicting low popularity organize section based research questions presented beginning section online news valuable source information effectively predict entity popularity twitter online news carry different predictive power based nature entity study results show performance varies target entity general results better case predicting popularity politicians case football public figures jorge jesus exhibits similar results three politicians mourinho especially cristiano ronaldo represent worst results setting instance cristiano ronaldo scores three goals match burst popularity almost immediate possible predict advance analysis showed online news failed informative popularity case live events covered media interviews debates one hand live football games consist events unpredictable effects popularity cristiano ronaldo considered special case experiments far famous entity experiments addition also active twitter user followers work focus assessing predictive power online news limitations assume cristiano ronaldo endogenous features twitter would necessary obtain better results prediction table score popularity high function equal respectively entity hour costa pedro passos coelho cristiano ronaldo jorge jesus mourinho costa pedro passos coelho cristiano ronaldo jorge jesus mourinho costa pedro passos coelho cristiano ronaldo jorge jesus mourinho different thresholds defining high low popularity affect effectiveness approach system exhibits top performance corresponds balanced training sets number high low popularity examples training set political entities exhibit scores hand increase performance deteriorates observe system predicts high number false positives difficult predict extreme values popularity social media happen plan tackle problem future also including features target variable current previous hours components performance remain stable different time predictions results show time prediction affects performance system specially political entities case higher 
time prediction noon exploring online news reputation monitoring twitter fig individual feature type score evidence politics news events trigger popularity social media broadcast news outlets morning interesting compare results midnight former use news articles previous day explained section latter use news articles first hours day prediction examples twitter popularity triggered events depicted news previous day current day important feature set predicting entity popularity twitter based news cycle individual set features exhibit different importance different entities figure tries answer two questions first observation combination groups features lead substantial improvements semantic features alone achieve almost score combination features however case mourinho ronaldo combination features lead worse results semantic set alone prediction sentiment features second important entities except mourinho signal textual features less important somehow surprise signal features represent surface behavior news articles volume news mentions expecting higher importance regarding textual features believe news articles often refer terms phrases explain past events order contextualize news article future work consider alternative approaches predicting future popularity entities occur everyday news social media public accounts musicians actors opposition entities occur often news economics ministers like often occur social media pose also different problem predicting political polls using twitter surveys polls using telephone widely used provide information people think parties political entities surveys randomly select electorate sample avoiding selection bias designed collect perception population regarding subject politics marketing however method expensive time consuming furthermore years becoming difficult contact people persuade participate surveys hand rise social media namely twitter facebook changed way people interact news way people able react comment news real time one challenge several research works trying solve understand opinions expressed social media sentiment leading indicator public opinion however time might exist simultaneously positive negative neutral opinions regarding subject thus need obtain value reflects general image political target social media given time period end use sentiment aggregate functions summary sentiment aggregate function calculates global value based number positive negative neutral mentions political target given period conducted exhaustive study collected implemented several sentiment aggregate functions state art material contained section published saleiro gomes soares sentiment aggregate functions political opinion polling using microblog streams predicting political polls using twitter sentiment thus main objective work study define methodology capable successfully estimating poll results based opinions expressed social media represented sentiment aggregators applied problem portuguese bailout case study using tweets sample portuguese tweetosphere portuguese polls gold standard given monthly periodicity polls needed aggregate data month approach allows aggregate value represent monthly sentiment political party due absence general sentiment aggregate function suitable different case studies decided include aggregate functions features regression model therefore learning algorithm able adapt informative aggregate functions time methodology collect process raw twitter data use online reputation monitoring platform extended researchers interested 
tracking political opinion web collects tweets predefined sample users applies named entity disambiguation generates indicators frequency mention polarity mentions entities time case tweets collected stream thousand different users representing sample portuguese community twitter sample obtained expanding manually annotated seed set users using heuristics language posts language followers posts platform automatically classifies tweet according sentiment polarity message expresses positive negative neutral opinion regarding entity politicians classified positive negative neutral mention respectively sentiment classifier uses corpus annotated tweets training set achieved accuracy using cross validation tweets manually annotated political science students mentions entities respective polarity aggregated counting positive negative neutral total mentions entity given period sentiment aggregate functions use cumulative numbers input generate new value specific time period since want use sentiment aggregate functions features regression model produce estimate political opinion decided use traditional poll results gold standard prediction sentiment aggregate functions let mei mention twitter entity positive neutral negative classified mentions entity twitter therefore given time frame month sentiment aggregate functions applied aggregated data polls following entitybuzz mei sum number mentions buzz given entity time frame entitypositives sum positively classified mentions given entity time frame entityneutrals sum neutral classified mentions given entity time frame entitynegatives sum negatively classified mentions given entity time frame entitysubjectivity ratio positive negative classified mentions entity buzz time frame entitypolarity ratio positive negative classified mentions time frame berminghamsovn ratio negative classified mentions entity total number negative mentions entities time frame bermingham berminghamsovp connor mei mei gayo polarity polarityon eutral predicting political polls using twitter sentiment polarityot otal subjot otal subjn euv subjsov pei subjv share shareof egdistribution mei poll number political entities meei mei sentiment aggregate functions used features regression models prediction fig negatives share berminghamsovn political leaders twitter data data used work consists tweets mentioning portuguese political party leaders polls august december period corresponds portuguese bailout several austerity measures adopted incumbent right wing governmental coalition psd cds parties twitter table distribution positive negative neutral mentions per political party psd cds cdu negative positive neutral total mentions twitter data set contains classified messages collected network thousand different users classified portuguese table presents distribution positive negative neutral mentions political leaders voted political parties portugal psd cds pcp negative mentions represent majority total mentions except cdu number negative mentions smaller neutral ones positive mentions represent less total mentions party except represent predicting political polls using twitter sentiment total mentions mentioned parties psd cds total mentions three parties represent data sample total mentions figure depicts time series berminghamsovn negatives share sentiment aggregate function higher value function higher percentage negative tweets mention given political entity comparison entities expected pedro passos coelho psd leader higher score throughout whole time period study paulo portas 
[Figure: negatives share (BerminghamSoVN) of the political leaders over time.]

Twitter data.

The data used in this work consists of tweets mentioning the Portuguese political party leaders, together with polls, covering a period from August to December of the years corresponding to the Portuguese bailout, during which several austerity measures were adopted by the incumbent right-wing governmental coalition of the PSD and CDS parties.

[Table: distribution of positive, negative and neutral mentions per political party (PSD, CDS, CDU and others).]

The Twitter data set contains classified messages collected from a network of thousands of different users whose posts are classified as Portuguese. The table presents the distribution of positive, negative and neutral mentions of the political leaders of the parties voted into parliament in Portugal (PSD, CDS, PS, PCP and others). Negative mentions represent the majority of the total mentions, except for the CDU, for which the number of negative mentions is smaller than the neutral ones. Positive mentions represent only a small fraction of the total mentions of each party. Most mentions concern the PSD, the CDS and the PS, and the total mentions of these three parties represent the bulk of the data sample. The figure above depicts the time series of BerminghamSoVN, the negatives share sentiment aggregate function: the higher the value of the function, the higher the percentage of negative tweets that mention a given political entity in comparison with the other entities. As expected, Pedro Passos Coelho, the PSD leader, has the highest score throughout almost the whole time period under study. Paulo Portas, the CDS leader of the other party in the coalition and also a member of the government, is the second most negatively mentioned in the period, while Seguro, the leader of the PS, the main opposition party in the time frame of the study, has in some periods the second highest share, above the PSD and CDS. The PSD and CDS formed the incumbent government, which was raising taxes and cutting salaries; after the years that led to the bailout, a fraction of the population considered them responsible for the financial crisis, the bailout and the consequent austerity measures, which could explain the overwhelming percentage of negative mentions. Although we verified that time periods with a high percentage of negative mentions remain throughout, it is fair to say that the Twitter users in our sample who mention political leaders in their tweets tend to criticize them.

Political opinion polls.

The polling was performed by Eurosondagem, a Portuguese private company that collects public opinion. Our data set contains the monthly poll results of the five main Portuguese parties from June to December of the period under study. The figure represents the evolution of the Portuguese poll results, in which we can see two main party groups. The first group, containing the PS and PSD, has the higher values of vote intention; the PSD, despite starting as the preferred party in vote intention, shows a downtrend along time, losing the leadership to the PS in September, while the PS shows a general uptrend. The second group is composed of the CDS, the PCP and other smaller parties, with lower vote intention ranges; the CDS shows a downtrend in public opinion while the PCP shows an ascendant one. Although the tendencies are fairly constant, trends can be noticed: the maximum variation observed between two consecutive months occurs in June, during the political crisis in the government, when the CDS threatened to leave the government coalition due to the austerity measures being implemented; this corresponds to the moment when the PS takes the lead in the polls.

[Figure: representation of the monthly poll results per political candidate.]

Experimental setup.

We defined the earlier period, up to December, as training set and the whole following year as test set, and applied a sliding window setting: to predict the poll results of a given month, we use the previous months as training set, i.e., a training set containing the monthly values of the aggregators (sentiment and buzz) for the months prior to the month we intend to predict, and a test set containing the values of the aggregators for the month we intend to predict. We start by predicting the poll results of January using the previous months as training set: we select the values of the aggregators for the months prior to January (September to December) and use this data to train the regression model; the aggregator values of January form the first record of the test set, and with the trained model we obtain the poll result prediction. We then select the next month for the test set and repeat the process for all the months to be predicted. The models were created using two regression algorithms: linear regression with ordinary least squares (OLS) and random forests. We also ran the experiments using the derivative of the polls time series as gold standard, i.e., the variation of the poll results from poll to poll; in that case we also compute the variations of the aggregate functions from month to month as features. Furthermore, we repeat each experiment including and excluding the lagged self of the polls, i.e., the last poll result of a given candidate, or the last variation of the poll result when predicting poll variations. We use the mean absolute error (MAE) as evaluation measure: it determines the absolute error of each prediction and, by calculating the average of the twelve monthly MAEs, we obtain the global prediction error of the model,

MAE = (1/n) * sum_{i=1..n} |f_i - y_i|,

where n is the number of forecasts, f_i is the model forecast and y_i the real outcome.

Results and discussion.

In this section we explain the experiments and their results in detail. We performed two different experiments: using absolute values and using monthly variations of the poll results. In the first experiment, the sentiment aggregators take absolute values in order to predict the absolute values of the poll results; mathematically speaking, the experiment can be seen as Polls_t = f(BuzzAggregators_t, SentimentAggregators_t). In the figure we can see the global errors obtained. The results show that we can obtain a low MAE for the parties' poll results over the months, using both ordinary least squares and random forests with the lagged self of the polls. Assuming the last known poll result as the prediction also yields a low MAE, which is expectable since the polls exhibit only slight changes from month to month; indeed, the experiment shows that the inclusion of the lagged self produces average errors similar to using the lagged self alone.
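The sliding-window protocol described above can be sketched as follows with scikit-learn and pandas; the dates, feature columns and values are toy stand-ins for the real monthly aggregators and poll results:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
months = pd.period_range("2012-09", "2013-12", freq="M")
df = pd.DataFrame({"month": months,
                   "buzz": rng.random(len(months)),
                   "sov_n": rng.random(len(months)),
                   "poll": 30 + rng.normal(0, 1, len(months))})

feature_cols = ["buzz", "sov_n"]
maes = []
for m in months[4:]:                         # each month of the test year
    train = df[df["month"] < m]              # all months prior to the target month
    test = df[df["month"] == m]
    model = LinearRegression()               # RandomForestRegressor was run the same way
    model.fit(train[feature_cols], train["poll"])
    pred = model.predict(test[feature_cols])
    maes.append(mean_absolute_error(test["poll"], pred))

print("global MAE:", np.mean(maes))          # average of the twelve monthly MAEs
```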
[Figure: error of the predictions of the poll results.]
[Figure: error of the predictions of the poll result variations.]

Predicting the variation of the poll results.

According to the exploratory data analysis, the poll results show only a small variation between two consecutive months; thus, instead of predicting the absolute value of the poll results, we tried to predict their variation. In this particular experiment, the inclusion of the lagged self feature in the regression model has a determinant role: as the figure shows, including this feature yields a lower MAE than excluding it, in which case the prediction amounts to assuming that the real monthly poll variation is constant along the year. In general, using the OLS regression algorithm we obtain the lower MAE. Since the leading poll results show only slight changes from poll to poll, it also makes sense to transform the dataset by taking the differences of consecutive buzz and sentiment values.

[Figure: mean absolute error using buzz and sentiment aggregators.]

Several studies state that buzz has predictive power and correctly reflects public opinion on social media. Following this premise, we trained models with the buzz and the sentiment aggregators separately to predict the poll variations; this experiment allowed us to compare the behavior of the buzz and sentiment aggregators. According to the figure, buzz and sentiment aggregators achieve similar results and, although the OLS algorithm combined with the buzz aggregators has a slightly lower error, no model shows a significant improvement. The results also show that the random forests algorithm performs best when combined with the sentiment aggregators.

Feature selection.

One of the main goals of this work is to understand which aggregator, or group of aggregators, better suits this case study. According to the previous experiments, we achieve lower prediction errors when training the model with the buzz and the sentiment aggregators separately; however, by training the model with the two kinds of aggregators separately, we are implicitly performing feature selection. Since prediction with the buzz features involves only two features, it is not necessary to apply a feature selection technique within the buzz features; we thus decided to apply feature selection to the sentiment aggregators in order to select the most informative ones for predicting the monthly poll result variations. We used univariate feature selection, selecting a subset of the sentiment features out of the total. Using this technique with random forests, the global error rose; with OLS, however, it produces an MAE drop. Another important fact to notice is that when we perform univariate feature selection over all the aggregators, buzz and sentiment, we achieve the same MAE value as when it is applied to the sentiment aggregators alone, which means the buzz aggregators are discarded by the feature selection technique. We then tried a different approach and performed recursive feature elimination, a technique in which features are eliminated recursively according to an initial score given by an external estimator; this method allows us to determine the number of features to select. With OLS the MAE drops further, and none of the buzz features is selected. Furthermore, the two feature selection techniques select different features for each monthly prediction.

Feature importance.

We selected the random forest model of the monthly variations to study the importance of the features, depicted in the figure below; the higher the score, the more important the feature. The importance of a feature is computed as the normalized total reduction of the splitting criterion brought by that feature, also known as the Gini importance, and the values correspond to the average Gini importance over the different models trained in the experiments. The single most important feature is the Bermingham aggregate function, followed by the neutrals. It is important to notice that, when combining all the aggregate functions as features of a single regression model, buzz comprises a high Gini importance, even though, when used as a single feature, it produces results similar to the sentiment aggregate functions. In general, the standard deviation of the Gini importance is relatively high: given the experimental setup, the values depicted in the bar chart correspond to the average Gini importance over the models of the different months of the testing set, and the feature importances therefore vary over time while the MAE tends to remain unchanged. That is to say that different features have different informative value over time and, consequently, it is useful to combine several sentiment aggregation functions as features of regression models over time.
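The two feature selection procedures can be sketched as follows; the toy matrix stands in for the monthly sentiment aggregator features, and the number of features to keep is an illustrative choice:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression, RFE
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.random((16, 12))          # 16 months x 12 sentiment aggregators
y = rng.normal(size=16)           # monthly poll variations

# Univariate selection: score each aggregator independently against the target.
kbest = SelectKBest(score_func=f_regression, k=5).fit(X, y)
print("univariate picks:", np.flatnonzero(kbest.get_support()))

# Recursive feature elimination: repeatedly drop the weakest feature
# according to the coefficients of an external estimator (here OLS).
rfe = RFE(estimator=LinearRegression(), n_features_to_select=5).fit(X, y)
print("RFE picks:", np.flatnonzero(rfe.support_))

# For random forests, the fitted model's feature_importances_ attribute holds
# the (Gini) importances used in the feature importance analysis.
```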
[Figure: aggregate function importance in the random forests models.]

Outlook.

We studied a large set of sentiment aggregate functions for use as features of a regression model to predict political opinion poll results. The results show that we can estimate poll results with a low prediction error using sentiment and buzz aggregators based on the opinions expressed on social media. We introduced a strong baseline for comparison, the lagged self of the polls. In this study we built a model that achieves the lowest MAE by using a linear algorithm (OLS) combined with the buzz aggregators over monthly variations. We applied two feature selection techniques, univariate feature selection and recursive feature elimination; applying the recursive technique to the sentiment features, we achieve an MAE matching the best model, and the chosen features are not the same for every prediction. Regarding the feature importance analysis of the prediction experiments, we showed that the Bermingham aggregate function has the highest Gini importance in the random forests model.

Summary of the contributions.

In this chapter we presented research work on prediction for ORM, making the following contributions: an analysis of the predictive power of online news regarding entity popularity on Twitter for entities frequently mentioned in the news, and an analysis of how to combine different sentiment aggregate functions to serve as features for predicting political polls.

Framework for online reputation monitoring.

In this chapter we present a framework that puts together the building blocks required to perform ORM. The framework is divided into two distinct components: one dedicated to entity retrieval and the other to text mining. In practice, the two components act as two separate frameworks that can be adapted and reused in different application scenarios, such as computational journalism, finance or politics. We start with a framework overview and description, and then focus specifically on the two components. The first component, RELink, is the research framework for E-R retrieval with which the retrieval experiments described earlier were carried out; furthermore, since we did not have access to training data based on news articles, we describe a case study using RELink for entity retrieval over a large news collection. We then describe TexRep, the framework responsible for the text mining related tasks of ORM, such as entity filtering, sentiment analysis and predictive tasks; the experiments described in the earlier chapters were carried out using TexRep, and we also provide detail on how TexRep was used as the backend of the POPSTAR project. Finally, we perform an independent study of the practical aspects of training general purpose word embeddings from the Twitter stream, which can serve as a resource for future users of TexRep.

Framework overview.

The framework provides entity retrieval and text mining functionalities that enable the collection, disambiguation and retrieval of entities and relationships, as well as sentiment analysis, data aggregation, prediction and visualization of information for online reputation monitoring over heterogeneous web data sources. Furthermore, given that its components are built using modular architectures, providing abstraction layers and well defined interfaces, new functionalities and methods can easily be integrated. The two components, RELink and TexRep, can work independently as dedicated frameworks using specific data sources, or be put together in a unifying setup for ORM, as depicted in the figure.

[Figure: overview of the ORM framework — RELink (entity retrieval), the entity occurrences warehouse, and TexRep (text mining).]

Working together, RELink and TexRep are connected through the entity occurrences warehouse, the central module of the framework for ORM, which contains the extractions of occurrences of the entities of interest across the web data sources. The data flow starts with TexRep collecting data from the web text data sources, followed by the extraction of text passages containing entity mentions and their disambiguation. The entity-centric text passages are stored in the entity occurrences warehouse. This data can then be used for E-R retrieval indexing using RELink, and for downstream text mining tasks, such as sentiment analysis, using the modules of TexRep. We now describe the RELink and TexRep architectures and their internal data flow.

RELink.
The RELink framework is designed to facilitate experiments with E-R retrieval. Query collections formulate queries both in natural language and in a relational format, as sequences of entity and relationship sub-queries Q_Ei and Q_Rij, providing opportunities to define and explore a range of query formulations and search algorithms. Although RELink provides support for late fusion design patterns, it is mostly tailored for early fusion approaches, in which it is necessary to create entity and relationship representations at indexing time. A typical early fusion E-R retrieval experimental setup would involve searching the collection, extracting relevant instances of entity tuples and verifying their correctness against relevance judgments. The key enabling components are therefore: test collections of documents annotated with entity instances from which representations can be extracted, a search and indexing facility, and a retrieval module to process queries and rank results.

[Figure: RELink framework architecture overview.]

The figure depicts the architecture of RELink as used in the experiments described earlier; it does not include the modules responsible for deriving the relevance judgments (the Wikipedia table parser module described in the previous chapter). Currently, the RELink framework includes a web collection combined with text span annotations that link to Wikipedia entities via Freebase, for which entity linking precision and recall have been estimated. The RELink Extractor, part of the Indexer, applies an open information extraction method over the annotated corpus. Two additional components are the Corpus Index and the Retrieval module, also depicted in the figure. We now describe the implementation of these modules.

Indexer. The corpus index is based on Apache Lucene, and a LETOR module serves as a wrapper for indexing and retrieval. Based on the collection, we create two essential resources: the entity index and the entity pair (relationship) index, covering the entities that occur in the corpus. Given an entity instance, the Indexer identifies the terms within the same sentence and considers the entity types observed for that entity instance. Similarly, given a pair of entities, the Indexer verifies whether they occur in the same sentence and extracts the separating string; that string is considered a context term for the entity pair and describes the relationship type. Once we obtain the entity and entity pair extractions, the corresponding sentences are processed by the Indexer into an inverted index. With the index created, instances of entities and entity pairs can be retrieved in response to the contextual terms, entity types or relationship types specified by the users.

Retrieval. The search process is managed by the RELinker module. The Query Analyzer module processes information requests and passes queries in a structured format to the Retriever. The query search is performed in stages, to allow experimentation with different methods and parameter settings. First, the Retriever provides an initial set of results using Lucene's default search settings, grouping entities and entity pairs at query time using Lucene's GroupingSearch. The Scorer then generates and applies the feature functions of the specific retrieval models with the required statistics; currently, the Scorer provides implementations of the early fusion variants and of ERDM. The RELinker is responsible for providing the final results, based on the scores provided by the Scorer and on the parameter weights learned with the LETOR module.
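To make the Indexer's extraction step concrete, here is a minimal sketch over one sentence already annotated with entity spans; plain dictionaries stand in for the Lucene indexes, and the sentence and spans are hypothetical:

```python
from collections import defaultdict
from itertools import combinations

# One annotated sentence: raw text plus (entity, start, end) character spans.
sentence = "Barack Obama met Angela Merkel in Berlin."
spans = [("Barack Obama", 0, 12), ("Angela Merkel", 17, 30), ("Berlin", 34, 40)]

entity_index = defaultdict(list)   # entity -> context terms from its sentences
pair_index = defaultdict(list)     # (e1, e2) -> separating strings (relationship contexts)

# Entity representation: every sentence term outside the entity's own span.
for ent, start, end in spans:
    context = (sentence[:start] + " " + sentence[end:]).split()
    entity_index[ent].extend(context)

# Entity-pair representation: the string separating the two mentions.
for (e1, _, end1), (e2, start2, _) in combinations(spans, 2):
    if end1 < start2:
        pair_index[(e1, e2)].append(sentence[end1:start2].strip())

print(pair_index[("Barack Obama", "Angela Merkel")])   # ['met']
```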
TexRep.

TexRep is a research framework that implements text mining techniques to perform online reputation monitoring (ORM) in various application domains, such as computational social sciences, political data science, computational journalism, computational finance and online marketing. TexRep was designed with two main challenges in mind: it must be able to cope with the text mining problems underlying ORM, and it must be flexible, adaptable and reusable in order to support the specificities of different application scenarios. We define the following technical and operational requirements for a text mining based system for online reputation monitoring. Batch operation: the system must naturally be able to operate continuously, collecting data as it is generated, processing it and updating the indicators; however, it is also important that it is able to operate in batch mode, collecting and processing the specific data of a period indicated by the user, if available; the system should use a distributed approach (e.g., Hadoop) to deal with great volumes of data, and it must also be able to operate autonomously for long periods of time, measured in months. Adaptability: the system must be able to adapt its models, e.g., for polarity classification, through time as well as across different applications; since updating models often requires manually annotated data (e.g., for named entity disambiguation), the system should provide a flexible annotation interface. Modularity: researchers must be able to plug in specific modules, such as a crawler for a new data source or a different visualization; the system interfaces use REST APIs and the JSON data format to allow users to add new modules that interact with data sources such as Wikipedia or Facebook. Reusability: the system must enable the repeatability of experiments; to allow the research community to obtain equal results, we make the software package of the prototype publicly available, together with the data sources and configuration parameters used in the experiments. Language independence: every component of the system that applies statistical language modeling is completely agnostic to the language of the texts.

We decompose the use of text mining for ORM into four distinct but interconnected tasks: data collection, entity filtering, sentiment analysis and analytics. Each task is accomplished by one or more software modules; for instance, the analytics task usually involves the use of the aggregation, prediction and visualization modules.

[Figure: architecture and data flows of the TexRep framework — the pipeline manager, data collection server and clients, entity occurrences warehouse, entity filtering, sentiment analysis, aggregation, prediction and visualization modules, together with configurations, training data, external data sources and a knowledge base.]

Entity filtering and sentiment analysis represent the most challenging text mining problems tackled by the TexRep framework. When tracking what is said online about target entities, it is necessary to disambiguate their mentions: if this is done incorrectly, the knowledge obtained by the downstream modules is negatively affected. Consequently, text mining tasks such as sentiment analysis benefit from the filtering of non-relevant texts. The current implementation of the entity filtering module uses the Python machine learning library scikit-learn, whose interface gives TexRep users access to a suitable learning algorithm and to parameter tuning for their specific needs. We studied a large set of features that describe the relationship between the representation of a target entity and a given text, and tried several of the different supervised learning algorithms available in the framework, such as support vector machines (SVM) and random forests. The sentiment analysis module also uses implementations of supervised learning algorithms, in this case to predict the sentiment polarity and intensity of short texts, with intensity modeled using regression analysis. In addition, we use unsupervised learning of word embeddings from short texts to construct syntactic and semantic representations of words; the sentiment analysis module combines word embeddings with traditional approaches and techniques as features to train a classifier for sentiment polarity or a regressor for sentiment intensity.

The analytics modules include the aggregation, visualization and prediction modules, which are application specific and depend on user configurations. For instance, in the political domain it is common to create aggregate functions that represent relative popularity indicators of political parties or candidates, and such indicators can be used to predict elections. If, on the other hand, we consider the financial domain, then due to high volatility the aggregation is usually performed at a lower granularity (minutes instead of days), and the target prediction variables are individual stock price variations. TexRep implements various aggregation functions and allows custom tailored prediction models based on the application; it is therefore able to adapt to the specificities of different application scenarios by implementing a modular and flexible design with user configurations and abstraction layers. Data collection depends on the specified data sources; thus, TexRep decouples the implementation of data collection from its process management using a REST API.
A user who needs a different data collection client from the ones provided by default is able to implement a specific client that is easily integrated into the framework. The same applies to the analytics modules, which are extensible by loading methods through an abstraction layer. Furthermore, users who wish to extend TexRep with, e.g., topic modeling, only need a new module that writes topic assignments to the entity occurrences warehouse; new aggregation functions could then be implemented that use the topic mentions as input in order to create topic trend visualizations. The framework is fully configured using configuration files processed by the pipeline manager module, which is responsible for forwarding the specific parameterization to each module. It is possible to specify the entities of interest, the data sources, the aggregate functions, the prediction time windows and module specific configurations; the training data used by the modules that rely on machine learning is also specified per module. As explained, TexRep addresses the two aforementioned challenges of developing a text mining framework for ORM. The current version of the framework is implemented in Python and uses a MongoDB NoSQL database, which implements the MapReduce paradigm for the aggregations. The external pluggable resources used in the framework are the scikit-learn library and matplotlib for visualization, though users can replace these two resources with others of their preference. We provide implementations for each module that we believe are as generic as possible within the context of ORM; nevertheless, users are also able to extend each module with the methods they see fit, e.g., new features or data processing steps. In the following we describe in detail how the different modules interact, together with a detailed explanation of the current implementation of the entity filtering, sentiment analysis and analytics modules.

Data flow.

TexRep collects data continuously and performs its processing and analytics tasks over it. The standard data flow is organized as follows. First, the user defines the entities of interest in the configuration files, including their canonical and alternative names. The configurations are processed by the pipeline manager and forwarded to the data collection clients, which search for texts (e.g., news articles or tweets) using the entity names as queries to each data API. The data collection clients implement API clients, in our case for Twitter and Yahoo Finance; if, for instance, a user is interested in collecting the RSS feeds of news outlets, a data collection client can be adapted to subscribe to the feeds and process them accordingly. The collected texts are stored in the entity occurrences warehouse. Entity filtering then classifies each text as relevant or not to the target entity using a supervised learning approach: a knowledge base (e.g., Freebase) is used to extract target entity representations and to compute similarity features with the contexts of the mentions extracted from the texts. Once the texts are filtered, sentiment analysis takes place; the framework implements both polarity classification and sentiment regression (sentiment intensity detection). Finally, the analytics modules are able to aggregate the data, create visualizations of trends and produce predictions of application specific dependent variables.

Data collection.

The data collection server communicates with the data collection clients using a REST API, which allows modularity and a plugin approach for adapting to specific data sources. The task of data collection is based on the entity configurations, containing the list of entities under study, and on data source specific web interfaces, such as RSS feeds, the Yahoo Finance API or the Twitter API. The data collection server manages the data collection clients, whose specific interfaces act as plugins adequate to the corresponding source. For instance, collecting data from Twitter poses challenges, namely due to the limits on the amount of data that can be collected; we opted to create a default data collection client based on SocialBus, a distributed Twitter client that enables researchers to continuously collect data from particular user communities or topics while respecting the established limits. Some data sources allow queries for topics or entity names, while others, such as RSS feeds, do not; moreover, in the case of Twitter we might be interested in continuously monitoring a fixed group of Twitter users whose accounts comment on the entities of interest. In the cases where we cannot search directly for the entity name in the specific data source, we use the list of entity names to process the collected texts in search of potentially relevant data: the data collection server applies a sequential classification approach, using a prefix tree over the entity names, to detect mentions.
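A minimal sketch of such a prefix-tree matcher over whitespace tokens, with hypothetical surface forms (the actual implementation is internal to the data collection server):

```python
# Build a trie over tokenized entity surface forms.
surface_forms = [["pedro", "passos", "coelho"], ["paulo", "portas"], ["cameron"]]

trie = {}
for form in surface_forms:
    node = trie
    for tok in form:
        node = node.setdefault(tok, {})
    node["$"] = True          # end-of-name marker

def find_mentions(tokens):
    mentions = []
    for i in range(len(tokens)):
        node, j = trie, i
        while j < len(tokens) and tokens[j] in node:
            node = node[tokens[j]]
            j += 1
            if "$" in node:   # a surface form ends at position j
                mentions.append((i, j, " ".join(tokens[i:j])))
    return mentions

print(find_mentions("david cameron met pedro passos coelho".split()))
# [(1, 2, 'cameron'), (3, 6, 'pedro passos coelho')]
```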
This method can be seen as a first step of filtering and is still prone to noisy mentions: for instance, a tweet containing the word "cameron" can be relative to several entities, such as the former prime minister, the filmmaker or the company. Consequently, this problem is later tackled by the entity filtering module. The collected texts, news or tweets, are stored in a centralized NoSQL database, MongoDB, the entity occurrences warehouse. This setup provides modularity and flexibility, allowing the development of specific data collection components tailored to specific data sources while remaining completely agnostic to the data format of the retrieved data; the data collection server annotates each text with the target entity, and that annotation is later validated by the entity filtering module.

RELink use case.

In this section we present a use case of the RELink framework in the context of ORM applied to computational journalism. Never before has computation been so tightly connected with the practice of journalism, and in recent years the computer science community has researched new ways of processing and exploring news archives to help journalists perceive news content from an enhanced perspective. We created a demo, TimeMachine, that brings together a set of natural language processing, text mining and information retrieval technologies to automatically extract and index entity related knowledge from news articles. TimeMachine allows users to issue queries containing keywords and phrases about news stories and events, and retrieves the relevant entities mentioned in the news articles (related systems include NewsExplorer and IBM Watson). At the same time, TimeMachine provides readable insights with a temporal perspective over the news stories and the mentioned entities, and visually represents the relationships among public figures co-mentioned in news articles as a social network graph, using the force atlas algorithm for the layout and interactive clustering of the entities.

News processing pipeline.

The news processing pipeline, depicted in the figure, starts with the news cleaning module, which performs boilerplate removal over the raw news files. The news content is then processed by the NERD module, which recognizes entity mentions and disambiguates each mention to an entity using a set of heuristics tailored to news, such as job descriptors ("Barack Obama, president of the USA"), linguistic patterns and the well defined journalistic text style. We use a bootstrap approach to train the NER system: the method starts by annotating entity names on a dataset of news items using a simple dictionary-based approach; using this training set, we build a classification model based on conditional random fields (CRF) and use the inferred classification model to perform additional annotations of the initial seed corpus, which is then used for training a new classification model. This cycle is repeated until the NER model stabilizes.

[Figure: news processing pipeline.]

Entity snippet extraction consists of collecting the sentences containing mentions of a given entity. The snippets are concatenated, generating an entity document that is indexed in the entity index. The entity index represents the frequency with which an entity and a term co-occur in the news; by relying on this redundancy, terms and phrases become associated with the entity, and we are able to retrieve the relevant entities given an input keyword or phrase query. We also index the datetime of each snippet, making it possible to filter query results based on a time span: for instance, the keyword "corruption" might retrieve different entity lists for different time periods. Quotations are typically short and informative sentences that may directly or indirectly quote a given entity; they are automatically extracted by the quotations extraction module using linguistic patterns, thus enriching the information extracted for each entity. Finally, from the entities mentioned in each news article we extract entity tuples representing the co-occurring entities of that article, and update the entity graph by incrementing the number of occurrences of each node (an entity) and the number of occurrences of each edge (the relation between two mentioned entities).
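A minimal sketch of the entity graph update using networkx; the article tuples are hypothetical, and in the demo the graph is rendered with a force atlas layout rather than computed this way:

```python
import networkx as nx

G = nx.Graph()

def update_entity_graph(G, entities_in_article):
    # Increment the mention count of every entity node...
    for e in entities_in_article:
        G.add_node(e)
        G.nodes[e]["mentions"] = G.nodes[e].get("mentions", 0) + 1
    # ...and the co-mention weight of every entity pair (edge).
    for i, e1 in enumerate(entities_in_article):
        for e2 in entities_in_article[i + 1:]:
            w = G.get_edge_data(e1, e2, {"weight": 0})["weight"]
            G.add_edge(e1, e2, weight=w + 1)

update_entity_graph(G, ["Cristiano Ronaldo", "Jose Mourinho", "Real Madrid"])
update_entity_graph(G, ["Cristiano Ronaldo", "Real Madrid"])
print(G.edges(data=True))   # node sizes and edge widths are drawn from these counts
```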
Demonstration setup.

The demonstration uses a news archive of Portuguese news comprising two different datasets: the repository of the main Portuguese news agency, and the stream of online articles provided by the main web portal in Portugal (SAPO), which aggregates the news articles of online newspapers. The total number of news articles used in the demonstration comprises millions of documents, and the system works on a daily basis, processing the articles collected from the news stream. TimeMachine allows users to explore the news archive through the entity search box or by selecting a specific date; both options are available on the website homepage and in the top bar of every page, and a set of story recommendations on the homepage is suited for first time visitors. The entity search box is designed as the main entry point of the website and is connected to the entity retrieval module of TimeMachine.

[Figure: Cristiano Ronaldo egocentric network.]

Users may search using the surface names of entities they already know and wish to explore in the news, such as "cristiano ronaldo", although the most powerful queries are the ones containing keywords and phrases describing topics and news stories, such as "eurozone crisis" or "ballon d'or nominees". By selecting an entity from the ranked list of results, users access the entity profile page, which contains a set of automatically extracted entity specific data: name, profession, the set of news articles and quotations of the entity, and its related entities. An entity timeline is also provided to allow users to navigate through the entity specific data over time; by selecting a specific time period, different news articles, quotations and related entities are retrieved. Furthermore, users have the option of viewing the egocentric network of the entity, an interactive network depicting the connections among entities co-mentioned in news articles in the selected time span. An example of this visualization is depicted in the figure; it is implemented using the graph drawing library sigma.js together with the force atlas algorithm for a clustered layout of the entities. Nodes consist of entities and edges represent co-mentions of entities in news articles; the size of the nodes and the width of the edges are proportional to the number of mentions and co-mentions, respectively, and different node colors represent specific news topics in which the entities are mentioned. By selecting a date interval on the homepage instead of issuing a query, users get a global interactive network of the mentions of the most frequent entities in the news articles of the selected period of time.

TexRep use case.

This section describes the design and implementation of the POPmine system, a use case of the proposed framework developed in the scope of the POPSTAR project. POPmine is an open source platform which can be used and extended by researchers interested in tracking the reputation of political entities on the web. It operates either in batch or online mode and is able to collect texts from conventional media (news items in mainstream media sites) and social media (blogs and Twitter). It processes those texts, recognizing topics and political entities, analyzes the relevant linguistic units, and generates indicators of the frequency of mention and of the polarity of the mentions of political entities across sources, types of sources and time. As a proof of concept, we present these indicators in a web application tailored to tracking political opinion in Portugal, the POPSTAR website. The system is available as an open source software package that can be used by researchers in the social sciences, and also in any other area interested in tracking public opinion on the web. We opted to use news articles, tweets and blog posts as data sources, each of which requires a specific crawler. News articles and blog posts are collected using RSS feeds, which eases the implementation of the specific crawlers. Collecting data from Twitter poses more challenges: the need for large amounts of data, coupled with the Twitter imposed limits, demands a distributed system, and we opted to use SocialBus, which enables researchers to continuously collect data from particular user communities while respecting those limits. The data collection components crawl data from the specific data sources and implement the corresponding web interfaces, such as RSS feeds or the Twitter API; each data source must have a data collection module, which in turn connects to the POPmine system using REST services. POPmine stores the collected data in a document oriented NoSQL database, MongoDB. This configuration allows modularity and flexibility, namely the possibility of developing specific data collection components tailored to specific data sources. The default setting comprises the following components.
News data: online news are provided by the Verbetes e Notícias service from Labs SAPO, which handles online news from Portuguese news sources and is able to recognize the entities mentioned in the news. Blogs: blog posts are provided by the blogs monitoring system from Labs SAPO, which includes blogs with their own domain as well as Blogger and Wordpress blogs written in Portuguese. Twitter: tweets are collected using the platform SocialBus, which is responsible for the compilation of messages from Portuguese users of Twitter; tweets are collected in real time and submitted to language classification and, for these experiments, we opted to collect only tweets written in Portuguese.

Information extraction.

This component comprises a knowledge base containing metadata about the entities of interest, such as names and jobs. Using a knowledge base is crucial to filter the relevant data mentioning politicians from the news, tweets and blog posts. For this application scenario we opted to use the Verbetes knowledge base, which comprises the names, alternative names and professions of the Portuguese people most often mentioned in news articles. The information extraction components address two tasks: named entity recognition and named entity disambiguation. In the application scenario we envision, we need to track political entities; entities of this type are usually well known, and we therefore use the knowledge base to provide the metadata of the target entities, namely their common surface forms. With the list of surface forms to search for, we apply a sequential classification approach using a prefix tree to detect mentions. This method is effective for news articles and blog posts, but results in noisy mentions when applied to Twitter: for instance, a tweet containing the word "cameron" can be related to several entities, such as the former prime minister, the filmmaker or the company and, furthermore, tweets are short, which results in a reduced context for each entity. For disambiguation we apply the entity filtering approach of TexRep.

The opinions warehouse contains the messages filtered by the information extraction component and applies polarity classification to the messages using an external resource, the Opinionizer classifier. Since one of the requirements of the Opinionizer is the use of manually labeled data to train the classifier, we developed an online annotation tool to that effect. The warehouse then creates opinion poll indicators using an aggregator responsible for applying aggregation functions and smoothing techniques; the aggregated data is made available through a set of web services that can be consumed by different applications, such as the POPSTAR website or research experiments on poll predictions using social media opinions.

Data aggregation.

Buzz is the daily frequency with which political leaders are mentioned by Twitter users, by bloggers and in online media news. We use two types of indicators. The first is the relative frequency with which party leaders are mentioned in each medium (Twitter, blogs, news) each day; this indicator is expressed, for each leader and party, as a percentage relative to the total number of mentions of all party leaders. The second indicator is the absolute frequency of mentions, a simple count of the citations of each political leader. To estimate trends in the buzz we use a Kalman filter and allow users to choose the smoothing degree of the estimated trend among three alternatives: a fairly reactive one, in which the trend is highly volatile, allowing a close monitoring of variations; a smooth one, ideal to capture long term trends; and an intermediate option, displayed by default.

Identifying the polarity of tweets.

There are several ways to quantify the overall sentiment regarding political leaders: for instance, we can look at each target independently or in relative terms, compare positive with negative references or simply look at one side of the polarity, and look at daily, weekly or monthly data records. For the first prototype we opted to present two separate indicators of evolution across time, using in both cases the day as the reference period. The first indicator is the logarithm of the ratio of positive to negative tweets mentioning a political leader; in words, a positive sign means that the political leader in consideration received more positive than negative tweets that day, and a negative result means that the leader received more negative than positive tweets. In mathematical notation,

LogSentiment_i = log(Positives_i / Negatives_i).

The second approach is to simply look at the negative tweets, which constitute the vast majority of the subjective tweets according to our base classifier, and calculate the relative frequency of negatives for each leader; this way it is possible to follow, day by day, which party leaders are, in relative terms, less subject to tweets with negative polarity. In mathematical notation,

NegativesShare_i = Negatives_i / (sum over all leaders d of Negatives_d).
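A minimal sketch of the two daily indicators; the add-one smoothing in the log ratio is an assumption made here to avoid division by zero and is not part of the original formulation:

```python
from math import log

# Daily classified mention counts per leader (toy values).
day = {"passos": {"pos": 40, "neg": 260},
       "portas": {"pos": 25, "neg": 180},
       "seguro": {"pos": 30, "neg": 120}}

def log_sentiment(c):
    # Positive sign: more positive than negative tweets on that day.
    return log((1 + c["pos"]) / (1 + c["neg"]))

def negatives_share(leader, day):
    # The leader's fraction of all negative tweets on that day.
    total_neg = sum(c["neg"] for c in day.values())
    return day[leader]["neg"] / total_neg

for leader, c in day.items():
    print(leader, round(log_sentiment(c), 3), round(negatives_share(leader, day), 3))
```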
[Figure: Twitter buzz share of the political leaders.]

Visualization.

A visualization layer was created to allow the interactive visualization of the data collected and processed in real time by the POPmine platform: the POPSTAR website (Public Opinion Sentiment Tracking, Analysis and Research), developed within the scope of the POPSTAR project. The website presents the following data: the mentions of Portuguese party leaders on Twitter, in the blogosphere and in online news; the sentiment conveyed by tweets regarding party leaders; the voting intentions for the main political parties as measured by traditional polls; and the evaluation of the performance of said party leaders, also as measured by polls. An example chart is depicted in the figure. Besides providing the indicators in the form of charts, the website also has a dashboard offering a compact view of the trends across indicators and politicians.

Learning word embeddings.

Word embeddings have great practical importance, since they can be used as precomputed features for models, significantly reducing the amount of training data required in a variety of text mining tasks. Our aim here is to provide general purpose word embeddings for text mining tasks in ORM; we are particularly interested in learning word embeddings from the Twitter stream, due to the specificities of user generated content. It is relatively easy to get access to word embeddings trained on well formed texts, such as Wikipedia or online news; however, to the best of our knowledge, there are no publicly available word embeddings learned from the Portuguese Twitter stream. (The material contained in this section was published in Saleiro, Sarmento, Rodrigues, Soares and Oliveira, "Learning word embeddings from the Portuguese Twitter stream: a study of some practical aspects".)

There are several challenges in computing and consistently distributing word embeddings. The first concerns the intrinsic properties of the embeddings: how many dimensions do we actually need to store useful semantic information, and how big an embedded vocabulary has practical value? How do these two factors interplay? A second factor is the type of model used for generating the embeddings: there are multiple possible models, and it is not obvious which one is the best, in general or for specific types of applications. A third aspect is the size and properties of the training data: what is the minimum amount of training data needed to include a vocabulary word in the training? Additional aspects are the optimization techniques used, the model hyperparameters and the training parameters. The space of possibilities is large, and there are also challenges in performing a consistent evaluation of the resulting embeddings, all of which makes systematic experimentation over alternative configurations extremely difficult. In this work we make progress in trying to find good combinations of the previous parameters, focusing specifically on the task of computing word embeddings for processing the Portuguese Twitter stream. The content of Twitter messages tends to be populated by words specific to the medium, with new ones constantly being added by users; these dynamics pose challenges to NLP systems, which have difficulties in dealing with out-of-vocabulary words. Learning a semantic representation for those words directly from the stream, as the words arise, would allow us to keep up with the dynamics of the medium and reduce the number of out-of-vocabulary cases. We start from an implementation of a neural word embedding model, which can be seen as a flexible baseline model for further experimentation, and our research tries to answer the following practical questions. How large a vocabulary can one realistically embed, given the level of resources that most organizations can afford to buy and manage (as opposed to the large GPU clusters available to only a few)? How much data, as a function of the size of the vocabulary we wish to embed, is enough for training meaningful embeddings? How can we evaluate embeddings in an automatic and consistent way? A reasonably detailed and systematic exploration of the previously described space of possibilities was performed, and these questions are answered based on a reasonably small sample of Twitter data, in the hope of finding the best way to proceed and then train embeddings for the Twitter vocabulary using a much larger amount of Twitter data, over which parameter experimentation would be unfeasible.
This work can thus be seen as a preparatory study for a subsequent attempt to produce and distribute a database of embeddings obtained from processing Portuguese Twitter data.

Neural word embedding model.

The neural word embedding model we use is a variation of the continuous bag-of-words (CBOW) model: given a sequence of five words, w_{i-2} w_{i-1} w_i w_{i+1} w_{i+2}, the task the model tries to perform is that of predicting the middle word, w_i, based on the two words on the left and the two words on the right. This should produce embeddings that closely capture distributional similarity, so that words of the same semantic class, as well as synonyms and antonyms of each other, end up embedded in close regions of the embedding space. The neural model is composed of the following layers. An input word embedding layer maps each of the four input words, represented as one-hot vectors with the dimensionality of the vocabulary, to a low dimension space; the projection matrix, W_input, is shared across the four inputs and is one of the embedding matrices we wish to produce. A merge layer concatenates the four previous embeddings into a single vector holding all the context information; the concatenation operation ensures that the rest of the model has explicit information about the relative position of the input words (using an additive merge operation instead would only preserve information about the presence of the words in the sequence). An intermediate context embedding dense layer maps the preceding representation of the four context words to a lower dimension space that still represents the entire context; the fixed dimensionality of this context representation ultimately determines the dimension of the resulting embeddings. This intermediate layer is also important from the point of view of performance, because it isolates the still relatively high dimensional space of the concatenated input word embeddings from the output space. The final output dense layer takes the previous representation of the entire input context and produces a vector with the dimensionality of the word output space, i.e., the vocabulary size; the corresponding matrix, W_output, is the one that stores the word embeddings we are interested in. Finally, a softmax activation layer produces the prediction over the word space; the other neural activations in the model are sigmoid functions. The model was implemented using a library that relies on Keras for model development, and we train it using the Adam optimizer with the default parameters.
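A minimal Keras sketch of the architecture just described; the vocabulary size and layer dimensionalities are placeholders, since the values used in the experiments are not recoverable here, and tf.keras stands in for the original library:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

V, d_in, d_ctx = 2048, 64, 64   # vocabulary size and embedding dims (placeholders)

inp = keras.Input(shape=(4,), dtype="int32")          # w-2, w-1, w+1, w+2
emb = layers.Embedding(V, d_in, name="W_input")(inp)  # shared projection matrix
ctx = layers.Flatten()(emb)                           # concatenation keeps positional info
ctx = layers.Dense(d_ctx, activation="sigmoid")(ctx)  # fixed-size context embedding
out = layers.Dense(V, activation="softmax", name="W_output")(ctx)  # middle-word prediction

model = keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Toy batch: four context word ids per example, middle word id as target.
X = np.random.randint(0, V, size=(32, 4))
y = np.random.randint(0, V, size=(32,))
model.fit(X, y, epochs=1, verbose=0)

# The embeddings of interest are read from the output layer's kernel.
embeddings = model.get_layer("W_output").get_weights()[0].T   # shape (V, d_ctx)
```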
Experimental setup.

We are interested in assessing two aspects of the word embedding process. On one hand, we wish to evaluate the semantic quality of the produced embeddings; on the other hand, we want to quantify how much computational power and training data are required to train an embedding model as a function of the size of the vocabulary we try to embed. Both aspects are of fundamental practical importance for deciding how to attempt to produce the database of embeddings we intend to provide as a future resource of this work. Apart from the size of the vocabulary to be processed, the hyperparameters of the model that we could potentially explore are the dimensionality of the input word embeddings and the dimensionality of the output word embeddings; both were set after a quick manual experimentation, and a full hyperparameter exploration is left for future work. Our experimental testbed comprises a desktop machine with an NVIDIA Titan X Pascal GPU, an Intel Core quad core processor, RAM and an SSD drive. The training data was randomly sampled from a corpus of tweets collected from the Portuguese Twitter community, with a few words per tweet on average. From these tweets we generated a database containing the distinct 5-grams found, along with their frequency counts. In this process, the text was preprocessed to help anonymizing the information: Twitter handles were substituted by an artificial token, and http links were substituted by another token. We prepended two special tokens to complete the 5-grams generated for the first two words of each tweet and, correspondingly, appended two special tokens to complete the 5-grams centered around the two last tokens of each tweet. Tokenization was performed trivially, by separating tokens on blank spaces; no linguistic preprocessing, such as separating punctuation from words, was made, since we opted against introducing the linguistic bias of another tool into the tokenization of user generated content, which is itself a non-trivial problem. The direct consequence of not performing such linguistic preprocessing is that of increasing the vocabulary size and diluting the token counts. However, in principle, and given enough data, the embedding model should be able to learn the correct embeddings both for actual words (e.g., "ronaldo") and for words with punctuation attached (e.g., "ronaldo!"). In practice, we believe this can actually be an advantage for the downstream consumers of the embeddings, since it also relaxes the requirements on their tokenization stage. The overall dictionary thus produced contains a large number of distinct entries, sorted by frequency, so that the words with the lowest indices correspond to the most common words in the corpus. The information in this database was then used to generate the training data for the experiments: having fixed the size of the target vocabulary to be embedded, |V|, we scanned the database to obtain all possible 5-grams whose tokens are all among the top |V| words of the dictionary, i.e., the |V| most frequent words in the corpus. Depending on |V|, different numbers of valid training 5-grams were found in the database; the larger |V| is, the more 5-grams pass this filter, and the numbers of examples collected for the various vocabulary sizes are shown in the table.

[Table: number of available training 5-grams for different sizes of the target vocabulary.]

Since one of the goals of our experiments is to understand the impact of using different amounts of training data for each size of vocabulary embedded, we ran experiments training the models with different fractions of the data available.
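A minimal sketch of the 5-gram generation and vocabulary filtering; the special tokens for handles, links and padding are illustrative names:

```python
from collections import Counter

tweets = ["@user o benfica ganhou http://t.co/x",
          "o benfica ganhou outra vez"]

def normalize(tweet):
    toks = []
    for t in tweet.split():            # trivial blank-space tokenization
        if t.startswith("@"):
            toks.append("_USER_")      # anonymize Twitter handles
        elif t.startswith("http"):
            toks.append("_LINK_")      # replace http links
        else:
            toks.append(t)
    # Pad with two special tokens on each side so that the first and last
    # real tokens also get complete, centered 5-grams.
    return ["_S1_", "_S2_"] + toks + ["_E1_", "_E2_"]

fivegrams = Counter()
for tw in tweets:
    toks = normalize(tw)
    for i in range(2, len(toks) - 2):
        fivegrams[tuple(toks[i - 2:i + 3])] += 1

# Keep only 5-grams whose five tokens are all among the top-|V| dictionary words.
vocab = {w for w, _ in Counter(t for tw in tweets for t in normalize(tw)).most_common(1000)}
train = [g for g, n in fivegrams.items() if set(g) <= vocab for _ in range(n)]
print(len(train), train[0])
```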
embedding process capturing enough semantics still provide valuable indications planning produce much larger database word embeddings results analysis run training process performed corresponding evaluation combinations size vocabulary embedded volume training data available used table presents overall statistics training epochs average time per epoch increases first size vocabulary embed model parameters volume training data using testbed section total time learning experiments varied minimum seconds learning word embeddings orm table overall statistics combinations models learned varying volume training data results observed training epochs embeddings training data tuples avg training loss validation loss data data data data data data data data data data data data data maximum hours using training data available extracted tweets numbers give approximate figure time consuming would train embeddings complete twitter corpus consisting tweets analyze learning process plot training set loss validation set loss different values figure left epochs using available data expected loss reducing epoch validation loss although slightly higher following trend using see model overfitting also observe higher higher absolute values loss sets surprising number words predict becomes higher problem tend become harder also keep dimensionality embedding space constant dimensions becomes increasingly hard represent differentiate larger vocabularies believe specially valuable indication future experiments deciding dimensionality final embeddings distribute right side figure show number training validation examples affects loss fixed varied amount data used training three trends apparent train data obtain better validation losses expected second trend using less data available model tends overfit data indicated consistent increase validation loss epochs check framework online reputation monitoring fig continuous line represents loss training data dashed line represents loss validation data left side effect increasing using training data right side effect varying amount training data used dashed lines right side figure suggests future try drastic reduction training data save training time finally overfitting validation loss seems stabilize around epochs observed effects model seems simple enough showing type behavior indicates practical way safely deciding stop training model intrinsic evaluation table presents results three different tests described section first expected result coverage metrics increase size vocabulary embedded word equivalence test set specifically created evaluating embedding embedding words achieve almost test coverage hand class distinction test set created taking cross product test cases class class membership test set obtain low coverage figures indicates always possible previously compiled data important compile data directly twitter content want perform precise evaluation effect varying cosine similarity decision threshold class membership test shows percentage test cases classified correct drops significantly however drop accentuated training learning word embeddings orm portion available data differences using two alternative thresholds values even higher word equivalence test word equivalence test consider two words equivalent word cosine embedding vectors higher revealed extremely demanding test nevertheless results far superior much larger coverage lower happens class membership test hand class distinction test shows different trend larger values coverage values low would make 
sense hypothesize reduced values true negatives percentage obtained largest would necessary confirm behavior even larger values one might hypothesize ability distinguish classes requires larger thresholds large also speculate need increasing number dimensions able encapsulate different semantic information many words table evaluation resulting embeddings using class membership class distinction word equivalence tests different thresholds cosine similarity embeddings data class membership coverage acc acc class distinction word equivalence coverage acc acc coverage analysis regarding evaluation metrics despite already providing interesting practical clues goal trying embed larger vocabulary using training data available results also framework online reputation monitoring revealed intrinsic evaluation metrics using overly sensitive corresponding cosine similarity thresholds sensitivity poses serious challenges systematic exploration word embedding architectures corresponding also observed recent works using absolute thresholds criteria deciding similarity words create dependency evaluation metrics geometry embedded data see embedding data graph means metrics change apply scaling operations certain parts graph even structure relative position embedded words change practical purposes including training downstream models absolute distances little meaning fundamental resulting embeddings able capture topological information similar words closer words dissimilar various criteria similarity care independently absolute distances involved clear key aspect future work developing additional performance metrics based topological properties line recent work proposing shift evaluation absolute values exploratory evaluations focusing weaknesses strengths embeddings much generic scores example one metric could consist checking whether given word words known belong class closer words belonging different classes independently actual cosine future work necessarily include developing type metrics concluding remarks producing word embeddings tweets challenging due specificities vocabulary medium implemented neural word embedding model embeds words based information extracted sample portuguese twitter stream seen flexible baseline experiments field work reported paper preliminary study trying find parameters training word embeddings twitter adequate evaluation tests data results show using less available training examples vocabulary size might result overfitting resulting embeddings obtain reasonable performance intrinsic evaluation tests trained vocabulary containing frequent words twitter sample relatively small size nevertheless results exhibit skewness cosine similarity scores explored summary contributions future work specifically class distinction test set revealed challenging opens door evaluation similarity words also dissimilarities words different semantic classes without using absolute score values therefore key area future exploration better evaluation resources metrics made initial effort front however believe developing new intrinsic tests agnostic absolute values metrics concerned topological aspects embedding space expanding data cases tailored content fundamental importance progress line work furthermore plan make public available word embeddings trained large sample tweets collected portuguese twitter stream require experimenting producing embeddings higher dimensionality avoid cosine skewness effect training even larger vocabularies also room experimenting model activation functions dimensions 
Summary of the contributions.

The work reported in this chapter makes the following contributions: a framework that supports research on entity retrieval and text mining tasks in the context of online reputation monitoring, composed of two major components that act as independent frameworks, RELink and TexRep; the RELink framework, which supports comprehensive research work in E-R retrieval, supporting the creation of test queries as well as early fusion based approaches to E-R retrieval; the TexRep framework, which is able to collect texts from online media, such as Twitter or online news, identify entities of interest and classify sentiment polarity and intensity, and which supports multiple data aggregation methods as well as visualization and modeling techniques that can be used both for descriptive analytics, such as analyzing how political polls evolve over time, and for predictive analytics, such as predicting elections; and a study of the practical aspects, namely vocabulary size, training data size and intrinsic evaluation, of training and publishing word embeddings from the Portuguese Twitter stream that can later be used in ORM related tasks.

Conclusions.

This thesis addressed two computational problems of online reputation monitoring: entity retrieval and text mining. Entities are the gravitational force that drives the ORM process and, consequently, the work reported in this thesis gravitates around entities and their occurrences across the web. We researched and developed methods for the extraction, retrieval, analysis and prediction of information spread across the web, and the main objectives of the thesis were achieved, resulting in several contributions to the problem of online reputation monitoring. Several competitive baselines were developed which we believe represent significant progress in this research area, where open source work is scarce. However, there are still many issues to be addressed in the future: recent developments in deep neural networks create opportunities to improve the performance of several of the tasks addressed in this thesis and, with access to larger quantities of training data, it would be possible to easily adapt our research framework to include such techniques.

Summary of the main contributions.

E-R retrieval. We established that ORM benefits from entity retrieval capabilities and should not be constrained to classic data analytics reports: users ought to be able to search for information on social media and online news. Furthermore, reputation is not an isolated asset, as it depends also on the reputation of neighboring entities. We studied the problem of E-R retrieval using this perspective and made several contributions to this line of research: a generalization of the problem of entity-relationship search to cover entity types and relationships represented by any attribute and predicate, respectively, rather than by a predefined set; a general probabilistic model for E-R retrieval using Bayesian networks; the proposal of two design patterns that support E-R retrieval approaches using the model; the proposal of an E-R dependence model that builds on the basic sequential dependence model (SDM) to provide extensible representations of dependencies suitable for complex queries; the proposal of an indexing method that supports the E-R retrieval approach; and, for the problem of the lack of test collections, a method for generating them, which resulted in the RELink query collection, together with the results of experiments at scale over a comprehensive set of queries and corpora. E-R retrieval is a complex case of entity retrieval, where the goal is to search for multiple unknown entities and the relationships connecting them. Contrary to entity retrieval from structured knowledge graphs, the approaches to E-R retrieval that are adequate in the context of ORM must cope with the dynamic nature of its data sources, which are much more transient than the stable sources of information, such as Wikipedia, generally used in entity retrieval. Consequently, we developed E-R retrieval methods that do not rely on fixed and predefined entity types and relationships, enabling a wider range of queries compared to semantic web approaches. We started by presenting a formal definition of E-R queries, in which we assume that a query can be decomposed into a sequence of sub-queries, each containing keywords related to a specific entity or relationship, and adopted a probabilistic formulation of the E-R retrieval problem.
By creating specific representations for entities (context terms) and for pairs of entities (relationships), it is possible to create a graph of probabilistic dependencies between the sub-queries and the entity plus relationship representations. We use a Bayesian network to depict these dependencies in a probabilistic graphical model which, to the best of our knowledge, represents the first probabilistic model of E-R retrieval. However, the conditional probabilities cannot be computed directly from the raw documents in the collection; in fact, this condition is an inherent problem of entity retrieval, as documents serve only as proxies for the entity and relationship representations. Consequently, we need to fuse information spread across multiple documents to be able to create those representations. We proposed two design patterns, early fusion and late fusion, inspired by Model 1 and Model 2 of Balog et al.; in the context of ORM, however, we are mostly interested in early fusion. Early fusion aggregates the context terms of entity and relationship occurrences to create two dedicated indexes, the entity index and the relationship index. With these two indexes, it is possible to apply any retrieval method to compute the relevance scores of entity and relationship documents (representations) given the sub-queries; the final entity tuples are retrieved according to a joint probability computed using a factorization of the conditional probabilities into the individual relevance scores. Late fusion, on the other hand, consists of matching the sub-queries directly against a standard document index, alongside the set of entity occurrences per document: we compute the individual relevance scores of each document given the sub-queries, aggregate the entity occurrences of the top results, and compute the final joint probability using traditional retrieval models, such as language models. These design patterns were used to create unsupervised baselines for E-R retrieval. Since our objective was to explore the early fusion approach, we then developed a novel supervised early fusion based model for E-R retrieval, the entity-relationship dependence model (ERDM), which uses a Markov random field (MRF) to model the term dependencies of sub-queries and documents. ERDM can be seen as an extension of the sequential dependence model (SDM) for document retrieval in the way it relies on query term dependencies, but it creates a more complex graph structure that connects the terms of multiple sub-queries with multiple documents for computing the probability mass function of the MRF. One of the difficulties we faced when researching E-R retrieval was the lack of test collections; we therefore decided to contribute to the research problem by creating a method for building such collections. We realized that web tabular data often includes implicit relationships between the entities belonging to the same row of a table, and developed a table parser that extracts tuples of related entities from Wikipedia tables, together with metadata such as the table title and the column names, which we provide to editors together with the lists of entity tuples.
entity disambiguation developed supervised method classifies tweets relevant given target entity task fundamental orm downstream tasks prediction highly affected noisy input data implemented large set features generated describe relationship tweet mentioning entity reference entity representation summary main contributions relied metadata entity categories text represented similarity tweets wikipedia entity articles freebase entities disambiguation feature selection terms based frequency feature matrix transformation using svd although approach perceived relatively simple low cost achieved first place accuracy filtering task replab test set containing thousand tweets different target entities regarding sentiment analysis decided focus efforts well explored namely financial texts participated semeval task focused sentiment analysis financial news microblogs task consisted predicting real continuous variable representing polarity intensity sentiment concerning mentioned short texts modeled regression analysis problem previous work domain showed financial sentiment often depicted implicit way created word embeddings order obtain domain specific syntactic semantic relations words context combined traditional features train regressor sentiment intensity results showed different combination features attained different performances nevertheless able obtain cosine similarities mean average errors scale range representing less maximum possible error prediction explored two prediction problems context orm performing analysis predictive power information news predict entity popularity twitter well study sentiment aggregate functions predict political opinion made following contribution research area analysis predictive power online news regarding entity popularity twitter entities frequently mentioned news analysis combine different sentiment aggregate functions serve features predicting political polls aware entity popularity social media influenced endogenous exogenous factors interested exploring interplay conclusions online news social media reactions could useful anticipating public relations damage control even editorial purposes maximize attention consequently revenue explored different sets signal extracted online news mentioning entities frequently mentioned news politicians footballers signals could influence least correlated future popularity entities twitter results show performance varies depending target entity general results better case predicting popularity politicians due high unpredictability live events associated sports general conclusion study online news predictive power live events twitter reactions happen quickly publication news cases results also show time prediction affects performance models instance case politicians score higher time prediction occurs lunch time evidence politics news events trigger social media reactions reported morning news second predictive studied carried consisted using sentiment polarity extracted tweets predict political polls consensus previous research work sentiment aggregate functions adequate predict political results explored several sentiment aggregate functions described literature assess one combination would effective predicting polls portuguese bailout study achieved lowest mean average error using combination buzz aggregation functions predict monthly poll variations instead absolute values hand important individual feature aggregate function consisting logarithm ration positive negative classified tweets framework orm also created 
framework specifically tailored orm puts together tackled throughout thesis believe framework represents significant contribution paves way future research computational problems inherent process monitoring reputation online precisely make following contributions framework supports research entity retrieval text mining tasks context online reputation monitoring framework composed two major components act independent frameworks relink texrep summary main contributions relink framework supports comprehensive research work retrieval supporting creating test queries well early fusion based approaches retrieval texrep framework able collect texts online media twitter online news identify entities interest classify sentiment polarity intensity framework supports multiple data aggregation methods well visualization modeling techniques used descriptive analytics analyze political polls evolve time predictive analytics predict elections study practical aspects namely vocabulary size training data size intrinsic evaluation training publishing word embeddings portuguese twitter stream later used orm related tasks framework divided two distinct components one dedicated entity retrieval text mining practice two components act two separate frameworks adaptable reused different application scenarios computational journalism finance politics relink framework designed facilitate experiments retrieval query collections texrep designed two main challenges mind able cope text mining problems underlying orm flexible adaptable reusable order support specificities different application scenarios also presented two use cases framework orm first use relink context computational journalism second described design implementation popmine system use case proposed framework scope popstar project furthermore presented study practical aspects learning word embeddings twitter stream goal try assess feasibility producing publishing general purpose word embeddings orm results showed using less available training examples vocabulary size might result obtained interesting performance intrinsic evaluation trained vocabulary containing frequent words twitter sample relatively small size proposed set gold standard data intrinsic evaluation word embeddings user generated content nevertheless realized evaluation metrics using absolute values thresholds might suitable due cosine skewness effect large dimensional embedding spaces propose develop topological intrinsic evaluation metrics future work conclusions limitations future work one major obstacles faced course thesis limited availability labeled data training evaluation different tasks tackled common limitation scope online reputation monitoring due obstacle chance perform extensive experimentation using one data source language task aspect reduces generalization results obtained since might biased towards available datasets access therefore leave future work experimentation task multiple datasets using different data sources languages perform comparable evaluations also recognize tried address many different tasks reduced capability addressing every task level depth nevertheless believe exploring several new tasks scope orm constitutes strong contribution foster future research work area course thesis possibility performing user studies assess global usefulness framework orm would like leave future work objective applying retrieval online news social media represent natural data sources orm possible evaluate approaches using type data sources research work retrieval still 
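For the embedding-training study mentioned above, a minimal gensim sketch looks like the following. This is an assumed setup for illustration — the toy corpus, vector size, and other hyperparameters are not the study's settings:

```python
from gensim.models import Word2Vec

# tokenized tweets (in practice: millions of lines from the Twitter stream)
corpus = [["bom", "dia", "lisboa"],
          ["dia", "de", "jogo", "em", "lisboa"],
          ["bom", "jogo"]] * 50  # repeated so the toy corpus meets min_count

model = Word2Vec(sentences=corpus, vector_size=50, window=5,
                 min_count=5, sg=1, epochs=5, seed=0)

# intrinsic evaluation via cosine similarity -- note the caveat above:
# absolute cosine thresholds can mislead in high-dimensional spaces
print(model.wv.similarity("dia", "jogo"))
model.save("twitter_pt.w2v")  # publishing step: persist the model for reuse
```

The vocabulary-size and training-size tradeoffs discussed above correspond to the `min_count` cutoff and the number of sentences fed in; restricting to the most frequent words keeps the model small while retaining most intrinsic-evaluation performance.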
early stages believed necessary first contribute general retrieval leave future work specific evaluation context orm implemented created demo early fusion approach since unsupervised however possible apply erdm online news due lack training queries relevance judgments parameter tuning either cases aim conduct user experience near future collect queries relevance judgments context orm recent work deep neural networks makes opportunity beat baselines created thesis however tasks addressed enough labeled data use techniques one interesting avenues would like explore would use neural networks feature functions erdm model since dataset million entity relationship extractions represents ideal scenario deep learning propose use window based prediction task similar cbow model training word embeddings given fixed window size one would learn neural network would provide ranked list given input query believe approach would reduce limitations future work computational costs current erdm feature functions since would need keep two huge indexes query time would like also explore different priors entity relationship documents within erdm instance creating source time sensitive rankings would useful using transient information sources another promising avenue transfer learning specially due lack training resources context orm possibility bilingual training politics finance transfer knowledge would constitute major progress area references cees van riel charles fombrun essentials corporate communication implementing practices effective reputation management routledge mats atvesson organization substance image organization studies diana maynard kalina bontcheva dominic rout challenges developing opinion mining tools social media proceedings nlp tag usergeneratedcontent gianluca demartini claudiu firan tereza iofciu ralf krestel wolfgang nejdl finding entities wikipedia difficult sometimes information retrieval jeffrey pound peter mika hugo zaragoza object retrieval web data proceedings international conference world wide web pages acm charles fombrun cees van riel fame fortune successful companies build winning reputations press stacks practioner guide public relations research measurement evaluation business expert press krisztian balog fang maarten rijke pavel serdyukov luo expertise retrieval foundations information retrieval tom heath christian bizer linked data evolving web global data space synthesis lectures semantic web theory technology mohamed yahya denilson barbosa klaus berberich qiuyue wang gerhard weikum relationship queries extended knowledge graphs proceedings ninth acm international conference web search data mining pages acm anastasia giachanou fabio crestani like survey twitter sentiment analysis methods acm comput june issn doi references michela nardo marco naltsidis walking wall street tablet survey stock market predictions using web journal economic surveys jasmina miha nada martin streambased active learning sentiment analysis financial domain information sciences pedro saleiro eduarda mendes rodrigues carlos soares oliveira texrep text mining framework online reputation monitoring new generation doi pedro saleiro natasa eduarda mendes rodrigues carlos soares relink research framework test collection retrieval proceedings international acm sigir conference research development information retrieval shinjuku tokyo japan august pages doi pedro saleiro natasa eduarda mendes rodrigues carlos soares early fusion strategy retrieval proceedings first workshop knowledge graphs semantics text 
retrieval analysis international acm sigir conference research development information retrieval sigir shinjuku tokyo japan august pages pedro saleiro sarmento eduarda mendes rodrigues carlos soares oliveira learning word embeddings portuguese twitter stream study practical aspects progress artificial intelligence epia conference artificial intelligence epia porto portugal september proceedings pages doi pedro saleiro eduarda mendes rodrigues carlos soares oliveira feup task predicting sentiment polarity intensity financial word embeddings proceedings international workshop semantic evaluation pages association computational linguistics doi pedro saleiro carlos soares learning news predicting entity popularity twitter advances intelligent data analysis international symposium ida stockholm sweden october proceedings pages doi pedro saleiro jorge teixeira carlos soares oliveira timemachine search visualization news archives advances information retrieval european conference research ecir padua italy march proceedings pages doi pedro saleiro gomes carlos soares sentiment aggregate functions political opinion polling using microblog streams proceedings references ninth international conference computer science software engineering porto portugal july pages doi pedro saleiro silvio amir silva carlos soares popmine tracking political opinion web ieee international conference computer information technology cit ieee international conference ubiquitous computing communications iucc ieee international conference dependable autonomic secure computing dasc ieee international conference pervasive intelligence computing picom liverpool united kingdom october pages doi pedro saleiro luis rei arian pasquali carlos soares jorge teixeira pinto mohammad nozari zarmehri catarina pedro strecht popstar replab name ambiguity resolution twitter working notes clef conference valencia spain september theo poiesz image concept place consumer psychology journal economic psychology gary jones beth jones philip little reputation reservoir buffering loss times economic crisis corporate reputation review stephen newell ronald goldsmith development scale measure perceived corporate credibility journal business research charles fombrun reptrak system presented anniversary conference reputation image identity competitiveness pages kurniawati kurniawati graeme shanks nargiza bekmamedova business impact social media analytics ecis page matt kaufmann portmann madjid fathi concept semantics extraction web data induction fuzzy ontologies technology eit ieee international conference pages ieee edy portmann fora framework fuzzy grassroots ontology online reputation management springer science business media julio gonzalo monitoring reputation wild online west proceedings spanish conference information retrieval page acm carrillo albornoz chugur corujo gonzalo meij rijke spina overview replab evaluating online reputation monitoring systems clef references marija kristina zdravko dovedan academia care online reputation management monitoring mipro proceedings international convention pages ieee sina samangooei trevor cohn nicholas gibbins mahesan niranjan trendminer architecture real time analysis social media text icwsm ali khalili auer ngonga ngomo text analytics using linked data european semantic web conference pages springer pedro saleiro silvio amir silva carlos soares popmine tracking political opinion web computer information technology ubiquitous computing communications dependable autonomic secure computing pervasive 
intelligence computing ieee international conference pages ieee christopher manning prabhakar raghavan hinrich introduction information retrieval volume cambridge university press cambridge gerard salton automatic information organization retrieval karen sparck jones statistical interpretation term specificity application retrieval journal documentation stephen robertson steve walker susan jones micheline mike gatford okapi nist special publication fissaha adafre maarten rijke tjong kim sang entity retrieval recent advances natural language processing ranlp haiqiang chen huawei shen jin xiong songbo tan xueqi cheng social network structure behind mailing lists trec expert finding track trec national institute standards technology nist gianluca demartini tereza iofciu arjen vries overview inex entity ranking track focused retrieval evaluation pages springer krisztian balog pavel serdyukov arjen vries overview trec entity track technical report dtic document krisztian balog arjen vries pavel serdyukov wen first international workshop search eos acm sigir forum volume pages acm krisztian balog leif azzopardi maarten rijke formal models expert finding enterprise corpora proceedings annual international acm sigir conference research development information retrieval pages acm references leif azzopardi krisztian balog maarten rijke language modeling approaches enterprise tasks trec citeseer nick craswell arjen vries ian soboroff overview trec enterprise track trec volume pages zhao yuehua chen weiran jun guo trec enterprise track experiments bupt trec desislava petkova bruce croft document representation named entity retrieval proceedings sixteenth acm conference conference information knowledge management pages acm marc bron krisztian balog maarten rijke example based entity search web data european conference information retrieval pages springer nansu zong sungin lee kim discovering expansion entities entity search linked data journal information science nikita zhiltsov alexander kotov fedor nikolaev fielded sequential dependence model entity retrieval web data proceedings international acm sigir conference research development information retrieval pages acm jeffrey pound alexander hudek ihab ilyas grant weddell interpreting keyword queries web knowledge bases proceedings acm international conference information knowledge management pages acm christina unger lorenz jens lehmann ngonga ngomo daniel gerber philipp cimiano question answering rdf data proceedings international conference world wide web pages acm xiaonan chengkai cong queries wikipedia acm transactions intelligent systems technology tist michael schmitz robert bart stephen soderland oren etzioni open language learning information extraction proceedings joint conference empirical methods natural language processing computational natural language learning pages association computational linguistics jeffrey qin lijun chang keyword search databases synthesis lectures data management references shady elbassuoni maya ramanath ralf schenkel marcin sydow gerhard weikum ranking queries proceedings acm conference information knowledge management pages acm tao cheng xifeng yan kevin chang entityrank searching entities directly holistically proceedings international conference large data bases pages vldb endowment jack conrad mary hunter utt system discovering relationships feature extraction text databases sigir pages springer jason rennie tommi jaakkola using term informativeness named entity detection proceedings annual international acm 
sigir conference research development information retrieval pages acm donald metzler bruce croft markov random field model term dependencies proceedings annual international acm sigir conference research development information retrieval pages acm fei song bruce croft general language model information retrieval proceedings eighth international conference information knowledge management pages acm donald metzler bruce croft linear models information retrieval information retrieval samuel huston bruce croft comparison retrieval models using term dependencies proceedings acm international conference conference information knowledge management pages acm fedor nikolaev alexander kotov nikita zhiltsov parameterized fielded term dependence models entity retrieval knowledge graph proceedings international acm sigir conference research development information retrieval pages acm paul ogilvie jamie callan combining document representations knownitem search proceedings annual international acm sigir conference research development informaion retrieval pages acm faegheh hasibi krisztian balog svein erik bratsberg exploiting entity linking queries entity retrieval proceedings acm international conference theory information retrieval pages acm sayali kulkarni amit singh ganesh ramakrishnan soumen chakrabarti collective annotation wikipedia entities web text sigkdd acm references damiano spina enrique julio gonzalo filter keywords majority class strategies company name disambiguation twitter clef springer delgado munoz raquel unanue alberto fresno unsupervised company name disambiguation twitter icwsm workshop analysis mining social streams pages maria christoforaki ivie erunse cong searching social updates entities vlds pages viktor hangya farkas filtering polarity detection reputation management tweets clef working notes amparo elizabeth cano basave andrea varga matthew rowe milan stankovic dadzie making sense microposts concept extraction challenge leon derczynski diana maynard niraj aswani kalina bontcheva noise impact semantic annotation accuracy proceedings acm conference hypertext social media pages acm xiaohua liu yitong haocheng ming zhou furu wei entity linking tweets acl pages mark greenwood niraj aswani kalina bontcheva reputation profiling gate clef online working leon derczynski diana maynard giuseppe rizzo marieke van erp genevieve gorrell troncy johann petrak kalina bontcheva analysis named entity recognition linking tweets information processing management edgar meij wouter weerkamp maarten rijke adding semantics microblog posts proceedings fifth acm international conference web search data mining pages acm alexandre davis adriano veloso altigran silva wagner meira alberto laender named entity disambiguation streaming data acl long pages association computational linguistics wei shen jianyong wang ping luo min wang linking named entities tweets knowledge base via user interest modeling proceedings acm sigkdd international conference knowledge discovery data mining pages acm kwak park lee moon twitter social network news media www pages acm references habib van keulen twitterneed hybrid approach named entity extraction disambiguation tweet natural language engineering fabian suchanek gjergji kasneci gerhard weikum yago core semantic knowledge proceedings international conference world wide web pages acm paolo ferragina ugo scaiella tagme annotation short text fragments wikipedia entities proceedings acm international conference information knowledge management pages acm andrea moro 
alessandro raganato roberto navigli entity linking meets word sense disambiguation unified approach transactions association computational linguistics francesco piccinno paolo ferragina tagme wat new entity annotator proceedings first international workshop entity recognition disambiguation pages acm zhengyan shujie liu ming zhou longkai zhang houfeng wang learning entity representation entity disambiguation acl pages wei fang jianwen zhang dilin wang zheng chen ming entity disambiguation knowledge text jointly embedding conll page jose moreno romaric romain beaumont eva hondt ligozat sophie rosset xavier tannier brigitte grau combining word entity embeddings entity linking european semantic web conference pages springer bing liu sentiment analysis opinion mining synthesis lectures human language technologies sara rosenthal preslav nakov svetlana kiritchenko saif mohammad alan ritter veselin stoyanov task sentiment analysis twitter proceedings saif mohammad svetlana kiritchenko xiaodan zhu building sentiment analysis tweets semeva pages atlanta georgia usa association computational linguistics efthymios kouloumpis theresa wilson johanna moore twitter sentiment analysis good bad omg icwsm david bamman noah smith contextualized sarcasm detection twitter proceedings international conference web social media pages aaai menlo park references bing liu sentiment analysis subjectivity handbook natural language processing mike thelwall kevan buckley georgios paltoglou sentiment strength detection social web journal american society information science technology yoshua bengio deep learning representations looking forward statistical language speech processing pages springer tomas mikolov ilya sutskever kai chen greg corrado jeff dean distributed representations words phrases compositionality nips andrew maas raymond daly peter pham dan huang andrew christopher potts learning word vectors sentiment analysis proceedings annual meeting association computational linguistics human language pages association computational linguistics igor labutov hod lipson words acl pages yaming sun lei lin nan yang zhenzhou xiaolong wang radicalenhanced chinese character embedding neural information processing pages springer duyu tang furu wei nan yang ming zhou ting liu bing qin learning word embedding twitter sentiment classification acl pages gerard salton anita wong yang vector space model automatic indexing communications acm david blei andrew michael jordan latent dirichlet allocation journal machine learning research auer christian bizer georgi kobilarov jens lehmann richard cyganiak zachary ives dbpedia nucleus web open data springer scott deerwester susan dumais george furnas thomas landauer richard harshman indexing latent semantic analysis journal american society information science jeffrey pennington richard socher christopher manning glove global vectors word representation emnlp volume pages yoshua bengio ducharme pascal vincent christian jauvin neural probabilistic language model journal machine learning research feb references ronan collobert jason weston unified architecture natural language processing deep neural networks multitask learning proceedings international conference machine learning pages acm omer levy yoav goldberg neural word embedding implicit matrix factorization advances neural information processing systems pages tomas mikolov yih geoffrey zweig linguistic regularities continuous space word representations volume sanjeev arora yuanzhi yingyu liang tengyu andrej risteski latent 
variable model approach word embeddings arxiv preprint preslav nakov alan ritter sara rosenthal fabrizio sebastiani veselin stoyanov task sentiment analysis twitter proceedings semeval pages rodrigues branco steven neale silva distributional semantics models portuguese international conference computational processing portuguese language pages springer bandari huberman pulse news social media forecasting popularity icwsm yang patterns temporal variation online media wsdm pages acm weerkamp tsagkias rijke predicting volume comments online news stories cikm pages acm xiangnan ming gao kan yiqun liu kazunari sugiyama predicting popularity web items based user comments proceedings international acm sigir conference research development information retrieval pages acm swapna gottipati jing jiang finding thoughtful comments social media coling volume pages annie louis ani nenkova makes writing great first experiments article quality prediction science journalism domain transactions association computational linguistics carlos castillo mohammed pfeffer matt stempeck characterizing life cycle online news stories using social media reactions proceedings acm conference computer supported cooperative work social computing pages acm references riley crane didier sornette robust dynamic classes revealed measuring response function social system proceedings national academy sciences janette lehmann bruno ramasco ciro cattuto dynamical classes collective attention twitter proceedings international conference world wide web pages acm daniel romero brendan meeder jon kleinberg differences mechanics information diffusion across topics idioms political hashtags complex contagion twitter proceedings international conference world wide web pages acm mikalai tsytsarau themis palpanas malu castellanos dynamics news events social media reaction proceedings acm sigkdd international conference knowledge discovery data mining pages acm harold dwight lasswell comparative study symbols introduction number stanford university press maxwell mccombs donald shaw function mass media public opinion quarterly matthew moen ronald reagan social issues rhetorical support christian right social science journal daniel riffe alan freitag content analysis content analyses years journalism quarterly journalism mass communication quarterly kimberly neuendorf content analysis guidebook sage daniel hopkins gary king method automated nonparametric content analysis social science american journal political science justin grimmer brandon stewart text data promise pitfalls automatic content analysis methods political texts political analysis bermingham smeaton using twitter monitor political sentiment predict election results workshop international joint conference natural language processing ijcnlp november andranik tumasjan timm oliver sprenger philipp sandner isabell welpe predicting elections twitter characters reveal political sentiment icwsm micol nathanael chambers learning microblogs distant supervision political forecasting twitter proceedings conference european chapter association computational linguistics eacl association computational linguistics references pawel sobkowicz michael kaschesky guillaume bouchard opinion mining social media modeling simulating forecasting political opinions web government information quarterly social media government selections annual international conference digital government research avishay livne matthew simmons eytan adar lada adamic party structure content election icwsm andranik tumasjan 
timm sprenger philipp sandner isabell welpe predicting elections twitter characters reveal political sentiment proceedings fourth international aaai conference weblogs social media wanted predict elections twitter got lousy paper balanced survey election prediction using twitter data arxiv preprint brendan connor ramnath balasubramanyan bryan routledge noah smith tweets polls linking text sentiment public opinion time series proceedings international aaai conference weblogs social media jessica chung eni mustafaraj collective sentiment expressed twitter predict political elections proceedings aaai conference artificial intelligence san francisco usa panagiotis metaxas eni mustafaraj dani predict elections ieee third int conference privacy security risk trust ieee third int conference social computing october doi daniel gayo avello panagiotis metaxas eni mustafaraj limits electoral predictions using twitter proceedings international conference weblogs social media daniel electoral prediction twitter data social science computer review page pang lillian lee opinion mining sentiment analysis found trends inf efthymios kouloumpis theresa wilson johanna moore twitter sentiment analysis good bad omg proceedings international conference weblogs social media preslav nakov sara rosenthal zornitsa kozareva veselin stoyanov alan ritter theresa wilson task sentiment analysis twitter proceedings international workshop semantic evaluation semeval references johnson shukla shukla classifying political sentiment tweets nicholas diakopoulos david shamma characterizing debate performance via aggregated twitter sentiment proceedings sigchi conference human factors computing systems chi acm eric sanders antal van den bosch relating political party mentions twitter polls election results dir pages marko skoric nathaniel poor palakorn achananuparp lim jing jiang tweets votes study singapore general election system science hicss hawaii international conference pages ieee juan soler fernando cuartero manuel roblizo twitter tool predicting elections results proceedings international conference advances social networks analysis mining asonam pages ieee computer society erik tjong kim sang johan bos predicting dutch senate election results twitter proceedings workshop semantic analysis social media pages association computational linguistics chen wenbo wang amit sheth twitter users equal predicting elections study user groups predicting republican presidential primaries social informatics pages springer joseph digrazia karissa mckelvey johan bollen fabio rojas tweets votes social media quantitative indicator political behavior plos one colin fink nathan bos alexander perrone erwu liu jonathon kopecky twitter public opinion nigerian presidential election social computing socialcom international conference pages ieee manish gaurav amit srivastava anoop kumar scott miller leveraging candidate popularity twitter predict election outcome proceedings workshop social network mining analysis page acm nicholas thapen moustafa ghanem towards passive political opinion polling using twitter sma pages citeseer lei shi neeraj agarwal ankur agrawal rahul garg jacob spoelstra predicting primary elections twitter workshop social network social media analysis methods models applications michael jensen nick anstead psephological investigations tweets votes unknown unknowns republican nomination process policy internet references fabio franch wisdom crowds election prediction social media journal information technology politics nick 
beauchamp predicting interpolating polling using twitter textual data new directions analyzing text data workshop danish contractor tanveer afzal faruquie understanding election candidate approval ratings using social media data proceedings international conference world wide web companion pages international world wide web conferences steering committee vasileios lampos daniel trevor cohn model voting intention social media acl pages micol nathanael chambers learning microblogs distant supervision political forecasting twitter proceedings conference european chapter association computational linguistics pages association computational linguistics mohamed yahya klaus berberich shady elbassuoni maya ramanath volker tresp gerhard weikum natural language questions web data proceedings joint conference empirical methods natural language processing computational natural language learning pages association computational linguistics uma sawant soumen chakrabarti learning joint query interpretation response ranking proceedings international conference world wide web pages acm judea pearl bayesian networks model memory evidential reasoning proceedings conference cognitive science society pages shuo zhang krisztian balog design patterns object retrieval european conference information retrieval pages springer chandra sekhar bhagavatula thanapon noraset doug downey methods exploring mining tables wikipedia proceedings acm sigkdd workshop interactive data exploration analytics pages acm oliver lehmberg dominique ritze robert meusel christian bizer large public corpus web tables containing time context metadata proceedings international conference companion world wide web pages international world wide web conferences steering committee evgeniy gabrilovich michael ringgaard amarnag subramanya freebase annotation clueweb corpora krisztian balog robert neumayer test collection entity search dbpedia proceedings international acm sigir conference research development information retrieval pages acm references gustavo laboreiro sarmento jorge teixeira oliveira tokenizing messages using text classification approach proceedings fourth workshop analytics noisy unstructured text data pedro saleiro luis rei arian pasquali carlos soares jorge teixeira pinto mohammad nozari zarmehri catarina pedro strecht popstar replab name ambiguity resolution twitter clef working notes enrique julio gonzalo felisa verdejo general evaluation measure document organization tasks proceedings sigir july claudiu musat stefan impact valence shifters mining implicit economic opinions international conference artificial intelligence methodology systems applications springer marjan van kauter diane breesch hoste analysis explicit implicit sentiment financial news articles expert systems applications keith cortis andre freitas tobias daudert manuela huerlimann manel zarrouk brian davis task sentiment analysis financial microblogs news proceedings semeval tomas mikolov kai chen greg corrado jeffrey dean efficient estimation word representations vector space arxiv preprint gimpel schneider connor das mills eisenstein heilman yogatama flanigan noah smith tagging twitter annotation features experiments acl hlt short papersvolume andriy bodnaruk tim loughran bill mcdonald using text gauge financial constraints journal financial quantitative analysis theresa wilson janyce wiebe paul hoffmann recognizing contextual polarity sentiment analysis emnlp deriu gonzenbach uzdilli lucchi luca jaggi swisscheese task sentiment classification using 
ensemble convolutional neural networks distant supervision proceedings semeval reis olmo benevenuto kwak prates breaking news first impressions matter online news icwsm matko boanjak eduardo oliveira martins eduarda mendes rodrigues sarmento twitterecho distributed focused crawler support open research twitter data proceedings international conference companion world wide web pages acm references kohut keeter doherty dimock directors christian assessing representativeness public opinion surveys joao filgueiras silvio amir popstar replab polarity reputation classification fourth international conference clef initiative clef volume brendan connor ramnath balasubramanyan bryan routledge noah smith tweets polls linking text sentiment public opinion time series icwsm oliveira martins rodrigues sarmento twitterecho distributed focused crawler support open research twitter data acm gianluca demartini malik muhammad saad missen roi blanco hugo zaragoza taer time aware entity retrieval cikm toronto canada acm michael matthews pancho tolchinsky roi blanco jordi atserias peter mika hugo zaragoza searching time new york times humancomputer interaction information retrieval pages krisztian balog maarten rijke raymond franz hendrike peetz bart brinkman ivan johgi max hirschel sahara discovering associations online news iswc omar alonso klaus berberich srikanta bedathur gerhard weikum timebased exploration news archives hcir jorge teixeira luis sarmento eugenio oliveira creation reference news corpus scenarios cisti sarmento nunes jorge teixeira oliveira propagating topic labels news snippets carla abreu jorge teixeira oliveira encadear encadeamento linguistica informatica traducao mundos que cruzam oslo studies language pedro saleiro sarmento piaf adele classifying encyclopedic queries using automatically labeled training data oair jorge teixeira sarmento oliveira bootstrapping approach training ner conditional random fields progress artificial intelligence mathieu jacomy tommaso venturini sebastien heymann mathieu bastian continuous graph layout algorithm handy network visualization designed gephi software plos one references silvio amir miguel almeida bruno martins filgueiras mario silva tugas exploiting unlabelled data twitter sentiment analysis proceedings international workshop semantic evaluation semeval pages dublin ireland august association computational linguistics url http omer levy yoav goldberg ido dagan improving distributional similarity lessons learned word embeddings transactions association computational linguistics chollet keras https diederik kingma jimmy adam method stochastic optimization arxiv preprint georgiana dinu angeliki lazaridou marco baroni improving learning mitigating hubness problem arxiv preprint manaal faruqui yulia tsvetkov pushpendre rastogi chris dyer problems evaluation word embeddings using word similarity tasks acl page anna gladkova aleksandr drozd computing center intrinsic evaluations word embeddings better acl page
| 2 |
international journal computing business research ijcbr issn online volume issue september time efficient approach offline hand written character recognition using associative memory net tirtharaj dash final year student department information technology national institute science technology india abstract paper efficient offline hand written character recognition algorithm proposed based associative memory net amn amn used work basically auto associative implementation carried completely language make system perform best minimal computation time parallel algorithm also developed using api package openmp characters mainly english alphabets small capital collected system different persons characters collected system used train amn characters collected different persons used testing recognition ability net detailed analysis showed network recognizes hand written characters recognition rate average case however best case recognizes collected hand written characters developed network consumes sec average serial implementation sec average parallel implementation using openmp keywords offline hand written character associative memory net openmp serial parallel introduction recent years hand written character recognition challenging interesting research area field pattern recognition image processing impedovo mori contributes mainly interaction improves interface two pradeep international journal computing business research ijcbr issn online volume issue september human cognition methods viz face speech thumb print recognitions also great area research imtiaz fattah khurana singh kurian balakriahnan generally character recognition broadly characterized two types offline online offline method pattern captured image taken testing purpose case online approach point pattern function time pressure slant strokes etc methods best based application field yielding best accuracy minimal cost time crucial precondition pattern recognition system therefore hand written character recognition continuously broad area research work approach offline character recognition proposed using associative memory network amn fact make time efficient parallel algorithm developed implementation amn using openmp open multiprocessing amn neural network store patterns memories network tested key pattern corresponds producing one stored patterns closely resembles key pattern based testing pattern amn two types memory net memory net networks contains two layers input layer output layer case memory net input target pattern sivanandam deepa case memory net two patterns different work uses character tested stored character characters considered work english alphabets small capital letters paper organized follows section presented general introduction character recognition systems methods section gives brief literature review methods proposed character recognition section describes proposed methodology work section result discussion section gives detailed analysis work paper concluded section note future works literature review international journal computing business research ijcbr issn online volume issue september available literatures convey various algorithms techniques used order accomplish task character recognition studies described source literature google scholar scopus ieee library neural network backend character classification methods due faster reliable computation methods used front end could statistical approaches kernel methods support methods hybrid fuzzy logic controllers multilayer perceptron mlp used bangla 
alphabet recognition basu accuracy achieved work samples training testing respectively manivannan neil proposed demonstrated optical network architecture pattern recognition english alphabet used patterns training testing process pal singh proposed based english character recognition system work mlp one hidden layer used testing carried test performance design best case accuracy obtained work perwej chaturvedi worked english alphabet recognition using work binary pixels alphabets used train accuracy achieved found pal proposed modified quadratic classifier approach handwritten numerals six popular indian scripts high level recognition accuracy dinesh used horizontal vertical strokes end points feature handwritten numerals method reported accuracy rate best case however method used thinning method resulting loss features yanhua chuanjun recommended novel chinese character recognition algorithm based minimum distance classifier algorithm attempted work two classes feature statistics statistic feature decided primary class structure feature used identify chinese characters good method character recognition proposed huiqin work proposed distribution based algorithm based image segmentation international journal computing business research ijcbr issn online volume issue september distribution pixels deflection correction method adopted flexibility well reduction matching error work avoided burden extracting skeleton character method gave excellent result robust methodology methodology proposed demonstrated figure figure proposed methodology collection english alphabets small capital system persons hand written extraction pixels characters implementation auto amn training testing using serial parallel algorithms comparison results serial parallel processing respect time execution generation english alphabets english alphabets small capital designed system using paint version arial font bold bmp file format dimension bmp file bit depth alphabets given figure international journal computing business research ijcbr issn online volume issue september figure english alphabets system hand written english alphabets collected one different persons characters given figure figure english alphabets collected different persons extraction pixel characters pixels extracted character images bitmap files using standard image function matlab version function imread function extracts decimal values associated pixel pixels stored text file experiment purpose memory net implementation serial algorithm initialize weight set target pattern system pattern international journal computing business research ijcbr issn online volume issue september input handwritten pattern first layer amn calculate weight wij new old end end calculate net input output node wij yinj else end end parallel algorithm initialize weight set target pattern system pattern input handwritten pattern first layer amn pragma omp paralle shared yin chunk private tid pragma omp schedule static chunk calculate weight wij new old end end international journal computing business research ijcbr issn online volume issue september pragma omp schedule static chunk calculate net input output node wij yinj else end end end system specification computer system ram four processors used complete work operating system ubuntu linux however auto optimization compiler tag used compilation command results discussion contribution work detailed analysis recognition accuracy handwritten english alphabets total time computation noted serial parallel algorithm compare 
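To clarify the computation that the serial and parallel (OpenMP) procedures above perform — the paper's own implementation is in C — here is a NumPy re-sketch of the auto-associative training and recall steps, assuming bipolar (+1/−1) pixel patterns. The weight loop is exactly what the `pragma omp parallel` directives distribute across threads:

```python
import numpy as np

def train_amn(patterns):
    """Auto-associative memory net: accumulate the outer product of each
    stored bipolar pattern with itself (the w_ij update loop)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for s in patterns:
        W += np.outer(s, s)
    return W

def recall(W, x):
    """Compute the net input y_in_j = sum_i x_i w_ij and apply the
    bipolar threshold: y_j = +1 if y_in_j > 0, else -1."""
    return np.where(W.T @ x > 0, 1, -1)

# toy usage: store two 8-pixel "characters", recall from a noisy probe
a = np.array([1, -1, 1, -1, 1, -1, 1, -1])
b = np.array([1, 1, -1, -1, 1, 1, -1, -1])
W = train_amn(np.stack([a, b]))
probe = a.copy(); probe[0] = -1              # flip one pixel
print(np.array_equal(recall(W, probe), a))   # True: 'a' is recovered
```

In the parallel version, each thread handles a static chunk of the weight columns, which is safe because the updates to different w_ij entries are independent.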
decision making speed recognition accuracy table shows result testing developed amn set hand written characters noted network trained machine alphabets tested hand written alphabet however reliability issue hand written character checked times matching percentage average results table recognition accuracy amn offline hand written character recognition international journal computing business research ijcbr issn online volume issue september system alphabet hand written alphabet highest match achieved recognition accuracy time computation sec serial parallel international journal computing business research ijcbr issn online volume issue september viewed detailed analysis performance developed auto amn offline hand written english alphabet recognition network recognizes handwritten character highest matching however network recognize alphabets like alphabets recognized respectively matching error level matching alphabet plot given figure view level english alphabet matched amn alphabets recognized awarded matching international journal computing business research ijcbr issn online volume issue september level matching english alphabet figure plot shows level matching alphabet time efficiency already mentioned network developed two algorithms serial parallel good idea check timing variation cases plot given figure shows speed achieved execution parallel algorithm international journal computing business research ijcbr issn online volume issue september serial parallel decision making speed sec alphabet serial number figure decision making speed serial parallel algorithm conclusion paper offline english character recognition system proposed system developed using auto associative memory net make developed system faster reliable parallel algorithm developed tested successfully experimental study showed system recognizes characters average recognition rate character recognized highest accuracy rate average time required serial algorithm recognize character sec parallel algorithm takes sec average however automatic checking sequence character network play great role world character recognition author currently working issue references basu handwritten bangla alphabet recognition using mlp based classifier proceeding national conference computer processing bangla international journal computing business research ijcbr issn online volume issue september dinesh isolated handwritten kannada numeral recognition using structural feature cluster iisn huiqin research algorithm handwritten character recognition correcting assignment system proceeding international conference image graphics icig impedovo optical character recognition international journal pattern recognition artificial intelligence vol imtiaz fattah local dominant feature selection scheme face recognition international journal computing business research khurana singh model human cognition international journal computing business research kurian balakriahnan continuous speech recognition system malayalam language using plp cepstral coefficient journal computing business research manivannan neil optical network hybrid system many patterns recognition proceeding international symposium intelligent systems informatics sisy mori historical review ocr research development proceedings ieee vol pal handwritten numeral recognition six popular scripts international conference document analysis recognition vol pal singh handwritten english character recognition using neural network international journal computer science communication perwej 
chaturvedi neural networks handwritten english alphabet recognition international journal computer applications vol pradeep diagonal based feature extraction handwritten alphabet recognition system using neural network international journal computer science information technology sivanandam deepa principles soft computing publisher edition yanhua chuanjun recognition algorithm chinese character based minimum distance classifier proceeding international workshop computer science engineering
| 9 |
An Analysis of Θ-Estimators

Yiyuan She, Department of Statistics, Florida State University, Tallahassee

Abstract. In modern data analysis, first-order optimization methods are usually favored for obtaining sparse estimators in high dimensions. This paper performs a theoretical analysis of a class of iterative-thresholding-based estimators defined in this way. Oracle inequalities are built to show their nearly minimax rate optimality under a new type of regularity conditions. Moreover, the sequence of iterates is found to approach the statistical truth within the best statistical accuracy geometrically fast. The results also reveal the different benefits brought by convex and nonconvex types of shrinkage.

1. Introduction

Big data, naturally arising in machine learning, biology, signal processing and many other areas, call for scalable optimization and computation. Although Newton-type methods converge fast on smooth problems, their efficient implementations typically do not scale well to high-dimensional data. In contrast, first-order optimization methods have recently attracted a great deal of attention from researchers in statistics, computer science and engineering. They iterate based on the gradient or subgradient of the objective function; in high-dimensional statistics an iteration step typically proceeds in the following manner:

    β^(t+1) = P( β^(t) − ρ ∇l(β^(t)); λ ),

where P is an operator that is easy to compute, ∇l denotes the gradient of the loss l, and ρ gives the stepsize. This simple iterative procedure is suitable for large-scale optimization and converges in arbitrarily high dimensions provided ρ is properly small. From a statistical perspective, shrinkage (regularization) is necessary to achieve good accuracy when the dimensionality is moderate or high. For example, P can be the proximity operator (Parikh and Boyd) associated with a convex penalty function. But the problems of interest may not always be convex, and quite often P is taken to be a thresholding rule Θ popular in statistical learning, such as SCAD (Fan and Li). We call the resulting estimators Θ-estimators; they are fixed points of the iteration above. We study their behavior regardless of the sample size and dimensionality and establish oracle inequalities for them.

In the last decade, rigorous analyses have been performed for many estimators defined as globally optimal solutions of convex or nonconvex criteria; see Bunea et al., Zhang and Huang, Bickel et al., Lounici et al., Zhang, and Zhang, among many others. Θ-estimators pose some new questions. First, although a Θ-estimator is nicely associated with an optimization criterion constructed from Θ, the given objective may not be convex, and the estimator may not correspond to a (local or global) minimum. Second, there are various types of Θ-estimators due to the abundant choices of Θ, but a comparative study of their statistical performance in high dimensions is lacking in the literature. Third, such estimators are usually computed in an inexact way on big datasets — indeed, practitioners often terminate the computation before full convergence — which disconnects theory and practice. These questions about widely used iterative thresholdings motivate our work.

The rest of the paper is organized as follows. Section 2 introduces Θ-estimators, the associated iterative algorithm, and the necessary notation. Section 3 presents the main results, including oracle inequalities and a sequential analysis of the iterates generated by TISP. Section 4 provides the proof details.

2. Background and notation

2.1. Thresholding functions

Definition 1 (Thresholding function). A thresholding function Θ(t; λ) is a real-valued function defined for −∞ < t < ∞ and 0 ≤ λ < ∞ such that (i) Θ(−t; λ) = −Θ(t; λ); (ii) Θ(t; λ) ≤ Θ(t′; λ) for t ≤ t′; (iii) lim_{t→∞} Θ(t; λ) = ∞; and (iv) 0 ≤ Θ(t; λ) ≤ t for 0 ≤ t < ∞. The vector version of Θ, still denoted by Θ(·; λ), is defined componentwise, with either t or λ allowed to be replaced by a vector.

By the definition, Θ⁻¹(u; λ) := sup{ t : Θ(t; λ) ≤ u } must be monotonically nondecreasing, so its derivative is defined almost everywhere; given Θ, a critical number L_Θ is introduced through the essential infimum (ess inf) of this derivative over almost every u. Perhaps the most popular thresholding functions are the soft-thresholding rule Θ^S(t; λ) = sgn(t)(|t| − λ)₊ and the hard-thresholding rule Θ^H(t; λ) = t·1_{|t|>λ}, which mark the convex and nonconvex extremes, respectively. Arbitrarily given Θ, one can construct a penalty function as follows:

    P_Θ(θ; λ) = ∫₀^{|θ|} ( sup{ s : Θ(s; λ) ≤ u } − u ) du.

Such a penalty can be used to form a proper objective function. The threshold of Θ(·; λ) may not equal λ in general; for ease of notation, in writing Θ(t; λ) we always assume λ is the threshold parameter unless otherwise specified. An important fact is that any penalty constructed from a thresholding rule in this way is nonnegative and nondecreasing in |θ|, which follows from the properties in Definition 1. At the discontinuities of Θ, ambiguity may arise in this definition; to avoid the issue, we assume that the quantity being thresholded never corresponds to a discontinuity of Θ. The assumption is mild: practically used thresholding rules have few discontinuity points, and such discontinuities rarely occur in real applications.

2.2. Model setup

We assume the model y = Xβ* + ε, where X ∈ ℝ^{n×p} is the design matrix and y ∈ ℝⁿ is the
response vector unknown coefficient vector random vector mean zero scale bounded definition section detail driven computational procedure defined solution scaling parameter depend appropriately large crucial guarantee convergence computational procedure popularly used penalty functions associated thresholdings scad fan mcp zhang capped zhang elastic net zou hastie berhu owen name table lists examples shrinkage perspective thresholding rules usually suffice statistical learning equation terms scaled deign corresponding coefficient vector show scaled form adjust sample size advantageous regularization parameter tuning simple iterative procedure defined based called iterative selection procedure tisp theorem given arbitrary tisp ensures following descent property energy function objective function constructed penalty defined generally arbitrary function satisfying furthermore show limit point necessarily fixed point thus see detail therefore necessarily unique example penalties like capped associated mapping penalty functions thresholding functions iterating thresholding rule perhaps convenient solving nonconvex penalized optimization problem indeed penalties like scad designed thresholding viewpoint following theorem shows thatpthe set include locally optimal solutions theorem let local minimum point minimum point continuous must satisfy converse necessarily true namely may guarantee functional local optimality let alone global optimality raises difficulties statistical analysis give novel unified treatment yield nearly optimal error rate various thresholdings table examples thresholding functions associated quantities soft ridge hard elastic net berhu min capped scad mcp sgn sgn sgn sgn sgn max otherwise set singleton main results address problems arbitrary dimensions possibly large aim establish oracle inequalities donoho johnstone define recall convenience use denote ambiguity used similarly denote inequality holds multiplicative constant unless otherwise specified study scaled satisfying equation abuse notation still write mentioned previously always assume continuous sections similarly section assumes continuous past works lasso show certain incoherence requirement must assumed obtain sharp error rates theorems also need make similar assumptions prevent design matrix collinear state new type regularity conditions called comparison regularity conditions oracle inequalities sequential statistical error bounds obtained oracle inequalities subsection use make bound prediction error regularity condition stated follows ssumption given exist following inequality holds roughly means dominate help theorem let satisfying log constant sufficiently large following oracle inequality holds provided satisfied constants theorem applicable let examine two specific cases first consider indicates convex due concavity zhang zhang always satisfied corollary suppose satisfies holds corresponding without requiring regularity condition case scad thresholding depend magnitude get finite complexity rate oracle inequality also slightly relaxed replacing denote modified version corollary suppose corresponds bounded nonconvex penalty satisfying constant setting theorem log provided satisfied constants remark side oracle inequalities involves bias term complexity term letting say bias vanishes obtain prediction error bound order log omitting constant factors denotes number nonzero components hand existence bias term ensures applicability results approximately sparse signals example many small nonzero components use 
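Spelled out, the Θ-estimator equation and the TISP iteration described above take the following form. This is a hedged reconstruction: the exact placement of the scaling constant k₀ is assumed here, with the design rescaled so that ‖X/k₀‖₂ ≤ 1, which is what makes the descent property of the energy function hold:

```latex
% Theta-estimators: fixed points of the thresholding map on the scaled design
\hat{\beta} = \Theta\!\left( \hat{\beta} + \tilde{X}^{\mathsf{T}} ( y - \tilde{X}\hat{\beta} );\ \lambda \right),
\qquad \tilde{X} := X / k_0,\quad k_0 \ge \lVert X \rVert_2 .

% TISP iterates the same map from an initial point beta^(0):
\beta^{(t+1)} = \Theta\!\left( \beta^{(t)} + \tilde{X}^{\mathsf{T}} ( y - \tilde{X}\beta^{(t)} );\ \lambda \right),
\quad t = 0, 1, 2, \ldots
```

Under this reading, a larger k₀ plays the role of a smaller effective stepsize, which matches the repeated emphasis above that k₀ must be appropriately large to guarantee convergence of the computational procedure.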
reference much smaller support get lower error bound benefit tradeoff remark holds proof theorem shows multiplicative constant small corresponding oracle inequalities called sharp works koltchinskii also applies theorem proof scheme also deliver highprobability form results without requiring upper bound remark corollary applies like worth mentioning error rate log significantly improved minimax sense fact gaussian noise contamination regularity conditions exist constants inf cpo denotes arbitrary estimator log see lounici proof bound achieves minimax optimal rate mild logarithm factor oracle inequalities part uses instead make oracle bound show another type comparison regularity conditions thresholdings attain essentially optimal error rate given corollary also show case condition relaxed many assumptions literature ssumption given exist following inequality holds theorem let log sufficiently large constant holds satisfied constants remark fusion thresholdings like associated elastic net berhu table involve additional shrinkage situation complexity term oracle inequality involve modify regularity conditions obtain bounds using proof scheme details however reported paper addition results extended stepsize parameter given suppose introduced fixed point analogous result obtained change replaced give intuitive regularity conditions suppose concave examples include mcp scad concavity implies complement subvector indexed implied given ssumption given exist easy verify sufficient condition simper form following give definitions compatibility condition bickel van geer make comparison ssumption given say satisfies positive numbers restrictively satisfying assume holds holds trivially otherwise indicates intuitively following relationship particular less demanding next let compare regularity conditions required achieve nearly optimal error rate recall theorem corollary respectively implies indeed hand corollary studies optimal performance guarantee practically one may initialize carefully chosen starting point theorem given exists minimizes holds without requiring regularity condition particular corresponds bounded nonconvex penalty described corollary exists holds free regularity conditions theorem place requirement seems applying may advantages practice efficiently pick estimator completely remove regularity conditions however beyond scope current paper possible idea relaxing conditions see remark finally make discussion scaling parameter results far obtained performing prediction error invariant transformation affects regularity conditions seen related stepsize appearing also known learning rate machine learning literature computational results section must large enough guarantee tisp convergent larger value smaller stepsize slower convergence based machine learning literature slow learning rates always recommended training nonconvex learner artificial neural networks perhaps interestingly addition computational efficiency reasons statistical analyses caution using extremely large scaling example unscaled reads becomes difficult hold large makes statistical error bound break easily therefore good idea appropriately large mildly greater sequential analysis iterates next part also supports point sequential algorithmic analysis perform statistical error analysis sequence iterates defined tisp starting point study motivated fact applications seldom computed exactly indeed bother run tisp till computational convergence statistical accuracy improve deteriorate increases lately key advances topic 
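As a concrete companion to the sequential analysis of the iterates that follows, here is a minimal NumPy sketch of TISP with soft thresholding on a pre-scaled design. It is an illustration rather than the paper's code; the toy data, λ, and iteration count are arbitrary choices:

```python
import numpy as np

def soft(t, lam):
    """Soft-thresholding rule: sgn(t) * (|t| - lam)_+."""
    return np.sign(t) * np.maximum(np.abs(t) - lam, 0.0)

def tisp(X, y, lam, T=500):
    """TISP on a design pre-scaled so that ||X||_2 <= 1:
    beta <- Theta(beta + X^T (y - X beta); lam)."""
    beta = np.zeros(X.shape[1])
    for _ in range(T):
        beta = soft(beta + X.T @ (y - X @ beta), lam)
    return beta

# toy sparse regression: only the first 2 of 10 coefficients are nonzero
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
beta_star = np.zeros(10); beta_star[:2] = [3.0, -2.0]
y = X @ beta_star + 0.1 * rng.standard_normal(50)
k0 = np.linalg.norm(X, 2)             # spectral norm, so ||X/k0||_2 <= 1
beta_scaled = tisp(X / k0, y, lam=1.0)
print(np.round(beta_scaled / k0, 2))  # roughly 3 and -2 on the support;
                                      # soft thresholding adds shrinkage bias
```

The residual shrinkage bias visible on the support is exactly the bias term appearing in the oracle inequalities above, and stopping the loop early (small T) illustrates the inexact-computation regime that the sequential error bounds address.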
example agarwal showed convex problems necessarily strongly convex proximal gradient algorithms geometrically fast approach globally optimal solution within desired statistical precision set conditions however care statistical error genuine work introduce two comparison regularity conditions analogous present error bounds hereinafter denote positive matrix ssumption given exist following inequality holds ssumption given exist following inequality holds require bit respectively due theorem corollary perform sequential analysis iterates reveal explicit roles often treated constants theorem suppose satisfied log sufficiently large following error bound holds probability least universal positive constants similarly choice regularity parameter satisfied true probability least corollary setting theorem initial point tively probability least remark get sufficient conditions similar discussions made section strictly less relaxed proof section also gives results additional additive term upper bounds similar remark also study stepsize case weighting matrix changes factor replaced remark theorem still applies dependent example use varying threshold sequence becomes allows much larger values used earlier iterations attain accuracy relaxes regularity condition required applying fixed threshold level end results get intuition implications general unscaled reads set number slightly larger know prediction error decays geometrically fast log high probability viewed constants similar conclusion true estimation error simply due accordingly need run tisp till terminate algorithm earlier say tmax log without sacrificing much statistical accuracy formula also reflects quality initial point affects required iteration number related results literature mentioned previously broad convex setting agarwal proved geometric decay optimization error desired statistical precision convergent point loh wainwright extended conclusion family nononvex optimization problems showed regularity conditions hold every local minimum point close authentic comparison results derived toward statistical error directly without requiring local minimum points statistically accurate zhang showed similar statistical error bound elegant regularization procedure however procedure carries expensive optimization step instead involves simple cheap thresholding analysis covers acknowledgement author would like thank editor associated editor two anonymous referees careful comments useful suggestions improve quality paper author also appreciates florentina bunea encouragement work supported part nsf grant proofs throughout proofs use denote universal constants necessarily occurrence given matrix use denote column space denote orthogonal projection matrix onto stands moorepenrose pseudoinverse let given use denote column submatrix indexed definition called random variable exist constants scale defined inf exp called random vector scale bounded marginals satisfying examples include gaussian random variables bounded random variables bernoulli note assumption vec imply components must begin two basic facts special cases lemma lemma respectively state without proofs lemma given arbitrary thresholding rule let function satisr fying sup nonnegative always globally optimal solution unique optimal solution provided continuous lemma let denote unique minimizer proof theorem let assume local minimum point proof minimum point follows lines write simplicity let denote gateaux differential definition increment exists let consider following directional vectors xtj 
sgn due local optimality obtain sgn summarize achieves local minimum minimum generally local minimum sgn xtj xtj continuous xtj implies xtj hence must satisfying proofs theorem theorem given let vector first result constructs useful criterion basis lemma lemma lemma satisfies following inequality handle introduce another lemma lemma suppose let log exist universal constants constants following event sup occurs probability exp lemma plays important role bounding last stochastic term proof based following results lemma suppose exists globally optimal solution either lemma given define let log sup exp universal constants let given lemma starting value substituting gives exp know let set regularity condition implies choose satisfy combining last two inequalities gives last inequality due proof theorem follows lines proof theorem replaced replaced details omitted proof theorem proof lemma exists minimizes means term dropped following lines section holds modified version replaces using know design matrix satisfies proof theorem corollary let lemma let following triangle inequality holds letting lemma moreover combining last two inequalities gives let log define event complement given sup lemma exists universal constant clearly implies take get desired statistical accuracy bound bound similarly proved noticing holds corollary immediately true proofs lemmas proof lemma let define given expressed depends let satisfying based lemma lemma follows holds proof lemma let prove occurrence implies defined arg min lemma exists least one global minimizer states satisfying thus means sup suffices prove occurs high probability specifically exp given define let use lemma bound tail probability let log claim sup exp indeed last inequality due inequality follows lemma set write noticing basic facts due stirling log approximation iii log log get sup sup sup lpo exp exp log exp exp log exp exp last inequality due sum geometric series proof lemma similar proof lemma set construct let globally optimal solution gives second inequality due lemma therefore must also global minimizer definition demonstrates threshold gap desired proof lemma definition stochastic process increments induced metric euclidean bound metric entropy log smallest cardinality covers notice jdimensional number balls pxj denotes unit ball standard volume argument see vershynin log log log log universal constant conclusion follows dudley integral bound talagrand proof lemma use notation proof lemma defined lemma lemma obtain namely cancel term give two inequalities based secondorder bounds adding three inequalities together gives triangle inequality references agarwal negahban wainwright fast global convergence gradient methods statistical recovery ann bickel ritov tsybakov simultaneous analysis lasso dantzig selector annals statistics pages bunea tsybakov wegkamp sparsity oracle inequalities lasso electronic journal statistics donoho johnstone ideal spatial adaptation via wavelet shrinkages biometrika fan variable selection via nonconcave penalized likelihood oracle properties journal american statistical association stationary sparse causality network learning mach learn koltchinskii lounici tsybakov penalization optimal rates noisy matrix completion ann loh wainwright regularized nonconvexity statistical algorithmic theory local optima mach learn lounici pontil tsybakov van geer oracle inequalities optimal inference group sparsity annals statistics owen robust hybrid lasso ridge regression prediction discovery contemporary mathematics parikh 
boyd proximal algorithms foundations trends optimization iterative selection procedures model selection shrinkage electronic journal statistics iterative algorithm fitting nonconvex penalized generalized linear models grouped predictors computational statistics data analysis selective factor extraction high dimensions arxiv preprint talagrand generic chaining upper lower bounds stochastic processes springer monographs mathematics springer van geer conditions used prove oracle results lasso electronic journal statistics vershynin introduction analysis random matrices compressed sensing zhang nearly unbiased variable selection minimax concave penalty ann zhang huang sparsity bias lasso selection linear regression ann statist zhang zhang general theory concave regularization high dimensional sparse estimation problems statist zhang analysis convex relaxation sparse regularization mach learn zou hastie regularization variable selection via elastic net jrssb
| 10 |
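For the thresholding-based estimation procedure in the row above (a TISP-style iteration whose optimization error decays geometrically down to the statistical precision), a minimal sketch may help fix ideas. This is an illustration, not the paper's implementation: the function names and defaults are hypothetical, soft-thresholding stands in for the paper's arbitrary thresholding rule Theta(.; lambda), and the spectral-norm scaling k0 is one simple way to make the gradient step non-expansive.

```python
import numpy as np

def soft_threshold(z, lam):
    # One concrete instance of an arbitrary thresholding rule Theta(.; lam);
    # hard or quantile thresholding could be substituted without changing the loop.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def tisp(X, y, lam, k0=None, t_max=200, tol=1e-10):
    """Iterate beta <- Theta(beta + X^T (y - X beta) / k0; lam).
    Hypothetical signature; lam is the threshold level itself, not a penalty weight."""
    _, p = X.shape
    if k0 is None:
        k0 = np.linalg.norm(X, 2) ** 2   # squared spectral norm of the design
    beta = np.zeros(p)
    for _ in range(t_max):
        beta_next = soft_threshold(beta + X.T @ (y - X @ beta) / k0, lam)
        if np.linalg.norm(beta_next - beta) <= tol:
            break
        beta = beta_next
    return beta
```

Consistent with the row's discussion, such an iteration can be stopped after on the order of log(1/epsilon) steps without sacrificing much statistical accuracy, since the remaining optimization error is soon dominated by the statistical error.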
sep compatible actions tensor products valeriy bardakov mikhail neshchadim abstract pair groups study pairs actions pairs compatible tensor products defined introduction brown loday introduced tensor product pair groups following works miller lue investigation tensor product group theoretical point view started paper brown johnson robertson tensor product depends groups also action action moreover actions must compatible see definition section present paper study following question actions compatible paper organized follows section recall definition tensor product formulate properties give answer question thomas proving nilpotent group group derivative subgroup equal section study following question let group acts group automorphisms possible define action pair actions compatible necessary conditions compatibility actions given cases prove formula second action first one given section construct pairs compatible actions arbitrary groups nilpotent groups give particular answer question section section study groups form describe compatible actions preliminaries article use following notations elements group conjugation commutator write derived subgroup gab abelianized group second hypercenter subgroup date march mathematics subject classification primary secondary key words phrases tensor product compatible action nilpotent group bardakov neshchadim center group recall definition tensor product groups see defined pair groups one acts right conjugation way situation say act compatibly tensor product group generated symbols subject relations particular conjugation action group compatible tensor square group may always defined also tensor product defined two normal subgroups group actions conjugations following proposition well known give proof fullness proposition let abelian groups independently action group abelian see proposition let arbitrary groups actions trivial group gab abelian tensor product proof equality action commutator conjugation abelian analogously hence abelian previous formula triviality actions analogously hence abelian remind presentation tensor product central extension see derivative subgroup called following subgroup map defined homomorphism kernel ker central subgroup acts rule exists short exact sequence compatible actions tensor products case viewed via conjugation action induced setting following proposition gives answer following question nonabelian tensor product thomas formulated letter authors proposition let free nilpotent group rank aut automorphism group proof let free group rank basis free nilpotent group let acts trivially elements act automorphisms easy see actions compatible let show case let prove lies take aut acts generators rules hence generator lies analogously lie completes proof actions compatible section study question let group acts group automorphisms possible define action pair actions compatible consider examples example let take dependence actions three cases action action trivial second part proposition abelian tensor product let acts action trivial difficult check act compatibly find calculate hence definition generated elements using defining relations bardakov neshchadim find side hence case result let acts acts case act indeed hence equality hold let groups actions defined homomorphisms aut aut definition actions compatible case say pair compatible rewrite equalities form inner automorphism induced conjugation analogously inner automorphism induced conjugation compatible actions tensor products theorem pair defines compatible actions 
following inclusions hold naut inn naut inn inn inn subgroups inner automorphisms aut embedding naut inn defining aut formula get compatible actions proof first claim immediately follows relations prove second claim enough check equivalent equality using definition rewrite left side rewrite right side using homomorphism last equality used formula hence equality holds question inclusions naut inn naut inn sufficient compatibility pare bardakov neshchadim compatible actions nilpotent groups first recall following definition definition let groups normal subgroups say comparable respect pare homomorphisms mod mod note mutually inverse isomorphisms following theorem holds theorem let groups exist homomorphisms mod mod action action rules compatible following equalities hold proof let prove following relation holds denote left hand side relation transform since commutator lies center hence denote right hand side relation transform see first relation definition compatible action holds checking second relation similar compatible actions tensor products theorem particular answer question nilpotent groups corollary nilpotent groups pare homomorphisms define compatible action problem let free nilpotent groups corollary pair homomorphisms hom hom defines tensor product give classification groups note arbitrary groups corollary hold indeed let free groups rank define homomorphisms rules conditions compatible actions hold tensor products note group aut trivial hence group acts trivially section devoted answer following question question let group aut automorphism order let aut conditions pare compatible aut trivial automorphism second part proposition gab abelian tensor product general case proposition let group cyclic group order two generator aut homomorphism aut trivial homomorphism pare actions compatible holds central element particular center trivial gab bardakov neshchadim proof since inn normalizes every holds using equality arbitrary element get since arbitrary element central element applying equality arbitrary abelian group know following proposition analog property tensor product proposition let abelian group cyclic group order acts elements following manner tensor product defined isomorphism proof difficult check defined actions compatible since acts trivially abelian defining relations tensor product form relations give one relation follows since set relations full system relations exists natural isomorphism defined formular acknowledgement authors gratefully acknowledge support also thank ivanov lavrenov thomas interesting discussions useful suggestions compatible actions tensor products references brown loday excision homotopique basse dimension acad sci paris ser math brown loday van kampen theorems diagrams spaces topology appendix zisman brown johnson robertson computations tensor products groups algebra donadze larda thomas tensor product bogomolov multiplier preprint lue ganea map nilpotent groups london math soc miller second homology group group relations among commutators proceedings ams sobolev institute mathematics novosibirsk russia novosibirsk state university novosibirsk russia novosibirsk state agrarian university dobrolyubova street novosibirsk russia address bardakov sobolev institute mathematics novosibirsk state university novosibirsk russia address neshch
| 4 |
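In the compatible-actions row above, the displayed equalities defining compatibility and the defining relations of the tensor product did not survive extraction. A reconstruction from the standard Brown-Loday conventions, which the paper follows, reads:

```latex
% Mutual actions of G on H and H on G are compatible if, for all g,g' in G and h,h' in H:
{}^{\left({}^{g}h\right)}g' \;=\; {}^{g}\!\left({}^{h}\!\left({}^{g^{-1}}g'\right)\right),
\qquad
{}^{\left({}^{h}g\right)}h' \;=\; {}^{h}\!\left({}^{g}\!\left({}^{h^{-1}}h'\right)\right).

% The tensor product G \otimes H is then generated by symbols g \otimes h subject to
gg'\otimes h \;=\; \left({}^{g}g'\otimes {}^{g}h\right)(g\otimes h),
\qquad
g\otimes hh' \;=\; (g\otimes h)\left({}^{h}g\otimes {}^{h}h'\right).
```

With both actions taken to be conjugation inside a common group, these compatibility conditions hold automatically, which is why the tensor square of a group may always be defined, as noted in the row above.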
spanning tree congestion computation generalized partition sunil chandran feb yun kuen cheung davis department computer science automation indian institute science india sunil max planck institute informatics saarland informatics campus germany ycheung dissac abstract study natural problem graph sparsification spanning tree congestion stc problem informally stc problem seeks spanning tree routing many original edges root problem dates back least years ago motivated applications network design parallel computing circuit design variants problem also seen algorithmic applications preprocessing step several important graph algorithms general connected graph vertices edges show stc asymptotically optimal since also demonstrate graphs stc least present time algorithm computes spanning tree congestion log also present another algorithm computing spanning tree congestion algorithm runs time log achieving results important intermediate theorem generalized theorem chen gave proof give first elementary constructive proof providing local search algorithm running time key ingredient time algorithm discuss consequences theorem concerning graph partitioning might independent interest also show graph satisfies certain expanding properties stc corresponding spanning tree computed polynomial time use show random graph stc high probability introduction graph generally describes transformation large input graph graph preserves certain feature work done author visiting max planck institute informatics germany supported alexander von humboldt fellowship part work done author visitor courant institute nyu visit funded part new york university sunil chandran yun kuen cheung davis issac distance cut congestion flow either exactly approximately algorithmic value clear since smaller graph might used preprocessed input algorithm reduce subsequent running time memory requirement paper study natural problem graph sparsification spanning tree congestion stc problem informally stc problem seeks spanning tree routing many original edges problem network design applications designers aim build sparse networks meet traffic demands ensuring connection edge congested indeed root problem dates back least years ago name load factor natural motivations parallel computing circuit design applications stc problem formally defined ostrovskii since number results presented probabilistic version stc problem coined probabilistic capacity mapping also finds applications several important graph algorithm problems problem two canonical goals graph sparsification problems understand sparsity output graph well feature preserved devise efficient algorithms computing sparser graph also goals stc problem focus two scenarios general connected graphs vertices edges graphs exhibit certain expanding properties show spanning tree congestion stc factor better trivial bound present algorithm computes spanning tree congestion log also present another algorithm computing spanning tree congestion algorithm runs time almost ranges average degree also demonstrate graphs stc least show expanding properties permit devise polynomialtime algorithm computes spanning tree congestion using result together separate argument show random graph stc high probability achieving results important intermediate theorem generalized theorem first proved chen proof uses advanced techniques topology homology theory definition graph connected theorem theorems let graph let weight function let given distinct terminal vertices positive brevity say henceforth spanning tree 
congestion integers exists one main contributions give first elementary constructive proof providing local search algorithm running time theorem algorithm given graph computes satisfying conditions stated theorem time need instead input graph remains assumed algorithm running time improves log make three remarks first log algorithm algorithm computing spanning tree congestion second since theorem guarantees existence partition problem computing partition decision problem search problem local search algorithm shows problem complexity class pls raise completeness pls open problem third running times depend weights stc problem related problems results given connected graph let spanning tree edge detour respect unique path let denote set edges detour stretch respect length detour dilation edge number edges whose detours contain congestion cong spanning tree congestion stc graph stc mint cong runs spanning trees note equivalent definition use proofs removing results two connected components let denote one components various types congestion stretch dilation problems studied computer science discrete mathematics problems one typically seeks spanning tree structure minimum congestion dilation mention problems minimization done spanning trees given graph low stretch spanning tree lsst problem find spanning tree minimizes total stretch edges easy see minimizing total stretch equivalent minimizing total selected spanning tree stc problem find spanning tree minimum congestion notation hides polynomial factors input size sunil chandran yun kuen cheung davis issac tree spanner problem find spanning tree minimum dilation general spanner problem find sparser subgraph minimum distortion congestion dilation problems seek spanning tree structure famous among bandwidth problem cutwidth problem see survey details among problems mentioned several strong results published connection lsst problem alon shown lower bound max log upper bounds derived many efficient algorithms devised current best upper bound log since total stretch identical total best upper bound lsst problem automatically implies log upper bound average stc problem concern maximum shall see graphs maximum factor larger average comparison many strong general results stc problem though studied extensively past years problem formally proposed ostrovskii prior simonson studied parameter different name approximate cut width graph number results presented topic complexity results also presented recently results concern special classes graphs general result regarding stc general graphs upper bound rautenbach regen matching lower bound ostrovskii note upper bound interesting graph sparse since also trivial upper bound paper come strong improvement bounds years theorem informal connected graph vertices edges spanning tree congestion terms average degree davg state upper bound davg matching lower bound proof achieving upper bound constructive runs exponential time general graphs edges runs time using algorithm chen computing confluent flow splittable flow improve running time polynomial slightly worse upper bound guarantee log motivated open problem raised ostrovskii concerning stc random graphs formulate set expanding properties prove graph satisfying properties stc devise polynomial time algorithm computing spanning tree congestion graphs result together separate argument permit show random graph log spanning tree congestion small constant stc high probability thus resolving open problem raised ostrovskii completely graph partitioning generalized theorem 
looks clear powerful theorem make impact graph partitioning discuss number consequences might wider interest graph prominent topic graph wide range popular goal partition vertices sets number edges across different sets small objective minimizing total number edges across different sets widely studied various applications natural objective objective minimizing maximum number edges leaving set objective focus depending applications additional constraints sets partition two natural constraints balancedness sets approximately balanced sizes set induces connected subgraph balancedness constraint appears application domain decomposition parallel computing constraint motivated algorithms spanning tree construction imposing constraints simultaneously feasible every graph instance consider star graph vertices one wants thus natural ask graphs partitions satisfying constraints exist theorem implies simple sufficient condition existence partitions setting weight vertex degree using elementary fact maximum degree graph vertices edges proposition graph edges exists total degree vertices part consequently objective also due expander graphs bound optimal small constant factor proposition together lemma implies following crucial lemma achieving results lemma let graph edges stc proposition generalized include approximate balancedness terms number vertices setting weight vertex plus degree proposition given fixed graph edges vertices exists total note stc problem relevant connected graphs since threshold function graph connectivity logn result applies almost relevant range values sunil chandran yun kuen cheung davis issac degree vertices part number vertices part related work concerning stc problem okamoto gave algorithm computing exact stc graph probabilistic version stc problem coined probabilistic capacity mapping important tool several graph algorithm problems problem showed probabilistic setting distance capacity interchangeable briefly says general upper bound one objective implies general upper bound thus due results lsst upper bound log maximum average congestion result also implies log approximation algorithm problem improving upon approximation algorithm feige krauthgamer however deterministic setting interchanging phenomenon hold simple tight bound dilation congestion high precise definitions background key results concepts discussed recommend writing andersen feige graph prominent research topic wide applications comes surprise lot work done various aspects topic refer readers two extensive surveys schaeffer teng kiwi spielman teng formulated problem gave bounds classes graphs small separators improved steurer algorithmic side many related problems focus devising approximation algorithms sparkled seminal work arora rao vazirani sparsest cut spielman teng local clustering graph algorithms various constraints attracted attention across theory practice refer readers fairly recent account development objective extensively studied objective striking natural objective applications received much less attention algorithmic work objective variants svitkina tardos bansal none work addresses constraint classical version theorem vertex weights uniform proved independently proof uses homology theory proof elementary constructive implicitly analyze running time polynomial time algorithms constructing devised algorithm known general graphs recently hoyer thomas provided clean presentation proof paper journal computer science technology claimed algorithm however according recent study algorithm fall endless 
loop also said algorithm wrong see spanning tree congestion introducing terminology use constructive proof theorem notation given graph edge set disjoint vertex subsets let technical overview prove generalized theorem constructively follow framework proof borrow terminology recent presentation hoyer thomas emphasized proving generalized theorem since proof stage single vertex moved one set make progress making sure former set remains connected setting addition also ensure weights partitions exceed specified limit hence vertex moved one set another need candidate transferred proof presented section discussed crucial ingredient upper bound results lemma direct corollary generalized theorem lemma takes care cases cases provide recursive way construct low congestion spanning tree see section details showing lower bound general graphs challenge maintain high congestion keeping density small achieve combine three expander graphs little overlapping make overlapped vertices high degree force adjacent centroid spanning tree high congestion see section details formulate set expanding properties permit constructing spanning tree better congestion guarantee polynomial time basic idea simple start vertex high degree root try grow tree keep attaching new vertices keeping invariant subtrees rooted neighbours roughly balanced size subtree called branch trying grow tree balanced way soon realize tree grow remaining vertices may seen adjacent number heavy branches help balanced growth algorithm identify transferable vertex heavy branch descendants tree transferred lighter branch another technique use multiple rounds matching vertices tree remaining vertices attach new vertices tree tend make sure subtrees grow uncontrolled showing random graph satisfies expanding properties appropriate parameters show random graph stc high probability generalized theorem prove theorem section observe classical theorem follows theorem taking sunil chandran yun kuen cheung davis issac note perfect generalization one requires possible think vertex weights even integers odd let graph vertices edges weight function subset let wmax key combinatorial notions first highlight key combinatorial notions used proving theorem see figures illustrations notions fitted partial partition first introduce notion fitted partial partition fpp fpp tuple subsets subsets pairwise disjoint connected wmax say set fitted satisfying inequality say fpp strict fitted partial partition sfpp proper subset say set light say heavy otherwise note exists least one light set sfpp otherwise means also note taking fpp hence least one fpp exists configuration set fpp vertex define reservoir respect denoted vertices connected component note heavy set sequence vertices called cascade cascade called null cascade cascade empty note light set need define cascade since use proof see figure configuration defined pair fpp set cascades consists exactly one cascade possibly null cascade heavy set vertex cascade configuration called cascade vertex given configuration define rank level inductively follows vertex light set said level cascade vertex said rank edge vertex edge vertex vertex said level cascade vertex cascade vertex rank less vertex cascade vertex said level configuration called valid configuration heavy set rank defined cascade vertices rank strictly increasing spanning tree congestion fig given configuration heavy set figure shows cascade heavy set several reservoirs cascade vertices cascade vertex disconnected removal lead least two connected components 
connected component containing reservoir identify clarify terminal vertex never cascade epoch also epoch subset vertices connected note general possible vertex last cascade vertex cascade cascade rank rank note taking taking null cascade heavy set case heavy get valid configuration see figure configuration vectors total ordering vertex define neighborhood level smallest level vertex adjacent vertex level said satisfy maximality property vertex adjacent either cascade vertex level one terminals valid configuration called configuration vertices level satisfy maximality property note definition valid configuration configuration configuration define edge said bridge level sunil chandran yun kuen cheung davis issac rank rank rank rank rank rank vertices light sets level fig instance valid configuration every blue represent edge cascade vertex vertex reservoir light set every cascade vertex connected light set rank vertices epoch immediately rank cascade vertex level inductively every cascade vertex connected vertex level rank vertices epoch immediately rank cascade vertex level vertices last cascade vertex cascade level valid configuration said highest rank cascade vertex exactly cascade vertices take highest rank bridges level note taking taking null cascade heavy set gives configuration configuration define configuration vector nan number light sets total number vertices next define ordering configuration vectors let configurations say spanning tree congestion say say say say say say strictly better proof theorem use two technical lemmas configuration vectors orderings prove theorem proof theorem follows closely proof theorem makes use observation rank vertex local search algorithm give improved bound number configuration vectors navigated algorithm lemma given configuration bridge find configuration polynomial time proof since vertex level satisfies maximality property satisfying need worry vertices level let set vertices adjacent vertex level level highest rank cascade vertex cascade vertex rank claim exists least one empty case exhibit cut set size heavy set cascade let highest ranked cascade vertex heavy set null cascade let let set heavy set note sfpp hence least one light set let set vertices level remaining vertices since sfpp since vertices level empty exists least one light set vertices light set level show edge suppose exists edge bridge contradiction assumption bridge hence note heavy set otherwise level cascade vertex cascade vertices level level also level otherwise assumed empty level level sunil chandran yun kuen cheung davis issac edge means contradiction thus exists least one empty least one vertex give configuration follows set heavy set take cascade cascade appended heavy set take cascade cascade easy see vertex edge vertices either rank cascade vertex vertex also notice new cascade vertices introduce rank least one rank cascade vertex empty since bridges bridges vertex level hence vertices level retained levels least one vertex became vertex cascade vertex rank becomes vertex least one set since vertices means lemma given configuration bridge find polynomial time valid configuration one following holds configuration bridge level configuration proof let bridge let set containing note level keep modify get described maintain heavy set also heavy set hence maintain case light set take heavy set cascade taken null cascade light set wmax hence fitted also connected hence fpp either became heavy set case light set case easy see case heavy set case wmax take heavy set also heavy 
set cascade taken cascade clearly connected fitted assumption case hence indeed fpp observe vertices level still level since level also level level hence also easy see remains case wmax let cascade vertex rank note cascade vertex level let spanning tree congestion set vertices level initialize delete vertices one one specific order becomes fitted choose order deleting vertices remains connected consider spanning tree least one leaf delete leaf repeat process single vertex becomes fitted fitted even single vertex delete still fitted delete note point hence fitted also note remains connected hence fpp become light set became fitted last vertex deleted vertex deleted fitted hence weight least wmax deletion since last vertex deleted weight wmax weight least hence heavy set branch two subcases defining cascades case deleted process heavy set cascade taken cascade since new level vertex added vertices level retain level also easy see remains case deleted heavy set cascade taken cascade rank cascade vertex deleted vertices level smaller retain levels observe bridges vertices level vertices level still maintain maximality property introduce cascade vertices hence remains prove bridge level know since rank cascade vertex edge level observe level well hence taking completes proof proof theorem always maintain configuration fpp sfpp point done assume sfpp start configuration cascades heavy sets null cascades current configuration configuration bridge use lemma get configuration take new current configuration current configuration configuration bridge get configuration repeatedly applying lemma times either case get strictly better configuration polynomial time call iteration algorithm notice number iterations possible number distinct configuration vectors possible easy see distinct configuration vectors highest rank since rank sunil chandran yun kuen cheung davis issac number iterations algorithm since iteration runs polynomial time guaranteed two lemmas required running time algorithm terminates fpp given current configuration sfpp gives required partition proof theorem since graph also connected algorithm give required partition due theorem need prove better running time claimed theorem show highest rank attained vertex algorithm since number distinct uration vectors highest rank log running time claimed hence remains prove highest rank observe configuration union vertices level set terminals together forms cutset since graph means number vertices level least required bound rank easily follows upper bounds spanning tree congestion first state following easy lemma together proposition implies lemma lemma graph let vertex let neighbours suppose exists sum degree vertices let arbitrary spanning tree let edge let spanning tree defined congestion theorem connected graph algorithm log computes spanning tree congestion time theorem connected graph polynomial time algorithm computes spanning tree congestion log two algorithms follow framework depicted algorithm recursive algorithm parameter global parameter number edges input graph first level recursion let denote number vertices graph difference two algorithms line step executed running time step guarantee proving theorem use theorem spanning tree congestion proposition yielding log proving theorem make use algorithm chen yields log poly algorithm findlcst input connected graph vertices edges output spanning tree return arbitrary spanning tree end global minimum vertex cut smallest connected component see figure findlcst findlcst connected global min cut 
return arbitrary edge else arbitrary vertex pick neighbours graph denote let denote edge see figure compute denoted total degree graph vertices let time needed arbitrary spanning tree return end rest section first discuss algorithm chen prove theorem proof theorem almost identical deferred appendix confluent flow algorithm chen confluent flow problem input includes graph demand function sinks flow amount routed one sinks restriction every vertex outgoing flow must leave edge outgoing flow unsplittable problem seek flow satisfying demands minimizes node congestion maximum sunil chandran yun kuen cheung davis issac fig scenario algorithm graph low connectivity vertex set global minimum vertex cut graph vertex set smallest connected component removal union connected components fig scenario algorithm graph high connectivity incoming flow among vertices since incoming flow maximum one sinks equivalent minimize maximum flow received among sinks assume flow entering sink leave splittable flow problem almost identical confluent flow problem except restriction dropped outgoing flow split along multiple edges note maximum incoming flow might sink known splittable flow solved polynomial time brevity drop phrase theorem section suppose given graph demand sinks splittable flow node congestion exists spanning tree congestion polynomial time algorithm computes confluent flow node congestion input corollary let graph edges vertices exists polynomial time algorithm computes total degrees vertices corollary follows theorem proposition see appendix details congestion analysis view whole recursion process recursion tree endless loop since every path recursion tree number vertices input graphs strictly decreasing hand note leaf recursion tree pis resulted either input graph call satisfies lines executed internal node appears input graph low makes two recursion calls prove following statement induction graph input call thep recursion tree returned spanning tree call congestion log first handle two basis cases case findlcst returns arbitrary spanning tree congestion bounded case lemma pfindlcst returns tree congestion log log next let input graph call represented internal node recursion tree recall definitions algorithm let note induction hypothesis congestion returned spanning tree max congestion congestion log viewing real variable taking derivative easy see expression maximized thus congestion log log desired theorem runtime analysis every internal node recursion tree algorithm makes two recursive calls two strictly smaller vertex size inputs dominating knitting cost line computing global minimum vertex cut done polynomial time since every leaf recursion tree running time polynomial standard analysis algorithms running time whole algorithm polynomial completes proof theorem sunil chandran yun kuen cheung davis issac lower bound spanning tree congestion give lower bound spanning tree congestion matches upper bound theorem sufficiently large satisfying max log exists connected graph vertices edges spanning tree congestion least start following lemma states random graph sufficiently large edge expansion high probability proof lemma uses fairly standard arguments deferred appendix lemma integer logn let denote random graph vertices edge occurs independently probability probability least random graph connected number edges random graph iii subset vertices number edges leaving least particular sufficiently large integer log setting exists connected graph vertices edges subset vertices number edges leaving least 
denote graph discuss construction see figure delving proof vertex set union three vertex subsets disjoint embed edge sets denoted respectively point construction similar ostrovskii except use instead complete graph new component construction adding following edges vertex add edge every vertex set edges denoted similarly vertex add edge every vertex set edges denoted new component crucial without could prove lower bound proof theorem let graph constructed whole graph vertices number edges least due edges sufficiently large well known tree vertices exists vertex called centroid tree removing decomposes tree connected components size consider spanning tree spanning tree congestion fig construction spanning tree congestion three vertex subsets size subsets embed expander small overlap disjoint vertex add edges vertex similarly vertex add edges shown figure vertex given graph let centroid tree without loss generality assume otherwise swap roles removal adjacent edges tree decomposes tree number connected components components intersects must contain pat least one vertex thus number components hence exists one denoted let denote connects three cases case due property congestion least min case let note case assumption due edge subset congestion least case let let note suppose contradiction assumption sunil chandran yun kuen cheung davis issac centroid thus due edge subset congestion least graphs expanding properties vertex subset let denote set vertices adjacent vertex let definition graph vertices expanding graph following four conditions satisfied vertex subset vertex subset vertex subset subset vertex subset theorem connected graph graph polynomial time algorithm computes spanning tree congestion max next present polynomial time algorithm theorem analysis algorithm let graph condition every vertex degree least let vertex degree let neighbours maintain tree rooted trees rooted respectively call branches see figure start branch order minimize congestion grow balanced way maintain tin roughlyoof size branch saturated contains least max vertices point time let set vertices vertices often move subtree saturated branch unsaturated branch ensure balance let denote subtree rooted vertex saturated branch called transferable branch neighbour tree unsaturated see figure spanning tree congestion fig tree branches fig transfer subtree saturated branch unsaturated branch algorithm divided two phases described throughout algorithm whenever branch gets modified gets modified accordingly whenever gets modified gets modified accordingly phase repeatedly one following two actions prove precondition least one actions satisfied exists neighbour unsaturated branch add vertex edge branch exists least one transferable vertex see figure find transferable vertex smallest let sunil chandran yun kuen cheung davis issac branch currently containing branch transferable arbitrarily chosen neighbour remove subtree add child pick neighbour arbitrarily chosen many either show analysis exists add vertex edge branch containing phase repeat find maximum matching bipartite graph formed edges let matching add edges analysis say tree saturated contains least vertices determine appropriate value end analysis analysis phase claim phase precondition either step step satisfied also show existence vertex specified step whenever step reached given fact vertex moved either step step round phase phase runs correctly terminates linear number rounds phase also maintain invariant branch vertices thus saturated branch exactly vertices call 
invariant balancedness note balancedness violated due step new vertex added unsaturated branch violated step branches defined step become unsaturated end step define hidden vertices denoted follows vertices adjacent vertices outside tree vertex unsaturated branch vertex clearly precondition step satisfied let assume vertices unsaturated branches hidden case show precondition step satisfied argue case otherwise take subset cardinality condition contained cardinality least contradiction since number saturated branches ensure least one unsaturated branch exists set let denote set vertices unsaturated branches since vertices hidden vertices condition note vertices saturated branches principle exists saturated branch containing least vertices setting calculation guarantees existence saturated branch containing least vertices let branch spanning tree congestion pick vertex contain vertex except size let vertex adjacent branch containing since vertices transferable vertex thus precondition step satisfied set saturated branch least one unhidden vertex particular unhidden vertex adjacent vertex either adjacent vertex vertex required step analysis phase since connected iteration phase hence phase terminates linear number rounds end phase since empty clearly spanning tree remains estimate congestion spanning tree towards state following modified hall theorem easy corollary standard hall theorem lemma bipartite graph vertex let denote neighbours let suppose exist bipartite graph admits matching size least recall phase consists multiple rounds finding matching long condition plus modified hall theorem guarantees round least number vertices matched thus log rounds matching reaching condition plus modified hall theorem guarantees one round matching vertices left end phase branch vertices round matching cardinality branch doubled thus maximum possible number vertices branch running whole algorithm log hence stc recall need satisfy set max thus sunil chandran yun kuen cheung davis issac random graph let log following lemmas show high probability graph hence proof lemmas deferred appendix lemma probability least lemma probability least lemma probability least least neighbors lemma cut size probability least plugging bounds lemmas theorem together separate lower bound argument theorem appendix following theorem appendix also present proof theorem theorem log probability least stc discussion open problems paper provide thorough understanding combinatorially algorithmically spanning tree congestion general graphs random graphs course also provide first constructive proof generalized theorem might independent interest following natural open problems finding spanning tree minimum congestion indeed bodlaender showed stc problem constant factor approximation polynomial time algorithm exist present algorithm computing spanning tree achieving congestion algorithm runs time polynomial time algorithm constructing spanning tree graph connected parts size log found polynomial time due algorithm chen improve sizes parts finding partition polynomial time solvable spanning tree congestion references ittai abraham yair bartal ofer neiman nearly tight low stretch spanning trees focs pages ittai abraham ofer neiman using build low stretch spanning tree stoc pages noga alon richard karp david peleg douglas west game application problem siam ingo gautam das david dobkin deborah joseph soares sparse spanners weighted graphs discrete computational geometry reid andersen uriel feige interchanging distance capacity probabilistic 
mappings corr sanjeev arora satish rao umesh vazirani expander flows geometric embeddings graph partitioning acm nikhil bansal uriel feige robert krauthgamer konstantin makarychev viswanath nagarajan joseph naor roy schwartz graph partitioning small set expansion siam sandeep bhatt fan chung frank thomson leighton arnold rosenberg optimal simulations tree machines preliminary version focs pages hans bodlaender fedor fomin petr golovach yota otachi erik jan van leeuwen parameterized complexity spanning tree congestion problem algorithmica hans bodlaender kyohei kozawa takayoshi matsushima yota otachi spanning tree congestion graphs discrete mathematics random graphs cambridge university press andrew thomason random graphs small order northholland mathematics studies leizhen cai derek corneil tree spanners siam discrete jiangzhuo chen robert kleinberg rajmohan rajaraman ravi sundaram adrian vetta almost tight bounds existence theorems confluent flows acm michael elkin yuval emek daniel spielman teng lowerstretch spanning trees siam uriel feige robert krauthgamer polylogarithmic approximation minimum bisection siam division graphs connected subgraphs colloq math soc janos bolyai ludovic hofer thibaud lambert study article algorithm find graph alexander hoyer robin thomas theorem arxiv url http david johnson christos papadimitriou mihalis yannakakis easy local search comput syst marcos kiwi daniel spielman teng domain decomposition theor comput sunil chandran yun kuen cheung davis issac ioannis koutis gary miller richard peng nearly log time solver sdd linear systems focs pages kyohei kozawa yota otachi spanning tree congestion rook graphs discussiones mathematicae graph theory kyohei kozawa yota otachi koichi yamazaki spanning tree congestion graphs discrete mathematics hiu fai law siu lam leung mikhail ostrovskii spanning tree congestions planar graphs involve homology theory spanning trees graph acta math acad sci hungaricae christian dieter rautenbach friedrich regen spanning tree congestion discrete nakano saidur rahman takao nishizeki algorithm planar graphs inf process yoshio okamoto yota otachi ryuhei uehara takeaki uno hardness results exact exponential algorithm spanning tree congestion problem graph algorithms ostrovskii minimal congestion trees discrete ostrovskii minimum congestion spanning trees planar graphs discrete ostrovskii minimum congestion spanning trees bipartite random graphs acta mathematica scientia harald optimal hierarchical decompositions congestion minimization networks stoc pages raspaud ondrej imrich vrto congestion dilation similarities differences survey sirocco pages satu elisa schaeffer graph clustering computer science review shai simonson variation min cut linear arrangement problem mathematical systems theory daniel spielman teng local clustering algorithm massive graphs application nearly linear time graph partitioning siam david steurer tight bounds boundary decomposition cost weighted graphs spaa pages hitoshi suzuki naomi takahashi takao nishizeki linear algorithm bipartition biconnected graphs inf process zoya svitkina tardos multiway cut pages teng scalable algorithms data network analysis foundations trends theoretical computer science koichi wada kimio kawaguchi efficient algorithms tripartitioning triconnected graphs graphs concepts computer science international workshop utrecht netherlands june proceedings pages spanning tree congestion missing proofs sections proof corollary first set demand vertex flow problem degree vertex sinks flow problem 
proposition exists total degrees vertices routing demand vertex via arbitrary path construct splittable flow node congestion theorem one construct confluent flow node congestion polynomial time obviously confluent flow flow originating one vertex goes completely one sink set set vertices flows originating vertices routine check desired proof theorem instead giving full proof point differences proof theorem first handling basis case theorem proposition lemma havep improvedp upper bound congestion returned tree thus improved viewing real variable taking derivative easy see expression maximized bound desired concerning running time clear worst case dominated calls algorithm theorem note number calls since call algorithm disjoint set vertices remains one concern connectedness suppose contrary connected let one connected components contains least number vertices contains vertices note vertex cut set graph thus contradicting global minimum vertex cut set sunil chandran yun kuen cheung davis issac proof lemma well known requirements satisfied probability subset chernoff bound since logn probability union bound probability iii satisfied spanning tree congestion random graphs proof theorem first present simple proof random graph stc high probability theorem gives upper bound theorem gives lower bound proof theorem uses lemma fact random graphs minimum degree equal high probability theorem give efficient algorithm theorem log spanning tree congestion probability least proof known threshold probability random graph threshold probability minimum degree least since log using chernoff bound taking union bound vertices gives minimum degree least probability least hence probability least also number edges probability least using lemma probability least spanning tree congestion theorem log spanning tree congestion probability proof using chernoff bounds applying union bound easy show probability every vertex degree sufficiently large constant also lemma probability properties iii lemma holds proof conditioned mentioned highly probable events take spanning tree gives minimum congestion let centroid tree connected component vertices connected component number vertices least spanning tree congestion define connected component else connected components vertices case let forest formed union minimum number connected components easy see also number edges degg property iii lemma number edges edges contribute congestion least one edges since sends tree edges parts follows exists one edge congestion least claimed random graph satisfies expanding properties constants easy reference list constants used proof lemma let probability fixed vertex edge since log expected value least hence using chernoff bound probability since number lemma applying union bound proof lemma let since log sufficiently large divide groups size probability group edge expected number groups edge least thus chernoff bound probability log log number sets size log hence taking union bound get required lemma proof lemma first prove exist least one edge high probability probability edge fixed number pairs hence taking union bound probability claim holds least using claim prove least neighbors high probability suppose violates claim note assume otherwise claim vacuously true let edges also least least hence using previous claim edge probability least hence get contradiction hence claim true probability least sunil chandran yun kuen cheung davis issac proof lemma let denote fixed vertex subset expected value therefore probability log using chernoff 
bounds, the claim holds for each fixed set with the stated probability; a union bound over all sets of the given size, followed by a further union bound over all relevant vertex subsets, completes the proof.
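The statement of the generalized Győri-Lovász theorem is garbled earlier in this row; the following paraphrase reconstructs it. The additive slack is written here in strict form, which for integer weights is the same as allowing an excess of at most w_max - 1; the exact formulation of the slack should be taken from the paper's "fitted" inequality.

```latex
\textbf{Theorem (generalized Gy\H{o}ri--Lov\'asz, paraphrased).}
Let $G=(V,E)$ be a $k$-connected graph, let $w\colon V\to\mathbb{Z}_{>0}$ be a
vertex weight function with $w_{\max} := \max_{v\in V} w(v)$, let
$t_1,\dots,t_k\in V$ be distinct terminal vertices, and let $T_1,\dots,T_k$ be
positive targets with $\sum_{j=1}^{k} T_j = w(V)$. Then there is a partition of
$V$ into sets $V_1,\dots,V_k$ such that, for every $j$: $t_j\in V_j$, the
induced subgraph $G[V_j]$ is connected, and $w(V_j) < T_j + w_{\max}$.
```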
| 8 |
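Since the spanning-tree-congestion row above works throughout with cong(T) and stc(G) = min_T cong(T), a small self-contained check of these definitions may be useful. The sketch below is not from the paper: it computes cong(T) for a given spanning tree by deleting each tree edge and counting the graph edges that cross the induced cut; minimizing over all spanning trees to obtain stc(G) exactly is only feasible for tiny graphs.

```python
import networkx as nx

def tree_congestion(G, T):
    """cong(T): removing a tree edge splits T into two components; the edge's
    congestion is the number of G-edges crossing that cut, and cong(T) is the
    maximum of this count over all tree edges."""
    worst = 0
    for u, v in list(T.edges()):
        T.remove_edge(u, v)
        side = nx.node_connected_component(T, u)   # one side of the cut
        T.add_edge(u, v)                           # restore the tree
        crossing = sum(1 for a, b in G.edges() if (a in side) != (b in side))
        worst = max(worst, crossing)
    return worst

# Example: congestion of one spanning tree of the Petersen graph.
G = nx.petersen_graph()
T = nx.minimum_spanning_tree(G)   # any spanning tree works here
print(tree_congestion(G, T))
```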
nov seamless single shot object pose prediction bugra tekin epfl sudipta sinha microsoft research pascal fua epfl abstract detection pipeline made one cnn coarsely segment object another predict locations projections object bounding box given segmentation used compute pose using pnp algorithm method effective slow due nature different pipeline relies ssd architecture predict bounding boxes rough estimate object orientation single step followed approximation predict object depth size bounding box image lift detections require pose refinement step improved accuracy increases running times linearly number objects detected propose approach simultaneously detecting object rgb image predicting pose without requiring multiple stages examine multiple hypotheses unlike recently proposed technique task predicts approximate pose must refined accurate enough require additional result much faster fps titan pascal gpu suitable processing key component method new cnn architecture inspired directly predicts image locations projected vertices object bounding box object pose estimated using pnp algorithm single object multiple object pose estimation ine cclusion datasets approach substantially outperforms recent approaches used without postprocessing pose refinement step used boost accuracy two methods fps less much slower method paper propose deep cnn architecture takes image input directly detects projections bounding box vertices trainable accurate even without posteriori refinement since need refinement step also need precise detailed textured object model needed methods need bounding box object shape training derived easier acquire approximate shape representations demonstrate accuracy ine dataset become facto standard benchmark pose estimation however much faster competing techniques factor five dealing single object furthermore pay virtually handling several objects running time remains constant whereas methods grow proportional number objects demonstrate cclusion dataset introduction object detection pose estimation crucial augmented reality virtual reality robotics currently methods relying depth data acquired rgbd cameras quite robust however active depth sensors power hungry makes object detection methods passive rgb images attractive mobile wearable cameras many fast keypoint methods effective textured objects however difficulty handling weakly textured untextured objects processing video streams quite common dealing cameras wearable devices therefore contribution architecture yields fast accurate pose prediction without requiring extends single shot cnn architectures detection seamless natural way detection task implementation based yolo approach amenable singleshot detectors ssd variants deep learning techniques recently used address limitations object related work review existing work pose estimation ranging classical feature template matching methods newer trainable methods classical methods traditional rgb object instance recognition pose estimation works used local keypoints feature matching local descriptors needed methods designed invariance changes scale rotation illumination viewpoints methods often fast robust occlusion scene clutter however reliably handle textured objects high resolution images related methods include registration hausdorff matching oriented chamfer matching edges chamfer matching aligning models images methods advent commodity depth cameras spawned many object pose estimation methods example hinterstoisser proposed template matching algorithms suitable color depth 
images rios extended work using discriminative learning cascaded detections higher accuracy efficiency respectively methods used indoor robots object recognition pose estimation grasping manipulation brachmann proposed using regression forests predict dense object coordinates segment object recover pose dense correspondences also extended method handle uncertainty inference deal rgb images zach explored fast dynamic programming based algorithms images methods recent years research pose estimation tasks dominated cnns techniques viewpoints keypoints render cnn cast object categorization pose estimation classification tasks specifically discretizing pose space contrast posenet proposes using cnn directly regress rgb image pose albeit camera pose estimation slightly different task since posenet outputs translational rotational component two associated loss terms balanced carefully tuning training avoid problem newer posecnn architecture trained predict object pose single rgb image multiple stages decoupling translation rotation predictors geodesic loss function suitable optimizing rotations suggested another way address issue recently emerged cnns directly predict object pose instead output coordinates masks discrete orientation predictions pose inferred predictions image problem weighting different loss terms goes away also training becomes numerically stable resulting better performance ine dataset also adopt philosophy work parallel developments object detection task progressive trend towards single shot cnn frameworks alternative methods first find candidate locations image classifies objects background recently single shot architectures yolo ssd shown fast accurate ssd extended predict object identity bounding box image discrete estimate object orientation paper beyond methods extending architecture directly predict coordinates full object pose accurately recovered approach goal designing trainable network predicts pose inspired impressive performance single shot object detectors yolo led design cnn architecture shown fig designed network predict projections corners bounding box around objects main insight yolo originally designed regress bounding boxes predict projections bounding box corners image points predicted object instance image given coordinates ground control points bounding box corners pose calculated algebraically efficient pnp algorithm takes similar approach however first find segmentation mask around object present cropped image second network predicts eight corners image describe network architecture explain various aspects approach details model formulate pose estimation problem terms predicting image coordinates virtual control points associated models objects interest given coordinate predictions calculate object pose using pnp algorithm parameterize model object control points control points select corners tight bounding box fitted model similar addition use centroid object model point parameterization general figure overview proposed cnn architecture example input image four objects grid showing cells responsible detecting four objects cell predicts locations corners projected bounding boxes image output tensor network represents cell vector consisting corner locations class probabilities confidence value associated prediction used rigid object arbitrary shape topology addition control points guaranteed well spread image could semantically meaningful many objects model takes input single full color image processes architecture shown figure divides image regular grid 
Each grid location, as shown in Figure 1, is associated in the output tensor with a multidimensional vector consisting of the predicted 2D image locations of the control points, the class probabilities of the object and an overall confidence value. At test time, predictions in cells with low confidence values — i.e., where no objects of interest are present — get pruned from the output.

The target values for the network are stored in a 3D tensor of size S × S × D, visualized in Fig. 1. The target values for an object at a specific spatial cell location are placed in that cell of the tensor, in the form of a D-dimensional vector; when several objects are present in different cells, there are several such vectors in the tensor. We train the network to predict these target values. While the control points in our case are the object model's center and bounding-box corners, they could be defined in other ways as well; to train our network we only need to know the 3D bounding box of the object, rather than a detailed mesh with an associated texture map.

As in YOLO, it is crucial that a trained network is able to predict not only precise 2D locations but also high confidence values in regions where the object is present, and low confidence where it is not. In the case of 2D object detection, YOLO uses as its confidence an intersection-over-union (IoU) score between the predicted and the true rectangles in the image. In our case the objects are in 3D: to compute an equivalent IoU score between two arbitrary cuboids we would need to calculate the convex hulls of the corresponding intersections, which would be tedious and would slow down training. We therefore take a different approach and model the predicted confidence value using the confidence function shown in Figure 2. The confidence function c(x) returns a confidence value for a predicted 2D point x based on its distance D_T(x) from the ground-truth, i.e. target, 2D point. Formally, we define the confidence function as

  c(x) = exp( alpha * (1 - D_T(x)/d_th) )  if D_T(x) < d_th,   and   c(x) = 0 otherwise,

where the distance D_T(x) is the Euclidean distance in image space and d_th is a cut-off threshold. Figure 2: the confidence function plotted against the distance between a predicted point and the true point. We choose a sharp exponential function that has value (almost) zero at d_th, instead of a monotonically decreasing linear function; the sharpness of the exponential is defined by the parameter alpha. In practice, we apply the confidence function to all control points, calculate the mean value and assign it as the confidence. As mentioned earlier, we also predict conditional class probabilities at each cell; the class probability is conditioned on the cell containing an object. The overall output 3D tensor, depicted in Figure 1, has dimension S × S × D: a spatial S × S grid corresponding to the image dimensions, with each cell holding a D-dimensional vector made of 2 coordinates for each of the 9 control points, the class probabilities and one confidence value.

Our network architecture follows the fully convolutional YOLO v2 architecture, with convolutional and max-pooling layers. Similarly to YOLO, we choose an S × S spatial grid for making predictions. We also allow higher layers of the network to use fine-grained features by adding a passthrough layer; specifically, we bring features from an earlier layer at higher resolution into the later layers. We apply batch normalization and resize the input image for training; the network downsamples the image by a constant factor, so we can change the input resolution during training by choosing it randomly from a fixed set of multiples, to be robust to objects of different size.

Training procedure.
Our final layer outputs the class probabilities, the (x, y) coordinate locations of the control points, and the overall confidence score. During training, this confidence value is computed on the fly, using the function defined above, to measure the distance between the current coordinate predictions and the ground truth. We predict offsets for the 2D coordinates with respect to the top-left corner of the associated grid cell. For the centroid, we constrain this offset to lie in the cell using a sigmoid function; no such constraint (an identity function) is applied in the case of the eight corner points, as those should be allowed to fall outside the cell. The effect is to force the network to first find the approximate cell location for the object and later refine its eight corner locations. We minimize the following loss function to train our complete network:

  L = lambda_pt * L_pt + lambda_conf * L_conf + lambda_id * L_id ,

where the terms L_pt, L_conf and L_id denote the coordinate loss, the confidence loss and the classification loss, respectively. We use mean-squared error for the coordinate and confidence losses and cross-entropy for the classification loss. As suggested to improve model stability, we downweight the confidence loss for cells that do not contain objects by setting lambda_conf to a small value for those cells, and to a larger value for cells that do contain objects.
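As a concrete illustration of the confidence function and the loss just defined, here is a minimal PyTorch-style sketch. It is not the released implementation: the values alpha = 2, d_th = 30 px and the 5.0/0.1 confidence weights are assumptions standing in for the numeric values lost from this copy, and the rescaling of c to equal 1 at zero distance is likewise an assumption (the text fixes only the exponential form and the cut-off).

```python
import torch

def confidence(dist, d_th=30.0, alpha=2.0):
    # Sharp exponential confidence: near 1 when a predicted control point
    # is close to its target, and exactly 0 beyond the cut-off d_th.
    c = torch.expm1(alpha * (1.0 - dist / d_th)) / torch.expm1(torch.tensor(alpha))
    return torch.where(dist < d_th, c, torch.zeros_like(dist))

def total_loss(l_pt, l_conf_cells, l_id, has_obj, lam_pt=1.0, lam_id=1.0):
    # L = lambda_pt*L_pt + lambda_conf*L_conf + lambda_id*L_id, with the
    # confidence loss down-weighted on cells that contain no object.
    lam_conf = torch.where(has_obj, torch.tensor(5.0), torch.tensor(0.1))
    return lam_pt * l_pt + (lam_conf * l_conf_cells).mean() + lam_id * l_id
```

During training, `confidence(dist)` would be applied to all nine control points of a cell and the mean used as that cell's target confidence, exactly as described above.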
When multiple objects are located close together in the 3D scene, they are likely to appear close together in the images or to be occluded by each other; in these cases, certain cells might contain multiple objects. To be able to predict the pose of multiple objects that lie in the same cell, we allow several candidates per cell and therefore predict five sets of control points per cell. Similarly, we precompute five anchor boxes that define the size — the width and height — of a 2D rectangle tightly fitted to the masked region around the object in the image; during training, we assign whichever anchor box has the most similar size to the current object as the one responsible for predicting the 2D coordinates of that object.

Pose prediction.
We detect and estimate the pose of objects by invoking our network only once. At test time, we estimate the confidence score for each object by multiplying the class probabilities with the score returned by the confidence function. Each grid cell produces its predictions in one network evaluation, and cells with predictions of low confidence are pruned using a confidence threshold. For large objects, and for objects whose projections lie at the intersection of two cells, multiple cells are likely to predict highly confident detections. To obtain a more robust and better-localized pose estimate, we inspect the cells in the neighborhood of the cell with the maximum confidence score and combine the individual corner predictions of these adjacent cells by computing a weighted average of the individual detections, where the weights are the confidence scores of the associated cells. At run time, the network gives the 2D projections of the object centroid and of the corners of its 3D bounding box, along with the object identity. We estimate the 6D pose from the correspondences between the 2D and 3D points using a PnP pose estimation method; in our case, PnP uses only the 9 control-point correspondences and provides an estimate of the rotation R and translation t of the object in camera coordinates.

Implementation details.
We initialize the parameters of our network with weights obtained by training the original network on the ImageNet classification task. As the pose estimates in the early stages of training are inaccurate, the confidence values computed on the fly are initially unreliable. To remedy this, we pretrain the network parameters by setting the regularization parameter for confidence to zero; subsequently, we train the network with a confidence weight set to a larger value for cells that contain an object and to a small value otherwise, which yields reliable confidence estimates even in the early stages of training. In practice, we fix the sharpness alpha of the confidence function and the distance threshold d_th (in pixels). We use stochastic gradient descent for optimization, starting from an initial learning rate that is divided by 10 at regular epoch intervals. To avoid overfitting, we use extensive data augmentation, randomly changing the hue, saturation and exposure of the image and randomly scaling and translating the image by a fraction of the image size. Our implementation is based on PyTorch, and we will make our code publicly available for the sake of reproducibility.

Experiments.
We first evaluate our method on estimating the pose of single objects, and then in the case where multiple objects are present in the image. We use the same datasets and evaluation protocols as prior work, which we review below, and then present and compare our results with the state-of-the-art methods.

Datasets.
We test our approach on two datasets that were designed explicitly to benchmark 6D object pose estimation algorithms, which we describe briefly below. LINEMOD has become the de facto standard benchmark for 6D pose estimation of textureless objects in cluttered scenes: the central object in each RGB image is assigned a ground-truth rotation and translation, and a full mesh representing the object is also provided.

Table 1: Comparison of our approach with other algorithms on LINEMOD in terms of the 2D reprojection error; rows are methods with and without refinement, columns the objects Ape, Benchvise, Cam, Cat, Driller, Duck, Eggbox, Glue, Holepuncher, Iron, Lamp, Phone, and the average (entries omitted). We report percentages of correctly estimated poses; bold face denotes the best overall method, bold italic the best among methods that use no refinement. Note that, even though we do not rely on knowledge of a detailed object model, our method consistently outperforms the baselines.

OCCLUSION is a detection and pose estimation dataset that contains additional annotations for all objects in a subset of the LINEMOD images. As its name suggests, several objects in the images are severely occluded due to scene clutter, which makes pose estimation extremely challenging. With few exceptions, it has primarily been used to test algorithms that require depth images.

Evaluation metrics.
We use three standard metrics to evaluate 6D pose accuracy, namely the 2D reprojection error, the average 3D distance of the model vertices (referred to as the ADD metric), and the 2D IoU score, as detailed next.
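The pose recovery step described above reduces to a standard PnP solve on the nine 2D–3D correspondences. Below is a minimal sketch using OpenCV; EPnP is the efficient solver that the reference list points to (Lepetit and Fua), though whether it is the exact variant used here is an assumption. `K` denotes the 3x3 camera intrinsic matrix.

```python
import numpy as np
import cv2

def recover_pose(points_3d, points_2d, K):
    """Estimate rotation R and translation t from the 9 control-point
    correspondences (8 bounding-box corners + centroid) with EPnP.
    points_3d: (9, 3) model coordinates; points_2d: (9, 2) predictions."""
    ok, rvec, tvec = cv2.solvePnP(
        points_3d.astype(np.float64), points_2d.astype(np.float64),
        K.astype(np.float64), distCoeffs=None, flags=cv2.SOLVEPNP_EPNP)
    R, _ = cv2.Rodrigues(rvec)   # axis-angle vector -> 3x3 rotation matrix
    return R, tvec               # object pose in camera coordinates
```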
In all cases, we calculate accuracy as the percentage of correct pose estimates under certain error thresholds. When using the 2D reprojection error, we consider a pose estimate correct when the mean distance between the 2D projections of the object's mesh vertices, obtained using the estimated and the ground-truth pose, is less than 5 pixels. This measures the closeness of the true image projection of the object to the one obtained using the estimated pose, and the metric is especially suitable for augmented-reality applications. When comparing 6D poses using the ADD metric, we take a pose estimate to be correct if the mean 3D distance between the true coordinates of the mesh vertices and those estimated given the pose is less than 10% of the object's diameter; the absolute threshold is therefore smaller for small objects such as Ape. For rotationally symmetric objects, whose pose can only be computed up to one degree of rotational freedom, we modify the metric slightly: for each model vertex transformed with the ground-truth rotation and translation, we compute the minimum distance to the vertex set of the model transformed with the predicted ones. We use this variant when evaluating pose accuracy for the rotationally invariant objects Eggbox and Glue. To compute the 2D IoU metric, we measure the overlap between the projections of the 3D model given the predicted and the ground-truth poses, and accept a pose as correct if the overlap is larger than a fixed fraction.

Single object pose estimation.
We first estimate the 6D pose of the central object in the RGB-only LINEMOD images, without reference to the depth ones, and compare our approach to those that operate under similar conditions. In this dataset, the training images are selected such that the relative orientation between corresponding pose annotations is larger than a threshold. To avoid being influenced by the scene context, we segment the training images using the segmentation masks provided with the dataset and replace the background with a random image from the PASCAL VOC dataset. We use exactly the same training and test splits as previous work. We report our results in terms of the 2D reprojection error in Table 1 and the 6D pose error in Table 2, and provide example pose predictions of our approach in Figure 4.

Table 2: Comparison of our approach with other algorithms on LINEMOD in terms of the ADD metric, with and without refinement (same object columns as Table 1; entries omitted). We report percentages of correctly estimated poses; bold face marks the best overall method, bold italic the best among methods using no refinement.

Table 3: Our approach without refinement, using different thresholds for the 6D pose metric (same object columns; entries omitted).

Comparative accuracy — accuracy in terms of the 2D projection error. In Table 1 we compare our results with those of Brachmann et al. and Rad and Lepetit. Both competing methods involve a multi-stage pipeline that comprises a 2D detection step followed by pose prediction and refinement; since we do not have a refinement stage, we show in Table 1 their results with and without it, and in both cases we achieve better 2D pose estimation accuracies. In Table 4 we perform a similar comparison with Kehl et al., whose authors report 2D projection accuracy in terms of the IoU metric; that method also requires a posteriori refinement, and our results are better in both cases, even though it relies on a large training set of rendered images sampled over a wide range of viewpoints and locations. Compared with the competing methods without refinement, we outperform them by a significant margin; at least with refinement, our pose estimates are still better than those of Brachmann et al. Assuming additional knowledge, such as the full 3D CAD model, and using it to refine the pose would boost pose estimation accuracy further; without bells and whistles, our approach achieves state-of-the-art pose estimation accuracy in all the metrics without refinement. When compared against methods that rely on the additional knowledge of full 3D CAD models and pose refinement, it still achieves state-of-the-art performance in the 2D projection error and IoU metrics and yields comparable accuracy in the ADD metric. Our approach could be used in conjunction with such refinement strategies to increase accuracy, but this comes at a heavy computational cost.

Accuracy in terms of the ADD metric. In Tables 2 and 3 we compare our method with the other approaches in terms of the average 3D distances of model vertices, as described above. In Table 2 we give numbers with and without refinement where available: some authors report results only with refinement, while others provided us with the accuracy numbers reported in the table without refinement. Where authors were not able to provide accuracy numbers without refinement for this metric but have made their code publicly available, we ran the code with the provided pretrained models to obtain the pose errors of their method.
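The first two acceptance tests defined above translate directly into code. Here is a minimal NumPy sketch under the following assumptions: `V` is a hypothetical (N, 3) array of mesh vertices, `R_*` are 3x3 rotations, `t_*` are (3, 1) translations, and `K` is the camera intrinsic matrix.

```python
import numpy as np

def reprojection_ok(V, R_gt, t_gt, R_pr, t_pr, K, thresh_px=5.0):
    """2D reprojection metric: correct if the mean distance between the
    projected vertices under ground-truth and estimated pose is < 5 px."""
    def project(R, t):
        p = (K @ (R @ V.T + t)).T           # (N, 3) homogeneous points
        return p[:, :2] / p[:, 2:3]         # perspective division
    d = np.linalg.norm(project(R_gt, t_gt) - project(R_pr, t_pr), axis=1)
    return d.mean() < thresh_px

def add_ok(V, R_gt, t_gt, R_pr, t_pr, diameter, frac=0.10):
    """ADD metric: correct if the mean 3D distance between corresponding
    transformed vertices is below 10% of the object diameter."""
    d = np.linalg.norm((R_gt @ V.T + t_gt) - (R_pr @ V.T + t_pr), axis=0)
    return d.mean() < frac * diameter
```

For the symmetric objects (Eggbox, Glue), the per-vertex distance in `add_ok` would be replaced by the minimum over all model vertices, as the text specifies.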
Table 4: Comparison of our approach on LINEMOD using the IoU metric, with and without refinement (columns: Ape, Benchvise, Cam, Cat, Duck, Glue, Holepuncher, Iron, Lamp, Phone, Driller, Eggbox, average; entries omitted). The authors of the competing method were able to provide the results of their approach with refinement.

Accuracy versus speed.
In Table 5 we report the computational efficiency of our approach for single-object pose estimation in comparison to the other approaches. Our approach runs in real time, whereas the existing approaches fall short of it; in particular, our algorithm runs many times faster than the state-of-the-art techniques for single-object pose estimation. As can be seen in Table 5, the pose refinement of Brachmann et al. increases accuracy significantly at the cost of additional milliseconds per object; Rad and Lepetit also get a substantial improvement in accuracy from refinement, again for additional milliseconds per object. Even without correcting for the pose error, our approach outperforms Brachmann et al. and yields accuracy close to Rad and Lepetit while being several times faster for single-object pose estimation. As discussed above, the unrefined poses of Rad and Lepetit are computed from the bounding boxes of an SSD 2D object detector and are rather approximate; we confirmed this by running their publicly available code with the provided pretrained models, and we report their accuracy numbers without refinement using the ADD metric in Table 3 for different thresholds. While providing a good initialization for subsequent pose processing, these pose estimates without refinement are much less accurate than ours. Their refinement increases pose estimation accuracy significantly, but costs extra computation time per object and, in contrast to our approach, requires knowledge of the full 3D object CAD model. In Figure 4 we show example results of our method on LINEMOD; we include more visual results in the supplementary material.

Table 5: Comparison of the overall computational runtime of our approach for a single object against Brachmann et al., Rad and Lepetit, and Kehl et al. (columns: method, overall speed per object, refinement runtime; fps values omitted). We also provide the computational runtime induced by the pose refinement stage of the other methods.

So far we have reported pose estimation accuracy assuming that the identity of the objects is known a priori or guessed; the method of Rad and Lepetit, in particular, assumes access to image crops based on ground-truth 2D bounding boxes. We make no such assumptions and instead jointly detect the object, estimate its identity and predict its 6D pose. We generate our training images with the approach explained above and augment the LINEMOD training data by adding images of multiple objects extracted from the training sequences. We report pose estimation accuracy in Figure 5 and demonstrate that, even without assuming ground-truth information as in the case of Rad and Lepetit, our method yields satisfactory pose accuracy in the presence of severe occlusions. For object detection purposes, we consider an estimate correct if its detection IoU is larger than 0.5; note that here detection IoU corresponds to the overlap of the 2D bounding boxes of the object, rather than the overlap of projected masks as in the IoU pose metric defined earlier. In Table 6 we report a mean average precision (mAP) computed similarly to the accuracies reported by Hinterstoisser et al., Brachmann et al. and Kehl et al., and our detection outperforms the reported ones.

Table 6: Detection experiment on the OCCLUSION dataset — mAP for Hinterstoisser et al., Brachmann et al., Kehl et al. and our method (values omitted).

Multiple object pose estimation.
Our approach provides accurate 6D poses with real-time performance upon one network invocation; the only computational overhead is the efficient PnP algorithm, which operates on just 9 points per object, and we do not require full colored object models to refine the initial pose estimates. Our approach is therefore scalable to handle multiple objects: as shown in Figure 6, it has a negligible computational overhead from PnP where competing approaches exhibit linear runtime growth. We use the OCCLUSION dataset to compare our approach to Brachmann et al. in detection and pose estimation (a setting not explicitly stated by the authors, but confirmed in private email communication).

Figure 4: Pose estimation results of our approach. Note that our method can recover the 6D pose in challenging scenarios that involve significant amounts of clutter, occlusion and orientation ambiguity; the last column shows failure cases due to motion blur, severe occlusion and specularity (best viewed on a computer screen).

As the number of objects increases, the runtimes of the competing approaches grow, whereas the runtime of our method remains virtually the same when estimating the pose of multiple objects.
Figure 5: Percentage of correctly estimated poses as a function of the 2D projection error for different objects of the OCCLUSION dataset.

Table 7: Our method on the LINEMOD dataset for the network model used with four input resolutions — accuracy reported as the percentage of correctly estimated poses with respect to the 2D projection error, and runtime in milliseconds and fps; timings on an NVIDIA Titan X (Pascal) GPU (entries omitted).

Figure 6: The runtime of our approach with an increasing number of objects, compared to that of other methods.

We also evaluated the accuracy and speed of our approach for different input resolutions. As explained above, we adopt a multi-scale training procedure and change the input resolution randomly during training; this allows us to change the input resolution at test time and to predict from images of higher resolution. Doing so is especially useful for predicting the pose of small objects more robustly. Since our approach does not require an initial step of 2D object detection producing image crops that are resized to higher resolutions for pose prediction, it needs a better way of handling small objects on its own. In Table 7 we compare the accuracy and computational efficiency of our approach for the different input resolutions.

Conclusion.
We have proposed a new CNN architecture for fast and accurate 6D pose prediction that naturally extends the single-shot object detection paradigm. Our network predicts the 2D locations of the projections of the object's 3D bounding-box corners, which involves predicting just a few more points than plain 2D bounding-box regression. Given the predicted corner projections, the pose is computed via an efficient PnP method. For high accuracy, existing CNN-based 6D object detectors all refine their pose estimates during post-processing, a step that requires an accurate 3D object model and also incurs a runtime overhead per detected object. In contrast, our single-shot predictions are accurate enough to alleviate the need for refinement; due to this, our method is not dependent on access to 3D object models and there is virtually no overhead when estimating the pose of multiple objects. Our method runs at real-time frame rates depending on the image resolution, which makes it substantially faster than existing methods.

Acknowledgements. We would like to thank Mahdi Rad and Vincent Lepetit for fruitful discussions and for providing the results of their method in Table 3; we also thank Wadim Kehl, Fabian Manhardt and Slobodan Ilic for helpful discussions and for their help in evaluating their algorithm without post-processing for Table 4.

References:
- Brachmann, Krull, Michel, Gumhold, Shotton, Rother: Learning 6D object pose estimation using 3D object coordinates. ECCV.
- Brachmann, Michel, Krull, Ying Yang, Gumhold, Rother: Uncertainty-driven 6D pose estimation of objects and scenes from a single RGB image. CVPR.
- Choi, Christensen: 3D textureless object detection and tracking: an edge-based approach. IROS.
- Choi, Christensen: RGB-D object pose estimation in unstructured environments. Robotics and Autonomous Systems.
- Collet, Martinez, Srinivasa: The MOPED framework: object recognition and pose estimation for manipulation. International Journal of Robotics Research.
- Everingham, Van Gool, Williams, Winn, Zisserman: The PASCAL Visual Object Classes (VOC) challenge. IJCV.
- Hinterstoisser, Holzer, Cagniart, Ilic, Konolige, Navab, Lepetit: Multimodal templates for real-time detection of texture-less objects in heavily cluttered scenes. ICCV.
- Hinterstoisser, Lepetit, Ilic, Holzer, Bradski, Konolige, Navab: Model-based training, detection and pose estimation of texture-less 3D objects in heavily cluttered scenes. ACCV.
- Huttenlocher, Klanderman, Rucklidge: Comparing images using the Hausdorff distance. TPAMI.
- Kehl, Manhardt, Tombari, Ilic, Navab: SSD-6D: making RGB-based 3D detection and 6D pose estimation great again. ICCV.
- Kehl, Milletari, Tombari, Ilic, Navab: Deep learning of local RGB-D patches for 3D object detection and 6D pose estimation. ECCV.
- Kendall, Grimes, Cipolla: PoseNet: a convolutional network for real-time 6-DOF camera relocalization. ICCV.
- Lai, Ren, Fox: A large-scale hierarchical RGB-D object dataset. ICRA.
- Lai, Ren, Fox: A scalable tree-based approach for joint object and pose recognition. AAAI.
- Lepetit, Fua: Monocular model-based 3D tracking of rigid objects: a survey. Foundations and Trends in Computer Graphics and Vision.
- Lepetit, Fua: EPnP: an accurate solution to the PnP problem. IJCV.
- Kanade et al.: Robustly aligning a shape model and its application to car alignment of unknown pose. TPAMI.
- Liu, Tuzel, Veeraraghavan, Chellappa: Fast directional chamfer matching. CVPR.
- Liu, Anguelov, Erhan, Szegedy, Reed, Berg: SSD: single shot multibox detector. ECCV.
- Lowe: Fitting parameterized three-dimensional models to images. TPAMI.
- Lowe: Object recognition from local scale-invariant features. ICCV.
- Mahendran, Ali, Vidal: 3D pose regression using convolutional neural networks. CVPRW.
- Michel, Kirillov, Brachmann, Krull, Gumhold, Savchynskyy, Rother: Global hypothesis generation for 6D object pose estimation. CVPR.
- Poirson, Ammirato, Liu, Kosecka, Berg: Fast single shot detection and pose estimation.
- Rad, Lepetit: BB8: a scalable, accurate, robust-to-partial-occlusion method for predicting the 3D poses of challenging objects without using depth. ICCV.
- Ramnath, Sinha, Szeliski, Hsiao: Car make and model recognition using 3D curve alignment. WACV.
- Redmon, Divvala, Girshick, Farhadi: You Only Look Once: unified, real-time object detection. CVPR.
- Redmon, Farhadi: YOLO9000: better, faster, stronger. CVPR.
- Ren, Girshick, Sun: Faster R-CNN: towards real-time object detection with region proposal networks. NIPS.
- Rios-Cabrera, Tuytelaars: Discriminatively trained templates for 3D object detection: a real-time scalable approach. ICCV.
- Rothganger, Lazebnik, Schmid, Ponce: 3D object modeling and recognition using local image descriptors and spatial constraints. IJCV.
- Sock, Kasaei, Lopes, Kim: Multi-view 6D object pose estimation and camera motion planning using RGBD images. ICCV.
- Su, Guibas et al.: Render for CNN: viewpoint estimation in images using CNNs trained with rendered 3D model views. ICCV.
- Tulsiani, Malik: Viewpoints and keypoints. CVPR.
- Wagner, Reitmayr, Mulloni, Drummond, Schmalstieg: Pose tracking from natural features on mobile phones. ISMAR.
- Xiang, Schmidt, Narayanan, Fox: PoseCNN: a convolutional neural network for 6D object pose estimation in cluttered scenes. arXiv preprint.
- Zach, Pham et al.: A dynamic programming approach for fast and robust object pose recognition from range images. CVPR.
- Zhang, Cao: Combined holistic and local patches for recovering 6D object pose. ICCV.
- Zhu, Derpanis, Yang, Brahmbhatt, Zhang, Phillips, Lecce, Daniilidis: Single image 3D object detection and pose estimation for grasping. ICRA.

Supplemental material — Seamless Single Shot 6D Object Pose Prediction.
In this supplemental material we provide details on how the training images were prepared and on the proposed confidence-weighted prediction step; we also present additional qualitative results on the OCCLUSION and LINEMOD datasets.

Training images.
As discussed in the main paper, we segment the foreground object in the images of the training set using the provided segmentation masks and paste the segmented image over a random image taken from the PASCAL VOC dataset. Examples of such images, given as input to the network at training time, are shown in Figure 7. This operation of removing the actual background prevents the network from learning the scene context, which is essential in order to achieve proper generalization at prediction time.

Confidence-weighted prediction.
In the final step of our method we compute a weighted sum of multiple sets of predictions for the corners and the centroid, using the associated confidence values as weights; on LINEMOD this gave an improvement in accuracy on the 2D projection metric (a sketch of this combination step is given below). The first step involves scanning the full grid to find the cell with the highest confidence for a potential object. We then consider a neighborhood around it on the grid and prune the cells with confidence values lower than the detection threshold. For the remaining cells, we compute the average of the associated predicted vectors — the eight corner points and the centroid, stacked to form an 18-dimensional vector — weighted by confidence; the averaged coordinates are then used in the PnP method. This refinement on the grid usually improves the pose somewhat for large objects, which occupy several adjoining cells of the grid. Figure 8 shows an example with the Ape object, which lies in two adjoining cells, where the confidence weighting improves pose accuracy.

Figure 8: Left, the grid on the image; middle, the confidence values of the predictions for the Ape object over the grid; right, a cropped view with the pose estimate shown in blue and the ground truth in green. Three cells next to the best cell also have good predictions, and their combination gives a more accurate pose than the best prediction alone (best viewed in color).

Figure 7: Top, using the segmentation masks given with LINEMOD, we extract the foreground objects in our training images and composite them over random images from PASCAL VOC; bottom, we also augment the training set by combining images of multiple objects taken from different training images.
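The confidence-weighted combination just described can be sketched as follows. This is a minimal sketch: the 3x3 neighborhood size and the 0.5 detection threshold are assumptions standing in for the stripped values, and it assumes that at least the best cell passes the threshold.

```python
import numpy as np

def combine_predictions(coords, conf, thresh=0.5):
    """coords: (S, S, 18) predicted 2D points per cell (8 corners + centroid);
    conf: (S, S) predicted confidences. Returns the confidence-weighted
    average of the predictions in the 3x3 neighborhood of the best cell."""
    S = conf.shape[0]
    i, j = np.unravel_index(np.argmax(conf), conf.shape)   # best cell
    pts, wts = [], []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            a, b = i + di, j + dj
            if 0 <= a < S and 0 <= b < S and conf[a, b] >= thresh:
                pts.append(coords[a, b])
                wts.append(conf[a, b])
    wts = np.asarray(wts)
    return (np.asarray(pts) * wts[:, None]).sum(axis=0) / wts.sum()
```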
Qualitative results.
We show qualitative results on the OCCLUSION and LINEMOD datasets; the examples demonstrate that our method is robust to severe occlusions, rotational ambiguities in appearance, reflections, viewpoint change and scene clutter.

Figures 9 and 10: Results on the OCCLUSION dataset. Our method is quite robust against severe occlusions, the presence of scene clutter, and rotational pose ambiguity for symmetric objects. Each example shows the input image, the 6D pose predictions of multiple objects, and a magnified view of the individual pose estimates of six different objects (shown separately for clarity); in each case the 3D bounding box is rendered on the input image. The following color coding is used: gold, Benchvise; purple, Driller; cyan, Duck; black, Glue; orange, Holepuncher; green, red and blue mark the remaining objects. In addition to the objects of the OCCLUSION dataset, we also visualize the pose predictions for the Benchvise object of the LINEMOD dataset; we do not evaluate the Eggbox object, as close poses are not seen in its training sequence for this image (best viewed on a computer screen).

Figures 11–14: Example results on the LINEMOD dataset — one figure each for (Ape, Benchvise, Cam/Cat), (Cat, Driller and a further object), (Duck, Eggbox, Glue) and (Hole Puncher, Iron, Lamp/Phone). In each case the projected 3D bounding boxes are rendered on the image, cropped and resized for ease of visualization; the blue cuboid is rendered using our pose estimate, the green cuboid using the ground-truth object pose. Note that at the full input image resolution the objects are often quite small, and there is noticeable scene clutter and occlusion, which makes these examples challenging.
Deep Residual Text Detection Network for Scene Text
Xiangyu Zhu, Yingying Jiang, Shuli Yang, Xiaobing Wang, Wei Pei, Hua Wang, Zhenbo Luo — Machine Learning Lab, Samsung R&D Institute China, Beijing.

Abstract — Text detection is a challenging problem in computer vision. In this paper we propose a novel text detection network based on prevalent object detection frameworks. In order to obtain stronger semantic features, we adopt ResNet as the feature extraction layers and exploit multi-level features by combining hierarchical convolutional networks. A vertical proposal mechanism is utilized to avoid proposal classification, while the regression layer remains working to improve localization accuracy. Our approach, evaluated on the ICDAR2013 dataset, achieves a result that outperforms previous state-of-the-art results in scene text detection.

Keywords — text detection; deep learning; CTPN; residual networks.

I. Introduction
Text detection is an important part of text content analysis, especially for reading text in the wild. Scene text detection is becoming increasingly attractive to researchers with the development of smart phones and the tremendous demand for text recognition and augmented reality. Unlike traditional document text, detecting scene text is a much more challenging task due to illumination, perspective distortion and complex backgrounds. In the last decades, a series of methods has been proposed to deal with this problem, achieving considerable performance. These methods can be categorized into sliding-window based methods and connected-component based methods. A sliding-window based method utilizes sliding windows to search the image densely for candidate text regions and classifies these regions with traditional machine-learning tools; this kind of method is quite slow as a consequence of the dense search with multi-scale windows. Compared with the former, connected-component (CC) based methods have drawn more attention recently. They typically involve three steps: first, CCs are extracted from the images as character candidates; second, a character classifier is trained to remove non-text CCs; finally, the remaining CCs are grouped by clustering rules. Maximally stable extremal regions (MSER) is one of the most popular such methods and has reported outstanding performance on the benchmarks. However, the following limitations constrain further improvement of its performance: words whose constituent is a single character are ignored by the grouping rules for the sake of precision, characters with low color contrast cannot be extracted by MSER, and another disadvantage is the complex post-processing.

Convolutional neural network (CNN) approaches have led to great breakthroughs in object detection. R-CNN was a first attempt to classify region proposals with a CNN. Faster R-CNN proposed a sub-network named RPN, designed to generate proposals autonomously from feature maps with additional convolution layers; Faster R-CNN with VGG16 is widely used as a baseline for feature-map extraction and proposal classification. The deep residual network (ResNet) reported better performance on PASCAL VOC and ILSVRC compared with VGG16 and GoogLeNet; moreover, the structure of ResNet is designed fully convolutionally, without heavy fully connected layers, and the ResNet version of Faster R-CNN has been observed to perform better. Inspired by the great progress of CNN-based object detection, several methods have been proposed to address scene text detection; the connectionist text proposal network (CTPN) is a novel framework based on Faster R-CNN that benefits from an additional recurrent neural network and a vertical proposal mechanism.

In this paper we present a framework called residual text detection network (RTN), inspired by ResNet, CTPN and the vertical proposal mechanism. First, ResNet is used to generate strong semantic features instead of traditional networks like VGG16; rather than a naive layer replacement, we combine features from different layers to produce a hierarchy residual feature, and the outstanding performance is mainly contributed by this stronger semantic feature. Second, the vertical proposal mechanism is adopted in the RPN, and an additional regression part is used to improve localization accuracy; this step is implemented with a two-stage training strategy.

II. Related Work
A. Object detection. The success of deep convolutional networks in image recognition inspired R-CNN, which classifies region proposals via a CNN.
Since then, object detection approaches have developed rapidly: SPPnet, Fast R-CNN and Faster R-CNN followed, and Faster R-CNN is now the mature and prevalent framework, trained and tested end to end. The framework consists of three parts. (1) Feature-map generation: feature maps representing semantic information are extracted by a deep convolutional network; VGG16 is used in Faster R-CNN. (2) Proposal generation: a convolutional network named region proposal network (RPN) is designed to generate candidate regions with the feature maps as input. (3) Region classification and regression: sharing features with the RPN, the region proposals are projected to their locations on the feature maps and, following the Fast R-CNN structure, the final results of classification and regression are output. Influenced by the latest progress in image recognition, deeper convolutional networks have been transplanted into this framework instead of VGG16, including GoogLeNet and ResNet. ResNet proved to be a convolutional network superior to GoogLeNet on the ImageNet classification task, with a completely fully convolutional architecture; R-FCN combines ResNet and Faster R-CNN together, and FPN (feature pyramid network) exploits the feature pyramid of ResNet — a framework using FPN was the champion of the COCO detection challenge. Besides the Faster R-CNN based pipeline, the single shot multibox detector (SSD) and You Only Look Once (YOLO) are two representative and promising works: SSD was one of the first attempts at utilizing multi-layer convolutional features, and YOLO is extremely fast. These methods, however, do not obtain superior performance by a significant margin compared with the Faster R-CNN pipeline.

B. CNN-based text detection. With the general object detection pipeline transplanted into the text detection realm almost barrier-free, CNN-based text detection has gradually become a promising approach. Zhang et al. proposed a fully convolutional network for text detection of arbitrary orientation based on semantic segmentation. DeepText proposed a framework built on Faster R-CNN. Inspired by SSD, Liao et al. presented an approach called TextBoxes that jointly makes predictions for detection and word recognition. CTPN utilized a unique network that abandoned the Fast R-CNN classification and regression and instead treated a novel individual RPN with a recurrent neural network (RNN); it achieved the previous state-of-the-art F-measure among published papers. Nevertheless, its prototype detection, using the RNN and fixed-width proposals, is harmful to localization accuracy.

III. Residual Text Detection Network
A. Architecture. The residual text detection network (RTN), shown in Fig. 1, consists of three parts: the hierarchy residual feature map for feature extraction, the vertical-mechanism RPN for proposal prediction, and the bounding-box regression part for higher localization accuracy. (Figure 1: architecture of the residual text detection network — a ResNet backbone with the hierarchy residual feature, the vertical-mechanism RPN with convolutional and BLSTM layers, and box-coordinate regression with PSROI pooling.)

B. Hierarchy residual feature map. In our framework we use ResNet to derive from the original images the feature map that is fed to the RPN and the regression part. ResNet consists of concatenated blocks; the strides of the conv4 and conv5 outputs are 16 and 32 pixels, respectively. In R-FCN, region proposals are predicted on conv4, whose feature maps are believed to be semantically strong enough, while conv5 is shared by the classification and regression parts; this differs from the VGG16 structure, so a simple replacement of VGG16 by ResNet would not work properly. Moreover, unlike typical ResNet-based detection, which shares the feature map between the RPN and the regression parts, in this kind of method the RPN is unable to use the deeper semantic features. Visualizing the feature maps, we find that conv4 still contains many low-level features, although it looks competitive at first glance. We therefore carried out a series of experiments with a Faster R-CNN baseline using conv3, conv4 and conv5 respectively. The framework using conv3 detected edges and lines instead of objects, and required much more computation due to the larger feature-map sizes — strong evidence that it contains too many low-level features to be used directly. On the contrary, the baselines using conv4 and conv5 detected text correctly; however, the framework using conv5 fails at detecting small text due to the coarse resolution of its feature maps.
Although conv5 represents deeper features, its resolution is only half that of conv4, and even when we adopt the à trous algorithm to compensate for the stride difference, the performance is still unsatisfactory. Using only the conv4 feature maps might thus be insufficient for text detection, but abandoning the deeper representation seems an unwise choice: we believe that using conv5 in a proper way can contribute to proposal prediction. It is rational to come to the naive idea of predicting proposals on conv4 and conv5 separately, as in previous approaches such as SSD and TextBoxes; in this way one can detect fine-scale text and be robust to scale variance while also utilizing the deeper feature representations. Nevertheless, it is inconvenient to identify the reliability of proposals without an additional classification stage once the vertical mechanism is introduced in the RPN, and this seems a rather complicated problem to deal with. We instead combine the hierarchical feature maps together to produce a new hierarchy feature map: in this way both feature maps are used simultaneously, and the task of identifying which feature maps are more reliable is assigned to the convolution layers. As shown in Fig. 2, for an input of the size of the original image, after several convolutional layers we obtain the conv4 and conv5 feature maps, whose sizes correspond to strides of 16 and 32 pixels. A deconvolution layer is used to upsample conv5 so that the two shapes match exactly, and we attach convolution layers with 1x1 kernels whose learnable weights do the work of combining. Experiments show that the hierarchy feature leads to improvements in both precision and recall. (Figure 2: the hierarchy residual feature — conv5 is first upsampled by deconvolution to make sure the shapes match; second, convolutional layers with 1x1 kernels are attached; finally, the hierarchy feature is produced by element-wise addition. A sketch of this fusion, together with the regression loss defined below, is given at the end of this section.)

C. Vertical mechanism in the RPN. In Faster R-CNN, a serial CNN head — the structure known as Fast R-CNN — is used to classify proposals. CTPN abandoned this structure: its RPN outputs vertical proposals directly, without further classification and regression. The RPN can be treated as a general object detection system whose task is to distinguish only one category from the background (two categories in total), so the RPN alone seems already competent for text detection; depending on the vertical proposal mechanism and a recurrent neural network, CTPN is able to detect text without the Fast R-CNN mechanism, which also makes the final model much smaller. In our approach we adopt the vertical mechanism in the RPN: anchors and ground truth are divided into fixed-width (16-pixel) boxes, as shown in Fig. 3. In particular, the spaces between ground-truth words are treated as negative samples, which enables our method to output results at word level rather than as whole sequences. Vertical proposals are predicted by the RPN, a score threshold is applied to remove weak vertical proposals, and the remaining adjacent text proposals are connected together to produce text-line proposals. (Figure 3: yellow boxes — ground truth divided into vertical proposals; green boxes — spaces between words, treated as negative samples.)

D. Bounding-box regression. Connecting the vertical proposals already yields a usable result; nevertheless, the fixed-width proposals might lead to inaccurate localization: at the beginning and end of a word the vertical proposals do not exactly fit the text, and for small text this problem becomes serious. Unlike in general object detection, such inaccuracy influences recognition tremendously: if parts of characters are not included in the bounding box, they might be omitted or wrongly recognized; on the contrary, a loose bounding box contains too much background, which could be recognized as additional characters. In conclusion, a tight and exact bounding box is significant for text detection and recognition. To achieve this goal we introduce bounding-box regression to obtain exact coordinates; in the Faster R-CNN framework of this paper, this refers to the Fast R-CNN structure. For the text-line proposals obtained in Section III-C, the bounding-box offset of every proposal is calculated. However, while classification is normally contained in this part, only regression is retained: the classification is unnecessary, and experiments show it can even be harmful to performance, because the recurrent neural networks adopted in the RPN have a tendency to connect words into text lines while we set word-level annotations as the network's learning goal, so a text-line-level proposal might be classified as negative. As a result, the bounding-box regression loss is defined as L_reg(t, t*) = sum over i in {x, w} of smooth_L1(t_i - t*_i), where t* denotes the ground-truth bounding-box offsets and t the predicted ones. The smooth-L1 function is used almost as in Fast R-CNN, except that two coordinate offsets are predicted instead of four: regression of the vertical coordinate and the height is unnecessary, being already done by the RPN layers for every single vertical proposal.
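The two pieces just described — the conv4/conv5 fusion and the two-offset regression loss — can be sketched in PyTorch as follows. This is a minimal sketch under stated assumptions: the channel widths are assumptions (standard ResNet values), since the text fixes only the deconvolution upsampling, the 1x1 kernels, the element-wise addition, and the smooth-L1 loss restricted to the x and width offsets.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchyResidualFeature(nn.Module):
    """Upsample the deeper map (stride 32) to match the shallower one
    (stride 16), mix each through a learnable 1x1 convolution, then add
    element-wise, as in Fig. 2. Channel sizes below are assumptions."""
    def __init__(self, c_shallow=1024, c_deep=2048, c_out=512):
        super().__init__()
        self.up = nn.ConvTranspose2d(c_deep, c_deep, kernel_size=2, stride=2)
        self.mix_shallow = nn.Conv2d(c_shallow, c_out, kernel_size=1)
        self.mix_deep = nn.Conv2d(c_deep, c_out, kernel_size=1)

    def forward(self, conv4, conv5):
        return self.mix_shallow(conv4) + self.mix_deep(self.up(conv5))

def reg_loss(t_pred, t_gt):
    # Smooth-L1 regression on the two horizontal offsets (x-center, width)
    # only; the vertical coordinate and height come from the RPN proposals.
    return F.smooth_l1_loss(t_pred, t_gt)  # t_*: (num_proposals, 2)
```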
We develop a two-stage training strategy to implement the regression. In stage one, the hierarchy residual feature and the vertical-mechanism RPN are trained, with the learning rates of the regression parts set to zero. In stage two, the regression parts are trained individually, with the learning rates of the ResNet, the hierarchy residual feature and the RPN set to zero; a normal RPN, as presented in Faster R-CNN, is used to generate anchors to train the regression parts, while the vertical-mechanism RPN is used to test the model.

E. Training and testing details. The model is trained on natural images that we collected and labeled at word level and resized to a fixed scale. No overlap with images of any public dataset available on the internet is allowed — a condition preventing extreme similarity between the training set and the testing set — and no testing image is included in training. The word-level ground truth is divided into vertical ground-truth boxes of fixed width (16 pixels) for the proposal layer and the corresponding vertical proposals; as mentioned, the spaces between words are labeled as negative samples, and anchors whose IoU overlap with a space sample is high are assigned a negative label. By adding the space samples, the network tends to output word-level rather than line-level proposals.

IV. Experiments
A. Evaluation of the hierarchy residual feature. CTPN was trained on many more natural images than our collected and labeled training set; in order to prove that the improvement is a consequence of the stronger semantic feature map rather than of more training data, we implemented a version of CTPN trained on our own images. In this experiment, different backbones were used for feature extraction, and feature maps generated by different layers were evaluated, including the hierarchy residual feature map used in RTN. Table I shows the performances, using the CTPN framework as the baseline with the different feature maps mentioned above and the same parameters and post-processing throughout. The methods are evaluated at two scales, where a scale means that the shortest side of the image is resized to a given number of pixels while the longest side may not exceed a maximum. One observation is that the feature maps are competitive at the smaller scale; however, at the larger scale the margins between the methods become considerable. We also ran the open-source test code provided by the authors of CTPN (marked in the table): a larger test scale benefits the performance of some feature maps but, on the contrary, degrades others, and our CTPN implementation improved slightly on the original. The conclusion is that a larger test scale is not always helpful for detection and localization; nevertheless, simply replacing VGG16 with the improved ResNet-based features proves the superiority of the feature representation compared with the papers mentioned above. Furthermore, the baseline with the hierarchy residual feature map achieved the best performance, improving recall by several points compared with the original CTPN results; it achieves the best recall and precision at the larger scale, which is convincing evidence of the stronger semantic feature.

B. Evaluation of RTN on ICDAR2013. The ICDAR2013 benchmark consists of focused scene-text images taken in the wild; the evaluation criteria are those provided by the Robust Reading Competition, as in previous works. First, the effectiveness of the hierarchy residual feature map is verified by comparison with prevalent feature extraction layers; second, the additional regression layers are proved helpful to localization accuracy; finally, our method is compared with published methods and achieves state-of-the-art performance.

Table I: Evaluating the baseline with different feature maps — rows: CTPN variants and RTN; columns: backbone, test scale, feature map, precision, recall, F-score (entries omitted).

Figure 4: Example detection results of RTN on the benchmark — first row: results with vertical-proposal connection but without regression; second row: results with vertical-proposal connection and regression.

Table II: Regression improvement — precision and recall with and without the additional convolutional layers for regression (entries omitted).

Table III: Comparison with publications — methods: Yin et al., a Faster R-CNN baseline, SegLink, DeepText, TextBoxes, CCTN, CTPN, and the proposed RTN; columns: precision, recall, F-score (entries omitted).

A sketch of the proposal-linking step used throughout these experiments follows.
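For reference, here is a minimal sketch of the connection step that turns the surviving fixed-width vertical proposals into text-line boxes. The gap and vertical-overlap tolerances are illustrative assumptions; the text only states that adjacent remaining proposals are connected together.

```python
def link_proposals(boxes, max_gap=16, min_v_overlap=0.7):
    """Group fixed-width vertical proposals into text lines by chaining
    horizontally adjacent boxes with sufficient vertical overlap.
    boxes: non-empty list of (x1, y1, x2, y2), assumed sorted by x1."""
    def v_overlap(a, b):
        inter = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        return inter / min(a[3] - a[1], b[3] - b[1])

    lines, current = [], [boxes[0]]
    for b in boxes[1:]:
        prev = current[-1]
        if b[0] - prev[2] <= max_gap and v_overlap(prev, b) >= min_v_overlap:
            current.append(b)          # extend the current text line
        else:
            lines.append(current)      # close it and start a new one
            current = [b]
    lines.append(current)
    # A text-line box spans from the first to the last proposal in the chain.
    return [(c[0][0], min(b[1] for b in c), c[-1][2], max(b[3] for b in c))
            for c in lines]
```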
C. Regression improvement. The proposals connected from fixed-width vertical proposals are inaccurate at the beginning and end sides; moreover, the evaluation criteria are extremely strict: a detection bounding box is judged a false-positive sample if its boundary exceeds the ground truth even slightly. This means that such inaccuracy degrades performance in both recall and precision, even when the texts are detected correctly. Bounding-box regression is able to deal with this problem properly: as shown in Table II, RTN with regression improves both recall and precision, which is the benefit of the additional regression.

D. Evaluation of RTN. Having proved the effectiveness of the hierarchy residual feature and of the additional regression, we compare RTN with published methods; a single model of our approach is utilized for training and testing, and the running time per image is measured on a GPU. Figure 4 shows examples of detection results. First, we compared RTN with the methods mentioned in recent publications: CNN-based text detection methods are compared, including TextBoxes, DeepText, FCN, CCTN, SegLink and CTPN, and prevalent object detection frameworks like Faster R-CNN are also evaluated. Table III shows that RTN achieved the best performance by a clear margin. Second, we submitted our results to the Robust Reading Competition website and compared RTN with the competitors on the focused-scene-text challenge task; evaluated on this dataset, RTN with a single model ranked third, its performance within a slight margin of Tencent-Youtu's in precision, recall and F-score.

V. Conclusions
In this paper a deep residual text detection network has been proposed, based on the prevalent object detection frameworks. First, a stronger semantic feature is obtained by using deep residual networks and combining the features of different convolutional layers. Second, a vertical proposal mechanism is introduced in the RPN, inspired by CTPN. Last, an additional regression system is used to improve localization accuracy.

Table IV: Comparison with submissions on the competition website — method, precision, recall, F-score: Tencent-Youtu, RTN (ours), Baidu IDL (entries omitted).

References:
- Yin, Yin, Huang, Hao: Robust text detection in natural scene images. IEEE Transactions on Pattern Analysis and Machine Intelligence.
- Sun, Huo, Jia: A robust approach for text detection from natural scene images. Pattern Recognition.
- Yin, Yin: Effective text localization in natural scene images with MSER grouping and AdaBoost. International Conference on Pattern Recognition (IEEE).
- Karatzas, Shafait, Uchida, Iwamura, Gomez i Bigorda, Robles Mestre, Mas, Fernandez Mota, Almazan, de las Heras: ICDAR robust reading competition. International Conference on Document Analysis and Recognition (ICDAR).
- Girshick, Ross, et al.: Region-based convolutional networks for accurate object detection and segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence.
- Ren, Girshick, Sun: Faster R-CNN: towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems (NIPS).
- Simonyan, Zisserman: Very deep convolutional networks for large-scale image recognition. ICLR.
- He, Zhang, Ren, Sun: Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Everingham, Van Gool, Williams, Winn: The PASCAL visual object classes (VOC) challenge. IJCV.
- Szegedy, Liu, et al.: Going deeper with convolutions. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Szegedy, Vanhoucke, Ioffe: Rethinking the inception architecture for computer vision. Computer Vision and Pattern Recognition (CVPR).
- Zhi Tian, Weilin Huang, Tong He, Pan He: Detecting text in natural image with connectionist text proposal network. ECCV.
- Feng, et al.: DeepText: a unified framework for text proposal generation and text detection in natural images.
- Yao, et al.: Text detection with fully convolutional networks. Computer Vision and Pattern Recognition (CVPR).
- Liu, et al.: TextBoxes: a fast text detector with a single deep neural network. AAAI.
- Huang, Qiao, Yao, et al.: Accurate text localization in natural images with cascaded convolutional text network. Technical report, March.
- Shi Baoguang, Bai Xiang, Belongie Serge: Detecting oriented text in natural images by linking segments. Computer Vision and Pattern Recognition (CVPR).
- Dai Jifeng, Sun Jian, et al.: R-FCN: object detection via region-based fully convolutional networks. Conference on Neural Information Processing Systems (NIPS).
- Feature pyramid networks for object detection. Computer Vision and Pattern Recognition (CVPR).
- Lin, Maire, Belongie, Hays, Perona, Ramanan, Zitnick: Microsoft COCO: common objects in context.
- Liu, Anguelov, Erhan, Szegedy: SSD: single shot multibox detector. ECCV.
- Redmon, Divvala, Girshick, Farhadi: You Only Look Once: unified, real-time object detection. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
- Mallat: A Wavelet Tour of Signal Processing. Academic Press.
- Girshick: Fast R-CNN. IEEE International Conference on Computer Vision (ICCV).
Fiducial, confidence and objective Bayesian posterior distributions for a multidimensional parameter

Piero Veronese and Eugenio Melilli
Bocconi University, Milano, Italy

Abstract. We propose a way to construct fiducial distributions for a multidimensional parameter using a step-by-step conditional procedure related to the inferential importance of the components of the parameter. For discrete models, in which the non-uniqueness of the fiducial distribution is well known, we propose to use the geometric mean of the "extreme cases" and show its good behavior with respect to the more traditional arithmetic mean. Connections with the generalized fiducial inference approach developed by Hannig and with confidence distributions are also analyzed. The suggested procedure strongly simplifies when the statistical model belongs to a subclass of the natural exponential family, called conditionally reducible, which includes the multinomial and negative-multinomial models. Furthermore, because fiducial inference and objective Bayesian analysis are both attempts to derive distributions for an unknown parameter without any prior information, it is natural to discuss their relationships; in particular, the reference posteriors, which also depend on the importance ordering of the parameters, are the natural terms of comparison. We show that fiducial and reference posterior distributions coincide in location-scale models, and we characterize the conditionally reducible natural exponential families for which this happens. The discussion of some classical examples closes the paper.

Keywords: confidence distribution; Jeffreys prior; location-scale parameter model; multinomial model; natural exponential family; reference prior.

1. Introduction

Fiducial distributions, introduced by Fisher, were widely discussed and criticized in the subsequent years, then de facto brushed aside for a long time, and have only recently obtained new vitality. The original idea of Fisher was to construct a distribution for a parameter that includes all the information given by the data, without resorting to Bayes theorem; this is obtained by transferring the randomness from the observed quantity, given by the statistical model, to the parameter. Originally Fisher considered a continuous sufficient statistic S with distribution function F_theta(s) depending on a real parameter theta. Let q_alpha(theta) denote the quantile of order alpha of F_theta and let s be a realization of S. If q_alpha(theta) is increasing (or decreasing) in theta, the statement s < q_alpha(theta) is equivalent to a statement about theta; thus Fisher assumes the inverse of q_alpha, evaluated at s, as a quantile of a distribution that he names fiducial, and the set of all these quantiles establishes the fiducial distribution function H_s(theta) = 1 - F_theta(s), with fiducial density h_s(theta) = -(d/d theta) F_theta(s). Of course, the definition must be properly modified when F_theta(s) is increasing in theta. Fisher also provides some examples of multivariate fiducial distributions obtained by a step-by-step procedure, but he never develops a general and rigorous theory. This fact, along with the problem of covering discrete models and the presence of inconsistencies of the fiducial distribution, such as the marginalization paradox (see Dawid and Stone), and difficulties of interpretation, gave rise to a quite strong negative attitude towards Fisher's proposal.

In the renewed interest in the fiducial approach, a relevant role is played by the generalized fiducial inference introduced and developed by Hannig (see also the review by Hannig et al.), which provides a formal, mathematically rigorous definition with quite general applicability. A crucial element of the definition is a data-generating equation x = G(u, theta) which links the unknown parameter theta and the observed data x through a random element u with a known distribution. Roughly speaking, shifting the randomness from u to theta by inverting G with respect to theta, for fixed x, leads to a distribution for the parameter; this is not always the same as the original idea of Fisher, and Hannig widely discusses the point. Applications to different statistical models can be found, for instance, in Hannig, Hannig and Iyer, and Wandler and Hannig. Other recent contributions on the topic of fiducial distributions are given by Taraldsen and Lindqvist, Martin and Liu, and Veronese and Melilli (henceforth V&M). In this last paper the authors derive fiducial distributions for a parameter in a discrete or continuous real natural exponential family (NEF), and discuss some of their properties with particular emphasis on the frequentist coverage of fiducial intervals.

In the past, fiducial distributions were often associated with confidence distributions, even if the latter have a different meaning. A modern definition of confidence distribution is given in Schweder and Hjort and in Singh et al.; see the book by Schweder and Hjort for a complete and updated review of confidence distributions and their connections with fiducial inference. It is important to emphasize that a confidence distribution must be regarded as a function of the data with reasonable properties from a purely frequentist point of view: a confidence distribution is conceptually similar to a point estimator, and just as there exist several unbiased estimators, several confidence distributions can be provided for the same parameter, the choice among them resting on optimality criteria. Thus confidence distribution theory allows one to compare, in a quite general setting, formal distributions for the parameter derived by different statistical procedures.

In this paper we suggest a way to construct a unique distribution for a multidimensional parameter, indexing discrete or continuous models, following a step-by-step procedure similar to the one used by Fisher in his examples. We call it a fiducial distribution, but we look at it simply as a distribution on the parameter space, in the spirit of confidence distribution theory. Our construction rests on a procedure of conditioning: the distribution of the data is factorized as a product of one-dimensional laws, a fiducial density for a real parameter component (possibly conditional on other components) is obtained from each factor, and the joint fiducial density of the parameter is defined as the product of these conditional fiducial densities. It is well known that Fisher's fiducial argument presents several drawbacks in higher dimensions — essentially, one cannot recover the fiducial distribution of a function of the parameters starting from the joint fiducial distribution (see Schweder and Hjort). Our approach cannot be applied in full generality, but it presents the advantage of constructing sequentially the fiducial distribution directly for the parameters of interest; different fiducial distributions are obtained by focusing on different parameters of interest. It should also be noticed that a general definition of confidence distribution for a multidimensional parameter does not exist, attention being given to the construction of approximate confidence curves for specific nested families of regions (see Schweder and Hjort). Interestingly, our joint fiducial distribution coincides in many cases with the Bayesian posterior obtained using a reference prior; this fact motivates the second goal of the paper: to investigate the relationships between objective Bayesian posteriors and the suggested fiducial distributions. Objective Bayesian analysis (see Berger) essentially studies how to perform good Bayesian inference, especially for moderate sample sizes, when one is unwilling or unable to assess a subjective prior; in this approach the prior distribution is derived directly from the model and is thus labeled objective. The reference prior, introduced by Bernardo and developed by Berger and Bernardo, is the most successful default prior proposed in the literature. For a multidimensional parameter the reference prior depends on the grouping and ordering of its components and, in general, no longer coincides with the Jeffreys prior, which is the reference prior for a real parameter but is unsatisfactory otherwise, as is well known since Lindley.

We first discuss the connections between fiducial and posterior distributions for a real parameter when a real continuous sufficient statistic exists, and we extend the result of V&M for real discrete NEFs by characterizing the families admitting a fiducial prior — a prior leading to a posterior coinciding with the fiducial distribution — this prior being strictly related to the Jeffreys prior. We then show that, when the parameter is multidimensional, this relationship no longer holds and a new one can be established with the reference prior; in particular, we prove such results for location-scale parameter models and for conditionally reducible NEFs, a subclass of NEFs defined in Consonni and Veronese.

The paper is structured as follows. Section 2 reviews basic facts on fiducial and confidence distributions for real NEFs and on generalized fiducial distributions. Our proposal for constructing a multivariate fiducial distribution is presented in Section 3, which also discusses its relationships with confidence distributions. Section 4 motivates the use of the geometric mean of fiducial densities in solving the non-uniqueness problem for discrete models, and Section 5 the connections with generalized fiducial inference and the consistency with the sufficiency principle. Section 6 studies the fiducial distributions for conditionally reducible NEFs and provides their explicit expression for a particular subclass which includes the multinomial and negative-multinomial models. Section 7 analyzes the relationships between fiducial distributions and reference posteriors, in particular for location-scale parameter models and for NEFs, characterizing those which admit a fiducial prior. Section 8 discusses examples in which fiducial and reference posteriors coincide, and Section 9 concludes the paper presenting possible asymptotic extensions. Finally, Appendix A collects some useful technical results on conditionally reducible NEFs, and Appendix B includes the proofs of the results stated in the paper.

2. Preliminary results

The modern definition of confidence distribution for a real parameter of interest (see Schweder and Hjort; Singh et al.) can be formulated as follows.

Definition 1. Let {P_theta, theta in Theta} be a parametric model for data X, with theta = (phi, lambda), where phi is the real parameter of interest and lambda a nuisance parameter. A function C(x; phi) is a confidence distribution for phi if it is a distribution function in phi for each x, and if C(X; phi_0) has the uniform distribution on (0, 1) whenever phi_0 is the true parameter value.

The relevant requirement in the previous definition is the uniformity of the distribution, which ensures the correct coverage of the confidence intervals. As seen from it, confidence distribution theory must be placed in a purely frequentist context, and it allows one to compare distributions on the parameter space obtained using different approaches. The definition can also be generalized by requiring the uniformity assumption to hold only asymptotically. Strictly linked to the notion of confidence distribution is the confidence curve, defined, for an observed x, as the function cc(phi) = |1 - 2 C(x; phi)|; this function gives the extremes of the confidence intervals for each level, allowing a fast and clear comparison of confidence distributions with respect to interval length. When the parameter of interest is multidimensional, how to extend the definitions of confidence distribution and confidence curve is much less clear, and various proposals have been made; see Schweder and Hjort and Singh et al.

As detailed in Section 1, Hannig proposed the notion of a generalized fiducial distribution based on a data-generating equation. Several functions G can generate the same statistical model, and not all the resulting fiducial distributions are reasonable in terms of properties and computational tractability; Hannig gives hints on the choice of a default G. In particular, for an independent identically distributed random sample from an absolutely continuous distribution function F_theta with density f_theta, he suggests the use of x_i = F_theta^{-1}(u_i), where the u_i's are i.i.d. uniform random variables on (0, 1) and F_theta^{-1} is the inverse, or generalized inverse, of F_theta. Under regularity assumptions, the generalized fiducial distribution can then be written as a ratio whose numerator is the product of the likelihood and a Jacobian-type factor, expressed through determinants of matrices of partial derivatives of F_theta(x_i) with respect to the components of theta; the exact expression is given in Hannig. For a real parameter this procedure leads to Fisher's definition of the fiducial density. Hannig explicitly recognizes, following the advice of Wilkinson, that the choice of the fiducial distribution should depend on the parameter of interest. Using the well-known example of independent normal distributions with the squared norm of the mean vector as the parameter of interest, he shows that default data-generating equations lead to a fiducial distribution with good frequentist properties for inference on the individual means but bad ones for the parameter of interest, as already recognized by Stein; thus Hannig suggests an ad hoc alternative equation which leads to a better solution. Notice that the general procedure suggested in the next section constructs the fiducial distribution starting directly from the parameter of interest, and no a priori choice of a data-generating function is required.

Fiducial distributions, with particular emphasis on the frequentist coverage of fiducial intervals, are discussed in V&M for a real, discrete or continuous, NEF. Specifically, consider the sufficient statistic S associated with a sample of size n, denote by S its support and let F_theta(s) be its distribution function, with density (with respect to a dominating measure) p_theta(s) = exp{theta s - n M(theta)}, theta in Theta. Extending a result of Petrone and Veronese, V&M proved that, for s interior to the support (with the obvious modifications at the boundary),

  H_s(theta) = 1 - F_theta(s)

is a fiducial distribution function for the natural parameter theta, with fiducial density h_s(theta) = -(d/d theta) F_theta(s). It is important to underline that it is simple to verify that H_s is a distribution function and that it is also a confidence distribution — exactly in the continuous case and asymptotically in the discrete case, according to Definition 1. Notice that for discrete NEFs Pr_theta(S > s) and Pr_theta(S >= s) do not coincide; thus, besides the distribution H_s above, sometimes called the right fiducial distribution, one can define a left fiducial distribution H_s^l. A standard way to overcome this non-uniqueness, referring to the half-correction device (see Schweder and Hjort), amounts to considering the mixture H_s^A, whose density h_s^A is the arithmetic mean of the right and left fiducial densities. We suggest instead to average using the geometric mean h_s^G, proportional to (h_s h_s^l)^{1/2} and suitably normalized; we will show that it presents better properties (Section 4) and a more direct connection with objective Bayesian inference (Section 7), even if, operationally, the difference between the two means is usually not particularly big.

Table 1 provides the fiducial distributions obtained for some important discrete and continuous real NEFs, which will be used in the forthcoming examples; it also establishes the abbreviations used in the paper for some standard distributions. Its rows cover the normal model with known variance or known mean, the gamma model with known shape, and the Pareto and Weibull models — with sufficient statistics of the form sum of the x_i, sum of log x_i and sum of x_i^c — together with the binomial, Poisson and negative-binomial models, for which the geometric-mean fiducial density h^G is reported. For instance, for the binomial model Bi(n, p) with x successes, the right and left fiducial distributions for p are Be(x + 1, n - x) and Be(x, n - x + 1), and h^G is the Be(x + 1/2, n - x + 1/2) density. The following notation is used: Ga(alpha, lambda) denotes a gamma distribution with shape alpha and mean alpha/lambda; Be(alpha, beta) a beta distribution with parameters alpha and beta; Bi(n, p) a binomial distribution with n trials and success probability p; a negative-binomial distribution is indexed by the number of successes and the success probability; Po(mu) denotes a Poisson distribution with mean mu; and Pa and We denote the Pareto and Weibull distributions, given through their densities.

3. Fiducial distributions for multidimensional parameters

A natural way to construct a suitable fiducial distribution for a multidimensional parameter is to follow the step-by-step procedure used by Fisher in his examples. Our proposal stems from a factorization of the sampling distribution as a product of one-dimensional conditional laws, from each of which a fiducial density for a real component of the parameter — possibly conditional on other components — is defined. It is well known that different factorizations of the sampling distribution can produce different joint fiducial distributions (see Dempster); however, we do not consider this aspect a drawback of the procedure, because it is linked to the inferential importance ordering of the parameter components implied by the factorization. For example, if the parameter is transformed in such a way that one component is the parameter of interest and the others are nuisance parameters, an obvious ordering exists and the suitable factorization must be defined accordingly (see the continued example in a later section for an illustration). The crucial role played by the ordering of the parameters according to their inferential importance is widely acknowledged in objective Bayesian inference: the reference priors are different for different orderings (see Section 7).

In order to construct the fiducial distribution we consider two basic transformations: one involving the sample data x, whose distribution is parameterized by theta = (theta_1, ..., theta_m), and one involving theta itself. First, consider a statistic T whose density summarizes x without losing information on theta; if its dimension exceeds that of the parameter, split it as (T, U) and suppose U is ancillary, so that, as a consequence, all the information provided by the data is included in the conditional distribution of T = (T_1, ..., T_m) given U. Then assume there exists a smooth reparameterization phi = (phi_1, ..., phi_m), ordered with respect to inferential importance, such that the conditional density of T given U factorizes as

  p(t | u; phi) = product over k = 1, ..., m of p_k(t_k | t_{k+1}, ..., t_m, u; phi_k, ..., phi_m),

where each factor, with distribution function F_k(t_k | t_{k+1}, ..., t_m, u; phi_k), must be interpreted as the conditional distribution of T_k given the subsequent components, parameterized by phi_k assuming (phi_{k+1}, ..., phi_m) known. In the following we always assume that the one-dimensional conditional distribution functions involved in the analysis are monotone and differentiable in their real parameter, with limits 0 and 1 as the parameter tends to the boundaries of its domain; notice that this is always true if the model belongs to a NEF. Under these assumptions, the joint fiducial density of phi is obtained as

  h_t(phi) = product over k of h_k(phi_k | phi_{k+1}, ..., phi_m),   h_k(phi_k | phi_{k+1}, ..., phi_m) = | d F_k(t_k | t_{k+1}, ..., t_m, u; phi_k) / d phi_k |,

i.e., by applying the one-dimensional construction of Section 2 to each factor, starting from the last one.

Several applications of this procedure to well-known models are provided later; here we only remark some interesting features of the fiducial distribution so constructed. (i) The existence of an ancillary statistic is not necessary: when a sufficient statistic of the same dimension as the parameter exists, the formula reduces to the original one suggested by Fisher. (ii) If one is interested only in some leading components of phi, it is enough to consider a corresponding sub-vector of statistics, chosen to depend on all the observations so as not to lose sample information — a typical choice is the maximum likelihood estimator — and it is then sufficient to consider its distribution given the ancillary statistic. (iii) The construction by successive conditioning makes the fiducial distribution invariant under lower triangular transformations of the statistic with the subsequent components held fixed: precisely, considering a transformation of t in which each component is a function, say increasing, of t_k and of the subsequent components, it follows immediately that it leads to the same fiducial distribution. If, moreover, the conditional distribution of T_k given the subsequent components does not depend on (phi_{k+1}, ..., phi_m), the fiducial distribution becomes a product of marginal fiducial distributions; as a consequence, each factor can be used alone to make inference on the corresponding component.

The fiducial distribution h_t(phi) depends, in general, on the inferential ordering of the parameters; an important case in which this does not happen is discussed in Section 6. We close this section by establishing the invariance property of the fiducial distribution under lower triangular transformations of the parameter — transformations that, so to say, maintain the decreasing ordering of importance of the components of the two vectors.

Proposition 1. Let g be a lower triangular, continuously differentiable function of the parameter. Then the fiducial distribution obtained applying the construction above to the model parameterized by g(phi) coincides with the fiducial distribution obtained applying the construction to the model parameterized by phi and then transforming it in the usual (measurable) way.

Relationships with confidence distributions. Given a real NEF, H_s of Section 2 is an exact or approximate confidence distribution when the observations are continuous or discrete, respectively. It is possible to verify that the same remains true for the marginal fiducial distribution of the main parameter of interest phi_1 in the general multidimensional construction. Indeed, it is a distribution function, so the first requirement of Definition 1 is clearly satisfied thanks to the assumptions on the one-dimensional distribution functions; as concerns the uniformity condition, assuming for instance each conditional distribution function decreasing (or increasing) in its parameter, one replaces the arbitrary construction and checks that the relevant integrand is that of a uniform variable for fixed values of the conditioning quantities.

4. The geometric mean of the left and right fiducial densities

As mentioned in Section 2, for a discrete statistic whose distribution depends on a real parameter we suggest the use of the geometric mean h^G of the right and left fiducial densities, with its normalizing constant, instead of the more usual arithmetic mean h^A. A first justification for the use of the geometric mean of densities was suggested by Berger et al., who mention its property of being the density "closest" to the given ones with respect to the Kullback-Leibler divergence, as specified in the following proposition, of which we give a simple proof without resorting to the calculus of variations. Recall that, given two densities p and q with the same support and a dominating measure, the Kullback-Leibler divergence of q from p is KL(p | q) = integral of p log(p/q).

Proposition 2. Consider two densities f_1 and f_2 with the same support. The density h minimizing KL(h | f_1) + KL(h | f_2) is given by the normalized geometric mean, proportional to (f_1 f_2)^{1/2}.

Furthermore, Krishnamoorthy and Lee observe that a distribution whose aim is to give a synthesis of two fiducial distributions should stochastically lie between them, setting the two as extreme distributions. This property is surely satisfied by the arithmetic mean H^A, uniformly with respect to the parameter belonging to the set defined by the relevant inequalities; it is true for H^G under mild assumptions.

Proposition 3. Let p_theta(x) be a probability mass function for a real observation, continuous with its derivative with respect to theta, and assume a suitable monotonicity (decreasing) condition on the ratio defining the model. Then H^G lies stochastically between the left and right fiducial distributions, uniformly in theta.

The assumptions required in the previous proposition are satisfied by many important models; for example, the following corollary holds.

Corollary 1. For a probability mass function belonging to a real NEF, H^G lies between the left and right fiducial distributions uniformly in theta.

We now discuss the relationship between H^G and H^A.

Proposition 4. Let p_theta(x) be a probability mass function for a real observation satisfying, in addition to the assumptions stated in Proposition 3, suitable limit conditions at the boundaries of the parameter space. Then there exists a point, depending on the observation, at which H^G and H^A cross: H^G is below H^A on one side of it and above on the other.

The result of Proposition 4 has an important connection with confidence intervals: it shows that H^G gives, for each fixed level, a confidence interval smaller than the one obtained from H^A; see Figure 1 for a graphical illustration. Notice that the assumptions of Proposition 4 are fulfilled by a real NEF whose natural parameter space is the whole real line, as occurs for the binomial and Poisson models. However, these assumptions are not necessary to ensure the stated behavior of H^G and H^A, which we conjecture to be quite general, as the following example shows.

Example 1. Consider a sample of size n from the logarithmic distribution with parameter theta in (0, 1), whose probability mass function is proportional to theta^x / x, x = 1, 2, .... The sufficient statistic S = sum of the X_i has a distribution expressible through Stirling numbers of the first kind (for these arguments see Johnson et al.). The distribution belongs to a real NEF, F_theta(s) is decreasing in theta, and the limit conditions required by Proposition 4 fail for this model; nevertheless, the fiducial distributions H^G and H^A behave exactly as stated in that proposition, see Figure 1.

Figure 1: (a) fiducial distributions (right, left, arithmetic and geometric means) for a sample from the logarithmic distribution; (b) the corresponding confidence curves for the arithmetic and geometric means.

Finally, we justify our preference for H^G over H^A by showing that its confidence risk under quadratic penalty, as defined by Schweder and Hjort, is uniformly better for the important discrete models reported in Table 1. The confidence risk of a confidence (or fiducial) distribution H under quadratic penalty is

  R(theta, H) = E_theta[ Var_H ] + E_theta[ (E_H - psi(theta))^2 ],

where Var_H and E_H denote the variance and the expected value computed with respect to H, and psi(theta) is the mean parameter. Recalling the binomial entries of Table 1, and assuming for simplicity a single observation, one can check that H^A and H^G have the same mean — for the binomial, (2x + 1)/(2(n + 1)) — so that the comparison reduces to the expected fiducial variances. It is then easy to verify that for the binomial, the Poisson and the negative-binomial models the difference R(theta, H^A) - R(theta, H^G) is strictly positive uniformly in theta, so that H^G is uniformly better under quadratic penalty.

Consider now the fiducial distribution for a multivariate parameter with discrete data, defined in Section 3 as a product of one-dimensional conditional components; each component involves a real parameter and a real observation, the remaining quantities being fixed, so one can define right and left fiducial distributions componentwise and hence geometric and arithmetic means. Multivariate fiducial distributions for discrete observations are thus obtained by combining the univariate ones in various possible ways: in particular, we consider the product of the right univariate conditional fiducial densities, the product of the left ones, the arithmetic combination defined through the products of the mixtures, and finally h^G_t, whose density is obtained as the product of the geometric means of the univariate conditional fiducial densities; notice that the latter coincides with the geometric mean of the products, so the previous propositions apply componentwise.

5. Generalized fiducial distributions and the sufficiency principle

The step-by-step procedure introduced at the beginning of Section 3 gives a generalized fiducial distribution according to Hannig if one considers the data-generating equation in which the random vector u has a completely known distribution and the functions are explicitly obtained, iteratively, by inverting the one-dimensional conditional distribution functions. It is interesting to observe that a generalized fiducial distribution does not necessarily satisfy the sufficiency principle. This can be verified immediately by looking at an example discussed by Hannig for a uniform model, in which the generalized fiducial distribution depends on the whole order statistic and not only on the sufficient statistic; despite its simple form, that model is highly irregular. Inconsistencies with the sufficiency principle of the generalized fiducial distribution can, however, also occur in standard models: in particular, when a real continuous sufficient statistic for a real parameter exists, one could derive two different fiducial distributions, one starting from the sufficient statistic and one from the whole sample. A simple example of this issue is easily constructed considering a beta model; another interesting example is the following.

Example 2. Let x = (x_1, ..., x_n) be a sample from the truncated exponential distribution on (0, 1), with density proportional to exp(-theta x) and defined, for theta = 0, by continuity (the uniform density); the distribution function F_theta(x) follows accordingly. Using Hannig's default data-generating equation, one obtains a generalized fiducial density r(theta) based on the whole sample, which depends on the specific values of the observations and not only on the sufficient statistic s = sum of the x_i. Figure 2 reports, for samples sharing the same value of s, the fiducial density h_s(theta) obtained from the sufficient statistic together with the generalized fiducial densities obtained from different samples: for some sample configurations the densities are roughly symmetric around a common mode and differ in dispersion, becoming more or less concentrated; for others the modes are shifted as the configuration changes; in all cases the fiducial density h_s lies in the middle of the various generalized fiducial densities. Notice the good properties of h_s discussed above — in particular it is a confidence distribution, because the model belongs to a NEF — and the confidence intervals corresponding to h_s are slightly smaller than those corresponding to r, as seen from the confidence curves reported in Figure 3.

The computation of the fiducial distribution defined in Section 3 is greatly simplified if one starts with a sufficient statistic instead of the whole sample, and the other alternatives, when feasible, seem to lead to the same result. In particular, the following proposition states that the sufficiency principle is always satisfied when there exists a complete sufficient statistic of the same dimension as the parameter.

Proposition 5. Consider the fiducial distribution h_t(phi) defined in Section 3, where t = t(x) is a transformation of the data x into a complete sufficient statistic of dimension m, followed by a lower triangular transformation with the subsequent components held fixed. Then the fiducial distribution obtained using any other such transformation of the data coincides with h_t(phi).

The role of the completeness assumption in satisfying the sufficiency principle is illustrated by the following example. Given a sample of size n from a uniform distribution with location parameter theta, it is immediate to verify that the sufficient statistic given by the sample minimum and maximum is not complete; theta is a location parameter and the range W, the difference between the maximum and the minimum, is ancillary. The fiducial distribution for theta can be obtained starting from the conditional distribution function of the maximum given W; if instead one starts directly from the distribution function of the maximum, omitting the tedious calculations, the resulting expressions are combinations of the observed minimum and maximum, and comparing the two constructions on the set of theta-values compatible with the observed sample shows how the conditioning matters.
easy verify models poisson model hsg hsa consequence hsa hsg varhs varhs becomes three models respectively values strictly positive uniformly let consider fiducial distribution multivariate parameter defined discrete component product starting possible define right left fiducial distribution respectively hence geometric arithmetic means notice nent involves parameter real observation remaining quantities fixed propositions applied multivariate fiducial distributions discrete observations thus obtained combining various possible way univariate distributions particular consider obtained product right univariate conditional fiducial distributions obtained product left univariate conditional fiducial distributions hta defined product mixtures hta finally htg corresponding density obtained product geometric notice coincides geometric means mean fiducial densities derived described fiducial inference sufficiency principle procedure introduced beginning section gives generalized fiducial distribution according hannig one considers equation random vector completely known distribution functions explicitly obtained iteratively follows interesting observe generalized fiducial distribution given necessarily satisfy sufficiency principle verified immediately looking example hannig uniform distribution considered depend sufficient statistic denotes order statistic despite simple form model highly irregular inconsistency sufficiency principle generalized fiducial distribution also occur standard models particular real continuous sufficient statistic real parameter exists one could derive two different fiducial distributions starting whole sample simple example issue easily constructed considering beta model parameters another interesting example following example let sample truncated exponential density density defined completed continuity setting distribution function thus using obtain figure graph fiducial densities red green blue graph fiducial densities red green blue figure confidence curves red blue depends values specific consider sufficient statistic simplicity assume density generalized fiducial density reduces figure report fiducial densities different values densities symmetric mode dispersion increasing concentrated fiducial density obtained however densities different modes shifted left increases cases fiducial density middle various cases notice good properties discussed particular confidence distribution model belongs nef confidence intervals corresponding slightly smaller corresponding seen confidence curves reported figure instance confidence intervals respectively computation fiducial distribution defined greatly simplified starting sufficient statistic instead whole sample however alternatives feasible seem lead result particular following proposition states sufficiency principle always satisfied exists complete sufficient statistic parameter proposition consider fiducial distribution defined transformation data complete sufficient statistic dimension lower triangular transformation fixed fiducial distribution obtained using instead coincides notice completeness necessary satisfy sufficiency principle following example shows example given sample size uniform distribution immediate verify sufficient statistic complete location parameter ancillary statistic fiducial distribution obtained starting distribution function given thus start directly consider distribution function given omitting tedious calculations max min max max min thus min max min max observing min unless recalling follows 
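The quadratic-penalty confidence risk entering the variance comparison above admits a simple decomposition. As a hedged reconstruction following Schweder and Hjort:

\[
\int (\gamma-\theta)^2 \, dH_S(\gamma)
\;=\; \mathrm{Var}(H_S) \;+\; \bigl(\mathrm{E}(H_S)-\theta\bigr)^2 ,
\]

so that, when the geometric-mean and arithmetic-mean fiducial distributions have approximately the same location, comparing their confidence risks reduces to comparing \(\mathrm{Var}(H_S^g)\) with \(\mathrm{Var}(H_S^a)\), which is the quantity evaluated for the discrete models treated above.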
similarly coincides given conditionally reducible natural exponential families consider multivariate natural exponential family whose density respect fixed positive measure given exp nef reducible sequel joint density factorized product conditional densities belonging real exponential family precisely exp function onto furthermore shown variation independent notice natural parameter conditional distribution details families emphasis enriched conjugate priors reference bayesian analysis see consonni veronese consonni respectively papers deal particular families simple quadratic variance function named include interesting cases multinomial models see casalis appendix example multinomial model consider random vector distributed according multinomial distribution denote probability outcome well know conditional tribution given whereas marginal distribution since binomial distribution real nef one factorize multinomial distribution log log models belonging construction fiducial distribution proposed section drastically simplifies existence sufficient statistic dimension parameter makes ancillary statistic necessary indexing conditional distribution real parameter implies independence fiducial distribution proposition let sufficient statistic distributed according regular crnef parameterized coinciding natural parameter space conditional distribution satisfying conditions similar given fiducial distribution function density independent thus importance ordering irrelevant fact also justifies simplification index notation adopted notice however definition interpretation depend particular ordering considered seen example recalled section general definition confidence distribution exist however context since constructed product marginal confidence distributions considered multivariate possibly asymptotic confidence distribution examples section reconnected framework consider specific whose variance function given class exclusion secant distribution possible give simple explicit expression fiducial density recalling definition given setting zkk specifications zkk appearing found appendix proposition consider sample size multinomial family denotes sufficient statistic right fiducial distribution density exp family dimensional tive multinomial component right fiducial distribution given exp exp exp exp notice discrete components basic left fiducial distribution obtained previous formulas replacing term thus follows geometric mean structure instead example multinomial family log easily follows formula denotes beta function fiducial distribution always particular interest used starting point construction fiducial distribution alternative relevant parameters consider lower triangular transformation see fiducial distribution directly obtained thanks proposition corollary right fiducial distribution mean parameter relative ordering following density family poisson components exp exp corresponds product densities densities multinomial family family occurrences cell family negative multinomial component occurrences cell exp exp notice density given density given depending example inference multinomial distribution usually performed parameter since fiducial distribution easily derived noting left fiducial density obtained replacing derived aggregating hyperparameters follows geometric mean given dirichlet distribution clearly generalized refers specific order importance change order fiducial distribution change accordingly similarly model occurrences cell easily computed observing log exp connections objective 
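The conditional-binomial factorization of the multinomial sketched in the example above can be written explicitly. This reconstruction uses standard supplied notation (N trials, cell probabilities p_1, ..., p_k) rather than the paper's:

\[
\Pr(X_1=x_1,\dots,X_k=x_k)
\;=\; \prod_{j=1}^{k} \Pr\bigl(X_j=x_j \mid X_1=x_1,\dots,X_{j-1}=x_{j-1}\bigr),
\]

with

\[
X_j \mid X_1,\dots,X_{j-1} \;\sim\;
\mathrm{Binomial}\!\Bigl(N-\sum_{i<j} X_i,\;\; \frac{p_j}{1-\sum_{i<j} p_i}\Bigr),
\]

each conditional distribution being a real NEF, which is what makes the family conditionally reducible.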
bayesian inference mentioned section look fiducial inference way obtain distribution parameter space model without prior information appears natural compare objective bayesian inference recall fiducial distribution coincides posterior corresponding prior called fiducial prior construction fiducial distribution defined based inferential importance ordering parameter components aspect also crucial procedure adopted construct reference priors see bernardo smith sec reference prior parameter generated successive conditioning established importance ordering compoq nents widely recognized dependence reference prior choice parameter interest necessary obtain good frequentist properties coverage consistency parameter reference prior coincides jeffreys prior denotes fisher information jeffreys prior invariant reparameterization model reference prior thus reference posterior generally invariant unless transformation lower triangular see datta ghosh thus reference posterior invariance property fiducial distribution proved proposition recently berger recognize existence situations one interested simultaneously parameter components model none prior thus posterior distribution necessary perform inferences predictions cases overall prior needed determination open problem highlight exists common reference prior parameters natural choice overall prior similar problem occurs context comment aspect following sections notice fiducial distribution suggested hannig good choice parameter models parameter models fiducial prior exists coincides reference prior assume first one parameter unknown case model admits ancillary statistic particular take location scale parameter respectively proposition let sample density location scale parameter fiducial distribution coincides bayesian posterior obtained jeffreys prior respectively example let sample uniform distribution scale parameter first notice sufficient statistic thus obtain directly fiducial distribution nsn however result obtained without resorting sufficient statistic set max consider distribution function given ancillary statistic means max max expression function equivalent appearing thus provides fiducial distribution immediate verify coincides jeffreys posterior case sufficient statistic thus necessary use ancillary statistic found previous example trivially given coincides bayesian posterior obtained consider model location parameter scale parameter unknown given sample size ancillary statistic example marginally ancillary transformation allows write sampling distribution note specific contexts transformations could appropriate example normal model one could use factorization becomes proposition let sample density location scale parameter respectively fiducial distribution coincides bayesian posterior obtained reference prior notice different obtained jeffreys rule already recalled suitable multidimensional parameters furthermore depend ordering fiducial distribution general allowable ordering reversed however coincides fiducial distribution obtained symmetric approaches see hannig fraser thus inferential ordering importance seems irrelevant model assumed overall fiducial distribution exponential families lindley first study existence fiducial prior analyzing particular case continuous real nefs proving exists gaussian known variance gamma known shape models full characterization real nefs admit fiducial prior given following proposition summarizes results proposition let real nef natural parameter fiducial prior exists affine transformation one following 
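For concreteness, the standard priors referred to in the location and scale propositions above are, in a hedged restatement (the subscripts J and R for Jeffreys and reference priors are supplied here):

\[
\pi_J(\theta) \propto \bigl|I(\theta)\bigr|^{1/2}; \qquad
\pi(\mu) \propto 1 \ \text{(location)}; \qquad
\pi(\sigma) \propto \sigma^{-1} \ \text{(scale)};
\]

and for the pair \((\mu,\sigma)\) the reference prior is \(\pi_R(\mu,\sigma)\propto\sigma^{-1}\), whereas the Jeffreys-rule prior is \(\pi_J(\mu,\sigma)\propto\sigma^{-2}\), which is the difference alluded to at the end of the passage.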
families normal known variance gamma known shape eter binomial poisson three discrete families fiducial prior exists hsg fiducial prior exists belongs family conjugate distributions moreover coincides jeffreys prior continuous nefs discrete nefs choose hsg fiducial distribution iii fiducial distribution hsa discrete case bayesian posterior distribution corresponding jeffreys prior edgeworth expansion term order previous results establish strong connections jeffreys posteriors fiducial distributions real nefs thus two different approaches lead sense objective inference discussion coverage fiducial jeffreys intervals good frequentist properties particular compared standard wald intervals given section consider easy verify fiducial distribution belongs enriched conjugate family defined consonni veronese section fact prove following proposition proposition let sufficient statistic distributed according parameterized fiducial prior exists conditional distribution given affine transformation one following families normal known variance gamma known shape parameter binomial poisson particular basic exclusion hyperbolic secant admit fiducial prior belongs enriched conjugate family moreover discrete components models consider geometric mean product jeffreys priors computed conditional distribution given reference prior fiducial prior equal example multinomial distribution basic thus proposition setting given obtain fiducial prior coincides reference prior product jeffreys priors computed distribution given finally observe fiducial distribution always overall fiducial distribution however often interesting even cases strictly related relevant one example following berger consider multinomial model applied directional data happens outcomes attitude survey case cells naturally ordered meaningful reparameterize model terms conditional probabilities exp exp induces overall fiducial prior product independent distributions coinciding overall reference prior examples examples concerning normal models difference means consider two independent normal samples size known common variance means respectively sufficient statistics sample sums parameter interest reparameterize joint density conditional distribution given depends table fiducial distribution thus arguing fiducial distribution given notice joint fiducial distribution obtained consider ordering even compute marginal fiducial distributions obtain rule thus ordering parameter irrelevant overall fiducial distribution furthermore coincides reference posterior obtained constant prior marginal distribution confidence distributions many normal means neyman scott problem consider samples size two xij independently distributed according let aim make inference common variance nuisance parameter well known example used show maximum likelihood estimator inconsistent obtain fiducial distribution first notice joint distribution sufficient statistics factorized independence using table one easily obtain fiducial distribution hence given derived consequence distribution coincides posterior obtained order invariant reference prior present inconsistency likelihood estimator instead occurs posterior distribution obtained jeffreys prior comparison two poisson rates comparison poisson rates classical problem arising many contexts see example lehmann romano discussion unbiased uniformly powerful test ratio given two samples size two independent poisson distributions sufficient statistics sample sums reparameterizing joint density conditional distribution given marginal 
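The fiducial objects for a real NEF that underlie these statements were recalled earlier in the paper; written out here as a hedged reconstruction (the right/left labeling convention is assumed):

\[
H_s(\theta) \;=\; 1-F_\theta(s) \quad\text{(continuous case)}, \qquad
h_s(\theta) \;=\; -\frac{\partial}{\partial\theta}F_\theta(s),
\]

while for a discrete statistic

\[
H_s^r(\theta)=\Pr\nolimits_\theta(S>s), \qquad
H_s^\ell(\theta)=\Pr\nolimits_\theta(S\geq s), \qquad
h_s^g(\theta)\;\propto\;\sqrt{h_s^r(\theta)\,h_s^\ell(\theta)} .
\]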
distribution thus sampling distribution apply using table fiducial density derived conditional distribution given implies marginal distribution using table follows thus joint fiducial distribution coincides reference posterior according proposition overall distribution notice confidence distribution differs fiducial distribution induced two independent marginal fiducial densities bivariate binomial bayesian analysis bivariate binomial model discussed crowder sweeting connection microbiological application consider spores probability germinate denote random number germinating spores probability one latter spores bends particular direction random number probability distribution given joint distribution called bivariate binomial crowder sweeting observe jeffreys prior satisfactory asymmetry polson wasserman show fact occur using reference prior product two independent jeffreys priors joint fiducial density obtained product derived conditional model given derived marginal model depend thus independent overall fiducial distribution binomial model fiducial prior equal jeffreys prior see proposition follows immediately coincides reference posterior previous conclusions hold even consider alternative parametrization ratio parameters trinomial distribution bernardo ramon perform bayesian reference analysis ratio two multinomial parameters presenting applications particular discuss case distributed according trinomial distribution parameters provide joint reference prior parameter interest derive marginal reference posterior find fiducial distribution reparameterize trinomial model conditional distribution given table fiducial density coincides marginal distribution possible derive fiducial density joint fiducial density coincides joint reference posterior conclusions final remarks suggested way construct fiducial distribution depends inferential importance ordering parameter components proposal appears quite simple apply even general theory suggested hannig advantages connection modern confidence distribution theory strictly related objective bayesian analysis complex models exact analysis generally possible approximate results derived working asymptotic distributions starting sufficient statistic expansion first order fiducial distribution mean parameter real nef provided result extended arbitrary regular models starting maximum likelihood estimator parameter maximum likelihood estimator sufficient better fiducial distribution obtained using ancillary statistic suggested section aim magic formula given provides approximation conditional distribution maximum likelihood estimator given ancillary statistic fruitfully adopted furthermore asymptotic results appear strictly connected theory matching priors priors ensure approximate frequentist validity posterior credible set notice also priors crucially depend inferential ordering parameters see tibshirani datta mukerjee however normal approximation fiducial distribution established enough analysis proved type results discussed forthcoming paper acknowledgements research supported grants bocconi university appendix useful results technical aspects related following nef principal matrix variance function depend fisher information matrix relative diagonal element depending cumulant transform conditional density given akj functions akj conditional expectation given linear gradient parameter depends using checked auk consequence first part exists function course previous formulas hold understanding components lose meaning specific set zero nef simple 
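In the comparison of two Poisson rates above, the factorization rests on a classical fact, reconstructed here in supplied notation (rates \(\mu_1,\mu_2\), ratio \(\phi=\mu_1/\mu_2\)):

\[
X_1 \mid X_1+X_2=t \;\sim\; \mathrm{Binomial}\!\Bigl(t,\;\frac{\mu_1}{\mu_1+\mu_2}\Bigr)
\;=\;\mathrm{Binomial}\!\Bigl(t,\;\frac{\phi}{1+\phi}\Bigr),
\qquad
X_1+X_2 \;\sim\; \mathrm{Poisson}(\mu_1+\mu_2),
\]

so, after reparameterization, the conditional model depends only on the parameter of interest \(\phi\) and the marginal only on the nuisance total rate, which is what lets the fiducial distribution factor as described.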
quadratic variance function sqvf element matrix seen function mean parameter written vij lij cij real constant constant symmetric matrices obtained via nonsingular affine transformation one basic families multinomial positive integer positive integer see casalis detailed description distributions element variance function basic vij zik cij zij zji constants values zii basic nefsqvfs together technical details given proof corollary proofs proof proposition standard rule applied first integral enough show jacobian transformation transformation lower triangular follows last two formulas chain rule distribution function given parameterization equality follows applying model parameterized proof proposition let constant normalizing log log log log cpg log log log depend follows functional achieves minimum equal log proof proposition prove hsg inequality shown way using write hypothesis decreasing thus also decreasing sufficient condition hsg see shaked shanthikumar theorem proof corollary let exp probability mass function respect measure real nef natural parameter fixing write exp exp exp elements sum continuous increasing functions intervals thus decreasing intervals moreover equal zero positive negative denominator positive decreasing proposition result follows proof proposition order prove proposition sufficient show exist otherwise see shaked shanthikumar proof theorem thus analyze sign write sign difference function first notice standard property arithmetic geometric means straightforward algebra seen moreover assumption decreasing exist satisfying sufficient condition stated beginning proof proof proposition first notice use constructing fiducial distribution sufficient statistic dimension parameter need ancillary statistic furthermore transformation thus function since complete stochastically independent basu theorem consequence also independent thus lower triangular transformation invertible respect assuming increasing becomes proves proposition proof proposition conditional distribution given belongs nef natural parameter using distribution function result follows postulated independence among proof proposition formulas derive direct application conditional distributions different families detailed description involved see consonni veronese proof theorem proof corollary first notice fiducial distribution easily obtained via double transformation namely jacobian det exp see smith pag proportionality relationship consonni prop equality consider family family poisson components log exp known variance normal components follows thus using result follows multinomial family using relationships example gives using obtain family log log log log follows thus using result follows family dimensional component log case convenient compute fiducial density directly observing jacobian transformation using previous expression density follows proof proposition let sample size location parameter consider transformation whose jacobian one setting using substitution previous two integrals recalling obtain result relative scale parameter follows recalling model transformed model location parameter setting log log case constant prior equivalent prior proportional proof proposition let sample size notice absolute value jacobian transformation furthermore reference prior written see steel working conditionally thus apply proposition conclude reference posterior fiducial distribution given coincide remains show corresponds fiducial density dtdw wzi dtdw density depend parameters ancillary assuming using 
transformation implies jacobian fiducial distribution becomes dmdv taking derivative respect immediate see fiducial density coincides posterior distribution given applying integral transformation used previous case wzi dtdw dmdv derivative respect leads following lemma used proof proposition lemma consider diagonal element fisher information matrix given ikk reference prior jeffreys prior obtained conditional distribution given proof lemma first observe transformation information matrix diagonal see appendix points element ikk log assumption proposition write ikk datta ghosh follows reference prior given last product consider jeffreys prior obtained tional square root last equality holds assumption proposition thus product jeffreys priors equal result holds proof proposition due independence fiducial prior exists exists fiducial prior conditional distribution given belongs real nef natural parameter result first part proposition follows proposition first statement second part proposition follows checking directly form conditional distributions basic using proposition second statement follows remark stated proposition lemma references formula distribution maximum likelihood estimator biometrika berger case objective bayesian analysis bayesian analysis berger bernardo ordered group reference priors application multinomial problem biometrika berger bernardo sun overall objective priors bayesian analysis bernardo reference posterior distributions bayesian inference stat soc ser bernardo ramon introduction bayesian reference analysis inference ratio multinomial parameters statistician bernardo smith bayesian theory wiley chichester casalis simple quadratic natural exponential families ann statist consonni veronese conditionally reducible natural exponential families enriched conjugate priors scand stat consonni veronese reference priors exponential families simple quadratic variance function multivariate anal crowder sweeting bayesian inference bivariate binomial distribution biometrika datta ghosh remarks noninformative priors amer statist assoc datta ghosh invariance noninformative priors ann statist datta mukerjee probability matching priors higher order asymptotics lecture notes statistics springer new york dawid stone basis fiducial inference ann statist dempster examples inconsistencies fiducial argument ann statist steel reference priors general model statist prob lett fisher inverse probability proceedings cambridge philosophical society fisher fiducial argument statistical inference ann eugenics fisher statistical methods scientific inference hafner press new york fraser fiducial inference ann math statist smith exponential bayesian conjugate families review extensions discussion test hannig generalized fiducial inference statist sinica hannig generalized fiducial inference via discretization statist sinica hannig iyer fiducial intervals variance components unbalanced normal mixed linear model amer statist assoc hannig iyer wang fiducial approach uncertainty assessment accounting error due instrument resolution metrologia hannig iyer lai lee generalized fiducial inference review new results american statist assoc johnson kemp kotz univariate discrete distributions wiley new york krishnamoorthy lee inference functions parameters discrete distributions based fiducial approach binomial poisson cases statist plann inference lehmann romano testing statistical hypotheses springer new york lindley fiducial distributions bayes theorem stat soc ser martin liu inferential models framework 
posterior probabilistic inference amer statist assoc petrone veronese feller operators mixture priors bayesian nonparametrics statist sinica polson wasserman prior distributions bivariate binomial biometrika schweder hjort confidence likelihood scand stat schweder hjort confidence likelihood probability london cambridge university press shaked shanthikumar stochastic orders springer new york singh xie strawderman combining information confidence distribution ann statist stein example wide discrepancy fiducial confidence intervals ann math statist taraldsen lindqvist fiducial theory optimal inference ann statist tibshirani noninformative priors one parameter many biometrika veronese melilli fiducial confidence distributions real exponential families scand stat wandler hannig fiducial approach multiple comparisons statist plann inference wilkinson resolving controversy statistical inference stat soc ser
source forager search engine similar source code vineeth david bingham ben david thomas grammatech jun ithaca new york usa email vkashyap melski university usa email bingham liblit reps spend significant amount time searching understand complete correct adapt code new context unfortunately state art code search evolved much beyond text search tokenized source code much richer structure semantics normal text property exploited specialize process better querying searching ranking results present new engine named source forager given query form function source forager searches code database similar functions source forager preprocesses database extract variety simple code features capture different aspects code search returns functions database similar query based various extracted code features tested usefulness source forager using variety queries two domains experiments show ranked results returned source forager accurate functions reliably retrieved even searching large code database contains functions believe source forager first step towards muchneeded tools provide better experience index search similar code program features introduction age software proliferation useful able search large corpora effectively code desired developers routinely use code search learning debugging tool tasks looking existing functionality code base determining use api library gathering information code intended etc search techniques always precise enough code focus purely strings code supported part gift rajiv ritu batra afrl darpa muse award office vice chancellor research graduate education funding wisconsin alumni research foundation opinions findings conclusions recommendations expressed publication authors necessarily reflect views sponsoring agencies reps ownership interest grammatech licensed elements technology reported publication paper term search used sense google namely retrieve documents related specified query search used sense finding occurrence string pattern given document comments complete partial names functions variables text search largely ignores code structure semantics code approach cause searching imprecise relevant code fragments may missed many spurious matches may returned recent search techniques allow users specify certain aspects code semantics addition textual query techniques allow users specify structural requirements search target nested loops others specify context search target implement particular interface yet others specify sets pairs additional semantic information improve search accuracy however existing techniques share following shortcomings techniques provide unified way specifying semantics search query technique specification semantic aspects code uses technique closely married chosen semantic aspect deeply ingrained implementation search technique tight coupling makes hard extend techniques model additional semantic aspects propose search technique finding similar source code addresses shortcomings unified query specification mechanism takes code fragments queries various kinds semantic information extracted query used search approach provides unified mechanism code search searching code using code fragments moreover techniques extracting semantic information used queries elements corpus searched leading greater consistency extensibility technique uses vector extracted elements corpus capture various aspects syntax semantics program aspect called provide unified interface querying approach also makes search technique extensible easy introduce model additional 
aspects code int binsearch int int int int low high mid low high low high mid low high mid high mid else mid low mid found match else return mid match return various weights results weight determination neighbor search query corpus program elements feature extraction engine code database fig example program implements binary search sorted integer array fig overview source forager architecture addition useful right developer offline phase population source forager database tool search serve important building phase source forager analyzes given code corpus block automated program repair program synthesis populates code database rich information ability find code similar query help functions code corpus source forager automated tools learn similar code fix bugs extracts several different kinds information perform code completion tasks query function refer different kinds information main contributions source forager describes different ability perform code searches using code detail specific value observed fragments queries searches answers source given thus function one featureforager based query formalism close observation example one concepts developers already familiar numeric literals corresponding architecture uses multiple code featureobservation set numeric constants used classes simultaneously architecture extensible althe function implementation code given lowing easy addition new code fig numeric literals set enhances dimensions along code searched mechanism automatically selecting useful code featurea feature extraction engine consists several feature extracclasses employed code search given query given tors collect given function priori domain information query note elements technique relative importance different known sets multisets trees maps etc query belongs specific domain suitable number determines length training data available organization remainder paper organized feature extractors operate code corpus popinto four sections gives overview approach ulate code database element code database algorithms describes methods detail presents consists function corpus along experimental results discusses related work extracted numeric literals employed one one element function overview set numeric constants source forager search engine finding similar source code database also access several similarity code takes input query source text functions one similarity function searches database similar code given takes two returning ranked list results units code belonging returns value source forager reason called program higher value indicates greater similarity elements current incarnation program elements two example similarity function functions queries results numeric literals jaccard index given two sets functions jaccard index given fig provides architectural overview source forager source forager two stages offline phase populate simjacc code database online phase second implementation integrates infrastructure database implemented serialized efficient data structures access similarity functions implemented implements search functions similar query scanning database comparing query maintaining priority queue size keeps track given query relative weights different find functions code database containing functions seconds single machine intel ghz cores ram effort underway developers make distributed version would allow source forager search large code databases without taking big performance hit large code database split smaller units searched parallel sorted results units merged 
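As a concrete illustration of the Jaccard-index similarity used above for set-valued feature classes such as numeric literals, a minimal sketch (the function name and the empty-set convention are supplied here, not taken from the paper):

def jaccard(a, b):
    # Jaccard index |A intersect B| / |A union B| of two feature observations.
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # assumption: two empty observations count as identical
    return len(a & b) / len(a | b)

# e.g. numeric-literal observations of two binary-search implementations
print(jaccard({-1, 0, 1, 2}, {0, 1, 2}))  # 0.75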
using merge algorithm int bins int key int array int min int max max min return else int midpoint int floor array midpoint key return bins key array midpoint max else array midpoint key return bins key array min midpoint else return midpoint fig example source forager result query fig result recursive implementation binary search online phase search similar code online search phase source forager takes query uses infrastructure obtain corresponds query infrastructure reuse creates consistent representation view code throughout infrastructure featureclass weight assigned determine importance weight determination based configuration source forager run sections provide overview different configurations combined similarity function defined two combining similarity functions weight assignment using weighted average sim simcombined extensible architecture source forager architecture allows easy extension add new one implements feature extractor determines given function corresponding similarity function currently implement feature extractors using however source forager tightly coupled codesonar processing tool used implement feature extractor existing represented container data structures lists maps trees similarity functions work level container data structures thus available reused additional feature extractors furthermore source forager tied functions kind program element underlying architecture also limited thus source forager perform code searches programs written languages two ncl total number length simc similarity function respectively weight assigned query compared code database using combined similarity function functions highest similarity scores query returned results configurable limit fig shows example source forager result code fig used query two implementations source forager first one version code database implemented large json object various similarity functions algorithm implemented python implementation allows easier quicker experimentation new ideas use version experiments reported iii code search section first describe different accompanying similarity functions employed source forager describe two configurations source forager first configuration selects subset basis performing code search configuration useful additional information available regarding code query second configuration relative importance specific domain ahead time using techniques configuration useful domain code query known seq table brief overview different employed source forager marked use jaccard index similarity function similarity functions used remaining accompany descriptions negate loop seq brief description coupling types used operations performed types skeleton tree structure loops conditionals decorated skeleton tree structure loops conditionals operations weighted terms processed natural language terms code graph cfg bfs cfg subgraphs size bfs used generating subgraphs graph cfg bfs cfg subgraphs size bfs used generating subgraphs graph cfg dfs cfg subgraphs size dfs used generating subgraphs graph cfg dfs cfg subgraphs size dfs used generating subgraphs modeled library calls calls made modeled libraries unmodeled library calls calls made unmodeled libraries library calls calls made libraries type signature input types return type local types types local variables numeric literals numeric data constants used string literals string data constants used comments associated comment words seq loop seq cond seq cond seq cond seq cond skeleton tree decorated skeleton tree fig example program fig 
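A minimal sketch of the combined similarity function (the weighted average of per-feature-class similarities) and of the priority-queue search loop described above; all names, and the default value of k, are illustrative assumptions:

import heapq

def combined_similarity(weights, sims, f, g):
    # sim_combined(f, g) = sum_C w_C * sim_C(f_C, g_C) / sum_C w_C
    total = sum(weights.values())
    return sum(w * sims[c](f[c], g[c]) for c, w in weights.items()) / total

def search(query, database, weights, sims, k=25):
    # Scan the database, keeping the k most similar functions in a min-heap.
    heap = []
    for name, feats in database.items():
        score = combined_similarity(weights, sims, query, feats)
        heapq.heappush(heap, (score, name))
        if len(heap) > k:
            heapq.heappop(heap)  # evict the current worst of the k best
    return sorted(heap, reverse=True)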
ast abstracted retaining loops conditionals else switch operationally feature extractor realized tree transducer drops ast nodes loops conditionals sequences loops conditionals encapsulated within sequence node empty sequences dropped intuition behind using code search similar functions tend similar loop conditional structures fig shows skeleton tree example code fig similarity function used skeleton tree featureobservations based tree edit distances let rough approximation distance two trees based sizes similarity functions table summarizes source forager describe associated similarity functions coupling consists types variables operated function coupled operations performed types set type operation pairs primitive types paired builtin arithmetic logical relational operations example int types classes paired operations including direct indirect field accesses method calls example pair bar indicates field foo aggregate data type bar accessed intuition behind including similar functions tend use similar pairs example fig operation coupling extracted set int int int int int int int int skeleton tree featureclass based abstract syntax tree ast function size max size size let fixed distance threshold set obtain approximate distance two trees follows pre pre max post post max size size otherwise pre sequence obtained performing traversal tree post sequence obtained performing traversal tree word edit distance sequences similarity function used skeleton tree computed simtree exact computation quartictime complexity size trees compared instead use fast edit distance gives similarity function complexity overall note also use rough approximation based size trees one two trees compared least twice large found using approximations opposed exact based similarity made discernible difference quality final search results obtained made big difference performance faster tests decorated skeleton tree similar skeleton tree except instead retaining loop conditional structure operations also retained ast discard common operations assignment cause excessive bloat intuition behind including similar functions use similar operations structurally similar locations fig shows decorated skeleton tree featureobservation example code fig similarity function used simtree weighted terms consist various terms source code function name comments local variable names parameter names function terms extraction subjected series standard preprocessing steps splitting words camelcase stemming lemmatization removing singlecharacter strings removal discards typical english stop words well stop words specialized code fixme todo xxx additionally use greedy algorithm splitting terms multiple words based dictionary lookup splitting handle case programmers choose identifiers combine multiple words without camelcase compute term frequencyinverse document frequency score term consider function document compute per project give terms inflated score terms often provide significant information functions purposes intuition behind including similar functions tend similar vocabulary example fig bin search high low found mid match similarity function two observations weighted terms uses cosine similarity simnl fig example corresponding adjacency matrix serializing adjacency matrix entries yields binary digits decimal node ordering adjacency matrix traversal order cfg implement multiple featureclasses based subgraphs control flow graph cfg function given cfg function begin either bfs traversal search dfs traversal node nodes traversed subgraph cfg 
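A sketch of the approximate skeleton-tree distance as described: the maximum of the pre-order and post-order sequence edit distances, with a cheap size-based fallback when one tree is at least twice as large as the other. Trees are represented here as (label, children) pairs; the fallback value and the helper names are assumptions:

def preorder(t):
    label, children = t
    out = [label]
    for c in children:
        out.extend(preorder(c))
    return out

def postorder(t):
    label, children = t
    out = []
    for c in children:
        out.extend(postorder(c))
    out.append(label)
    return out

def size(t):
    return len(preorder(t))

def edit_distance(a, b):
    # word-level Levenshtein distance over node-label sequences, O(|a||b|)
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def approx_tree_distance(t1, t2):
    s1, s2 = size(t1), size(t2)
    if max(s1, s2) >= 2 * min(s1, s2):
        return max(s1, s2)  # assumption: rough size-based distance
    return max(edit_distance(preorder(t1), preorder(t2)),
               edit_distance(postorder(t1), postorder(t2)))

The resulting distance is then mapped into a similarity value sim_tree on [0, 1]; the exact normalization is not reproduced here.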
involving nodes extracted fewer nodes reachable node including thrown away repeat process every node cfg extracting subgraphs size size cfg represent graph size integer representation representation graph obtained concatenating matrix rows order thus function cfg extract multiset shapes fig shows example converting integer manner implement following four based value traversal strategy chosen graph cfg bfs traversal strategy bfs graph cfg bfs traversal strategy bfs graph cfg dfs traversal strategy dfs graph cfg dfs traversal strategy dfs example fig extracted graph cfg bfs multiset intuition behind including similar functions tend similar structures similarity function used based generalized jaccard index two multisets min max iterates unique elements number times appeared multiset calls library functions implement three featureclasses extract calls various kinds library functions modeled library calls codesonar models large range library functions performing static analysis code calls made modeled library functions extracted unmodeled library calls calls made unmodeled library functions extracted calls function modeled codesonar whose definition available source code total number words universe vectors scores index value word library calls calls dynamic selection functions whose definitions available directory difcombining beneficial code search ferent caller function extracted use however useful performing functions heuristic identifying libraries code search may vary one query another example intuition behind including three featureconsider query function containing code classes similar code tends call library significant number functions functions three featuredevoid loops functions look values sets library functions called library function identical query function respect skeleton represented tuple includes name function tree thus performing code search together file name containing function declaraquery including skeleton tree lead tion example function calls strcpy strncpy results hand query function corresponding modeled library unusual loop conditional structure idiomatic calls function strcpy strncpy computation performed skeleton tree would useful code search instances type signature featureof distinctive structure code database would observations consist type signature function high similarity scores query function argument types return type function together thus useful select automatically argument types return type form multiset basis code search configuration source types example code fig forager called intuitively corresponding type signatures int int int int given query selected code search corresponding type signatures define function interface interaction sufficiently rest code similar code tends similar respect overall distribution interfaces therefore type signatures could help code search prepare dynamic selection perthe generalized jaccard index used query basis take following steps offline similarity function code database retrieve random sample local types featureof random sampling gives inexpensive observations consist set types local variables estimate distributions across entire intuition behind using local variable types code search code database similar code creates operates variables similar types example code fig local types calculate similarity threshold computing pairwise similarity scores featureobservation int observations taking sum means constants implement two extract standard deviations similarity scores two featureconstants function observations 
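Two sketches for the CFG-based feature classes above: serializing a k-node subgraph's adjacency matrix, row by row in traversal order, into a single integer, and the generalized Jaccard index over the resulting multisets. The names and the row-major bit order are illustrative assumptions:

from collections import Counter

def encode_subgraph(nodes, edges):
    # nodes: the k subgraph nodes in traversal (BFS or DFS) order;
    # edges: a set of (src, dst) pairs. Rows of the adjacency matrix
    # are concatenated into one bit string and read as an integer.
    bits = 0
    for u in nodes:
        for v in nodes:
            bits = (bits << 1) | int((u, v) in edges)
    return bits

print(encode_subgraph("abc", {("a", "b"), ("b", "c")}))  # 0b010001000 = 136

def generalized_jaccard(m1: Counter, m2: Counter):
    # sum over unique elements of the min counts, over the sum of max counts
    keys = set(m1) | set(m2)
    num = sum(min(m1[k], m2[k]) for k in keys)
    den = sum(max(m1[k], m2[k]) for k in keys)
    return num / den if den else 1.0  # assumption for two empty multisets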
considered similar similarity numeric literals described score similarity threshold string literals online query posed take following steps set literal strings used function intuition behind using sets constants code search performed parallel compare query similar code typically uses similar constants sample size nsamp comments featureand count number similar observations consist comments associated function comments represented set words select code search tuniq tuniq threshold common intuition behind using comments code search samp comments similar pieces code likely use indicates sufficiently unique similar vocabulary example code fig sample example tuniq indicates comments found match similar less combining using several featuresample considered distinctive enough classes combination allows source forager obtain good warrant inclusion results fairly robust manner using different assigned weight exactly dimensions code example consider exactly based whether selected search implementation fig see variables named process weights used combining mid low high used two conditionals similarities code search knested inside single loop integer division search carried query integer operation performed put together observations hallmarks brief study distributions skeleton tree corpus revealed data point implementation function functions code database described obtain functions similar query table task categories used queries similar gives number similar functions manually found given task category partial reports many function pairs moss considered potential clones significant reports function pairs least code overlap weight generation note need additional knowledge query however know ahead time query belongs specific domain information available regarding constitutes similar code domain use techniques learn good weights domain ahead time use weights code search future queries domain given particular data set labeled similar code generate weights training binaryclassification support vector machine svm train using raw code text even raw sets use svm training process generate relative weights similarity scores train svm similarity scores directly similarity scores two functions assembled similarity vector svm trained examples similarity vectors similar dissimilar functions labeled accordingly technique allows optimize ahead time relatively weighted code search using similarity functions employed code search query svm uses linear classifier allows convenient interpretation internal weights final step extract internal weights normalize relative sum magnitudes truncating negative weights normalized weights used directly weights provides details corpus training process course obvious weights obtained training classification purposes useful ranking results queries measures effectiveness strategy practice moss detected task category binary search edit distance insertion sort knapsack modular exponentiation non recursive depth first search red black tree left rotate similar partial significant task involves searching relevant documents group documents include relevant documents case source forager documents functions documents also known distractors leads naturally following question much source forager performance degrade increase number distractors code base searched experimental setup methodology source forager uses codesonar engine analyze corpora implement feature extractors codesonar handles projects tens millions lines code codesonar also exposes wealth information program apis source forager 
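A hedged sketch of the dynamic feature-class selection described above: offline, sample observations per feature class and derive a similarity threshold from the distribution of pairwise scores (mean plus a constant number of standard deviations); online, select a feature class only if the query's observation is sufficiently distinctive in the sample. The constant 2, the sample size, and the uniqueness threshold t_uniq are assumptions, since the paper's values are not reproduced here:

import random
import statistics

def calibrate(observations, similarity, n_sample=100):
    # offline: threshold = mean + 2 * stdev of pairwise similarity scores
    # (observations is a list of feature observations from the database)
    sample = random.sample(observations, n_sample)
    scores = [similarity(a, b)
              for i, a in enumerate(sample) for b in sample[i + 1:]]
    return statistics.mean(scores) + 2 * statistics.stdev(scores), sample

def is_distinctive(query_obs, sample, similarity, threshold, t_uniq=0.05):
    # online: select this feature class only if few sampled observations
    # are "similar" (score at or above the threshold) to the query's
    near = sum(similarity(query_obs, obs) >= threshold for obs in sample)
    return near / len(sample) <= t_uniq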
feature extractors implemented codesonar plugins use apis consequently source forager inherits codesonar requirement programs must compilable analyzable tasks experiments assess source forager performance various configurations tasks set follows query function set known relevant functions similar query relevant functions treated ground truth relevant functions mixed many functions distractors together form code database used experiment source forager searches code database similar functions compute informationretrieval statistics based ranking functions returned results queries use two query sets tasks representing two domains one called represents algorithmic code queries created seven tasks outlined table manually curated total functions accomplish one seven tasks functions mostly obtained github written variety programmers none authors paper functions accomplish specific task manually vetted experimental evaluation section outlines research questions seek answer experiments describes setup methodology used experiments presents results experiments research questions experiments designed answer following research questions individual described perform tasks relative combining using dynamic selection improve source forager performance combining using supervised learning improve source forager performance query domain known similar thus total base queries use sets functions queries desired search results consider appropriate proxy queries performed search results expected users algorithm domain made labeled queries available make sure similar functions found clones ran moss detector given group programs moss reports program pairs may clones along overlap percentage table reports moss findings run using default settings table partial overlap represents pair moss reports possible clones significant overlap counts possible clones least overlap observe many function pairs marked manually similar clones thus recognizing similar function pairs corpus nontrivial challenge second query set use called represents code queries systems programming looked three implementations standard library musl libc diet libc uclibc define function categories corresponding functions three implementations provide assume within function category three libc implementations domain queries example musl libc sprintf labeled similar diet libc sprintf uclibc sprintf dissimilar everything else distractor functions distractor functions taken openly available muse corpus mainly consist code fedora source packages srpms feature extractors currently require compilable code fedora srpms provide due large size distractorfunction corpus manually vetted distractor functions sure irrelevant queries issued possible distractor functions indeed relevant queries retrieval statistics exception experiments reported fig experiments use distractors retrieval statistics compute mean average precision map retrieval statistic common information retrieval map typically used measure quality ranked retrieval results map takes account rank relevant documents retrieved results map provides measure quality across recall levels map mean average precision computed query average query given total number documents searched number documents marked relevant query precision documents requested retrieved document relevant otherwise average precision points new relevant document retrieved ranked result list best map score achieved query relevant documents appear top search results weights applied techniques discussed provide labeled train svm instance 
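For clarity, mean average precision as used in the evaluation above, in a minimal sketch (the names are supplied here):

def average_precision(ranked, relevant):
    # precision is sampled at each rank where a new relevant item appears
    hits, total = 0, 0.0
    for i, item in enumerate(ranked, 1):
        if item in relevant:
            hits += 1
            total += hits / i
    return total / len(relevant) if relevant else 0.0

def mean_average_precision(results, truth):
    # results[q]: ranked list returned for query q; truth[q]: relevant set
    return sum(average_precision(results[q], truth[q]) for q in truth) / len(truth)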
training set generated comparing two functions yielding single similarity vector consists similarity scores binary classification training instance implementations function otherwise use liblinear train svm classify function comparisons process takes roughly twenty milliseconds using technique able achieve accuracy svm trained extract normalize internal weights use code search configuration described within domain dataset divided multiple folds pairs weights extracted training set used obtain map scores test set weights trained subset given domain tested using queries different subset domain configuration described used train weights queries source forager configurations experiments run source forager many configurations configuration defined weight assigned featureclasses given table weights used performing code search query weight corresponding weights corresponding set query giving equal importance queries query subset selected given equal weights described dynamic selection adds small overhead query new random configuration used follows random subset selected selected given equal weights repeat process times different random selections report mean results trials query use weights learned domain query belongs described query use weights learned domain query belong naive python implementation adds average overhead seconds per query dynamic selection currently selection decision done sequentially instead parallel suggested url http note unlike configurations configurations permit weights give different importance levels different finding domain query known training data available combining multiple featureclasses using weights derived supervised learning effective strategy code search results discussion configuration tests whether weights learned one domain useful different left side fig shows individual featuredomain rightmost two bars fig show class performs tasks isolation hard derive single set relative weights experiment addresses solo weighted work well queries domains thus absence terms performs best individually domain information query preferred thus fig shows source forager result quality scales increasing sizes experiment addresses finding drive source forager using source forager used one weighted terms best configurations experiment one would expect map option however fig shows performance scores decline distractors proliferate however consider different varies considerably depending relevant sets contain items competing query set variance suggests distractor sets five orders magnitude larger different important different kinds queries finding resilient map scores indicate source forager returns results even distractors outnumber relevant items several orders magnitude asks whether multiple usefully combined whether good way combination manner featureclasses combined configuration represents baseline compare configurations configuration selects different subsets basis sanity check selections performed also compare configuration randomly selects subsets every query right side fig shows performs better compared also outperforms solo configurations left side fig threats validity issue whether evaluation benchmarks appropriate potential threat validity information retrieval system mitigate threat source forager several ways first use benchmark queries two different domains second use moss plagiarism detector show manually labeled set relevant functions trivial clones third draw data sets code written arbitrary programmers artificial programs written combined various ways perform 
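A sketch of the weight-generation step described above: train a linear SVM on similarity vectors and convert its internal weights into per-feature-class search weights. LinearSVC is backed by LIBLINEAR, which the paper reports using; calling it through scikit-learn, and the exact order of truncation versus normalization, are assumptions:

import numpy as np
from sklearn.svm import LinearSVC

def learn_weights(sim_vectors, labels):
    # sim_vectors: one row per function pair, one column per feature class;
    # labels: 1 for similar pairs, 0 for dissimilar pairs
    svm = LinearSVC().fit(np.asarray(sim_vectors), np.asarray(labels))
    w = svm.coef_.ravel()
    weights = np.clip(w, 0.0, None)   # truncate negative weights
    return weights / np.abs(w).sum()  # normalize by the sum of magnitudes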
code searches explored part vast space combinations results speak tried find map scores configuration dynselect good designed experiments configurations test whether selections made indeed necessary useful find finding absence additional information query combining multiple dynamically selecting basis effective strategy code search addresses scenario domain query known additional information available regarding domain described configuration tests source forager scenario relative importance given domain form weights also makes code search efficient eliminating overhead selection right side fig shows svmweights outperforms configurations related work engines several popular codesearch tools grep tokenized source code github searchcode open hub etc tools useful fall short many use cases exploit rich semantics code example top search results term dfs code projects github yields function declarations macro names include directives mention dfs actually useful sourcerer engine combines search techniques information relations among programming entities like packages classes methods fields numbers following configuration section indicates map scores configuration respectively library calls unmodeled library calls map score comments string literals numeric literals local types type signatures modeled library calls graph cfg dfs graph cfg dfs graph cfg bfs graph cfg bfs weighted terms decorated skeleton tree skeleton tree coupling fig information retrieval performance distractors left side plot coupling comments uses configuration given feature right side plot uses various source forager configurations leverage multiple simultaneously although map score exactly zero several therefore round labels map score strathcona returns relevant java code examples developers learning use complex frameworks uses several heuristics based hierarchies method calls type uses source forager could also use applicable heuristics sourcerer strathcona featureclasses additionally demonstrates search using complex structures decorated skeleton trees codegenie proposes code search user supplies set unit tests code nent want find codegenie leverages sourcerer perform search test cases refine results source forager could used replacement sourcerer perform similar code search codegenie stollee perform code search based logical characterizations programs behaviors obtained via symbolic execution query consists concrete pairs desired code fragment approach precisely captures semantics corpus elements imnumber distractor functions mediately handle common programming constructs loops global variables also restricts size fig impact number distractor functions map scores using dynprogram elements corpus symbolic execution select queries horizontal axis log scale larger elements may lead path explosion source forager easily extended use pairs additional featureclass scenarios restrictions acceptable sourcerer also uses fingerprints capture xsnippet parseweb specialized structural information code depth loop engines xsnippet looks specifically code instantiates nesting presence absence certain language constructs objects given type given context parseweb similar queries sourcerer powered lucene focus code sequences instantiate objects codify http opposed search extracts stores large amount metadata symbol source forager program provides user interface querying metadata codify aids understanding browsing code goal source forager code search different find source code similar query detection source forager code searches differ typical 
clone detection problem interested finding code semantic syntactic similarity therefore use range span syntactic semantic source forager notion similarity neatly fall definitions standard clone types search finding similar machine code useful finding known vulnerabilities code source code available primary difference code search machine code poorer syntactic semantic structural information available compared source code result overlap techniques research search focused tackling different problems search across different cpu architectures compiler optimizations compilers operating systems etc rosenblum train svms features extracted source code attempt classify programs author source forager builds idea training svm similarity scores derived extracting internal weights trained svm strengthen combined similarity function used code search references sadowski stollee elbaum developers search code case study found softw linstead bajracharya ngo rigor lopes baldi sourcerer mining searching software data mining knowledge discovery vol apr reiss code search int conf softw stollee elbaum dobos solving search source code trans softw engineering methodology vol may sahavechaphan claypool xsnippet mining sample code conf prog systems languages applications begel codifier search user interface workshop interaction inf retrieval lemos bajracharya ossher morla masiero baldi lopes codegenie using search reuse source code int conf automated softw holmes murphy using structural context recommend source code examples int conf softw crockford introducing json apr online available http jermaine pliny database online available http zhang shasha simple fast algorithms editing distance trees related problems siam vol guha jagadish koudas srivastava approximate xml joins int conf management data acm nltk project stopwords corpus mar online available http feild binkley lawrie empirical comparison techniques extracting concept abbreviations identifiers proc iasted int conf software engineering applications sea citeseer khoo mycroft anderson rendezvous search engine binary code proceedings working conference mining software repositories guyon elisseeff introduction variable feature selection mach learn vol mar schleimer wilkerson aiken winnowing local algorithms document fingerprinting int conf management data eta labs musl libc online available https diet libc contributors diet libc online available https andersen uclibc online available https leidos holdings muse corpus apr online available http fan chang hsieh wang lin liblinear library large linear classification journal machine learning research vol lemos bajracharya ossher masiero lopes approach code search application reuse auxiliary functionality information software technology vol apr stollee gouse brun repairing programs semantic code search int conf automated softw thummalapenta xie parseweb programmer assistant reusing open source code web int conf automated softw roy cordy koschke comparison evaluation code clone detection techniques tools qualitative approach sci comput may david yahav code search executables proceedings acm sigplan conference programming language design implementation ser pldi new york usa acm eschweiler yakdan discovre efficient identification bugs binary code network dist syst security pewny garmany gawlik rossow holz crossarchitecture bug search binary executables security privacy ieee symposium ieee rosenblum zhu miller wrote code identifying authors program binaries proceedings european conference research computer security
| 6 |
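The code-search row that closes above describes training a linear SVM with liblinear on similarity vectors of function pairs (one similarity score per feature class, label 1 when both functions implement the same task), then extracting and normalizing the SVM's internal weights to combine feature-class similarities at query time (the "svmweights" configuration). The sketch below illustrates that idea under stated assumptions: scikit-learn's liblinear-backed `LinearSVC` stands in for the paper's liblinear setup, and the non-negativity clipping and sum-to-one normalization are illustrative choices, not the authors' exact procedure.

```python
import numpy as np
from sklearn.svm import LinearSVC  # liblinear-backed linear SVM


def train_featureclass_weights(sim_vectors, labels, C=1.0):
    """Learn per-feature-class weights from labeled function pairs.

    sim_vectors: (n_pairs, n_featureclasses) similarity scores per pair.
    labels: 1 if the two functions implement the same task, else 0.
    """
    clf = LinearSVC(C=C)
    clf.fit(sim_vectors, labels)
    w = clf.coef_.ravel()          # internal weights of the trained SVM
    w = np.clip(w, 0.0, None)      # assumption: keep non-negative contributions
    total = w.sum()
    return w / total if total > 0 else np.full_like(w, 1.0 / len(w))


def combined_similarity(query_sims, weights):
    """Weighted combination of per-feature-class similarities for ranking."""
    return float(np.dot(query_sims, weights))
```

At query time, `combined_similarity` would rank corpus functions by the weighted sum of their per-feature-class similarities to the query function, which mirrors how the row describes using the extracted weights in place of hand-set or uniform configuration weights.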
conference paper accepted ieee winter conference applications computer vision (wacv)

towards robust deep neural networks bang

andras rozsa, manuel, terrance boult
vision security technology (vast) lab, university colorado colorado springs, usa (arozsa, mgunther, tboult)

Abstract: machine learning models, including deep neural networks, vulnerable small perturbations cause unexpected classification errors. unexpected lack robustness raises fundamental questions generalization properties, poses serious concern practical deployments. perturbations remain imperceptible; formed adversarial examples demonstrate inherent inconsistency vulnerable machine learning models human perception. prior work casts problem security issue; despite significance discovered instabilities ensuing research, cause well understood, effective method developed address problem. paper, present novel theory explain unpleasant phenomenon exists deep neural networks. based theory, introduce simple, efficient, effective training approach, batch adjusted network gradients (bang), significantly improves robustness machine learning models. bang technique rely form data augmentation utilization adversarial images training. resultant classifiers resistant adversarial perturbations, maintaining even enhancing overall classification performance.

[figure 1, improving robustness via bang: figure demonstrates enhanced robustness perturbations generated via adversarial generation method mnist digits. samples displayed top rows; underneath, raw test images show distorted versions formed smallest perturbations change correctly classified class labels test samples. second rows present perturbations obtained regularly trained learning models; last rows show examples generated networks trained via batch adjusted network gradients (bang) approach. indicated perturbations highly perceptible; learning models trained bang become robust adversarial perturbations. row labels read "mnist samples" / "distortions yielding misclassifications"]

Introduction
machine learning broadly used various vision applications. recent advances deep learning made deep neural networks powerful learning models successfully applied different vision problems. recent performance gain mainly result improvements two fields, namely building powerful learning models designing better strategies avoid overfitting; advancements leveraged use larger datasets massive computing. although deep neural networks (dnns) achieve performance wide range tasks, generalization properties learning models questioned: szegedy existence adversarial examples revealed. dnns capable learning feature embeddings enable successfully adapted different problems, generally considered generalize well, hence expected robust moderate distortions inputs. surprisingly, adversarial examples formed applying imperceptible perturbations otherwise correctly recognized inputs lead machine learning models, including art dnns, misclassify samples, often high confidence. highly unexpected intriguing property machine learning models highlights fundamental problem researchers trying solve: explain adversarial examples exist. several controversial explanations proposed. hypothesized adversarial instability exists due dnns acting linear classifiers, allow even imperceptibly small perturbations applied inputs spread among higher dimensions radically change outputs. belief challenged: analyzing experimenting dnns trained recognize objects unconstrained conditions demonstrated classifiers locally linear changes recognized object, otherwise dnns act nonlinearly.
performing various experiments concluded adversarial instability rather related intrinsic deficiencies training procedure objective function model problem addressed paper preventing attacks via adversarial examples focus overall robustness generalizability dnns fundamental problem deep learning recently received increasing attention researchers considering learning models applied computer vision tasks classification many incorrectly uncertainly recognized inputs corrected improved small perturbations naturally occurring problem vision systems paper introduce theory instability machine learning models existence adversarial examples evolutionary stalling training network weights adjusted using gradient loss evolving eventually classify examples correctly ideally prefer broad flat regions around samples achieve good generalization adversarial robustness however training sample correctly classified contribution loss thus forming weight updates reduced evolution local decision surface stalls correctly classified samples flatten extend surroundings improve generalization therefore contributions correctly classified training samples boundary adjustments highly decreased compared batch elements samples end stuck close decision boundaries hence susceptible small perturbations flipping classifications mitigate evolutionary stalling propose batch adjusted network gradients bang training algorithm experimentally evaluate robustness using combination adversarial perturbations random distortions paper explores impact bang parameters architectural variations dropout instability adversarial ness conclusion validate theory experimentally demonstrating bang significantly improves robustness deep neural networks optimized two small datasets trained learning models maintain even improve overall classification performance related work deep neural networks dnns achieve high performance various tasks able learn generalization priors training data szegedy showed machine learning models misclassify samples formed slightly perturbing correctly recognized inputs adversarial examples indistinguishable originating counterparts human observers unexpected existence presents problem authors introduced first technique capable reliably finding adversarial perturbations claimed adversarial examples generalize across different learning models computationally cheaper adversarial example generation algorithm fast gradient sign fgs method presented goodfellow approach also uses inner state dnns efficient fgs requires gradient loss calculated authors demonstrated using adversarial examples generated fgs implicitly enhanced objective function accuracy robustness trained classifiers improved paper focusing adversarial machine learning kurakin proposed new algorithms extending fgs method target specific class calculate apply gradients iteratively instead single gradient calculation via fgs authors compared effect different types adversarial examples used implicit adversarial training found results vary based upon type applied adversarial examples rozsa introduced approach capable efficiently producing multiple adversarial examples input demonstrated using samples explicitly higher magnitudes adversarial perturbations sufficient minimal outperform regular adversarial training authors also presented new metric perceptual adversarial similarity score pass better measure distinguishability original adversarial image pairs terms human perception commonly used norms sensitive small geometric distortions remain unnoticeable pass 
applicable quantify similarity quality adversarial examples although adversarial training implicit explicit demonstrated decrease instability learning models forming examples still computationally expensive limits application techniques furthermore considering various adversarial generation techniques utilizing certain types samples might lead improved robustness adversarial examples techniques alternatively zheng proposed stability training lightweight still effective method stabilize dnns naturally occurring distortions visual input introduced training procedure uses additional stability objective makes dnns learn weights minimize prediction difference original perturbed images order obtain general robustness rely class perturbations authors applied gaussian noise distort training images conducted experiments different network topologies training procedures improve robustness dnns authors proposed deep contractive network dcn imposes layerwise contractive penalty dnn formulated penalty aims minimize output variances respect perturbations inputs enable network explicitly learn flat invariant regions around training data based positive initial results concluded adversarial instability rather result intrinsic deficiencies training procedure objective function model topologies luo proposed technique selects uses image classification authors demonstrated negative effect foveated perturbations classification scores significantly reduced compared entire perturbations graese showed transformations normal image acquisition process also negate effect carefully crafted adversarial perturbations preprocessing techniques alleviate problem posed adversarial images solve inherent instability dnns words methods treat symptoms disease summary wide variety less efficient approaches proposed literature aim improving robustness generalization properties dnns none proved effective enough approach section first briefly describe intuition unexpected adversarial instability exists machine learning models afterwards present simple straightforward modification training procedure aims optimize weights way resulting dnns become robust distortions inputs intuition training inputs batch correctly others incorrectly classified general calculated loss thus gradient loss misclassified ones larger correctly classified inputs batch therefore training iteration weight updates learning inputs badly predicted hand correctly classified samples significant impact advancing decision boundaries remain positions close obtained becoming correctly classified due evolutionary stalling samples low gradients form flatter invariant region around consequently samples regions remain susceptible adversarial perturbations even small perturbation push back incorrect class increasing contribution correctly classified examples batch weight updates forcing continue improving decision boundaries reasonable think flatten decision space around training samples train robust dnns implementation core concept batch adjusted network gradients bang approach variation batch normalization however rather trying balance inputs layers seek ensure contributions weight updates balanced among batch elements scaling gradients let dive details introduce notations use formulate bang short scale gradients batch elements used compute weight updates training iteration let consider network weights layered structure layers respective weights given input partial derivatives loss respect output layer simplicity leave structure weights layers structure layer outputs 
either fully connected layers threedimensional convolutional layers bang goal balance gradients batch scaling lower magnitudes order determine highest gradient batch inputs given layer terms norm use basis balancing magnitudes gradients batch weight updates calculated scaling derivative batch learning rate max max key parameter approach specifies degree gradient balancing among batch elements exponent might appear little complex ambiguous sole purpose scale gradients small magnitudes others larger norms assuming regular backward pass combines gradients batch elements calculating normally scaled learning rate used update weights combining previous weight update scaled momentum bang produces second set parameter approach used scaling general acts local learning rate play important role future work throughout experiments keep bang parameters fixed layers actually modify original learning rate note although approach changes actual calculation weight updates layers impact backpropagation original gradient network finally implemented bang applying small modifications regular training procedure negligible computational overhead experiments evaluate approach conducted experiments slightly modified versions lenet quick models distributed caffe namely running preliminary experiments bang added dropout layer model architectures serves multiple purposes observed bang tends cause overfitting trained lenet networks resultant models made confident classifications even misclassified test images additional dropout layer alleviates problems adjusted network architectures also result improved classification performances regular bang training obtaining learning models regular bang training assess compare robustness classifiers two ways important note select best training models based performance validation set evaluations simply use models obtained last training iteration primary goal measure evolving robustness believe decision leads fairer comparison however classification performance selected models optimal finally would like mention conducted experiments discover effectiveness bang used regularly trained models found robustness resultant networks even comparable trained scratch first evaluate adversarial vulnerability two adversarial example generation methods gradientbased fast gradient sign fgs method approach although latter capable forming multiple adversarial perturbations input target similar class approach referred aim form adversarial perturbations every correctly classified image mnist test set respectively consider adversarial example generation attempt successful direction specified either fgs leads misclassification constraint discrete pixel values range course limitation means formed perturbations may may adversarial nature highly perceptible human observers compare adversarial robustness classifiers collecting measures quantify quality produced adversarial examples purpose calculate perceptual adversarial similarity score pass original adversarial image pairs also determine norms adversarial perturbations although norm good metric quantify adversarial quality terms human perception demonstrate far actual perturbed image original sample second quantify robustness learning models evolve training applying general approach given pair classifiers one regularly trained obtained bang training add certain level random noise test images class correctly classified networks tested stages compute proportion perturbed images classified differently originating one previously described test assessing 
adversarial vulnerability explores two directions specified fgs method approach applying random distortions inspected image every noise level gives general evaluation although experimenting random noise universal rely specific adversarial generation technique small random perturbations cause misclassifications hard find hence collected table raining table highlights difference lenet models obtained using regular bang training accuracy mnist test set achieved success rates fgs adversarial example generation methods pass scores norms produced examples mnist test set listed accuracy accuracy fgs success rate pass figure odels rained bang plots summarize results lenet models trained bang using combinations tested grid two parameters step size step size trained single model combination show obtained accuracy mnist test set achieved success rates using fgs mean pass score adversarial examples mnist test images solid green line represents level regularly trained learning models better visual representation applied interpolation results qualitatively good explicitly forming adversarial perturbations furthermore order evaluate stability trained classifiers distorted images gaussian noise far beyond noise level considered imperceptible adversarial lenet mnist commenced experiments evaluating bang lenet model optimized mnist dataset mnist contains images overall used training validation remaining testing tested network originally four layers two convolutional two fully connected extended one additional dropout layer optimize without changing hyperparameters distributed caffe learning model trained batch size iterations using inverse decay learning rate policy initial learning rate since training procedure two parameters defined equation introduced equation trained lenet models parameter combinations grid evaluated accuracy adversarial vulnerability trained classifiers results conducted experiments visualized figure also show accuracies metrics indicating adversarial robustness table models obtained regular training optimized bang training see table fgs success rates achieved regular training dramatically decreased bang rate drops almost every single failed adversarial example generation attempt due blank gradients gradient loss respect original image label contains zeros means methods utilizing gradient loss succeed increase words balance contributions batch elements scaling gradients lower magnitudes resultant classifiers become resistant adversarial generation methods although success rates obtained method remain relatively high regular training bang training absolute improvement figure robustness andom istortions plots show evolving robustness lenet models obtained regular training table trained bang table displays improvement identifying test images per class correctly classified networks every iterations perturb times adding level gaussian noise specified standard deviation test networks several stages training plots show percentage distortions yielding misclassifications better visual representation applied interpolation ities examples degrade significantly lenet models trained bang compared regular training displayed figure degradation highlighted decreasing pass scores significantly increased norms perturbations listed table respect achieved classification performances find level degradation depending selected values phenomenon seen figure partially due random initializations result overfitting decision evaluate networks training iterations still observe bang yield improved classification 
performance regular training paired improved robustness listed table additionally conducted experiments quantify compare robustness random perturbations evolves training general approach selected test two classifiers table optimized regular training trained bang see figure regularly trained model initially highly susceptible larger distortions training progresses becomes stable settles approximately respect strongest class gaussian noise formed using standard deviation pixels contrarily classifier trained bang maintains significantly lower rates throughout whole training shown figure iterations strongest distortions alter original classification absolute improvements displayed figure also evaluated training bang quick model caffe trained dataset consists images training images images used validation testing purposes network architecture originally five layers three convolutional two fully connected extended one dropout layer learning model trained batch size iterations epochs use fixed learning rate decrease factor epochs another epochs due different nature training slightly adjusted bang parameters specifically classification performance significantly worse achieved lenet mnist yielding proportionately incorrectly classified samples applied lower local learning rates higher values scaling furthermore found scaling incorrectly classified inputs less correct ones beneficial effects robustness hence applied specified values incorrectly classified batch elements similarly conducted experiments lenet trained classifiers possible combinations parameters grid measured accuracy adversarial vulnerability networks results visualized figure models obtained regular training optimized bang training show accuracies metrics indicating adversarial robustness table see table fgs success rates achieved regular training significantly decreased bang rate drops approximately majority failed adversarial example generation attempts due blank gradients figure shows increase classifiers become resistant adversarial generation methods higher levels success rates comparison lenet might table raining table shows difference classifiers obtained using regular bang training accuracy test set achieved success rates fgs adversarial example generation methods pass scores norms formed examples test images listed accuracy accuracy fgs success rate pass figure bang odels plots summarize results models trained bang using combinations tested grid two parameters step size step size trained single model combination show obtained accuracy test set achieved success rates fgs mean pass score adversarial examples test images solid green line represents level regularly trained learning models better visual representation applied interpolation ply due fact classifiers trained less accurate therefore learning incorrect samples batch still large contribution weight updates success rates achieved remain high quality adversarial examples degrades significantly compared regular training degradation highlighted decreasing pass scores shown figure significantly increased norms adversarial perturbations listed table finally shown table train classifiers bang slightly outperform models regular training terms classification accuracy course achieved overall performance depends chosen parameters depicted figure finally ran experiments better quantify compare robustness trained classifiers random perturbations evolves training similarly experiments lenet selected two classifiers table testing trained regularly optimized bang see figure regularly 
trained model highly susceptible larger distortions robustness improve training finally achieves respect strongest class gaussian noise formed using standard deviation pixels contrarily model trained bang remains robust throughout training epochs shown figure end strongest distortions change original classification absolute improvements visualized figure conclude although bang enhanced robustness random perturbations results less impressive comparison lenet least respect strongest distortions conclusion paper introduced theory explain intriguing property machine learning models namely regular training procedure prevent samples forming flatter broader regions around evolutionary stalling yields samples remaining close regular training bang training absolute improvement figure robustness andom istortions plots show evolving robustness models obtained regular training table trained bang table displays improvement identifying test images per class correctly classified networks every second epoch perturb times level gaussian noise specified standard deviation test networks different stages training plots show percentage distortions yielding misclassifications better visual representation applied interpolation cision boundaries hence susceptible imperceptibly small perturbations causing misclassifications address problem proposed novel approach improve robustness deep neural networks dnns slightly modifying regular training procedure approach require additional training data neither adversarial examples sort data augmentation achieve improved robustness overall performance trained network maintained even enhanced experimentally demonstrated optimizing dnns batch adjusted network gradient bang technique leads significantly enhanced stability general balancing contributions batch elements forming weight updates bang allows training samples form flatter invariant regions around trained classifiers become robust random distortions demonstrated fast gradient sign fgs method approach targeted closest scoring class also less vulnerable adversarial example generation methods visualize advancement achieved bang training terms improved adversarial robustness figure correctly classified mnist test images presented along adversarial examples formed via approach dnns trained regularly bang bang helps mitigate adversarial instability learning models maintain even improve overall classification performance proposed approach achieves results negligible computational overhead regular training procedure although managed achieve good results two dnns trained different datasets found bang parameters needed adjusted problems obtain better results exploring effect different rameters different layers changing contributions correctly incorrectly classified batch elements considered future work focus better understanding bang enhancing algorithm exploring application training dnns datasets might argue similar balancing effect achieved distillation carlini demonstrated defensive distillation effective improve adversarial robustness effectiveness bang adversarial perturbations obtained via various adversarial example generation techniques likely varies kurakin observed adversarial training research needs explore summary conclude adversarial instability dnns closely related applied training procedures claimed huge potential research area advance generalization properties machine learning models overall performances well acknowledgments research based upon work funded part nsf part office director national intelligence odni 
intelligence advanced research projects activity iarpa via iarpa contract views conclusions contained herein authors interpreted necessarily representing official policies endorsements either expressed implied odni iarpa government government authorized reproduce distribute reprints governmental purposes notwithstanding copyright annotation thereon references carlini wagner defensive distillation robust adversarial examples arxiv preprint fawzi fawzi frossard fundamental limits adversarial robustness international conference machine learning icml workshop deep learning fawzi frossard robustness classifiers adversarial random noise advances neural information processing systems goodfellow shlens szegedy explaining harnessing adversarial examples international conference learning representation iclr graese rozsa boult assessing threat adversarial examples deep neural networks ieee international conference machine learning applications icmla rigazio towards deep neural network architectures robust adversarial examples international conference learning representation iclr workshops zhang ren sun deep residual learning image recognition ieee conference computer vision pattern recognition cvpr hein andriushchenko formal guarantees robustness classifier adversarial manipulation advances neural information processing systems ioffe szegedy batch normalization accelerating deep network training reducing internal covariate shift international conference machine learning icml jia shelhamer donahue karayev long girshick guadarrama darrell caffe convolutional architecture fast feature embedding international conference multimedia acm keskar mudigere nocedal smelyanskiy tang training deep learning generalization gap sharp minima arxiv preprint krizhevsky hinton learning multiple layers features tiny images kurakin goodfellow bengio adversarial machine learning scale international conference learning representation iclr lai pan liu yan simultaneous feature learning hash coding deep neural networks ieee conference computer vision pattern recognition cvpr lecun cortes burges mnist database handwritten digits lecun jackel bottou cortes denker drucker guyon muller sackinger simard learning algorithms classification comparison handwritten digit recognition neural networks statistical mechanics perspective lin yang hsiao chen deep learning binary hash codes fast image retrieval ieee conference computer vision pattern recognition cvpr workshops long shelhamer darrell fully convolutional networks semantic segmentation ieee conference computer vision pattern recognition cvpr luo boix roig poggio zhao foveationbased mechanisms alleviate adversarial examples arxiv preprint ouyang wang zeng qiu luo tian yang wang loy deformable deep convolutional neural networks object detection ieee conference computer vision pattern recognition cvpr peck saeys goossens roels lower bounds robustness adversarial perturbations advances neural information processing systems rozsa rudd boult facial attributes adversarially robust international conference pattern recognition icpr rozsa rudd boult adversarial diversity hard positive generation ieee conference computer vision pattern recognition cvpr workshops srivastava hinton krizhevsky sutskever salakhutdinov dropout simple way prevent neural networks overfitting journal machine learning research jmlr szegedy liu jia sermanet reed anguelov erhan vanhoucke rabinovich going deeper convolutions ieee conference computer vision pattern recognition cvpr szegedy zaremba sutskever bruna erhan 
goodfellow fergus intriguing properties neural networks international conference learning representation iclr vinyals toshev bengio erhan show tell neural image caption generator ieee conference computer vision pattern recognition cvpr yang yan lei convolutional channel features ieee international conference computer vision iccv zhang chen saligrama efficient training deep neural networks supervised hashing ieee conference computer vision pattern recognition cvpr zheng song leung goodfellow improving robustness deep neural networks via stability training ieee conference computer vision pattern recognition cvpr
| 1 |
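The BANG row that closes above specifies that, in each training iteration, per-batch-element gradients at each layer are rescaled toward the largest gradient norm in the batch, with an exponent parameter controlling the degree of balancing, so that already-correctly-classified examples keep contributing to the weight update instead of stalling. The NumPy sketch below is a minimal reconstruction of that rule under stated assumptions: the name `psi` for the balancing exponent and the exact form of the scaling factor are illustrative, since the stripped text does not preserve the paper's equation precisely, and momentum plus the second (local learning rate) parameter are omitted.

```python
import numpy as np


def bang_layer_update(per_example_grads, lr, psi=1.0, eps=1e-12):
    """Balance per-batch-element gradients for one layer, then average.

    per_example_grads: (B, ...) array, each batch element's gradient of the
    loss w.r.t. this layer's weights.
    psi: degree of gradient balancing (psi = 0 recovers the ordinary update).
    """
    B = per_example_grads.shape[0]
    norms = np.linalg.norm(per_example_grads.reshape(B, -1), axis=1) + eps
    g_max = norms.max()
    # Scale low-magnitude gradients up toward the batch maximum, so samples
    # that are already classified correctly keep shaping (flattening) the
    # decision surface around themselves.
    scale = (g_max / norms) ** psi
    balanced = per_example_grads * scale.reshape((B,) + (1,) * (per_example_grads.ndim - 1))
    return -lr * balanced.mean(axis=0)
```

Note the row's claim that this changes only the weight-update computation per layer, not the backpropagated signal itself: the sketch accordingly rescales the per-example weight gradients after they are computed, leaving the gradients passed to earlier layers untouched.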
neural domain adaptation biomedical question answering

georg, dirk, mariana
hasso plattner institute, august bebel strasse, potsdam, germany
language technology lab, dfki, berlin, germany

Abstract: factoid question answering recently benefited development deep learning systems; neural network models outperform traditional approaches domains large datasets exist, squad (questions wikipedia articles). however, systems yet applied specific domains biomedicine, datasets generally small train system scratch; example, bioasq dataset biomedical comprises less factoid (single answer) list (multiple answers) instances. work, adapt neural system trained large dataset (squad, source) biomedical dataset (bioasq, target), employing various transfer learning techniques. network architecture based system, extended biomedical word embeddings novel mechanism answer list questions. contrast existing biomedical systems, system rely ontologies, parsers, entity taggers, expensive create. despite fact, systems achieve results factoid questions, competitive results list questions.

Introduction
question answering task retrieving answers question given one contexts. explored opendomain setting (voorhees) well settings bioasq biomedical domain (tsatsaronis). bioasq challenge provides factoid list questions; questions one several answers, respectively. work focuses answering questions, example: drugs included regimen fluorouracil epirubicin cyclophosphamide? restrict focus extractive instances: correct answers represented spans contexts, contexts relevant documents provided information retrieval system. traditionally, pipeline consists namedentity recognition, question classification, answer processing steps (jurafsky); methods applied biomedical datasets moderate success. creation datasets squad (rajpurkar) recently enabled development neural systems (wang jiang; xiong; seo; weissenborn), leading impressive performance gains traditional systems. however, creating datasets specific domains biomedical would expensive: need domain experts, therefore desirable. recent success deep learning based methods datasets raises question whether capabilities trained models transferable another domain via domain adaptation techniques. although domain adaptation studied traditional systems (blitzer) deep learning systems (chen; ganin; bousmalis; riemer; kirkpatrick), knowledge yet applied neural systems. bridge gap: employ various main adaptation techniques transfer knowledge trained neural system, fastqa (weissenborn), biomedical domain, using much smaller bioasq dataset. order answer list questions addition factoid questions, extend fastqa novel answering mechanism. evaluate various transfer learning techniques comprehensively. factoid questions show mere reaches results, improved forgetting cost regularization (riemer); list questions, results competitive existing systems. manual analysis subset factoid questions suggests results even better automatic evaluation states, revealing many incorrect answers fact synonyms answer.

Related Work
traditional question answering: traditional factoid list question answering pipelines subdivided recognition, question classification, answer processing components (jurafsky). systems also applied biomedical; oaqa system, besides number features, incorporate rich amount biomedical resources, including parser, entity tagger, thesaurus retrieve concepts synonyms. logistic regression classifier used question classification, candidate answer scoring; candidate answer generation, oaqa employs different strategies general questions, choice questions, quantity questions.
neural question answering: neural systems differ
traditional approaches algorithm subdivided discrete steps instead single model trained compute answer directly given question context typical architecture systems wang jiang xiong seo summarized follows embedding layer question context tokens mapped vector space example via glove embeddings pennington optionally character embeddings seo encoding layer token vectors processed independently question context usually recurrent neural network rnn interaction layer layer allows interaction question context representations examples wang jiang coattention xiong answer layer layer assigns start end scores context tokens done either statically wang jiang seo dynamic decoding process xiong fastqa fastqa fits schema reduces complexity architecture removing interaction layer maintaining performance weissenborn instead one several interaction layers rnns fastqa computes two simple features token appended embedding vectors encoding layer chose base work architecture performance faster training time reduced number parameters unsupervised domain adaptation unsupervised domain adaptation describes task learning predictor target domain labeled training data exists different source domain context deep learning common method first train autoencoder large unlabeled corpus domains use learned input representations input features network trained actual task using labeled source domain dataset glorot chen another approach learn hidden representations directly target task example training optimizes network computes hidden representations help predictions source domain dataset indistinguishable hidden representations unlabeled target domain dataset ganin techniques straightforwardly applied question answering task require large corpus biomedical pairs albeit answers required supervised domain adaptation contrast unsupervised case supervised domain adaptation assumes access small amount labeled training data target domain simplest approach supervised domain adaptation neural models network data source domain parameters data target domain main drawback approach catastrophic forgetting describes phenomenon neural networks tend forget knowledge performance source domain drops significantly trained new dataset even though directly aim good performance source domain measures catastrophic forgetting serve useful regularizer prevent progressive neural networks combat issue keeping original parameters fixed adding new units access previously learned features rusu method adds significant amount new parameters trained scratch target domain dataset small riemer use add additional forgetting cost term punishes deviations predictions original parameters another approach add loss punishes deviation original parameters kirkpatrick apply loss selectively parameters important source domain model network architecture based fastqa weissenborn neural system network architecture exchangeable treat black box subtle changes input output layer well decoding training procedure changes described following see figure overview system input layer first step words embedded highdimensional vector space use three sources embeddings concatenated form single embedding vector glove embeddings glove vectors pennington start probabilities pstart end probabilitiesp end end probabilities probabilities pend sigmoid softmax endscores scoresee end end scores yend start scores ystart extractive system biomedical embeddings glove embeddings character embeddings question type features context embeddings question embeddings figure network architecture system 
biomedical question answering core uses extractive neural system black box use fastqa weissenborn embedding layer modified order include biomedical word embeddings question type features output layer adjusted add ability answer list questions addition factoid questions word vectors trained billion tokens web documents vectors updated training character embeddings used fastqa weissenborn proposed originally seo employ convolutional neural network computes word embeddings characters word biomedical embeddings vectors trained using mikolov million pubmed abstracts pavlopoulos vectors specific biomedical domain expect help biomedical optional step add entity tag features token embeddings via concatenation entity tags provided entity tagger based umls metathesaurus entity tag feature vector bit vector umls semantic types states whether current token part entity type step applied explicitly noted finally encoding question type factoid list appended input vectors embedding vectors input invoke fastqa produce start end scores context tokens denote start scores ystart end scores conditioned predicted start position yend start index end index output layer adapted output layer convert start end scores span probabilities computation probabilities independent question type interpretation however depends question type factoid questions list answer spans interpreted ranked list answer candidates list questions answers certain probability threshold interpreted set answers question given start scores ystart ystart end scores yend yend compute start end probabilities follows pistart ystart end softmax yend sigmoid function consequence multiple tokens chosen likely start tokens network expected select single end token given start token hence softmax function finally probability given span answers question span pstart pend extension generalizes fastqa output layer multiple answer spans different start positions high probability allowing retrieve multiple answers list questions decoding given trained model start probabilities obtained running forward pass computing start probability equation top starts compute end probabilities given start end probabilities extract top answer spans ranked span simple step remove duplicate strings retain highest probability factoid questions output likely answer spans ranked list answers list questions learn probability cutoff threshold defines set list answers span choose threshold optimizes list score respective development set domain adaptation training procedure consists two phases phase train model squad using token score training objective weissenborn refer resulting parameters base model phase initialize model parameters base model continue optimization bioasq dataset smaller learning rate forgetting cost regularization avoid catastrophic forgetting means regularize model optionally add additional forgetting cost term proposed riemer defined loss current predictions base model predictions weight regularization also add loss term penalizes deviations base model parameters note advanced approach would apply loss selectively weights particularly important source domain kirkpatrick final loss computed inal loriginal hyperparameters set unless otherwise noted experimental setup datasets squad squad rajpurkar dataset questions relevant contexts answers sparked research interest development neural systems recently contexts excerpts wikipedia articles workers generated pairs large amount training examples squad lends perfectly source dataset bioasq bioasq challenge provides 
biomedical dataset tsatsaronis consisting questions relevant contexts called snippets pubmed abstracts possible answers question carefully created help biomedical experts work focus task phase bioasq challenge systems must answer questions snippets questions either questions summary questions factoid questions list questions employ extractive system restrict study answering factoid list questions extracting answer spans provided contexts bioasq training dataset contains questions factoid list questions questions snippets average average tokens long found around factoid questions around list questions least one extractable answer questions extractable answers answers spans computed via simple substring search provided snippets questions ignored training treated answered incorrectly evaluation training minimize loss gold standard answer spans however multiple answer spans refer answer synonyms minimize loss span lowest loss use adam kingma optimization squad learning rate starting halved whenever performance drops checkpoints phase continue optimization bioasq dataset smaller learning rate starting phases model regularized variational dropout rate gal ghahramani evaluation official evaluation measures bioasq mean reciprocal rank mrr factoid questions score list questions factoid questions list ranked answers five entries long score measured gold standard list elements measures details found http string matches used check correctness given answer list synonyms provided answers system response matches one answer counts correct evaluation use two different finetuning datasets depending experiment contains questions first three bioasq challenges additionally contains test questions fourth challenge used training dataset fifth bioasq challenge whereas used training fourth challenge datasets small perform report average performance across five folds use larger dataset except evaluating ensemble comparing participating systems previous bioasq challenges models implemented using tensorflow abadi hidden size context bioasq usually comprises multiple snippets processed independently parallel question answers snippets belonging question merged ranked according individual probabilities results domain adaptation section evaluate various domain adaptation techniques results experiments summarized table baseline baseline without transfer learning experiment trains model bioasq bioasq dataset small dropout rate used worked best preliminary experiments observe rather low performance expected applying deep learning small dataset experiments evaluate pure approach base model system trained squad tested bioasq experiment experiment base model training set observe performance increases significantly especially list questions increase expected network trained experiment factoid mrr list training bioasq training squad bioasq bioasq biomedical embeddings bioasq entity features bioasq squad bioasq forgetting cost bioasq loss original parameters table comparison various transfer learning techniques experiment model trained bioasq experiment model trained squad tested bioasq refer base model experiment base model parameters bioasq training set experiments evaluate utility domain dependent word vectors features experiments address problem catastrophic forgetting experiments conducted dataset list questions part squad dataset first time overall performance model question types much higher baseline system without transfer learning features order evaluate impact using biomedical word embeddings repeat experiment without 
experiment see factoid list performance drop percentage points respectively showing biomedical word embeddings help increase performance experiment append entity features word vector described section even though features provide network knowledge found actually harms performance factoid questions entity features active small dataset conjecture performance decrease due catastrophic forgetting continue study techniques combat catastrophic forgetting means regularize training experiment table base model mixture bioasq squad questions bioasq questions upsampled accordingly form joint training yielded significant performance gains experiment regularizes model via additional forgetting cost term proposed riemer explained section generally found technique increases performance factoid questions performance boost largest fact forgetting loss decreases performance list questions surprising predictions pushed towards predictions base model poor performance list questions experiment adds loss penalizes deviations base model parameters found performance decreases increase value shows technique help sake completeness report results lowest value yielded significant drop performance ensemble model ensembles common method tweak performance machine learning system ensembles combine multiple model predictions example averaging order improve generalization prevent evaluate utility ensemble training five models dataset using crossvalidation models evaluated test data data included application run ensemble averaging start end scores individual models passed sigmoid softmax functions defined table summarize average performance experiment factoid mrr list average best ensemble table performance model ensemble five models trained dataset tested test questions report average best single model performances well ensemble performance five models best performance across five models performance ensemble observe performance gains percentage points factoid questions less percentage point list questions relative best single model demonstrates small performance gain consistent literature comparison competing bioasq systems final results fifth bioasq challenge available time writing compare system best systems last year challenge comparison use best single model model ensemble trained see section evaluate model batches last year challenge using official bioasq evaluation tool batch contains questions factoid list questions note results underestimate system performance competing system responses manually evaluated humans system responses evaluated automatically using string matching potentially incomplete list synonyms fact qualitative analysis section shows many answers counted incorrect synonyms answer results summarized table compared best systems challenge batches question type categories system winning four five batches factoid questions consider biomedical factoid question answering especially considering results might higher manual evaluation results list questions slightly worse still last year results available http competitive surprising given network never saw list question prior finetuning phase due small test set sizes sampling error batch large causing single model outperform model ensemble batches qualitative analysis order get better insight quality predictions manually validated predictions factoid questions batch fourth bioasq challenge given best single model see table total factoid questions gold standard answer span one contexts according official bioasq evaluation questions predicted correctly gold 
standard answer ranked highest however identified answers counted correct synonyms gold standard answer examples include disease instead cmt disease tafazzin instead tafazzin taz gene instead beta glucocerebrosidase total labeled questions correct questions correct answer top predictions following give examples mistakes made system questions presented italics context underline predicted answers present correct answers boldface identified eight questions semantic type top answer differs question answer type cases completely wrong predictions however category also includes subtle mistakes like following yeast chromosome rdna cluster reside rdna cluster saccharomyces cerevisiae located left end right end chromosome xii predicted yeast species rdna cluster located ignored question asking chromosome another type mistakes top answer somewhat correct missing essential information labeled four predictions category like following example batch factoid mrr best participant single ensemble best participant list single avg ensemble table comparison systems last year fourth bioasq challenge factoid list questions batch question type list performance best competing system single model ensemble note qualitative analysis section suggests factoid performance batch would twice high synonyms contained gold standard answers early pregnancy cffdna testing allow sex determination fetus gold standard answer week gestation first trimester pregnancy given top answer summary judgment questions answered correctly questions answered correctly one top answers surprisingly high numbers considering low mrr score automatic evaluation table poor prior due lack list questions squad believe large scale corpora list questions would enhance performance unsupervised domain adaptation could interesting direction future work biomedical domain offers large amounts textual data might even contain questions corresponding answers believe leveraging resources holds potential improve biomedical paper described deep learning approach address task biomedical question answering using domain adaptation techniques experiments reveal mere combination biomedical word embeddings yield performance biomedical despite small amount training data lack feature engineering techniques overcome catastrophic forgetting forgetting cost boost performance factoid questions overall show employing domain adaptation neural systems trained datasets yield good performance domains large datasets available discussion future work significant result work results biomedical question answering achieved even absence feature engineering competing systems require structured resources biomedical ontologies parsers entity taggers resources available biomedical domain available domains system hand requires large dataset biomedical word embeddings trained unsupervised fashion small biomedical dataset suggests methodology easily transferable domains well furthermore explored several supervised domain adaptation techniques particular demonstrated usefulness forgetting cost factoid questions decreased performance list questions surprising model performance questions conclusion acknowledgments research supported german federal ministry education research bmbf software campus project genie references abadi ashish agarwal paul barham eugene brevdo zhifeng chen craig citro greg corrado andy davis jeffrey dean matthieu devin tensorflow machine learning heterogeneous distributed systems arxiv preprint john blitzer mark dredze fernando pereira biographies bollywood blenders domain 
adaptation sentiment classification acl volume pages konstantinos bousmalis george trigeorgis nathan silberman dilip krishnan dumitru erhan domain separation networks advances neural information processing systems pages minmin chen zhixiang kilian weinberger fei sha marginalized denoising autoencoders domain adaptation arxiv preprint yarin gal zoubin ghahramani dropout bayesian approximation representing model uncertainty deep learning arxiv preprint yaroslav ganin evgeniya ustinova hana ajakan pascal germain hugo larochelle laviolette mario marchand victor lempitsky training neural networks journal machine learning research xavier glorot antoine bordes yoshua bengio domain adaptation sentiment classification deep learning approach proceedings international conference machine learning pages dan jurafsky speech language processing pearson education india diederik kingma jimmy adam method stochastic optimization arxiv preprint james kirkpatrick razvan pascanu neil rabinowitz joel veness guillaume desjardins andrei rusu kieran milan john quan tiago ramalho agnieszka overcoming catastrophic forgetting neural networks proceedings national academy sciences page tomas mikolov ilya sutskever kai chen greg corrado jeff dean distributed representations words phrases compositionality advances neural information processing systems pages ioannis pavlopoulos aris kosmopoulos ion androutsopoulos continuous space word vectors obtained applying abstracts biomedical articles http jeffrey pennington richard socher christopher manning glove global vectors word representation empirical methods natural language processing emnlp pages http pranav rajpurkar jian zhang konstantin lopyrev percy liang squad questions machine comprehension text arxiv preprint metthew riemer elham khabiri richard goodwin representation stability regularizer improved text analytics transfer learning https andrei rusu neil rabinowitz guillaume desjardins hubert soyer james kirkpatrick koray kavukcuoglu razvan pascanu raia hadsell progressive neural networks arxiv preprint minjoon seo aniruddha kembhavi ali farhadi hannaneh hajishirzi bidirectional attention flow machine comprehension arxiv preprint george tsatsaronis georgios balikas prodromos malakasiotis ioannis partalas matthias zschunke michael alvers dirk weissenborn anastasia krithara sergios petridis dimitris polychronopoulos overview bioasq largescale biomedical semantic indexing question answering competition bmc bioinformatics ellen voorhees question answering track report trec volume pages shuohang wang jing jiang machine comprehension using answer pointer arxiv preprint dirk weissenborn georg wiese laura seiffe making neural simple possible simpler arxiv preprint caiming xiong victor zhong richard socher dynamic coattention networks question answering arxiv preprint yang zhou yue eric nyberg learning answer biomedical questions oaqa bioasq acl page
| 9 |
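The question-answering row that closes above describes two concrete mechanisms: an output layer that turns start scores into a softmax distribution and end scores (conditioned on a chosen start) into per-token sigmoid probabilities, with span probability p_start * p_end, duplicate-string removal, a five-entry ranked list for factoid questions, and a dev-set-tuned probability cutoff for list questions; and a fine-tuning loss with an added forgetting cost that penalizes deviation from the base (SQuAD-trained) model's predictions. The sketch below is a hedged reconstruction: the array shapes, the top-k and maximum-span-length decoding limits, and the squared-error form of the forgetting cost are illustrative choices, not the paper's exact ones.

```python
import numpy as np


def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def decode_spans(start_scores, end_scores, top_k=20, max_span_len=10):
    """start_scores: (T,); end_scores: (T, T), row s = end scores given start s."""
    p_start = softmax(start_scores)          # single start per softmax
    spans = {}
    for s in np.argsort(-p_start)[:top_k]:   # consider the top-k likely starts
        p_end = sigmoid(end_scores[s])       # independent per-token end probs
        for e in range(s, min(s + max_span_len, len(p_start))):
            p = float(p_start[s] * p_end[e])
            # dict keyed by span acts as the row's duplicate-removal step
            spans[(s, e)] = max(p, spans.get((s, e), 0.0))
    return sorted(spans.items(), key=lambda kv: -kv[1])


def answer(ranked_spans, question_type, list_threshold=0.5):
    if question_type == "factoid":
        return [sp for sp, _ in ranked_spans[:5]]            # ranked, 5 entries
    return [sp for sp, p in ranked_spans if p >= list_threshold]  # list cutoff


def fine_tuning_loss(l_original, cur_probs, base_probs, lam_fc):
    """Task loss plus a forgetting cost: distance to base-model predictions."""
    return l_original + lam_fc * float(np.mean((cur_probs - base_probs) ** 2))
```

The `list_threshold` stands in for the cutoff the row says is learned to optimize the list-question score on the development set; in practice it would be swept over candidate values there rather than fixed.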
uncertainty marginal price transmission reserve market clearing robust unit commitment

hongxing (member, ieee), yinyin, mohammad shahidehpour (fellow, ieee), zuyi (senior member, ieee)

Abstract: increasing penetration renewable energy recent years led uncertainties power systems; uncertainties accommodated flexible resources, upward downward generation reserves. paper, novel concept uncertainty marginal price (ump) proposed price uncertainty reserve, time energy priced locational marginal price (lmp). novel market clearing mechanism proposed credit generation reserve charge load uncertainty within robust unit commitment (ruc) market; derive umps lmps robust optimization framework. ump helps allocate cost generation reserves uncertainty sources. prove proposed market clearing mechanism leads partial market equilibrium. find transmission reserves must kept explicitly, addition generation reserves, uncertainty accommodation; prove transmission reserves ramping delivery may lead financial transmission right (ftr) underfunding existing markets. ftr underfunding covered congestion fund collected uncertainty payment proposed market clearing mechanism. simulations system ieee system performed illustrate new concepts market clearing mechanism.

index terms: marginal price, cost causation, robust unit commitment, financial transmission right, generation reserve, transmission reserve

Nomenclature
indices:
- indices generators, lines, time intervals
- index buses
- index bus unit located
- index worst point uncertainty
functions, sets:
- cip, cii
- symbol optimal value variable
- feasible set dispatch
- uncertainty set
- cost related dispatch unit
- lagrangian function
- set units located bus
- set indices
- set indices upward downward umps bus, time
constants:
- number buses, time intervals
- aggregated equivalent load
- transmission line flow limit
- shift factor line respect bus
- pimin, pimax: minimum, maximum generation outputs
- riu, rid: limits sequential intervals
- riu, rid: limits uncertainty accommodation
- bound uncertainty; worst uncertainty vector; rnd
- ftr amount bus
variables:
- unit status indicators; unit indicators
- generation dispatch
- inj: net power injection
- uncertainty bus, time
- optimal value problem given generation
- pos: transmission capacity reserve positive direction
- neg: transmission capacity reserve negative direction
- inj: net power injection change
lagrangian multipliers:
- lagrangian multipliers; lagrangian multipliers
marginal prices:
- energy price
- ump kth uncertainty point
- upward ump; downward ump
- qdown: upward, downward generation reserves
- charge uncertainty source
- credits generation reserve unit, transmission reserve line, time

Introduction
(footnotes: work supported national science foundation grant. early version work available arxiv, july, titled "market clearing uncertainty generation reserve transmission reserve". authors galvin center electricity innovation, illinois institute technology, chicago, usa; email lizu)
modern power systems, uncertainties grow significantly increasing penetration renewable energy source (res). wind power generation pose
uncertainties predefined set key idea ruc determine optimal first stage leads least cost worst scenario second stage however approach conservative robust red absent authors combined stochastic robust approach using weight factor objective function address conservativeness issue employed affine policy formulate solve red problem ruc proposed incorporate latest information stage also used overcome computational challenge recently reported new approach tries bridge gap ruc red dam main difficulty pricing red absent traditional ruc hand large number works pricing reserves exists within framework considering contingencies stochastic security normally modeled problem reserve cleared zonal levels instead countable contingency scenarios single additional scenario reserve infinite continuous uncertainties considered ruc reserves fully deliverable infinite scenarios paper propose novel mechanism price energy uncertainty flexibility simultaneously based ruc explicit price signal derived pricing uncertainty solution obtained robust marginal impacts uncertainty flexibility reflected prices proposed mechanism reserve costs allocated uncertainty sources generation reserves also called flexibilities paper key factor robust optimization approaches entitled proper credits based contribution uncertainty management according market equilibrium analysis market participants price takers get maximal profit following dispatch instruction generation reserve deliverability main focus definition lmp employed derive energy price new concept uncertainty marginal price ump proposed define marginal cost immunizing next increment uncertainty specific location load generation pair priced lmp uncertainty flexibility generation reserve another pair priced ump lmps umps may vary locations due transmission congestions limited transmission capacity power flow equations sometimes uncertainties certain buses mitigated cheapest generation reserve expensive generation reserve deliverable kept system therefore uncertainty sources charged generation reserves credited based umps corresponding buses transmission reserve kept within ruc framework congestion component may exist energy price reserve price even physical limit line reached yet base case scenario lmp congestion costs distributed financial transmission right ftr holders existing market according lmp difference ftr amount revenue inadequacy occurs lmp congestion cost collected smaller credit distributed ftr holders also called ftr underfunding serious issue recent years industry reveal transmission reserve another reason ftr underfunding physical transmission limit adopted simultaneous feasibility test sft ftr market conclusion applicable robust framework dam main contributions paper listed follows novel ump uncertainties generation reserves well lmp energy derived within robust framework derivation uncertainties set interval budget constraints general concepts still apply uncertainty sets modeled revealed transmission capacities reserved uncertainty accommodation transmission reserves may cause ftr underfunding deficiency energy congestion revenues based existing market rules new market clearing mechanism proposed credit generation reserve charge load uncertainty payment collected uncertainty sources exactly cover credits generation reserves transmission reserves effectively resolving ftr underfunding issue rest paper organized follows derivation lmp ump presented section market clearing mechanism charge credit based lmp ump case studies presented section iii section 
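a schematic of the two-stage robust form and the budget-type uncertainty set described above, reconstructed in standard notation since the original equations were lost in extraction; symbols follow the nomenclature, while the cost vectors c, d and the recourse set P(x, u) are generic stand-ins:
\[
\min_{x \in X} \; c^{\top}x \;+\; \max_{u \in \mathcal{U}} \; \min_{p \in \mathcal{P}(x,u)} d^{\top}p,
\qquad
\mathcal{U} = \Big\{ u \in \mathbb{R}^{N_D T} : |u_{d,t}| \le \bar{u}_{d,t}, \;\; \sum_{d} \frac{|u_{d,t}|}{\bar{u}_{d,t}} \le \Gamma_t \;\; \forall t \Big\},
\]
here x collects the binary commitment decisions, p the dispatch recourse, and the integer budget parameter Gamma_t controls conservativeness; the recourse set P(x, u) encodes the balance, capacity, ramping and network constraints spelled out in appendix A of the text.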
concludes paper ruc arket learing one motivation work price uncertainty allocate cost uncertainty accommodation uncertainty source uncertainty source charged uncertainty payment incentive reduce uncertainty ump follow cost causation principle normally required market design charge uncertainty sources cost causation principle described require approved rates reflect degree costs actually caused customer must pay energy ferc cir another important motivation provide theory supports application ruc dam clearing although studied extensively application ruc reliability assessment commitment rac dam several reasons applied dam clearing first computation burden ruc much larger standard second objective cost scenario solution criticized conservatism third economic dispatch prices available within ruc framework recently new achievements algorithms models computing application first two obstacles addressed great promises paper tries clear last obstacle new model adopting ruc market clearing give clear price signals uncertainties reserves side also easier solution pass robustness test ruc rac best knowledge first work pricing energy uncertainties reserves within robust optimization framework dam hence focus illustrating concept following assumptions network loss ignored shift factor matrix constant uncertainty load res contingency ignored uncertainty budget set truly formulated ruc red desire get optimal solution scenario flexible resources adjustable load demands generators fast ramping capabilities follow load deviation occurs uncertainty revealed consistent robust literature uncertainty set modeled rnd ruc min basic idea model find robust scenario dispatch immunized uncertainty uncertainty occurs accommodated generation adjustment please refer appendix detailed formulation max min index set uncertainty points dynamically generated iterations please refer appendix detailed formulation noted extreme point variable associated objective function find worst point given procedure define feasibility tolerance solve obtain optimal solve get solution end procedure converged also get optimal solution solving similar traditional lmp calculation fix binary variables convex linear programming problem red formed cip red min pimax pimin riu pimin rid pimin inj inj pimax pimin riu rid min budget parameter assumed integer noted flexible resources modeled generators paper ruc formulated according model column constraint generation ccg based method used solve model problem established follows inj inj inj inj constraints base constraints different extreme points denotes load balance generation adjustments respects capacity limits ramping limits network constraints denoted inj inj defined inj inj respectively marginal prices section marginal prices energy uncertainty generation reserve derived based lagrangian function denote lagrangian function red shown appendix according definition marginal price lmp energy bus observed impact uncertainty also reflected lmp new concept ump dam defined marginal cost immunizing next unit increment uncertainty extreme point ump uncertainty generation reserve priced derivation worst point concern therefore general principles paper still work replaced sets noted intermediate price signals order get aggregated umps following new sets defined based sign aggregated upward downward umps defined respectively following context show aggregated umps used market clearing mechanism lmp ump charges credits market participants become clear fair dam energy clearing straightforward basic principle 
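the ccg procedure quoted above (define a feasibility tolerance, solve the master, find the worst point, repeat until converged) in loop form; a minimal sketch, with solve_master and solve_worst_point as assumed callables standing in for the milp master and the max-min subproblem, all names illustrative:

```python
def ccg(solve_master, solve_worst_point, tol=1e-4, max_iters=50):
    """Column-and-constraint generation for two-stage robust UC (schematic).

    solve_master(points)  -> (x, objective): master MILP robust against the
                             uncertainty points enumerated so far.
    solve_worst_point(x)  -> (u, violation): max-min subproblem returning the
                             worst uncertainty point for commitment x and the
                             recourse violation it induces.
    """
    points = []                              # extreme points generated so far
    for _ in range(max_iters):
        x, obj = solve_master(points)        # robust against current points
        u, violation = solve_worst_point(x)  # search for a worse scenario
        if violation <= tol:                 # x already immunized: converged
            return x, obj, points
        points.append(u)                     # add a recourse copy (new columns
                                             # and constraints) for this point
    raise RuntimeError("CCG did not converge within max_iters")
```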
related uncertainty flexibility cause uncertainties uncertainty sources res pay based ump contribute management uncertainties uncertainty mitigators generators storage ramping capabilities get paid energy payment credit lses pay based amount load lmp energy payment lse bus noted res entitled credit due negative load modeled ruc generator located bus entitled credit energy production charge uncertainty source uncertainty source charged uncertainty source pays based marginal price worst point uncertainty source charged may pay uncertainty becomes larger uncertainty point upward downward following lemma regarding relation signs lemma please check appendix proof budget set adopted extreme point uncertainty charge also written according lemma thus upward downward uncertainties charged separately noted still need use uncertainty sets used credit generation reserve resources provide deliverable generation reserve entitled credits credits formulated words generation reserve paid ump bus located associated credit zero matter value similar uncertainties generation reserves either upward downward direction denote upward generation reserve qup downward generation reserve pimax riu qup min max pimin also following lemma regarding relation qup lemma optimal solution problem red qup qdown please check appendix proof credit generation reserve located bus rewritten according lemma shows upward downward generation reserves credited separately flexible resources may receive credits upward downward generation reserves simultaneously always holds even uncertainty sets modeled ruc transmission reserve revenue adequacy transmission capacities reserved according solution red transmission reserves used ensure ramping deliverability uncertainty revealed shown noted determined automatically red kept explicitly without explicit transmission reserve requirement constraints like scheduled generation reserve scheduled transmission reserves positive direction negative direction pos inj neg inj respectively always important issue related transmission reserve credit entitled financial transmission right ftr holders ftr financial instrument used hedge congestion cost electricity market participants charged credited due transmission congestion within robust framework effective transmission capacity scenario different physical limit used simultaneous feasibility test sft ftr market existing market ftr credit funded energy congestion cost net payment energy however energy congestion cost may sufficient fund ftr credit argue transmission reserve becomes new reason ftr underfunding framework guarantee ramping deliverability pos neg theorem transmission reserve kept line time dam maximum ftr underfunding associated line time pos neg due deficiency energy congestion cost uncertainty payment res credit energy payment trans res credit lmp cong cost energy credit ftr credit fig money flow proposed market clearing mechanism uncertainty sources make uncertainty payment lses make energy payment please check appendix proof ftr holder point view credit due transmission reserve therefore also call transmission reserve credit denote pos neg one transmission credit positiveptransmission reserve zero line pos time either theorem red feasible uncertainty payment exactly cover generation reserve credit transmission reserve credit revenue adequacy always guaranteed proposed market clearing mechanism please check appendix proof theorem reveals ftr underfunding issue occur within existing market structures long transmission reserve even lmps 
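the charge and credit rules in this passage can be written schematically; this is a hedged reconstruction from the surrounding prose (the exact expressions in the paper were stripped by extraction), with b(i) the bus of unit i:
\[
\text{charge}_{d,t} = \chi^{up}_{d,t}\, u^{+}_{d,t} + \chi^{down}_{d,t}\, u^{-}_{d,t},
\qquad
\text{credit}_{i,t} = \chi^{up}_{b(i),t}\, q^{up}_{i,t} + \chi^{down}_{b(i),t}\, q^{down}_{i,t},
\]
\[
q^{up}_{i,t} = \min\{P^{\max}_i - p_{i,t},\; R^{u}_i\},
\qquad
q^{down}_{i,t} = \min\{p_{i,t} - P^{\min}_i,\; R^{d}_i\},
\]
where u^{+}, u^{-} are the upward and downward worst-case deviations of source d; by the two lemmas quoted here the upward and downward parts never overlap, so the two terms settle separately.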
calculated based approaches theorem shows new market clearing mechanism overcomes ftr underfunding issue money flow proposed market clearing mechanism depicted energy payment collected based lmp distributed ftr holders lmp congestion cost generators energy credit hand payment collected based ump distributed ftr holders transmission reserve credit flexible resources generation reserve credit lmp congestion cost transmission reserve credit exactly cover ftr credit calculated based lmp difference ftr amount market equilibrium section characterize competitive market equilibrium model electricity industry partial market equilibrium model often employed market participants price takers energy cleared according uncertainty generation reserve cleared according without loss generality consider unit located bus profit maximization problem formulated pmpi max decision variable given price signal proved appendix unit inclined change power output level obtain maximum profit following iso dispatch instruction price signal provides incentives unit dispatch power output price gives incentives unit maintain generation reserve uncertainty hence dispatch instruction price signal constitute competitive partial equilibrium discussions qup coupled opportunity cost enough provide incentives keep generation level including generation reserve price several benefits firstly generation reserves provided different units priced fairly generation reserve prices units bus may vary locations line congestions exist secondly higher generation reserve price attracts investment flexible resources thirdly consistent existing reserve pricing practice fact generation reserve price consistent ump therefore uncertainties flexibilities also treated fairly bus upward downward umps obtained according respectively uncertainty sources charged according generation reserves credited according defined price signal intermediate variables market clearing proposed ump may even uncertainty bus zero similar lmp may also bus without load market clearing mechanism proposed paper follows cost causation principle cost allocation reality may controversial allocate reserve cost uncertainty sources however argue would fair must done res penetration level high extreme case loads supplied res study showing possible increasing res penetration cause higher system operation cost issue handled existing market clearing mechanism loads pay additional system reserve required accommodate uncertainty res words loads actually providing subsidies res res penetration level low subsidies help growth res however res penetration level high growing subsidies cause serious fairness issue hand ump stimulating price signals res incentives improve forecast techniques reduce uncertainty ideal case uncertainty approaches zero res longer pay following existing practice variables fixed marginal price derivation hence uplift issue exists real market still remains proposed market clearing mechanism although variables fixed lmp reserve price real market provide effective signals investment generation transmission well consumption strategy electricity similarly uncertainty impact reflected also within ruc model paper hence proposed lmp ump also provide signals investment flexibilities generation transmission demand pricing uncertainties proposed paper conflict pricing traditional reserves mainly prepared contingencies traditional reserve prices derived framework adding extra traditional reserve constraints corresponding reserve costs still allocated lses observed credit sum 
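a toy two-bus illustration of the money flow: congestion rent under-collects exactly by the transmission reserve kept on the binding line, and the ump settlement covers the gap; all numbers are invented for illustration and are not taken from the paper's case studies:

```python
# Two buses joined by one line: physical limit 100 MW, but 20 MW of capacity
# is reserved for ramping delivery, so the base case only schedules 80 MW.
limit_phys, trans_reserve = 100.0, 20.0
flow_sched = limit_phys - trans_reserve            # 80 MW scheduled flow

lmp = {1: 20.0, 2: 30.0}                           # $/MWh, line is binding
congestion_rent = (lmp[2] - lmp[1]) * flow_sched   # energy settlement: $800

# FTRs were sold against the physical limit (simultaneous feasibility test),
# so the FTR holder is credited on the full 100 MW.
ftr_amount = limit_phys
ftr_credit = (lmp[2] - lmp[1]) * ftr_amount        # owed to FTR holders: $1000

underfunding = ftr_credit - congestion_rent
print(underfunding)  # 200.0 -> (LMP spread) x (transmission reserve)

# In the proposed mechanism the uncertainty payment carries a congestion
# component equal to the transmission-reserve credit, closing the gap:
trans_reserve_credit = (lmp[2] - lmp[1]) * trans_reserve
assert abs(underfunding - trans_reserve_credit) < 1e-9
```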
credits extreme points related constraints may binding multiple extreme points dual variables shadow prices constraints work together dual problem traditional price energy reserve also similar form multiple contingencies modeled although one scenario happen reality still need consider worst scenario defined uncertainty set keep enough reserves dam dam financial market lmp ump financially binding prices similar existing market model considering contingencies even contingency seldom occurs still modeled market clearing contingencies reflected lmp reserve price issue price multiplicity still exists proposed model problem red linear programming problem however price unique nondegeneracy assumption simplicity considered auction proposed model introducing demand bids formulate auction general principles paper still apply iii ase tudy system ieee system simulated illustrate proposed market clearing mechanism system basic ideas ump presented within robust optimization framework ftr underfunding issue illustrated comparison ump traditional reserve price presented ieee system ump related products presented different uncertainty levels behaviors impacts flexible sources analyzed energy storage example system system studied section diagram shown fig unit data line data shown table table respectively table iii presents load uncertainty information column base load shows hourly forecasted load assume load distributions bus bus bus respectively table iii bounds uncertainties bus bus respectively uncertainty bounds buses table arginal osts ifferent eneration evels fig diagram system table nit data bus ystem min max min max generation level fuel cost ramping rate cost min time assumed relative forecasting errors increase hours uncertainty also respect denotes uncertainty interval single bus represents uncertainty budget parameters single bus system respectively lmp ump consider case ccg based approach converges iterations table ine data bus ystem capacity table iii oad ncertainty data bus ystem time base load time base load mar cost mar cost mar cost table eneration eserve qdown qdown qdown hence given solutions problem red solved commercial solver marginal prices obtained byproducts generation outputs presented table hours observed supplies loads hour according bid information table much expensive hence output relatively small low level capacity upward downward generation reserves provided three units also listed table data obtained directly eqs given generation output although remaining generation capacity upward reserve limited upward ramping rate meantime upward reserve provided limited generation capacity although remaining ramping capacity min table shows extreme points obtained ccgbased approach intermediate price signals points also presented observed worst point always obtained extreme point uncertainty set example hour exactly upper bound uncertainty hour bus data table also verifies lemma intermediate umps sign uncertainties bus lmps aggregated upward umps aggregated downward umps hour shown table vii noted umps still exist buses without uncertainties buses similar lmps also exist buses net power injections lmps vary locations indicates line congestion exists table xtreme oints ncertainty table vii lmp ump price price price bus load bus pay highest lmp umps also different various locations highest upward ump hour also located bus prices market participants paid credited lmp paid bus larger marginal cost time upward ump bus exactly difference lmp marginal cost hence ump setter bus umps provide 
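the intermediate prices attached to the extreme points are folded into one upward and one downward ump per bus and hour by the sign of the worst-case deviation, as the tables above illustrate; a minimal sketch of that bookkeeping (summation over the sign-split index sets is an assumption consistent with the definitions quoted earlier):

```python
def aggregate_umps(intermediate):
    """intermediate: list of (chi_k, u_k) pairs for one bus and hour, where
    chi_k is the price attached to extreme point k and u_k the deviation of
    this bus at that point.  Returns (ump_up, ump_down)."""
    ump_up = sum(chi for chi, u in intermediate if u > 0)
    ump_down = sum(chi for chi, u in intermediate if u < 0)
    return ump_up, ump_down

# e.g. two extreme points push the bus upward, one pushes it downward:
print(aggregate_umps([(1.4, 25.0), (0.6, 25.0), (0.9, -25.0)]))  # (2.0, 0.9)
```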
important price signals planning renewable energy sources storages example ump bus relatively small ideal location renewable energy sources terms payment uncertainties contrast ump bus large may attract investment storages generation plants large ramping rates comparison existing lmps reserve prices motivation part compare proposed clearing scheme existing one however reserve robust traditional scheme compare fairly observation transmission constraints challenging one robust framework drop constraints subsection add reserve constraints follows pimin qdown max qup qdown qup qdown qup largest upward downward reserves respectively reserve requirements refer details reserve formulations experiment set reserve requirements set lower upper bounds uncertainty respectively results expected optimal solutions ruc standard explicit reserve constraints lmps calculated ruc also values umps calculated proposed mechanism also exactly reserve prices standard two things verified results first without transmission constraints solution standard easily robust adding reserve constraints second proposed lmps umps consistent lmps reserve prices existing market transmission constraints dropped considering transmission constraints generation reserve guaranteed bus levels traditional hour bus hour fig upward ump blue bar reserve price red bar network constraint model simplicity assume buses zone consider case upward ump reserve price hour depicted fig observed umps bus bus lower traditional reserve prices time umps bus bus higher traditional reserve prices differences caused congestion line reserve delivery worth mentioning lmp differences two models within hour prices illustrated fig reveals another trends ump may higher traditional reserve prices hour zonal reserve price umps nonzeros bus constraint related reserves ruc stronger one traditional model consequently expensive resources used ruc also generally leads higher umps ftr underfunding generation schedules hour power flow line smaller physical limit transmission reserve kept guarantee delivery generation reserve binding constraint line causes lmp differences hence ftr holder gets credits consider set ftr amounts verified ftr amounts satisfy sft ftr market total credit ftr holders however congestion cost dam means lmp congestion cost collected enough cover ftr credit ftr underfunding value revenue residues ump settlement exactly covers ftr underfunding scenario therefore revenue adequate hour ieee system simulations performed ieee system thermal units branches section peak load detailed data including generator parameters line reactance ratings load profiles found http two cases studied section uncertainty levels load levels changed analyze simulation results system level impact transmission line capacity prices also studied table viii peration ost ump payment grc cost payment res credit rev res grc energy storage installed specified bus high ump show potential application umps load level fig uncertainty payment generation reserve credit grc operation cost different load levels upward ump lmp price price case assume uncertainty sources located buses budget parameter set section buslevel uncertainty budget parameter changes bound uncertainty base load simulation results shown table viii observed total operation cost increases increasing indicates larger uncertainty level may increase operation cost columns payment gen res credit denote total payment uncertainty sources credit generation reserves respectively lowest payment highest one hand credit entitled 
generation reserves also monotonically increasing function generation reserves highest credit last column rev shows revenue residues related umps observed residue always positive fig next page depicts heat map upward umps bus bus hours xaxis represents time intervals yaxis represents bus numbers color bar right shows different colors various ump values example denoted blue color bottom represented dark red color top color bar observed uncertainty sources unique umps intervals hours indicates transmission reserve hours hand umps hour vary dramatically different locations highest upward ump around lowest one around according data shown fig high ump bus may attract investment flexible resources energy storages terms generation reserve credit bus attractive location investment renewable energy sources terms uncertainty payments fig shows uncertainty payment generation reserve credit respect load levels base load level set higher loads general lead uncertainty payments generation reserve credits also consistent heat map umps fig umps peak load hours high suggests generation reserves also become scarce resources load levels high transmission line capacity plays important role price calculation fig shows lmps upward umps hour respect increasing capacity line prices buses depicted
[figure: lmp (left) and upward ump (right) at the given hour with respect to increasing capacity of the line]
line capacity increases lmp bus decreases bus also drops upward umps bus bus also drop respectively contrast lmp upward ump bus connected line remain respectively shows change line capacity may impacts prices buses line capacity increases changes lmps umps bus bus within still change bus means additional help deliver cheaper energy reserves bus bus results also consistent analysis traditional lmps case discussed case upward ump bus high hour assume energy storage installed bus simple model energy storage formulated follows ptd ptc max itd ptc itc itd itc ent denotes energy level ptd ptc represent discharging charging rates itd itc indicators discharging charging ump major concern section use simplified parameters
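the storage constraints quoted above were flattened by extraction; a standard reconstruction consistent with the variables named (energy level e_t, charging/discharging rates p^c_t, p^d_t, indicators i^c_t, i^d_t, with efficiencies and capacity as given for the case study), offered as a sketch rather than the paper's exact model:
\[
e_t = e_{t-1} + \eta^{c}\, p^{c}_{t} - p^{d}_{t}/\eta^{d}, \qquad 0 \le e_t \le E^{\max},
\]
\[
0 \le p^{c}_{t} \le P^{c,\max}\, i^{c}_{t}, \qquad 0 \le p^{d}_{t} \le P^{d,\max}\, i^{d}_{t}, \qquad i^{c}_{t} + i^{d}_{t} \le 1, \qquad i^{c}_{t}, i^{d}_{t} \in \{0,1\}.
\]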
flexible resources upward ump downward ump bus high investor get return terms generation reserves another potential future research ump study determine budget uncertainty set market modeling traditional spinning reserve contingency also future work paper demand bids considered forecasted load forecasted res uncertainty load res market clearing singlesided model extended model demand bids forecasted res uncertainty load res market clearing forecasted load forecasted res uncertainty load res used rac onclusions novel market model paper clears uncertainty energy generation reserve simultaneously within ruc framework dam uncertainty sources charged generator reserve providers credited based proposed ump ump formulation derived within robust optimization framework also characterize market equilibrium new market clearing mechanism market clearing mechanism established within robust optimization framework robustness dispatch guaranteed optimal reserves uncertainty accommodation obtained model ump proposed paper effectively address issue charge credit uncertainties generation reserve fairly market res study also shows traditional pricing mechanism within ruc framework may lead ftr underfunding proposed market clearing mechanism address issue study shows load serving entities lower energy ppendix etailed ormulation roblem ruc ruc min pimin cip cii pimax riu pimin rid pimin minimum time limit inj pimin pimax riu inj inj inj basic idea model find robust dispatch scenario scenario denotes load balance constraint represents transmission line constraint denotes unit capacity limit constraint denote unit ramping limits indicators unit shutdown respectively units also respect minimum time constraints related binary variables dispatch solution immunized uncertainty uncertainty occurs accommodated generation adjustment generation dispatch also enforced capacity limits models ramping rate limits generation adjustment fact right left hand sides correspond response time similar reserves literatures stands network constraint uncertainty accommodation ppendix etailed ormulation roblem min cip cii minimum time limit pimin pimax riu rid inj inj inj max min inj inj index set uncertainty points dynamically generated iterations noted extreme point variable associated objective function summation slack variables evaluates violation associated solution also explained uncertainties generation load shedding due system limitations ppendix agrangian unction roblem red please check equation next page ppendix roofs lemmas theorems proof lemma small perturbation proof consider replace red optimal value problem red increases means violations original optimal solution problem red hence optimal solution problem red immunized uncertainty contradicts robustness solution therefore similarly proof lemma proof assume according kkt condition optimal point holds according complementary conditions least one binding hence min pimax riu holds similarly equation holds cip pimax pimin min min riu rid inj inj max min rid riu inj inj inj inj proof theorem proof energy congestion cost pos neg pos neg pos neg pos neg inj first equality holds following definition net power injection second equality holds according inj following third equality pholds sign change third equality direction according power flow complementary conditions third term fourth equality must zero based following three cases pos second term last equality corresponds credits ftr holders written inj neg first equality holds according inequality true amount respects according 
sft ftr market inequality first term last equality based maximum difference ftr credit energy congestion cost equal transmission reserve credit maximum ftr underfunding proof theorem proof according theorem ftr underfunding value due deficiency energy congestion cost therefore need prove money collected uncertainty sources cover ftr underfunding credits generation reserve without loss generality consider payment collected uncertainty sources time inj pos reformulated similarly hence second equality holds third equality holds therefore holds uncertainty payment covers generation reserve credit transmission reserve credit following energy congestion cost shown holds total payments collected loads uncertainty sources cover total credits energy generation reserve ftr holders revenue adequacy proposed market clearing mechanism guaranteed proof competitive equilibrium proof qup coupled constraints according rewrite generation reserve credit pimax pimin riu substituting problem pmpi decouple qup fact also get terms related lagrangian problem red since problem red linear programming problem saddle point optimal solution red also optimal solution pmpi consequently unit inclined deviate output level obtain maximum profit following iso dispatch instruction therefore dispatch price signal constitute competitive partial equilibrium first equalitypholds according according second line rewritten pos neg pos neg pos neg eferences shahidehpour yamin market operations electric power systems forecasting scheduling risk management press zheng litvinov zonal reserve modeling pricing energy reserve market ieee trans power vol may jiang wang guan robust unit commitment wind power pumped storage hydro ieee trans power vol jiang zhang guan network constrained robust unit commitment problem eur oper vol bertsimas litvinov sun zhao zheng adaptive robust optimization security constrained unit commitment problem ieee trans power vol zeng zhao solving robust optimization problems using generation method operations research letters vol sep robust unit commitment dispatch recourse cost requirement ieee trans power doi early access zhao guan unified stochastic robust unit commitment ieee trans power vol warrington goulart mariethoz morari reserves power systems ieee trans power vol jabr adjustable robust opf renewable energy sources ieee trans power vol lorca sun litvinov zheng multistage adaptive robust optimization unit commitment problem operations research vol robust unit commitment recourse cost requirement proc ieee power energy soc general meeting july wang shahidehpour reserve requirements joint energy ancillary services auction ieee trans power vol arroyo galiana energy reserve pricing security electricity markets ieee trans power vol bouffard galiana conejo stochastic case studies ieee trans power vol aganagic waight spot pricing capacities generation transmission reserve extended poolco model ieee trans power vol aug schweppe tabors caraminis bohn spot pricing electricity kluwer academic publishers norwell pjm manual financial transmission rights pjm tech access march online available http pjm options address ftr underfunding pjm tech access may online available https hogan financial transmission right formulations tech online available http formulations hogan contract networks electric power transmission journal regulatory economics vol wang mip reformulation problems robust scuc ieee trans power early access wang fully parallel stochastic securityconstrained unit commitment ieee trans power early access 
papavasiliou oren rountree applying high performance computing stochastic unit commitment renewable energy integration ieee trans power vol may chao peck oren wilson transmission rights congestion management electricity journal vol zheng litvinov post pricing electricity market ieee trans power vol whinston green microeconomic theory oxford university press new york vol ellison tesfatsion loose byrne project report survey operating reserve markets electric energy regions sandia natl labs publications hogan multiple prices electricity market design price manipulation electricity journal vol shahidehpour unit commitment simultaneous clearing energy ancillary services markets ieee trans power vol liu transmission line rating attack twosettlement electricity markets ieee trans smart grid vol may hongxing received degree information engineering degree systems engineering jiaotong university china degree electrical engineering illinois institute technology chicago research interests include optimization power systems electricity market renewable integration system security smart grid outstanding reviewer ieee transactions power systems ieee transactions sustainable energy received sigma research excellence award illinois institute technology yinyin received degree automation degree systems engineering xian jiaotong university china also received degree electrical engineering illinois institute technology chicago research interests power system optimization modeling pmu applications smart grid monitoring visualization state estimation distribution systems mohammad shahidehpour received degree university missouri electrical engineering currently bodine chair professor director robert galvin center electricity innovation illinois institute technology chicago founding ieee transactions smart grid member national academy engineering nae zuyi received degree shanghai jiaotong university shanghai china degree tsinghua university beijing china degree illinois institute technology iit chicago electrical engineering presently professor electrical computer engineering department iit research interests include economic secure operation electric power systems cyber security smart grid renewable energy integration electric demand management data centers power system protection
| 3 |
quiver mutations boolean reflection monoids feb bing duan luo abstract everitt fountain introduced concept reflection monoids boolean reflection monoids form family reflection monoids symmetric inverse semigroups boolean reflection monoids type paper give family presentations boolean reflection monoids show presentations compatible mutations certain quivers feature quivers paper corresponding presentations boolean reflection monoids quivers frozen vertices results recover presentations boolean reflection monoids given everitt fountain presentations symmetric inverse semigroups given popova surprisingly inner diagram automorphisms irreducible weyl groups boolean reflection monoids constructed sequences mutations preserving underlying diagrams application study cellularity semigroup algebras boolean reflection monoids construct new cellular bases cellular algebras using presentations obtained inner diagram automorphisms boolean reflection monoids key words boolean reflection monoids presentations mutations quivers inner diagram automorphisms cellular semigroups cellular basis mathematics subject classification introduction influential work cluster algebras fomin zelevinsky associated mutations matrices definition mutations quivers proposition quivers whose underlying graphs dynkin diagrams play important role cluster algebra theory appear finite type classification well known finite irreducible crystallographic reflection group finite irreducible weyl group classified dynkin diagrams whose vertex set correspondence family simple reflections edge labeled respectively vertices respectively identity element see let dynkin diagram quiver quiver whose underlying diagram barot marsh gave presentations reflection group determined showed presentations compatible mutation quivers precisely barot marsh introduced additional relations cycle relations corresponding chordless cycles arising quivers finite type quiver mutation equivalent quiver first defined abstract group generators corresponding vertices relations proved motivated bing duan luo barot marsh work similar presentations affine coxeter groups braid groups artin groups weyl groups algebras considered respectively let euclidean space standard orthonormal basis irreducible crystallographic root system turn classified dynkin diagrams everitt fountain introduced concept reflection monoids boolean reflection monoid type formed weyl group classical root system boolean system family reflection monoids symmetric inverse semigroups boolean reflection monoids type note root systems types give rise weyl group concern classical weyl group everitt fountain provided presentation boolean reflection monoid one aims present paper obtain new presentations boolean reflection monoids show presentations compatible mutation certain quivers let respectively dynkin diagram respectively vertices first respectively vertices mutable vertices respectively vertex frozen vertex shown column table practice label left edge weight greater edge left unlabelled weight let quiver mutation equivalent quiver define inverse monoid see section show see theorem proposition implies boolean reflection monoids also classified see table diagrams corresponding generators irreducible weyl groups affine coxeter groups braid groups artin groups frozen vertices present paper diagrams corresponding generators boolean reflection monoids frozen vertices type boolean reflection monoids generators table boolean reflection monoids dynkin diagrams proposition everitt fountain proved symmetric 
inverse semigroup isomorphic boolean reflection monoid type recover presentation symmetric inverse semigroup defined presentation corresponds exactly presentation determined dynkin diagram quiver mutations boolean reflection monoids moreover also recover everitt fountain presentations boolean reflection monoids defined section presentations obtained quiver finite sequence mutations show theorem inner automorphism group boolean reflection monoid naturally isomorphic study actions inward mutations surprisingly inner diagram automorphisms finite irreducible weyl groups boolean reflection monoids constructed sequence mutations preserving underlying diagrams see theorem theorem respectively application study cellularity semigroup algebras boolean reflection monoids well known hecke algebras finite type algebras brauer algebra algebras partition algebras cellular see recently cellularity semigroup algebras investigated east wilox guo luo respectively applying geck east results show semigroup algebras boolean reflection monoids cellular algebras see proposition moreover construct new cellular bases cellular algebras presentations obtained inner diagram automorphisms boolean reflection monoids results methods paper applications several lines research studied future work including automorphisms boolean reflection monoids hecke algebras boolean reflection monoids coxeter arrangement monoids braid inverse monoids algebraic monoids paper organized follows section recall notations background knowledge useful section building barot marsh work study inner diagram automorphisms irreducible weyl groups theorem cellular basis group algebras irreducible weyl groups section state main results theorem proposition show presentations boolean reflection monoids compatible mutations quivers recover presentations boolean reflection monoids given everitt fountain presentations symmetric inverse semigroups given popova moreover characterize inner diagram automorphisms boolean reflection monoids method mutations theorem furthermore study cellularity semigroup algebras boolean reflection monoids give new cellular bases cellular algebras section consider way mutations quivers oriented cycles appearing section find efficient subset relations sufficient define inverse monoid last section section prove main result theorem preliminaries mutation quivers let quiver finitely many vertices finitely many arrows loops oriented given quiver let set vertices qop opposite quiver set vertices reversed orientation arrows arrows pointing vertex vertex draw arrow weight wij frequently draw arrow label wij bing duan luo mutable vertex one define mutation due fomin zelevinsky produces new quiver denoted obtained following way orientations edges incident reversed weights intact vertices connected via oriented path going quiver mutation affects edge connecting way shown figure weights related sign form oriented cycle otherwise either may equal means arrows figure quiver mutation iii rest edges weights remain unchanged two quivers said mutation equivalent exists finite sequence mutations taking one write indicate mutation equivalent underlying diagram quiver undirected diagram obtained forgetting orientation arrows call quiver connected underlying diagram connected every node reachable obvious dynkin quivers connected quivers shown theorem finitely many quivers mutation classes dynkin quivers call cycle underlying diagram quiver chordless cycle two vertices cycle connected edge shown proposition see proposition chordless cycles oriented 
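the mutation rule recalled above acts on the skew-symmetrizable exchange matrix of the quiver; a minimal implementation of the fomin-zelevinsky rule (the matrix encoding, b_ij > 0 for arrows i to j, is the usual convention in the cluster-algebra literature, not notation from this paper):

```python
def mutate(B, k):
    """Fomin-Zelevinsky matrix mutation of a skew-symmetrizable integer
    matrix B at index k; B[i][j] > 0 encodes arrows i -> j, and edge weights
    of the quiver correspond to |B[i][j] * B[j][i]|."""
    n = len(B)
    Bp = [row[:] for row in B]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]            # reverse arrows incident to k
            elif B[i][k] > 0 and B[k][j] > 0:  # oriented path i -> k -> j
                Bp[i][j] = B[i][j] + B[i][k] * B[k][j]
            elif B[i][k] < 0 and B[k][j] < 0:  # oriented path j -> k -> i
                Bp[i][j] = B[i][j] - B[i][k] * B[k][j]
    return Bp

# A3 quiver 0 -> 1 -> 2; mutating at the middle vertex creates an oriented
# 3-cycle, consistent with the fact recalled above that chordless cycles in
# these mutation classes are always cyclically oriented:
B = [[0, 1, 0], [-1, 0, 1], [0, -1, 0]]
print(mutate(B, 1))  # [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
```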
mutation classes dynkin quivers cellular algebras cellular semigroups let first recall basic definition cellular algebras introduced graham lehrer let commutative ring identity definition associative called cellular algebra cell datum following conditions satisfied finite partially ordered set associated finite set indices exists sends acs mod quiver mutations boolean reflection monoids independent generated cellular algebras provide general framework studying representation theory many important classes algebras including hecke algebras finite type algebras brauer algebra algebras partition algebras see recently cellularity semigroup algebras investigated east wilox guo luo respectively following shall recall basic notions facts theory semigroups let semigroup define monoid obtained adding identity necessary semigroup said inverse element exists unique inverse finite inverse semigroup define let inverse semigroup set idempotents let suppose choose rak put green lemma unique using east symbol let element detail knowledge semigroups reader referred semigroup said cellular semigroup algebra cellular algebra east proved following theorem theorem theorems let finite inverse semigroup set idempotents satisfies following conditions subgroup cellular cell datum map sending antihomomorphism cellular semigroup cell datum partial order defined definition cellular algebras following corollary corollary suppose cellular algebra cell datum automorphism let cellular basis proof since automorphism also follows definition bing duan luo acs mod independent generated exists bcs mod mod independent therefore cellular basis required new results irreducible weyl groups let euclidean space standard orthonormal basis let root system set simple roots associated simple reflection finite irreducible weyl group generated number reflections equal number positive roots refer reader information weyl groups root systems reflection groups barot marsh results well known finite irreducible crystallographic reflection groups irreducible weyl groups classified dynkin diagrams see let dynkin diagram set vertices let finite irreducible weyl group determined say quiver quiver whose underlying diagram barot marsh gave presentations construction works follows let quiver mutation equivalent quiver barot marsh defined inward mutation vertex follows arrow possibly weighted otherwise two vertices one defines connected connected edge weight mij connected edge weight connected edge weight definition let group generators subjecting following relations mij quiver mutations boolean reflection monoids chordless cycle either weights identity element one barot marsh main results stated follows theorem theorem group depend choice quiver mutation class particular quiver mutation equivalent quiver inner diagram automorphisms irreducible weyl groups let coxeter group defined set generators relations definition call pair coxeter system follows given two coxeter systems say exists automorphism mean automorphism aut automorphism always chosen inn group inner automorphisms called strongly rigid case strongly rigid group aut simple structure see corollary aut inn diag diag consists diagram automorphisms unique coxeter diagram corresponding following lemma well known lemma let finite group generated finite set simple reflections set reflections table bannai computed center irreducible weyl group longest element central element except following important notation introduced franzsen definition definition inner diagram automorphism automorphism 
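for reference, the relations of the group presentation recalled above take the following shape, with the standard dictionary from edge weights to orders m_ij; the displayed cycle relation is the unit-weight case, and weighted cycles carry modified exponents (see barot and marsh):
\[
s_i^2 = 1 \;\; (i \in Q_0), \qquad (s_i s_j)^{m_{ij}} = 1, \quad
m_{ij} = \begin{cases} 2 & \text{no edge } i - j,\\ 3 & \text{edge of weight } 1,\\ 4 & \text{edge of weight } 2,\\ 6 & \text{edge of weight } 3, \end{cases}
\]
and, for each chordless oriented cycle \(i_1 \to i_2 \to \cdots \to i_d \to i_1\) of unit weights,
\[
(s_{i_1} s_{i_2} \cdots s_{i_{d-1}} s_{i_d} s_{i_{d-1}} \cdots s_{i_2})^2 = 1 .
\]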
generated inner diagram automorphisms aut subgroup inner automorphisms normal subgroup aut therefore inner diagram automorphism written product inner diagram automorphism following two lemmas collect together facts useful later lemma proposition weyl group type aut aut automorphism weyl group type maps reflections reflections furthermore automorphism preserve reflection inner lemma proposition propositions proposition let weyl group type bing duan luo automorphism preserve reflections must inner aut automorphisms map reflections reflections automorphisms inner automorphisms weyl groups preserve reflections inner diagram automorphisms following theorem reveals connection inner diagram automorphisms irreducible weyl groups quiver mutations theorem let quiver corresponding weyl group generated set simple reflections inner diagram automorphism exists sequence mutations preserving underlying diagram obtained mutations particular reflections obtained mutations proof observation every variable obtained mutations must reflection corresponding weyl group sufficiency follows fact automorphisms weyl groups preserve reflections inner diagram automorphisms see lemmas prove necessity assume without loss generality vertex set inner diagram automorphism note diagram automorphisms dynkin diagram keep underlying dynkin diagram relabelling vertices necessary exists inner automorphism sufficient prove obtained mutations sequence mutations preserves underlying diagram let sir reduced expression sik assume gsg following shall use induction prove obtained sequence mutations preserving underlying diagram step mutate firstly vertex twice get quiver underlying diagram moreover set becomes keep vertices label vertices step mutate vertex twice note variable corresponding vertex sit get quiver underlying diagram moreover set generators becomes sit sit sit ssit keep vertices label vertices step repeat step get quiver induction difficult show set generators sir ssir gsg quiver mutations boolean reflection monoids finally every reflection conjugate simple reflection lemma assume reflection form gsi sik reduced expression subset arguments mutate sequence starting obtain reflection gsi remark types inner diagram automorphisms corresponding weyl groups inner automorphisms strongly rigid coxeter system inner diagram automorphism also coxeter system suppose quiver mutation equivalent quiver let corresponding weyl group set generators theorem holds cellular basis group algebras irreducible weyl groups geck proved hecke algebras finite type cellular algebras let inner diagram automorphism irreducible weyl groups see theorem lemma obtain new cellular basis group algebras irreducible weyl groups main results boolean reflection monoids let respectively dynkin diagram respectively vertices first respectively vertices mutable vertices respectively vertex frozen vertex shown figure label left edge weight greater edge left unlabelled weight shall always assume one quiver said quiver underlying diagram quiver figure classical dynkin diagrams frozen vertex bing duan luo boolean reflection monoids everitt fountain introduced reflection monoids see boolean reflection monoids family reflection monoids symmetric inverse semigroups boolean reflection monoids type let euclidean space standard orthonormal basis let root system associated weyl group type partial linear isomorphism vector space isomorphism two vector subspaces partial linear isomorphism realized restricting full isomorphism subspace write partial isomorphism domain effect 
restricting denote respectively general linear monoid respectively general linear group consisting partial linear isomorphisms respectively linear isomorphisms isotropy group moreover let recall notation system group introduced definition definition let real vector space group collection subspaces called system let rvj boolean system example weyl group subspaces induces isomorphism symmetric group set note system exceptional definition definition let group system monoid partial linear isomorphisms given submonoid defined reflection group called reflection monoid let reflection group boolean system called boolean reflection monoid general write instead call boolean reflection monoid type recall everitt fountain gave presentation boolean reflection monoid section quiver mutations boolean reflection monoids lemma everitt fountain presentations boolean reflection monoids shown follows mij mij mij mij defined inverse monoids determined quivers let set vertices quiver frozen vertex define connected connected edge weight mij connected edge weight connected edge weight connected connected edge weight connected edge weight connected connected edge weight connected edge weight let mii mij coxeter matrix mij generalized coxeter matrix see illustrate generalized coxeter matrices corresponding quiver quiver respectively let ordered tuple subquiver vertices contains one underlying subdiagram contain one bing duan luo ordered tuple called shortest path underlying subdiagram tuple shortest path shortest path tuple denote word denote identity element inverse monoid denote aba alternating product terms definition let quiver mutation equivalent quiver define inverse monoid generators relations mij every chordless oriented cycle either weights every chordless oriented cycle iii every chordless oriented cycle path relations every underlying subdiagram form shown first column table take path relations listed second column table remark case equation reduced relation though paper use case defined relation arbitrary still meaningful see unpublished paper following lemma well known easily verified lemma two quivers underlying diagram tree follows connectivity finiteness two quivers quiver mutations boolean reflection monoids subdiagrams path relations sid table path relations underlying subdiagrams stands chordless cycle ready main results section theorem let quiver prove theorem section isomorphism denote inverse monoid determined quiver appearing mutation class quivers say mutate sequence vertices quiver mean first mutate vertex quiver mutate vertex first vertex following proposition shows everitt fountain presentations boolean reflection monoids obtained quiver mutations proposition let proof quivers quiver viewed initial quiver mutate bing duan luo mutate sequence vertices following quiver obtain quiver definition mij mij otherwise lemma deduce mutating sequence vertices following quiver get obtain quiver definition mij mij connected connected edge weight connected edge weight lemma quiver mutations boolean reflection monoids mutating sequence vertices following quiver obtain quiver definition mij mij connected connected edge weight claim follows lemma suppose quiver mutation equivalent quiver theorem proposition gives presentation boolean reflection monoid everitt fountain proved boolean reflection monoid respectively isomorphic symmetric inverse semigroup respectively monoid partial signed permutations hence results recover presentation symmetric inverse semigroup defined presentation exactly presentation 
quiver following example given explain theorem example start quiver shown figure let quiver obtained mutation bing duan luo figure quiver quiver follows definition inverse monoid isomorphism defined otherwise inner diagram automorphisms boolean reflection monoids first consider inner automorphisms boolean reflection monoids well known group inn inn inner automorphism group center shown automorphisms boolean reflection monoid inner every automorphism exists uniquely determined element weyl group gtg words automorphism group naturally isomorphic automorphism group naturally isomorphic generalization result following theorem theorem inner automorphism group naturally isomorphic proof let inner automorphism since unique unit group inn following prove cases let one shown figure suppose set generators element claim set still set generators firstly obvious gsi nextly prove satisfies definition case edge gsi gsj gsi gsj gsj gsi case edge labeled gsi gsj gsi gsi gsj gsj gsi gsj case edge labeled gsi gsj gsi gsj gsi gsj gsj gsi gsj gsi quiver mutations boolean reflection monoids case edge gsi gsi gsi case edge labeled gsi gsi gsi gsi gsi gsi case type case type finally shall show suffices prove longest element involution section type type definition type type therefore inner automorphism group isomorphic let one shown figure let quiver let set vertices quiver obtained mutation mutable vertex following barot marsh work one define variables bing duan luo follows arrow possibly weighted otherwise arrow possibly weighted otherwise lemma equation follows new elements appearing procedure mutations quivers must reflections weyl groups theorem proposition isomorphism boolean reflection monoids encoded generalized coxeter diagrams see figure following theorem show inner diagram automorphisms boolean reflection monoids constructed sequence mutations preserving underlying diagrams theorem let quiver corresponding boolean reflection monoid generated set consisting simple reflections inner diagram automorphism exists sequence mutations preserving underlying diagram obtained mutations particular reflections obtained mutations proof let one shown figure suppose case type automorphisms inner see sequence mutations preserving underlying diagram induces inner automorphism automorphisms weyl groups preserve reflections inner diagram automorphisms see lemmas assume without loss generality gsi respectively claim firstly set generators generalized coxeter diagram corresponding preserves underlying diagram since respectively respectively variable must form respectively longest word respectively therefore must unique identity element respectively hence conversely inner automorphism theorem exists element weyl group gtg remainder proof necessity similar proof necessity theorem every reflection form gsi sik reduced expression arguments mutate sequence starting get gsi quiver mutations boolean reflection monoids cellularity semigroup algebras boolean reflection monoids section show semigroup algebras boolean reflection monoids cellular algebras use presentations obtained construct new cellular bases cellular algebras let commutative ring identity recall semigroup said cellular semigroup algebra cellular algebra proposition boolean reflection monoid cellular semigroup proof maximal subgroups boolean reflection monoid finite reflection groups shown finite reflection group cellular respect inversion therefore subgroup cellular cell datum satisfies east first assumption see theorem theorem define map map theorem theorem follows 
boolean reflection monoid cellular semigroup required remark case finite inverse semigroup whose maximal subgroups direct products symmetric groups considered east see theorem boolean reflection monoid type isomorphic symmetric group degree maximal subgroups boolean reflection monoid type finite reflection groups type isomorphic let one shown figure two quivers underlying diagrams appearing mutation class quivers always use presentations construct inner diagram automorphisms boolean reflection monoids see theorem extend automorphism semigroup algebras boolean reflection monoids corollary obtain new cellular bases semigroup algebras boolean reflection monoids example let symmetric inverse semigroup let partial permutation set denote image map image map denote sequence example partial permutation domain range following example gives new cellular bases method quiver mutations example let quiver example results preceding sections boolean reflection monoid bing duan luo set elements rank idempotents partial identity permutation let shown example containing idempotent ida subgroup dom well known group algebra cellular bases respect inversion indeed bases murphy basis property see example example section take mutating obtain following isomorphic quivers theorems theorem proposition follows inverse monoids determined quivers isomorphic symmetric inverse semigroup respectively presentation determined quiver admits initial cellular bases theorem theorem construct automorphism using presentations corresponding quivers corollary obtain new cellular bases mutations quivers finite type throughout section let one figure consider way mutations quivers oriented cycles appearing refer quiver without loops said finite type mutation equivalent dynkin quiver chordless cycle cycle two vertices cycle connected edge one show proposition proposition chordless cycles oriented mutation classes dynkin quivers extend results corollary case quivers lemma let quiver mutable vertex suppose two neighbouring vertices induced subquivers containing vertex neighbours shown figure effect mutation shown case quiver mutations boolean reflection monoids figure subquivers mutations diagram vertex said connected another edge let quiver mutation equivalent dynkin quiver lemma barot marsh described way vertices connected chordless cycle vertex connected two vertices chordless cycle connected two vertices two vertices must adjacent cycle following lemma generalization barot marsh results lemma lemma let mutation vertex list various types induced subquivers corresponding cycles every chordless cycle arises way bing duan luo vertex connect oriented chordless cycle corresponding cycle quiver mutations boolean reflection monoids vertex connects one vertex oriented chordless cycle via edge unspecified weight corresponding cycle lemmas following corollary corollary let quiver mutation class quiver frozen vertex one neighbour two neighbours two neighbours must oriented cycle cycle relations path relations section find efficient subset relations sufficient define boolean reflection monoids generalizes barot marsh results lemmas proposition lemma lemmas let dynkin quiver reflection group determined see section contains chordless cycle see figure following equivalent subscripts modulo single fixed value subscripts modulo contains chordless see figure following equivalent furthermore one holds following holds contains chordless see figure following equivalent furthermore one holds following holds figure chordless chordless chordless bing duan 
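the partial permutations and partial identities used in the running example compose like this; a minimal sketch with partial maps encoded as python dicts (a convenience encoding, not the paper's notation):

```python
def compose(f, g):
    """Composition g after f of partial permutations given as dicts
    (domain element -> image); defined exactly where the maps chain."""
    return {x: g[f[x]] for x in f if f[x] in g}

def inverse(f):
    """The unique inverse of a partial permutation: swap domain and range."""
    return {y: x for x, y in f.items()}

# a partial permutation of {1,...,4} with domain {1, 3, 4} and range {1, 2, 3}:
rho = {1: 2, 3: 3, 4: 1}
# composing with its inverse yields the partial identity on the domain,
# i.e. one of the idempotents of the symmetric inverse semigroup:
print(compose(rho, inverse(rho)))   # {1: 1, 3: 3, 4: 4}
```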
luo let one figure suppose quiver mutation equivalent quiver following lemma shows efficient subset relations definition generalizes lemma lemma let inverse monoid generators subjecting relations definition contains chordless cycle see figure following statements equivalent furthermore one holds following statements equivalent contains chordless cycle see figure contains chordless cycle see figure following statement holds furthermore one holds following statements equivalent contains subquiver see figure following statements equivalent single fixed value figure chordless chordless chordless subquiver see lemma quiver mutations boolean reflection monoids proof equivalence follows using suppose hold equivalence follows equivalence follows using first suppose hold using first last equation used commute using similar argument therefore equivalent using obvious end section show could defined using underlying unoriented weighted diagram taking relations corresponding qop defining relations result viewed generalization proposition proposition let boolean reflection monoid generators generators satisfy respect satisfy respect qop bing duan luo proof assume generators satisfy relations respect show generators satisfy relations respect qop converse follows replacing qop since depend orientation generators satisfy relation respect qop cases chordless cycles appearing quivers finite type proved proposition remaining needed check cases shown figure case case case note case follows lemma depend orientation chordless cycles since every chordless cylce qop corresponds chordless cycle result holds proof theorem section give proof theorem let one figure fix quiver let mutation vertex throughout section write generators corresponding vertex respectively similar define elements follows arrow possibly weighted otherwise arrow possibly weighted otherwise arrow possibly weighted otherwise arrow possibly weighted otherwise quiver mutations boolean reflection monoids order prove theorem need following proposition prove section proposition map inverse monoid homomorphism proof theorem vertex define elements follows arrow otherwise arrow otherwise claim elements vertex satisfy relations defining follows proposition interchanging using fact definition unchanged reversing orientation arrows see lemma therefore inverse monoid homomorphism arrow also arrow consequently arrow arrow therefore idm similarly idm hence isomorphisms proof proposition prove proposition showing elements satisfy relations denote value mij equation obvious sequel proof elements satisfy follows lemma rest proof completed case case lemma elements vertex satisfy following relations mij one connected equivalently mij let connected connected edge weight connected edge weight proof lemma barot marsh proved parts need prove part bing duan luo suppose without loss generality nontrivial case arrow weight note following suppose divide proof three cases case arrows hold case arrows one arrows assume arrows arrows connected connected edge weight connected edge weight arrows impossible fact cycle mutation class quivers corollary case arrows possibilities subquivers induced enumerated figure show satisfy checking case within case subcase subquiver diagram left subcase subquiver diagram right quiver mutations boolean reflection monoids note note note note possibilities chordless cycles mutation classes quivers enumerated lemma barot marsh proved holds show iii hold checking case need check corresponding cycle relations hold within case subcase subquiver 
diagram left subcase subquiver diagram right sequel frequently use without comment note bing duan luo note note note note note note note case follows either barot marsh result case trivial case follows commutative property vertex lemma prove following several cases case number vertices subquivers convenience within case subcase subquiver diagram left subcase subquiver diagram right sequel frequently use without comment quiver mutations boolean reflection monoids note bing duan luo quiver mutations boolean reflection monoids acknowledgements duan would like express gratitude everitt franzsen schiffler helpful discussions duan supported china scholarship council visit uconn department mathematics would like thank uconn department mathematics hospitality visit work partially supported national natural science foundation china research project supported minerva foundation funding federal german ministry education research references bannai automorphisms irreducible weyl groups fac sci univ tokyo sect barot marsh reflection group presentations arising cluster algebras trans amer math soc bourbaki lie groups lie algebras chapters berlin coxeter complete enumeration finite groups form kij london math soc charney davis coxeter system determined coxeter group london math soc davis geometry topology coxeter groups london mathematical society monographs series vol princeton university press princeton duan presentations monoids uniform block permutations ready east cellular algebras inverse semigroups algebra braids partial permutations adv math generators relations partition monoids algebras algebra everitt fountain partial symmetry reflection monoids coxeter groups adv math partial mirror symmetry lattice presentations algebraic monoids proc lond math soc easdown lavers inverse braid monoid adv math fitzgerald presentation monoid uniform block permutations bull austral math soc fitzgerald leech dual symmetric inverse monoids representation theory austral math soc ser franzsen automorphisms coxeter groups phd thesis university sydney australia felikson tumarkin coxeter groups quotients arising cluster algebras int math res imrn coxeter groups quiver mutations geometric manifolds lond math soc bing duan luo fomin reading root systems generalized associahedra geometric combinatorics city math vol amer math providence fomin zelevinsky cluster algebras foundations amer math soc cluster algebras finite type classification invent math geck relative cells represent theory hecke algebras finite type cellular invent math graham lehrer cellular algebras invent math grant marsh braid groups quiver mutation pacific journal mathematics guo cellularity twisted semigroup algebras pure appl algebra tom halverson representations monoid algebra humphreys reflection groups coxeter groups cambridge studies advanced mathematics cambridge university press cambridge howie fundamental semigroup theory oxford university press new york haley hemminger landesman peck artin group presentations arising cluster algebras algebr represent theory tom halverson arun ram monoid algebras hecke algebras duality math sci luo cellularity semigroup algebras bull malays math sci soc liber symmetric generalized groups russian mat sbornik popova defining relations semigroups partial transformations finite set uchenye zap leningrad gos ped inst marsh lecture notes cluster algebras zurich lectures advanced mathematics european mathematical society ems mathas algebras schur algebras symmetric group university lecture series american mathematical 
society providence murphy representations hecke algebras type algebra seven reflection group relations arising cluster algebras proc amer math soc schein teclezghi endomorphisms finite symmetric inverse semigroups algebra tsaranov representation classification coxeter monoids european combin wilcox cellularity diagram algebras twisted semigroup algebras algebra partition algebras cellular compositio math cellular algebras available https bing duan school mathematics statistics lanzhou university lanzhou china address dept mathematics weizmann institute science rehovot israel school mathematics statistics lanzhou university lanzhou china address quiver mutations boolean reflection monoids luo school mathematics statistics lanzhou university lanzhou china address luoyf
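The mutation operation that drives all of the constructions above acts on the skew-symmetrizable exchange matrix of the quiver by the Fomin-Zelevinsky rule. As a concrete reference point, here is a minimal Python sketch; the A_3 example quiver and its matrix encoding are our illustrative choices, not taken from the paper.

    def mutate(B, k):
        """Fomin-Zelevinsky mutation of a skew-symmetrizable integer
        exchange matrix B (list of lists) at vertex k."""
        n = len(B)
        Bp = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i == k or j == k:
                    Bp[i][j] = -B[i][j]  # entries in row/column k change sign
                else:
                    # the correction term is always an even integer, so // is exact
                    Bp[i][j] = B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
        return Bp

    # the A_3 quiver 1 -> 2 -> 3, encoded skew-symmetrically
    B = [[0, 1, 0],
         [-1, 0, 1],
         [0, -1, 0]]

    assert mutate(mutate(B, 1), 1) == B  # mutation at a vertex is an involution

The closing assertion checks involutivity, the basic fact underlying the sequences of mutations that return to, and hence preserve, the underlying diagram in the theorems above.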
sep conjugacy ratio groups laura ciobanu charles garnet cox armando martino abstract paper introduce study conjugacy ratio finitely generated group limit infinity quotient conjugacy standard growth functions conjecture conjugacy ratio groups except virtually abelian ones confirm conjecture certain residually finite groups subexponential growth hyperbolic groups artin groups lamplighter group introduction paper introduce study conjugacy ratio group limit quotient two functions naturally associated finitely generated group conjugacy growth standard growth precisely generated finite set let denote ball radius respect let denote set conjugacy classes representative conjugacy ratio respect crx lim sup motivation paper twofold one hand conjugacy ratio finite group equal degree commutativity measures probability two elements group commute defined degree commutativity group received lot attention recently definition extended finitely generated infinite groups dcx lim sup raised natural explore whether degree commutativity conjugacy ratio related infinite groups well second motivation comes fact quantitative results comparing standard conjugacy growth groups exist literature group fewer conjugacy classes elements gap two functions explored detail worth investigating example standard conjugacy growth rates taking limit nth root function equal frequently encountered families infinite groups hyperbolic groups graph products date march mathematics subject classification key words phrases conjugacy growth degree commutativity polynomial growth raags hyperbolic groups wreath products laura ciobanu charles garnet cox armando martino many wreath products thus examples quotient two functions function must subexponential conjugacy ratio convergence fast starting point following conjecture inspired conj conjecture let group generated finite set crx virtually abelian results conjugacy ratio several families groups support conjecture section investigate groups stable subexponential growth definition first show virtually abelian group crx finite generating set show normal finite index subgroup finite generating set crx allows apply technique show residually finite group stable subexponential growth virtually abelian crx finite generating set also show theorem finitely generated virtually abelian group finite generating sets crx cry say group generating set stable subexponential growth definition includes finitely generated groups since finitely generated groups residually finite theorem means conjecture true groups polynomial growth theorem conjugacy ratio finitely generated residually finite groups stable subexponential growth virtually abelian zero respect finite generating sets proof theorem generalised groups exponential growth provide independent arguments several important classes groups exponential growth theorem let hyperbolic group crx finite generating set theorem let lamplighter group wreath product crx standard generating set defined theorem let artin group raag based graph generating set crxv unless free abelian case crxv may also consider strict spherical conjugacy ratio counting done sphere radius rather ball radius may take ratio strict conjugacy growth function spherical growth function precisely let sphere radius group respect finite generating set let conjugacy classes intersect conjugacy classes minimal length representative spherical conjugacy ratio crsx lim sup conjugacy ratio groups remark theorem anytime spherical conjugacy ratio turns limit conjugacy ratio equal limit particular 
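The displayed formulas in the definitions above did not survive extraction; reconstructed from the surrounding text (with \sim denoting conjugacy, B_X(n) the ball and S_X(n) the sphere of radius n), they should read

    \mathrm{cr}_X(G) = \limsup_{n \to \infty} \frac{|B_X(n)/\sim|}{|B_X(n)|},
    \qquad
    \mathrm{dc}_X(G) = \limsup_{n \to \infty} \frac{|\{(u,v) \in B_X(n)^2 : uv = vu\}|}{|B_X(n)|^2},

and, for the spherical variant, with the numerator counting conjugacy classes whose minimal-length representatives have length exactly n,

    \mathrm{cr}^s_X(G) = \limsup_{n \to \infty} \frac{|S_X(n)/\sim|}{|S_X(n)|}.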
spherical conjugacy ratio conjugacy ratio preliminaries recall finitely generated group generating set exponential growth rate respect expx lim definition group finite generating set said exponential growth expx subexponential growth expx depend generating set additionally expx sufficiently large moreover replace balls spheres get limit inequality collect results convergence series relevant later theorem let two sequences strictly increasing divergent lefthandside limit exists lim lim proposition partial converse theorem implies groups exponential growth conjugacy ratio limit ratio sizes consecutive balls limit spherical conjugacy ratio equal conjugacy ratio proposition let two sequences strictly increasing divergent lefthandside limit exists lim lim proposition let monotonically increasing sequences positive integers define sequences dbn dbn suppose dbn lim proof given fix abnn next choose dcnn laura ciobanu charles garnet cox armando martino thus obtain result using fact proposition let sequences positive integers satisfying following properties monotone sequences iii abnn dbnn sufficiently large lim proof given fix thus suffices show lim lim using hypothesis sufficiently large results groups stable subexponential growth definition group finite generating set said stable subexponential growth conjugacy ratio groups note stable subexponential growth implies expx hence group subexponential growth celebrated result gromov every finitely generated group polynomial growth bounded polynomial function virtually nilpotent groups stable subexponential growth since result bass finitely generated virtually nilpotent group finite generating set exponent constants bnd exponent calculated explicitly virtually abelian group equal rank finite index free abelian subgroup get positive integer lim main result require class following proposition let finitely generated group stable subexponential growth finite generating set every finite index subgroup every lim lim furthermore infinite index subgroup limits zero coset remark last statement appear explicitly follows easily arguments alternatively one could prove via construction invariant mean requires choice ultrafilter stable subexponential condition ensures ultrafilter hence limit points sequences equal whenever ambiguity concerning group generating set write instead instead proposition suppose finitely generated virtually abelian group finite generating set crx precisely abelian crx proof let abelian note acts multiplication right cosets lie right coset since abelian thus conjugates element tends proposition lemma let group stable subexponential growth finite generating set let let finite index subgroup lim laura ciobanu charles garnet cox armando martino proof follows writing lim lim together proposition proposition let finitely generated group stable subexponential growth subgroup finite index crx finite generating set proof let let max consider xgn xgg moreover since conjugate may choose let know must exist hence every element conjugate element let representatives conjugacy classes every every assume conjugacy classes hence tends previous lemma theorem conjecture true finitely generated residually finite groups stable subexponetial growth proof proposition states finitely generated group virtually abelian finite generating set crx direction apply method proof thm using proposition completeness describe argument requires following result finite group hypotheses finitely generated residually finite stable subexponential growth virtually abelian wish 
show crx finite generating set work finite quotients build chain normal subgroups since finitely generated may choose subgroups characteristic characteristic transitive since virtually abelian choose commute using residually finite assumption let characteristic finite index subgroup hence gustafson result since properties used also apply finite index subgroups argument also applies hence may construct descending chain characteristic finite index subgroups conjugacy ratio groups every moreover induction proposition finite generating set crx since holds every obtain crx corollary conjecture true finitely generated virtually nilpotent groups equivalently groups polynomial growth virtually abelian groups goal section prove theorem let finitely generated virtually abelian group finite generating sets crx cry useful following shorthand definition let generated finite set subset generic lim negligible limit given group finite generating set finitely generated subgroup said undistorted word metric equivalent word metric restricted makes sense since two finite generating sets group induce equivalent word metrics easy see finite index subgroup always undistorted subgroup undistorted undistorted subgroup finite index retracts also undistorted recall retract image endomorphism collect following facts proposition suppose finitely generated virtually abelian group finite generating set subgroup finite index isomorphic every subgroup finitely generated undistorted let infinite subgroup let transversal number cosets representative lim proof let well known finitely generated fact true case virtually polycyclic includes finitely generated virtually nilpotent abelian case however fact undistorted true generally follows fact every subgroup finitely generated free abelian group finite index direct summand case finite index subgroup retract finite index subgroup therefore undistorted finitely generated undistorted since infinite must contain element infinite order exists precisely polynomial bounds degree laura ciobanu charles garnet cox armando martino let constants hence lim lim let infinite finitely generated virtually abelian group let normal finite index free abelian subgroup centraliser note subgroup therefore finite index proposition let finitely generated virtually abelian group finite generating set let normal finite index free abelian subgroup centraliser set minimal length representatives negligible proof let element denote cya number conjugacy classes representative claim lim cya conjugacy class representative choose shortest representative denote set representatives yai extract set rewriting geodesics required note fixed length cya let denote automorphism induced conjugation think matrix switch additive notation let image since conclude subgroup therefore infinite moreover elements distinct cosets hence proposition part may conclude proposition shows elements contribute conjugacy ratio elements representative conjugacy class might shortest representative particular coset varying see overcount number conjugacy classes complement nonetheless gives thus strategy proving theorem following first note element finite conjugacy class split elements centralise elements outside whose centraliser completely proposition shows former ones form negligible set conjugacy ratio groups latter ones generic set corollary moreover latter ones size class index constant elements therefore coset rather conjugacy class cosets contributes fixed amount conjugacy ratio algebraically determined use notation lemma let moreover 
proposition set finite union infinite index subgroups hence set negligible respect finite generating set proof since finite union enough show infinite index fact sufficient show infinite index subgroup however pure subgroup implies direct summand since direct summand whole therefore infinite index subgroup required corollary generic set elements respect generating set whose centraliser lies entirely proof exists negligible proposition proof theorem let elements index therefore conjugacy class size let set elements whose centraliser fully lie corollary negligible set since finite index finitely many values index thus finitely many moreover since union thus lim times number independent easy see integer two elements conjugate conjugate element length holds moreover since normal easy see acts conjugation acts conjugation hence well let number conjugacy classes meet contained rcn first inequality comes fact element conjugates second fact conjugates obtained conjugator length laura ciobanu charles garnet cox armando martino corollary lim lim get lim hence number conjugacy classes meet independent generating set summing finitely many gives result remark ideas presented used show finitely generated virtually abelian group finite generating set crx inf conjugacy ratio equal infimum conjugacy ratios finite quotients hence one measure conjugacy ratio using invariant means one would get numerical value unpublished results indicate degree commutativity similar reasons true whenever finitely generated virtually nilpotent group virtually abelian case key one results families groups hyperbolic groups section prove conjecture hyperbolic groups write mean theorem let hyperbolic group crx finite generating set proof let hyperbolic group finite generating set result coornaert see positive constants integer enh enh expx theorem positive constants enh enh thus get max taking limit obtain crx conjugacy ratio groups lamplighter group follow notation let set write ith component moreover group define say left translate definition consider groups symmetric generating sets neutral elements respectively wreath product written defined let generates lamplighter group let element let standard generating set theorem let lamplighter group wreath product crx standard generating set proof statement immediately example shown fact artin groups let simple graph graph without loops multiple edges vertex set edge set vertex let group graph product groups respect defined quotient free product normal closure relators edge consider artin groups raags graph products denote raag based graph generating set bijection conjugacy representatives raag come large extent taking one word cyclic permutation class first establish asymptotics language cyclic representatives rather general setting example free group free generating basis counting conjugacy classes minimal representative length equivalent counting number cyclically reduced words length cyclic permutation cyclic representatives languages follow notation section let language finite alphabet let denote set length let define prim language primitive words suppose closed cyclic permutations construct language cycrep cyclic representatives words word least lexicographically among cyclic permutations cycrep laura ciobanu charles garnet cox armando martino proposition see also lemma let exponential growth language closed cyclic permutations furthermore assume lim proof simplicity notation let consider numbers words length exactly write primk notice number cyclic representatives length prim 
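The free-group observation above, that counting conjugacy classes with a representative of length at most n amounts to counting cyclically reduced words of length at most n up to cyclic permutation, is easy to probe empirically. The following self-contained sketch does this for F_2; the generator labels and the radius are our choices.

    INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

    def reduced_words(n):
        """All freely reduced words of length <= n in F_2; '' is the identity."""
        words, frontier = [""], [""]
        for _ in range(n):
            frontier = [w + g for w in frontier for g in "aAbB"
                        if not w or INV[w[-1]] != g]
            words += frontier
        return words

    def conj_key(w):
        """Canonical representative of the conjugacy class of w."""
        while len(w) > 1 and INV[w[0]] == w[-1]:
            w = w[1:-1]  # cyclic reduction
        if not w:
            return ""
        return min(w[i:] + w[:i] for i in range(len(w)))  # least rotation

    for n in range(1, 11):
        ball = reduced_words(n)
        classes = {conj_key(w) for w in ball}
        print(n, len(classes) / len(ball))

The printed ratio decays towards 0, roughly like a constant times 1/n, consistent with the theorem proved below that the conjugacy ratio vanishes for non-abelian right-angled Artin groups, of which F_2 is the basic example.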
number cyclic representatives length primk also thus let standardpnumber theoretic euler functions inversion follows since exponential last term sum magnitude lim obtain result conjugacy representatives raags first establish result conjugacy ratio direct products lemma let two groups finite generating sets respectively either crx cry crx expx expy proof calculate conjugacy ratio respect balls use balls spheres let lim sup crx cry proposition putting cbn get similarly crx expx expy proposition putting states limit zero since raags interpolate free free abelian groups presence commutativity allow simply consider cyclically reduced words permutation free groups need single words taking cyclic representatives produces conjugacy representatives use crisp godelle wiest approach cgw developed conjugacy ratio groups definition def cgw let set total order cyclically reduced word cyclic normal form shortlex language respect cyclic conjugates well elements posses cyclic normal form example word cyclic permutation deal situation cgw divides words split definition definition cgw let cyclically reduced word denote full subgraph spanned supp let graph complement word split disconnected amounts able write product commuting subwords blocks word connected iii let cycsl denote set cyclic normal forms corresponding words say group element split represented cyclically reduced word split proposition prop cgw two cyclic normal forms represent conjugate elements equal cyclic permutation proposition remark cgw let two cyclically reduced split words conjugate words corresponding commuting blocks conjugate respectively lemma let cycsl set cyclic normal forms following hold cycslk cycsl cycsl closed cyclic permutations theorem let artin group raag based graph generating set crxv unless free abelian case crxv proof use induction number vertices let result trivial direct product get least one factors follows lemma factors lemma say first factor second induction free abelian strictly smaller growth rate first get factor free abelian suppose direct product split conjugacy classes cgv two types shortest length representative support denote cgv shortest length representative support exactly denote cgv propositions well defined moreover propositions two cyclically reduced words support conjugate conjugate note word cycsl cycsl laura ciobanu charles garnet cox armando martino thus write cgv cgv implies cgu express cgv cgu crxu lim sup right hand side equal since either crxu induction free abelian polynomial growth since direct product assumption exponential growth last fraction remains find lim second part right hand side since direct product conjugacy representatives support exactly suffices consider cyclic normal forms cyclic permutations lim sup cycsl cycsl proposition applied language cycsl satisfies hypothesis proposition lemma lim proves result cycsl reflections open questions results conjugacy ratio values essentially identical degree commutativity two quantities equal classes groups studied however could establish direct general link question limsup definition conjugacy ratio limit question groups dcx crx vice versa equal virtually nilpotent case hyperbolic group case many case degree commutativity know whether conjugacy ratio might influenced change generators question exist group finite generating sets crx cry conjugacy ratio groups finally would interesting unify proofs confirming conjecture larger classes groups groups exponential growth example references ciobanu formal conjugacy growth acylindrically hyperbolic 
groups int math res notices martino ventura degree commutativity infinite groups proceedings american mathematical society bass degree polynomial growth finitely generated nilpotent groups proc london math soc burillo ventura counting primitive elements free groups geom dedicata ciobanu hermiller mercier conjugacy growth graph products preprint coornaert mesures dans les espaces hyperboliques sens gromov pacific math coornaert asymptotic growth conjugacy classes free groups ijac vol cox degree commutativity lamplighter groups preprint https cgw crisp godelle wiest linear time solution conjugacy problem artin groups subgroups journal topology vol problems statistical acta math acad sci hungar gallagher number conjugacy classes finite group math gustafson probability two group elements commute amer math monthly mercier conjugacy growth series wreath products preprint https parry growth series wreath products trans amer math valiunas degree commutativity artin groups preprint https university edinburgh address url http university bath address cpgcox mathematical sciences university southampton address url http
complex systems science meets iot nicola marchetti irene macaluso nicholas kaminski merim dzaferagic majid butt marco ruffini saul friedner julie bradford andrea zanella oct michele zorzi linda doyle abstract propose new paradigm telecommunications develop framework drawing concepts information different metrics complexity computational agent based modeling theory adapted complex system science proceed systematic fashion dividing network complexity understanding analysis different layers modelling layer forms foundation proposed framework supporting analysis tuning layers modelling layer aims capturing significant attributes networks interactions shape application tools modelling graph theoretical abstractions derive new metrics holistically describe network analysis phase completes core functionality framework linking new metrics overall network performance tuning layer augments core algorithms aim automatically guiding networks toward desired conditions order maximize impact ideas proposed approach rooted relevant architectures use cases networks internet things iot cellular networks index terms complex systems science modelling internet things nicola marchetti irene macaluso nicholas kaminski merim dzaferagic majid butt marco ruffini linda doyle connect centre future networks communications trinity college university dublin ireland saul friedner julie bradford real wireless andrea zanella michele zorzi university padova italy material based upon works supported science foundation ireland grants ntroduction transition humanity information age precipitated need new paradigms comprehend overcome new set challenges specifically telecommunication networks underpin modern societies represent largest scale construction deployment efforts ever attempted humanity renovations occurring nearly continuously course decades results networks consist numerous subsections following trajectory development commingled cacophony emerging trends confirm picture drawn mobile wireless networks getting denser heterogeneous nature nodes network vary hugely form functionality ranging tiny simple sensors sophisticated cognitive entities wider range node parameters set many interdependent impact heavily network performance networks becoming adaptive dynamic many parameters set response changing contexts networks evolve issues become exaggerated networks see antennas base stations devices modes operation variability dynamism world like way systematically capture network behaviour straightforward network theory information theoretic approach used describe overall network interplay different networks propose tackle studying wireless networks perspective complex systems science css developing complexity metrics relating traditional measures network performance one key questions css relates degree term complexity refer specific set complex systems science quantities related interactions network entities rather entities networks current future trend towards diverse networks coexisting entities within iot ultra dense small cell networks amount interactions increase leading increase complexity meaning given word complex systems science organization system terms difficulty describing organizational structure amount information shared parts system result organizational structure example measure excess entropy type used describe behaviour collection networks signalling complexity associated future network resource management analyzed type measure functional complexity introduced conceptual structure based complexity informs 
modelling abm paradigm examine interactions different entities shape network abm provides method modelling complex systems ground allows deeper investigation interactions shape ultimate system performance abm provides powerful modelling entities variety areas contexts attributes abm applied inform communication networks decision making particular abm used investigate impact several medium access control mac component technologies key performance indicators kpi telecom networks applications example case wireless sensor network aiding internet things iot system summary propose new paradigm telecommunications drawing concepts complex systems science nature understand model behaviour highly heterogeneous networks systems networks also employ framework create new technologies supporting network operation otivation propose development conceptual framework means exploring broad range possibilities wireless networks including vast array technological possibilities framework thought applies concepts complex systems science provide means understand wireless networks holistically variety scales specifically consider communication patterns enable network functions capturing nodes necessary perform given function drawing connections nodes highlight functional dependencies call graph obtained way functional topology approach allows analyze communication patterns multiple scales lowest scale models communication individual words lowest scale focuses communication node immediate neighbors node functional topology second scale models communication node immediate neighbors neighbors neighbors increasing scale size moves focus away communication individual nodes allows analyze communication patterns groups nodes functional considering high degree heterogeneity dense interplay network elements proposed iot systems achieving holistic understanding network operation poised become even challenging prospect near future address challenges demonstrate power framework modeling analysis relevant scenarios cellular iot networks framework supports innovation beyond concepts feel scenarios adequately represent applications work development concept organized layered fashion modelling layer forming foundation framework supporting analysis tuning layers main aspects framework represented fig discussed detail remainder paper compared css literature addressing communication systems study wireless networks infrastructure perspective simple example excess entropy used measure complexity combination entropy leads understanding structure emerging lattice networks systems modelling analysis communication metrics css metrics functional topology graphs links communication css metrics constraints technological behaviours abm parameters range values tuning guidelines tuning network local rules global fitness adaptive resource allocation fig complex systems science based layered approach networks functional topology graphs abstracted network used compute complexity telecom metrics find relations understanding relations feed abm approach network tuning studied exhibit complex behaviour relates robustness changes environment particular exploring frequency planning complex systems perspective leads conclude future networks shall eschew current frequency planning approaches instead determine frequency operation fly enormous implications design networks deployment small cells network operation iii ethodology significant impacts made css wide range areas including physics biology economics social sciences computer sciences various 
engineering domains claim css perspective provides necessary means redefine general understanding telecommunication networks draw concepts information theory abm concept augmenting developing understanding wireless networks briefly review important tools concepts use studies order specify analyse complexity network function introduced framework representing abstraction telecommunication network modelling operation capturing elements nodes connections necessary perform given function framework includes functional topologies graphs created based functional connectivity system entities see fig node topology represents functional entity network node information source part given network function links indicate dependencies nodes definition functional topologies allows visualise relationships system entities enables systematic study interactions based topologies one define css inspired metrics functional complexity quantifies variety structural patterns roles nodes functional topology information metrics modelling abm useful method model networks abm used investigate impact several mac component technologies terms telecom iot application key performance indicators kpi key framework analysis tuning layers framework enables modelling analysis tuning wireless networks changes networks domain analysed assessed indeed order maximize impact framework proposed approach rooted relevant architectures use cases networks cellular iot networks use cases define expected parameters types environments general set possible scenarios could investigate using framework shown table table possible use cases parameters type users environments low latency high throughput high reliability extensive coverage energy efficiency typical mobile broadband healthcare automotive automation wearable devices busy train station location busy office large plant solution approach framework based around idea using concepts tools measures complex systems science nature framework based modelling layer supports analysis tuning layers see fig modeling layer modelling phase focuses developing techniques capture significant attributes networks interactions shape along traditional attributes used characterize networks coverage throughput modelling phase develops new complexity metrics investigates relation telecom kpis metrics shall developed distinctly application based existing new concepts draw css modelling component framework develops appropriate abstractions formalisms enable metric calculation end produce abstraction networks first level device level abstraction focuses individual elements within network targeting interplay results information collected used locally single entity interference stability connection function power available node nodes network two examples notions studied device scale available local information may case interference perceived certain network node may battery level result actions nodes device scale typically models implicit exchange information nodes infer information actions without directly exchanging messages paradigm distributed time division multiple access tdma system higher scales model explicit exchange information groups nodes network level interaction scale nodes act basis information provided node directly occurs example assigning slot centralized tdma system interactions shape network formation operation directly modelled using abm model considers interactions interests different network operators agents operate hierarchical fashion see fig network operator agents turn contain determine 
specific aspects network based technical behaviours anything makes decisions network viewed agent abm applied model interactions agents example iot agents may attempt use infrastructure provided operator agents shown fig capture range possibilities use nested subagents major agents might represent whole network subagents representing individual cells abm allows conversion experience detailed processes behaviours knowledge complete systems macrolevel outcomes general consider several radio resources abm model resources belonging frequency power space domains several alternative techniques technologies applied within domain entails wide set resources related modes utilisation analysis layer analysis layer models reviewed determine representative power meaning metrics developed linking operator behaviours new css metrics operator behaviours network kpis fitness iii new css metrics kpis example could analyse relationship operator decisions amount shared resources infrastructure spectrum resulting network characteristics scenario measures network performance identified including standard network operator agent cellular network agent cell agent access point agent iot agent network operator agent cellular network agent network operator agent wifi network agent wifi network agent wifi network agent cellular network agent iot agent iot agent fig agent organization agent model hierarchical major agents representing whole network subagents representing iot agents individual cells access points agents operator kpis cell edge peak mean throughput spectrum utilisation relative available bandwidth network reliability coverage type mentioned relations iii determine promising pairing elements operator behaviour css metric css metric kpi within scale scales determining connections particular identify behaviours correlate specific network performance measures scale extent css metrics describe relationships investigate certain css relation certain scale affects another css relation different scale strategy leading throughput maximisation device level might compromise fairness objective resource allocation scheduler interaction level process involves assessing ability css metrics describe impact operator behaviours analysing effect behaviours network kpis finally describing network kpis terms css metrics determining link network css metrics kpis would allow attempt answer fundamental questions whether one needs minimum complexity achieving given level kpi fitness excess complexity implies terms adaptivity robustness cost summary analysis layer completes development core framework establishing compact representation networks linking complexity metrics network performance tuning layer tuning layer augments framework algorithms automatically guide operation management behaviours relevant agents achieve desired network properties tuning approach utilizes holistic information encoded complexity based quantities select appropriate parameters constraints behaviours agents developed tuning approach based application optimization techniques algorithms developed within paradigm might apply optimization algorithms pgen successive pareto optimization determine pareto fronts state spaces agent behaviours basis achieving desirable css metrics values pareto fronts provide parameters constraints operator behaviours allowing operators optimize specific differentiations maintaining desired holistic properties particular solution may selected pareto front basis agent preferences preference high adaptivity robustness low 
complexity without compromising overall quality solution pplications roposed ramework modeling layer modelling internet things employ instance framework concept investigate tightened coupling operative reality information transfer precipitated iot investigation resides primarily modelling phase extension analysis phase within work apply tool abm study impact communications technologies within scope iot automatic traffic management system considered purposes illustrating fig single intersection diagram sensors deployed alongside roads represented dots inactive sensors depicted black dots sensors detecting moving static cars shown orange purple dots respectively nature abm approach single intersection assumed depicted fig controlled traffic lights avenue observed sensor nodes processing unit denoted decision maker serves sink sensor information source light control commands sensor nodes mark advancement cars portrayed yellow squares proceeding left side roadway toward intersection two mac protocols csma aloha investigated communication sensors applies resultant information process govern vehicular progress coloration traffic signals notably semantics communications greatly impact operation physical system fig exemplifies notion depiction difference actual number cars waiting traffic light perceived number cars known component system revealed abm minor difference channel csma aloha causes either actual number vehicles controlling element system application abm techniques allows development understanding various direct behavior complete telecommunication system fig impact mac perception situation scenario vehicles always travel straight line constant speed unless need stop due traffic lights cars iteration probability new car arriving one four edges grid travelling corresponding direction functional complexity another example work modelling layer developed metric capture amount information shared elements network result organization network support network function analytical approach quantify complexity functional topology provides means capture signaling complexity functional operations within network handover frequency assignment complexity metric provides new method describing functional operation telecommunication networks complexity metric built upon concept shannon entropy employ bernoulli random variable model potential node interact nodes probability interaction defined reachability node inr inr number nodes reach node number nodes given subgraph definition reachability terms number hops allowed two nodes functional topology enables analysis complexity multiple scales one hop reachability represents lowest possible scale node interacts immediate neighbors increasing number allowed hops nodes brings nodes closer terms interactions moves focus interactions among nodes interactions among groups nodes analysis higher scales total amount information subgraph nodes scale calculated subgraph nodes total amount information represents total uncertainty related actual roles nodes appear within subgraph different subgraph patterns complexity metric calculated quantifies amount order structure system seemingly disordered maximum scale size defined diameter functional topology number nodes functional topology whole functional graph hir average amount information given subgraph size call metric functional complexity approach holistically gauges functional organization network first describing interactions necessary perform given function topologically within representation capture network elements 
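A toy version of the intersection experiment described above can be written in a few lines. The sketch below is only meant to reproduce the qualitative effect, namely the decision maker acting on a perceived rather than actual queue; the per-MAC loss probabilities and all thresholds are illustrative assumptions, not values from the study.

    import random

    def run(mac, steps=200, p_arrive=0.3, seed=1):
        """Cars queue at a light; roadside sensors report the queue over a
        shared channel; the decision maker sees only delivered reports."""
        loss = {"csma": 0.05, "aloha": 0.30}[mac]  # assumed loss rates
        rng = random.Random(seed)
        actual, gap = 0, 0.0
        for _ in range(steps):
            if rng.random() < p_arrive:                # a new car joins the queue
                actual += 1
            perceived = sum(1 for _ in range(actual)   # one report per queued car,
                            if rng.random() > loss)    # each lost with prob. loss
            gap += abs(actual - perceived)
            if perceived >= 5:                         # light turns green
                actual -= min(actual, 3)               # up to 3 cars cross per step
        return gap / steps

    for mac in ("csma", "aloha"):
        print(mac, round(run(mac), 2))

Running it prints a larger average gap between actual and perceived queue length for the lossier channel, which is exactly the kind of discrepancy between the actual and perceived number of waiting cars discussed above.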
involved performing function interactions support operation function quantification networks terms functional relationships provides wholly new approach understanding operation networks corroborated fig typical metrics network topology capture notions represented complexity metric fact correlation complexity metric traditional metrics lower average path length clustering coefficient complexity average degree complexity average path length complexity average path length average degree complexity clustering coefficient average degree complexity clustering coefficient complexity fig correlation proposed complexity metric three used measures network topology average path length average degree distribution clustering coefficient cases consider complexity metric thus provides alternative method describing network operation functional topology complexity framework applied instance understand underlying mechanisms lead certain network properties scalability energy efficiency wireless sensor networks wsn result different clustering algorithms analysis layer context analysis layer framework focus cellular network selforganises frequency perspective understand collective behaviour network calculate excess entropy measure complexity entropy lim entropy target cell conditioned surrounding cells measuring gain understanding structure emerging lattice network based eqs one shows cellular network exhibit complex behaviour robust changes environment detail centralised channel allocation analyzed respect robustness local changes environment order compare stability two types channel allocation instances frequency allocation algorithm run using lattices resulting channel allocation possible cells considered cell possible frequencies turn considered optimal minimum distance channel allocation computed define distance two channel allocations number changes necessary move one configuration found locally perturbed channel allocation matrices resulting stable resulting centralized frequency planner know far relation complexity metrics telecom kpis excess entropy robustness changes functional complexity efficiency complexity metrics introduced shed new light relevant telecom context networks excess entropy measure capabilities frequency allocation context functional complexity measure scalability wsn widely acknowledged scalability important properties systems iot dense small cell deployments future plan improve expand understanding prominent network technologies kpis tuning layer abm rules choose technological behaviour options maximize targeted communication network kpi subject constraints defined correlation css metrics set available parameters complexity robustness complexity energy efficiency complexity resilience fitness functions waveforms mimo frequency reuse duplexing tuning layer mimo scheme ofdma full duplex frequency assignment algorithm network configuration parameters fig adaptation network configuration parameters tuning layer set available parameters represents virtual pool available network resources fitness functions depict relationship different network kpis complexity metrics calculated upon set available parameters telecom kpis local decisions based css metrics lead desired global network local decisions made according abm rules exploring selecting fittest behaviours behaviour mean algorithm policy acting radio resources goal different services mobile broadband choose behaviours allow network achieve satisfactory kpis terms delay throughput coverage energy efficiency emission etc question 
whether keep achieving globally satisfactory kpis changing abm rules distributed fashion different nodes adaptation act within certain resource allocation domain picking among different massive mimo schemes allocations using resources different domains spectrum infrastructure main ideas behind tuning layer framework exemplified fig although work tuning layer still initial phase substantial amount literature gather evidence different physical layer phy radio resource management rrm techniques domain chosen depending environmental conditions network requirements potentially situation tuning layer relevant beneficial give brief account evidence next shown massive mimo system linear sublinear behaviour respect number base station antennas depending spatial richness environment related work adaptive precoding distributed mimo explored several works investigate coexistence various waveforms terms leakage interference possible implications waveform selection fraction cells full duplex base stations used design parameter target optimal area spectral efficiency outage mixed duplex cellular system shown increasing frequency reuse improve small cell deployments lower frequency reuse favoured target maximizing throughput given certain density summary plan use understanding benefit adaptation phy mac layers networks extend needed terms technology components kpis adaptation criteria inform framework show immediate benefit understanding operating designing systems pen hallenges several component technologies addition considered paper enrich set possible choices used model analyse tune network including massive distributed multiple antenna arrays different waveforms multiple access schemes different duplexing schemes novel spectrum sharing schemes license assisted access laa different frequency reuse schemes including probabilistic ones networks know far relation complexity metrics telecom kpis excess entropy robustness changes functional complexity efficiency future aim improve expand understanding prominent technologies kpis networks particular still open question achieve desired network tuning properties within large optimization space encompassing many different network resources kpi objectives constraints many different heterogeneous networks large number nodes decision points conjecture abm help achieve ambitious goal key tool engineer desired emergent properties future challenging networks network graph representations discussed proposed framework might dynamically change according different radio resource domains related techniques used one open area investigation study complexity metrics calculated evolve time dynamic resource allocation use metrics analyse tune network behaviour taking account robustness resilience network utilization network characteristics onclusion current complex systems science literature focusing communication systems draws network science studying applications traffic modelling lacks considerations architecture infrastructure technology instead apply complex systems science wireless networks functional perspective drawing concepts information different metrics complexity computational agent based modeling theory adapted complex system science since complex systems science metrics currently absent quantities considered operating designing communication networks introducing proposed framework initiate completely new way model analyse engineer networks founding new theory practice telecommunications previously anticipated simple example work exploring frequency planning 
complex systems perspective leads conclude future networks shall eschew current frequency planning approaches instead determine frequency operation fly enormous implications design rollout operation networks believe distributed decision making paradigm likely going way forward many future iot resource allocation problems particular reasons believe complex systems science provides key unlock full potential telecom systems eferences lloyd measures complexity nonexhaustive list ieee control systems magazine vol feldman crutchfield structural information patterns entropy convergence excess entropy physical review macaluso cornean marchetti doyle complex communication systems achieving frequency allocation ieee icc macaluso galiotto marchetti doyle complex systems science perspective cognitive networks journal systems science complexity vol january dzaferagic kaminski mcbride macaluso marchetti functional complexity framework analysis telecommunication networks journal systems science complexity review available online arxiv https dzaferagic kaminski macaluso marchetti relation functional complexity scalability energy efficiency wsns international wireless communications mobile computing conference iwcmc jun review available online arxiv https niazi hussain tools modeling simulation hoc complex networks ieee communications magazine vol mar cirillo evaluating potential impact transmission constraints operation competitive electricity market illinois report anl tonmukayakul weiss model secondary use radio spectrum ieee international symposium dynamic spectrum access networks dyspan kaminski murphy marchetti modelling iot network ieee international symposium systems engineering isse whitacre degeneracy link evolvability robustness complexity biological systems theoretical biology medical modelling hooker philosophy complex systems elsevier candia uncovering individual collective human dynamics mobile phone records journal physics mathematical theoretical vol may deville inard martin gilbert stevens gaughan blondel tatem dynamic population mapping using mobile phone data proceedings national academy sciences hidalgo dynamics mobile phone network physica statistical mechanics applications vol may onnela structure tie strengths mobile communication networks proceedings national academy sciences vol may wang gonzalez hidalgo barabasi understanding spreading patterns mobile phone viruses science vol march bentosela cornean farhang marchetti sublinear behavior massive multi user mimo sum rate deterministic channel models ieee transactions communications ryu jung song adaptive precoding scheme efficient joint processing downlink coordinated transmission system electronics letters xing renfors investigation filter bank based communication integrated ofdma cellular system international symposium wireless communications systems iswcs bodinier bader palicot modeling interference limitations model international conference telecommunications ict may sexton bodinier farhang marchetti bader dasilva coexistence ofdm fbmc underlay communication networks ieee global telecommunications conference globecom goyal galiotto marchetti panwar throughput coverage mixed full half duplex small cell network ieee international conference communications icc may cirik rikkinen joint subcarrier power allocation maximization ofdma systems ieee vehicular technology conference vtc may galiotto pratas doyle marchetti effect propagation networks computer networks review available arxiv https
analysis unprotected intersection conflicts based naturalistic driving data apr xinpeng ding huei david analyzing reconstructing driving scenarios crucial testing evaluating highly automated vehicles havs research analyzed conflicts unprotected intersections extracting actual vehicle motion data naturalistic driving database collected university michigan nearly left turn across path opposite direction events involving heavy trucks light vehicles extracted used build stochastic model scenario among top priority scenarios identified national highway traffic safety administration nhtsa statistical analysis showed vehicle type significant factor whereas change season seems limited influence statistical nature conflict results used build testing environments havs simulate crash cases stochastic manner introduction highly automated vehicles havs released general public process testing evaluating must established google car project experienced first crash february moreover tesla autopilot failed detect first fatal crash happened may criticized using consumers beta testers fig briefly demonstrates crash happened red sedan representing tesla national highway traffic safety administration nhtsa considering possibility putting approval process place addition rigorous process still anticipated vehicle manufacturers key factor hav testing test scenarios behaviors road users particularly vehicles test conditions need realistic also feasible repeated safety tests test scenario models divided two types first type fixed scenarios tests lane support systems lss autonomous emergency braking aeb launched european new car assessment programme euro ncap major advantage type repeatable however hard use type models work funded mobility transformation center denso tailor project university michigan grant wang department automation tsinghua university beijing china visiting scholar university michigan ann arbor zhao corresponding author zhaoding leblanc university michigan transportation research institute ann arbor peng department mechanical engineering university michigan transportation research institute ann arbor fig brief description tesla accident represent highly complex variable nature human driving environment moreover havs could adjusted pass certain fixed scenarios performance broad conditions might well assessed overcome drawbacks proposed second type models previous works proposed stochastic test method built test environment scenarios paper focus intersection scenario intersection one challenging scenarios havs due variety road users complexity traffic flow unpredictability vehicles pedestrians according crashes intersections took major portion traffic crashes among kinds scenarios potential risks intersection unprotected left turn across path opposite direction typical one scenario ranked second among priority precrash scenarios scenario two vehicles considered turning vehicle straightdriving vehicle sdv although lot research conducted traffic conflict analysis scenario factor vehicle type widely investigated crash tesla autopilot system attributed failure detect truck turning ahead crucial attention paid scenarios involving heavy trucks moreover insufficient research influence season change driving behaviors intersections extreme weather storm fog strong impact driving behaviors human drivers propose possibly influential havs well research focused two major tasks first built stochastic model traffic conflicts scenario table introduction ivbss database vehicle type distance time trips vehicles 
TABLE I. Overview of the two IVBSS platforms
    Platform        Front radar   Test vehicles       Drivers
    Light vehicle   Bosch         passenger sedans    volunteer drivers (personal use)
    Heavy truck     TRW           Class 8 tractors    male commercial truck drivers

First, based on naturalistic driving data, events with light vehicles (LVs) and heavy trucks (HTs) as the SDV were extracted from the database, realistic trajectories of the TVs and SDVs were reconstructed, and several key variables were described. Second, the influence of the SDV's vehicle type and of the season factor on driving behavior was analyzed by comparing the distributions of the key variables for LVs and HTs, as well as for summer and winter data.

II. SOURCE OF DATA

The data source for this research is the Integrated Vehicle-Based Safety Systems (IVBSS) database, collected and maintained by the University of Michigan Transportation Research Institute (UMTRI). The database consists of two parts: the light-vehicle platform and the heavy-truck platform.

The light-vehicle platform comes from a naturalistic field operational test that assessed the potential safety benefits of, and driver acceptance associated with, a prototype integrated crash warning system. The system incorporates forward crash warning (FCW), lateral drift warning (LDW), lane-change/merge (LCM) warning, and curve speed warning (CSW). None of these functions was designed to deal with the LTAP/OD scenario; thus it is assumed in this research that whether the warning system was enabled does not affect driver behavior in this scenario. The platform consisted of identical prototype vehicles driven by volunteer drivers for personal use for six weeks each; the test ran from April of one year to April of the next. Each test vehicle had one forward-looking radar and six radars covering the adjacent lanes as well as the area behind the vehicle, in addition to a vision system, an automotive-grade global positioning system (GPS), and a digital map; a large number of different channels of signals were collected.

For the heavy-truck platform, male commercial truck drivers from a freight company drove equipped Class 8 tractors for several months. Eight radars, three exterior cameras, and several interior cameras were installed on each test truck, recording many channels of data covering the driving environment, driver activity, system behaviors, and vehicle kinematics.

Basic information on both platforms of the IVBSS database is listed in Table I, and the sensor configuration of the heavy-truck platform is shown in Fig. 2. The test area covered by IVBSS was primarily the Detroit area; most trips took place in the lower peninsula of Michigan and in Ohio, and the trips of the two platforms fell within a similar region. The database provides adequate information for this research: data from the GPS sensor is used to locate the instrumented vehicle, data from the front radar is used to reconstruct the trajectories of target vehicles, and video recordings from the cameras around the vehicles are used as a supplemental tool for event screening. In addition, the IVBSS tests lasted approximately one year, so driving data under a variety of weather conditions throughout the year is covered, enabling us to uncover the influence of the season factor.

III. EXTRACTION OF THE LEFT-TURN SCENARIO

To extract eligible events from the database, three major tasks were performed. First, we processed the radar data for use; second, we searched the database for events that meet our criteria; finally, the data points of each event were interpreted into trajectories of the SDV and the TV.

A. Target association for truck data

For radar data from the heavy-truck platform, we need to associate and mark the data points that belong to the same target, in order to screen out unfit targets and create a trajectory for every eligible cluster of points of interest. We apply the following criteria when processing the data: (i) only objects (TVs) that move in the opposite direction are retained; (ii) only detected points with a small azimuth angle are considered to be within the effective detecting range of the radar; (iii) clusters of points are expanded over time: data points within a small time slot are considered neighbor points, and neighbor points are grouped into one target if they show a strong correspondence in range and range rate, a small time difference, and a reasonable difference in transversal position. (A sketch of this grouping step is given below.)

Fig. 3 shows an example of data points that have been associated and divided into different groups: dots of the same color show the trajectories of individual targets, while the red dots belong to no group and are treated as noise. The event is a typical scenario in which a vehicle turns in front of the instrumented truck. Fig. 4 shows how the range and transversal position of the target points change with time: as the target vehicles cross the intersection while the instrumented truck moves forward at a steady speed, the ranges of the different targets decrease nearly linearly, and the transversal positions of multiple targets go from negative to positive, indicating that they cross from left to right in the view of the instrumented truck.
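For concreteness, the following is a minimal sketch of the target-association step, under the grouping criteria stated above. The data layout and all thresholds (dt_max, eps_range, eps_rate, eps_trans, and the minimum track length) are illustrative assumptions, not the values used in the study.

```python
# Hypothetical sketch of grouping radar detections into target tracks.
from dataclasses import dataclass, field

@dataclass
class Detection:
    t: float      # timestamp (s)
    rng: float    # range to target (m)
    rate: float   # range rate (m/s)
    trans: float  # transversal (lateral) position (m)

@dataclass
class Track:
    points: list = field(default_factory=list)

def compatible(p: Detection, q: Detection,
               dt_max=0.5, eps_range=3.0, eps_rate=2.0, eps_trans=1.5):
    """Neighbor points belong to the same target if they show strong
    correspondence in range/range rate, a small time difference, and a
    reasonable difference in transversal position (placeholder thresholds)."""
    if not (0 < q.t - p.t <= dt_max):
        return False
    predicted_rng = p.rng + p.rate * (q.t - p.t)  # constant-rate prediction
    return (abs(q.rng - predicted_rng) <= eps_range
            and abs(q.rate - p.rate) <= eps_rate
            and abs(q.trans - p.trans) <= eps_trans)

def associate(detections):
    """Greedily extend tracks with time-ordered detections; points that
    join no track are treated as noise, and short tracks are dropped."""
    tracks = []
    for d in sorted(detections, key=lambda x: x.t):
        for trk in tracks:
            if compatible(trk.points[-1], d):
                trk.points.append(d)
                break
        else:
            tracks.append(Track(points=[d]))
    return [t for t in tracks if len(t.points) > 5]
```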
B. Event screening

After the data points of targets are clustered, the heavy-truck platform can be used to extract eligible LTAP/OD events. An unprotected LTAP/OD scenario can be recorded by either the SDV or the TV; in this paper we use scenarios recorded by SDVs. Fig. 5 shows the configuration of the instrumented vehicle (the SDV) and the target vehicle (the TV) in event extraction. For both platforms, eligible left-turn events were queried based on the following criteria:

1) The event occurs at an intersection with no stop sign, i.e., one controlled by signal lights. Although protected left-turn events are also retrieved by this criterion, they are screened out by the following conditions.
2) Constraint on velocity: the SDV (the instrumented vehicle) is moving straight, with a speed larger than a threshold and a change of heading angle smaller than a threshold.
3) The target vehicle is moving towards the instrumented vehicle, i.e., the longitudinal projection of its speed is negative, and it moves from left to right; due to the difference between the radars, the transversal position goes from positive to negative for LVs and from negative to positive for HTs.
4) The time duration of the event is adequate, and the maximum time difference between two consecutive points of the event is small enough that the points can be seen as belonging to one target.

Event extraction follows a similar procedure for the two platforms. For the heavy-truck platform, we first select occurrences at intersections and then extract left-turning objects in the opposite direction; these tasks were completed in Microsoft SQL Server Management Studio (SSMS). Afterwards, the extracted events were exported to MATLAB, where a last round of screening guarantees a reasonable speed of the targets and an adequate time duration. For the light-vehicle platform, the difference is that after retrieving occurrences at intersections we export the data from the database server directly to MATLAB for target association and the subsequent extraction tasks. The diagram in Fig. 6 illustrates the procedure and the interim results of each phase of event extraction, and the locations of the finally eligible events are shown in Fig. 7.

C. Trajectory reconstruction

For each eligible event, the trajectories of the SDV and the TV are reconstructed. The exact position of the SDV comes from the GPS sensor data, and the front radar is used to extract the relative position of the TV in the coordinate frame of the SDV. After synchronization of the GPS and radar data, the trajectories of the SDV and TV are generated. Fig. 8 shows the reconstructed trajectories of the SDV and TV in one event, where dots of each color represent the positions of the SDV and the TV at each moment as the TV crosses the intersection; examples are available for both a light-vehicle and a heavy-truck SDV.
IV. CONFLICT ANALYSIS AND COMPARISON

A. Definition of metrics for conflicts

In this section, "conflict" is used to describe risky events in traffic. A conflict is defined as an observational situation in which two road users approach each other in space and time to such an extent that a collision is imminent if their movements remain unchanged. Many conflict metrics have been used for measuring the level of safety of an event, including time-to-collision, post-encroachment time (PET), leading buffer, trailing buffer, and gap time. In this paper, as the goal is to construct a stochastic model, we choose a representative time slice of each event to model the conflict. The heading angle of the SDV is taken as constant during the event, since its small deviation can be ignored. The conflict point is then naturally defined as the location at which the transversal position of the TV, as measured by the radar of the SDV, crosses zero; the exact moment when this happens is regarded as the representative moment of the event, and the conflict is defined accordingly. Four variables are chosen to model the conflict, including two modified conflict metrics: the time to conflict point (TCP) and the distance to conflict point (DCP), where DCP is the distance from the SDV to the conflict point, DCP = dist(p_SDV, p_conflict); the other two variables are the speeds of the SDV and the TV, v_SDV and v_TV.

To demonstrate the conflict analysis on a single event, we use the aforementioned occurrence in which the TV crossed the intersection in front of the SDV. Fig. 9 uses TCP to show how the SDV interacted with the TV in one real event: the vertical axis indicates the predicted time of the SDV to the conflict point, whereas the horizontal axis shows the real elapsed time relative to the moment the TV crosses the intersection. In this event, the time to conflict point decreases nearly linearly with time, indicating that the margin left by the TV was large enough for the SDV to maintain a nearly constant speed while crossing; the margin of the SDV in TCP when the TV reached the conflict point is marked by the red dot. TCP thus captures the essence of the interaction between the SDV and the TV. In the following modeling and analysis we ignore the detailed interaction within each event and pay attention to the four aforementioned variables, using all events retrieved in the previous section from both platforms as the source for modeling.

B. Effect of vehicle type

In this section, the effect of vehicle type on traffic conflicts in LTAP/OD scenarios is discussed: the distributions of the variables for LVs and HTs are compared. Since events with smaller DCP and TCP are more dangerous, we also generated the distributions of the reciprocals of DCP and TCP, which put the risky, rare events in the tail. In the figures, dots and bars show the mean value and standard deviation of the empirical distributions. From Fig. 10 we see that as DCP and TCP increase there are fewer data points, giving rise to long-tailed shapes; moreover, events with an HT as the SDV tend to have larger DCP and larger TCP than those with an LV, indicating less severe conflicts. Fig. 11 shows the distributions of v_SDV and v_TV. The distribution of v_SDV has a triangular shape for both platforms, and there is no obvious difference in v_SDV between them; however, in events where the SDV is an HT, v_TV tends to be significantly lower than when the SDV is an LV. Combining this with the results on DCP and TCP, we conclude that in conflicts in which the SDV is a heavy truck the conflict metrics have significantly higher values: TVs tend to turn at a less aggressive speed in front of trucks, choose the time at which the turn commences more carefully, and behave more conservatively when confronted with a heavy truck coming in the opposite direction. The difference in vehicle type thus influences driving behavior and the severity of the conflict.

C. Analysis of the season factor

In this section we uncover the influence of the season factor on the behaviors of SDVs and TVs in LTAP/OD scenarios. The test driving of both HTs and LVs took place across both freezing and non-freezing months. The period in which events took place at freezing temperatures is defined as winter; it includes December through March of the following year, and coincides with the time of largest average snowfall in Ann Arbor. Summer, on the other hand, is defined as June through August. We retrieved sets of summer and winter events for LVs, and corresponding sets for HTs. TCP, DCP, v_SDV, and v_TV are compared between summer and winter driving. The Mann-Whitney-Wilcoxon (MWW) test, a nonparametric hypothesis test whose null hypothesis is that the two populations are the same, is used to determine whether the conflict metrics differ between summer and winter.

Fig. 12 shows the results of the comparison. For both platforms, the mean values in summer and winter of all four variables that describe the conflict between SDVs and TVs are close, and the p-values of the MWW test are large for all eight pairs of distributions, so the test is not able to distinguish the summer pattern from the winter pattern in terms of DCP, TCP, or the speeds. This indicates that despite the large difference in climate, there is no significant difference in the way people drive in winter and summer in LTAP/OD scenarios in the Great Lakes area.
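The season comparison can be reproduced with SciPy's implementation of the MWW test; the sketch below uses synthetic placeholder data in place of the study's TCP samples, since the actual samples are not given here.

```python
# Illustrative Mann-Whitney-Wilcoxon comparison of one conflict metric
# (TCP) between summer and winter events; the arrays are synthetic
# stand-ins for the extracted event data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
tcp_summer = rng.gamma(shape=4.0, scale=0.8, size=300)
tcp_winter = rng.gamma(shape=4.0, scale=0.8, size=250)

stat, p_value = mannwhitneyu(tcp_summer, tcp_winter, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
# A large p-value means the test cannot distinguish the two populations,
# which is the pattern reported above for all eight summer/winter pairs.
```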
V. CONCLUSION AND SIGNIFICANCE FOR DESIGNING AND TESTING HAVS

In this research, the traffic conflicts between TVs and SDVs in LTAP/OD scenarios were modeled and analyzed based on events extracted and reconstructed from a naturalistic driving database. Two modified conflict metrics, TCP and DCP, were used to model the turning behavior, and the resulting stochastic model can be used in developing simulation tools for evaluating HAVs. The significance of vehicle type and season was also addressed. In general, when the SDV is a heavy truck, the TV driver tends to turn in a more conservative fashion with a wider margin. Surprisingly, despite the prevailing snow and freezing weather of winter in Michigan, driver behavior in LTAP/OD scenarios in our test did not differ significantly between summer and winter. These two conclusions are useful for designing automated driving algorithms and for establishing regulations and policies for HAVs.

Follow-up research will improve the accuracy of trajectory reconstruction by conducting sensor fusion of GPS, yaw-rate sensor data, and other channels. Moreover, we will investigate the reasons behind the similarity of driver behavior in summer and winter. Possible causes could be that snow on the road is shoveled promptly in winter, so that normal driving is almost unaffected, or that trips in extreme weather are avoided, so that the winter data is biased towards benign conditions. Besides, we will also use the model to build a stochastic simulation environment for the testing and evaluation of HAVs.

DISCLAIMERS

This work was funded in part by the University of Michigan Mobility Transformation Center Denso Pool project. The findings and conclusions in this report are those of the authors and do not necessarily represent the views of the MTC or Denso.

REFERENCES

[1] Google Self-Driving Car Project, Monthly Report, February 2016.
[2] O. Solon, "Is Tesla beta testing Autopilot, with a chance someone might die?" Online.
[3] U.S. Department of Transportation, National Highway Traffic Safety Administration, "Federal Automated Vehicles Policy," Tech. Rep., September 2016. Online.
[4] European New Car Assessment Programme, "Test protocol: Lane support systems." Online.
[5] European New Car Assessment Programme, "Test protocol: AEB systems." Online.
[6] D. Zhao, X. Huang, H. Peng, H. Lam, and D. LeBlanc, "Accelerated evaluation of automated vehicles in car-following maneuvers," submitted to IEEE Transactions on Intelligent Transportation Systems. Online.
[7] D. Zhao, H. Lam, H. Peng, S. Bao, D. LeBlanc, K. Nobukawa, and C. Pan, "Accelerated evaluation of automated vehicles safety in lane-change scenarios based on importance sampling techniques," IEEE Transactions on Intelligent Transportation Systems.
[8] Z. Huang, D. Zhao, H. Lam, and D. LeBlanc, "Accelerated evaluation of automated vehicles using piecewise mixture models," submitted to IEEE Transactions on Intelligent Transportation Systems. Online.
[9] C.-Y. Chan, "Defining safety performance measures of driver-assistance systems for intersection left-turn conflicts," in IEEE Intelligent Vehicles Symposium.
[10] W. Najm, J. Toma, and J. Brewer, "Depiction of priority light-vehicle pre-crash scenarios for safety applications based on vehicle-to-vehicle communications," Tech. Rep., April.
[11] C.-Y. Chan, "Characterization of driving behaviors based on field observation of intersection left-turn across-path scenarios," IEEE Transactions on Intelligent Transportation Systems.
[12] K. Nobukawa, M. Barnes, R. Goodsell, and T. Gordon, "Reconstruction of vehicle trajectories for intersection conflict analysis using vehicle-based sensors," preliminary report, July. Online.
[13] D. LeBlanc, J. Sayer, S. Bao, S. Bogard, M. L. Buonarosa, A. Blankespoor, and D. Funkhouser, "Driver acceptance and behavioral changes with an integrated warning system: Key findings from the IVBSS FOT," Tech. Rep. Online.
[14] J. Sayer, M. L. Buonarosa, S. Bao, S. Bogard, D. LeBlanc, A. Blankespoor, D. Funkhouser, and C. Winkler, "Integrated vehicle-based safety systems light-vehicle field operational test methodology and results report," December.
[15] J. Sayer, S. Bogard, D. Funkhouser, D. LeBlanc, S. Bao, A. Blankespoor, M. L. Buonarosa, and C. Winkler, "Integrated vehicle-based safety systems heavy-truck field operational test key findings report," Tech. Rep., August.
[16] D. Zhao, H. Peng, K. Nobukawa, S. Bao, D. LeBlanc, and C. Pan, "Analysis of mandatory and discretionary lane change behaviors for heavy trucks," in AVEC.
[17] A. Tarko, "Use of crash surrogates and exceedance statistics to estimate road safety," Accident Analysis & Prevention.
[18] K. Nobukawa, "A model based approach to the analysis of intersection conflicts and collision avoidance systems," Ph.D. dissertation, University of Michigan.
[19] J. Misener, "California intersection decision support: A systems approach to achieve nationally interoperable solutions," California PATH Research Report.
[20] H. B. Mann and D. R. Whitney, "On a test of whether one of two random variables is stochastically larger than the other," Annals of Mathematical Statistics.
A Revised Incremental Conductance MPPT Algorithm for Solar PV Generation Systems

Meng Yue and Xiaoyu Wang
Sustainable Energy Technologies Department, Brookhaven National Laboratory, Upton, NY, USA (yuemeng, xywang)

Abstract. A revised incremental conductance (IncCond) maximum power point tracking (MPPT) algorithm for PV generation systems is proposed in this paper. The commonly adopted traditional IncCond method uses a constant step size for voltage adjustment, which makes it difficult to simultaneously achieve good tracking performance and quick elimination of oscillations, especially under dramatic changes of environmental conditions. In the revised algorithm, the incremental voltage change step size is adaptively adjusted based on the slope of the P-V curve; an accelerating factor and a decelerating factor are applied to adjust the voltage step change depending on whether the sign of the P-V curve slope remains the same in subsequent tracking steps. In addition, the upper bound on the maximum voltage step change is also updated using the information about sign changes. The revised MPPT algorithm can quickly track the maximum power points (MPPs) and remove the oscillation of the actual operation points around the real MPPs. The effectiveness of the revised algorithm is demonstrated through simulation.

Index terms: IncCond MPPT algorithm, fractional MPPT algorithms, P&O MPPT algorithm, solar PV generation.

I. INTRODUCTION

As one of the most promising renewable energy technologies, the installed capacity of solar photovoltaic (PV) generation has increased dramatically in recent years. Although the cost of PV generation continues to drop, the economic competitiveness of solar energy is still low compared to traditional energy sources, even with various local and federal policy instruments in place. It is therefore desirable to lower the cost and increase the efficiency of solar energy systems, including the solar panels and the power electronic devices. Increasing the efficiency of already-installed PV systems simply by improving the existing control algorithms should also be pursued; one way of achieving this is to modify the existing MPPT algorithms so that more solar energy is extracted under various environmental conditions.

Many different types of MPPT algorithms have been proposed in the literature, and their pros and cons in terms of complexity, accuracy, convergence speed, etc., have been surveyed. Among them, the commonly used perturb-and-observe (P&O) method is easy to implement using either analog or digital circuits. It periodically perturbs either the duty ratio of the converter or the array operating voltage, even when the MPP has been reached; true MPPT is therefore not achieved, since the operating point keeps oscillating around the MPP. Under continuously and rapidly changing irradiance, the operating point might continuously deviate from the MPPs so that optimal operation is never achieved. These issues degrade the performance of the solar generation system. The fractional open-circuit-voltage (or short-circuit-current) method needs to sense only one voltage or current parameter and approximates the MPP using empirical parameters; a major issue with this method is that the PV circuit must periodically be operated at open- or short-circuit conditions, which may have a significant impact on grid operation. Other types of algorithms, based on fuzzy logic control or neural networks, may accurately track MPPs under different environmental conditions; their MPPT performance, however, cannot be guaranteed, since they rely heavily on the algorithm developers and on a significant volume of field data covering all kinds of conditions for their design and implementation.

The IncCond method appears to be the most popular one in practice, due to its medium complexity and relatively good tracking performance. One major difficulty in implementing the IncCond method is the selection of a fixed voltage change step size that simultaneously satisfies the tracking speed requirement and maintains operation at the MPP: a large step size helps the system approach the MPPs rapidly but generally induces persisting oscillations around the MPP unless special countermeasures are taken, while the issues with a small step size are the opposite.

A simple and effective revised IncCond algorithm is proposed in this paper. An adaptive voltage step change scheme is first adopted based on the slope of the P-V curve at which the operating point is located. An accelerating factor and a decelerating factor are applied to further adjust the voltage step change, considering whether the sign of the P-V curve slope remains the same or changes in the subsequent tracking step; the information about sign changes is also used to update the upper bound of the maximum voltage step change. The adaptive voltage step change enables the PV system to quickly track environmental condition variations and to reach and stay at the MPPs, so that more solar energy is harvested; these improvements enable a quick response to environmental changes and rapid landing on the MPP. The revised method is easy to implement, since it does not require knowledge of the characteristics of specific PV panels and its parameters are easy to tune. The revised IncCond algorithm is described in detail in Section II, together with an overview of various modified IncCond methods; modeling of a generic PV generation system is presented in Section III for simulation purposes; simulation results using the proposed MPPT algorithm are shown in Section IV; and concluding remarks are given in Section V.
II. THE REVISED INCCOND MPPT METHOD

The MPP is reached by adjusting the terminal output voltage of the solar array, i.e., by controlling the converter duty ratio. While the cell temperature can easily be measured, irradiance is difficult to measure accurately, so the desired voltage at the MPP is hard to know exactly; a test condition therefore needs to be developed to determine whether the current operating point is the MPP without measuring the temperature and irradiance of the solar panel. There is only one maximum power point for a given irradiance level and cell temperature. (Note that partial shading of the panel may cause multiple local maxima; this case is not considered in this paper, although the revised algorithm can be used together with the methods proposed for it.)

The IncCond method uses the information in the P-V curve of the solar array: on the left-hand side of the MPP the slope dP/dV is greater than zero, on the right-hand side it is less than zero, and the slope is zero exactly at the MPP. Therefore the solar array terminal voltage needs to be increased when the slope is positive and decreased when the slope is negative. The slope can be calculated from the incremental conductance, since

    dP/dV = d(VI)/dV = I + V (dI/dV),

and in implementation the MPP condition dP/dV = 0 corresponds to the relationship

    \Delta I / \Delta V = -I/V.

A major difficulty of the IncCond method is the selection of the incremental step size of the duty ratio used for adjusting the solar terminal output voltage: a fixed incremental step size in general cannot bring the array exactly to the MPP, and the operating point will oscillate around the MPP on either the left or the right side. Several modifications have been proposed. In one approach, the entire family of P-V curves is divided into two domains using square-root functions such that the MPPs are contained in only one of them; the first step of performing MPPT is then to bring the operating point into the domain that contains the MPPs. This method, however, requires a good understanding of the panel characteristics and is panel-specific. In another approach, a Van Allen oscillator is added between the solar panel and the inverter for the purpose of balancing the power source and the continuously changing load, and a simple proportional-integral controller was developed to track the MPPs based on this configuration. Intuitively, an easy way to avoid a fixed voltage change step size is to adjust the increment proportionally to the steepness of the slope, so that the increment of the duty ratio becomes zero at the MPP where the slope is zero; a controller along these lines has been proposed. Its implementation, however, appears to be difficult: the steepness of the P-V curve around the MPP differs under different operating conditions (the P-V curve at a lower irradiance level is flat), and a sudden change of the operating condition may produce a large numerical difference when calculating the slope as the change occurs, since the resulting duty ratio change may cause an unacceptable change of the solar terminal output voltage and make it difficult to bring the voltage back to normal. Note also that two-stage methods have been proposed mainly to avoid the local maxima caused by non-uniform insolation; the traditional IncCond method is still used after the operating point is brought close to the global MPP using monitoring cells.

In this section, a simple and effective modified IncCond method is proposed, based on the following observations about two consecutive tracking steps. A changing sign of the slope (positive to negative, or negative to positive) indicates that the increment step size is too large; otherwise the operating point would not have landed on the other side of the MPP on the P-V curve after the duty ratio adjustment. The same sign of the slope in two consecutive tracking steps indicates that the increment step size is too small; otherwise the operating point would have landed on the other side of the MPP.

Based on these observations, the following strategy is proposed: adjust the incremental step size considering the steepness of the slope, and further adjust it by comparing the signs of the slopes in two consecutive tracking steps, decreasing the incremental size in the former case by multiplying it by a factor deacc < 1 and increasing it in the latter case by multiplying it by a factor acc > 1. Applying this improved strategy, the solar array approaches the MPP in an accelerating manner, and after a change of operating condition the magnitude of the oscillation around the MPP rapidly decreases; the test condition is considered satisfied upon landing on the MPP, and the duty ratio is not adjusted further until the operating condition changes.

In the implementation, an upper bound on the incremental step size needs to be defined to avoid extremely drastic changes of the duty ratio. This bound is, however, generally fixed, and it needs to be large enough to permit rapid tracking of the MPP after a sudden change of operating condition. The issue with a fixed upper bound is that after the array starts tracking a new MPP and quickly approaches it, once the operating point lands on the other side of the MPP and the duty ratio needs to be adjusted in the reverse direction, the incremental step could be very large (due to the factor acc) and remain large for some time; although the factor deacc is applied at this point, the large incremental step does not help and may cause large fluctuations and overshoot of the voltage before the MPP is reached. Therefore, as a second improvement, whenever the sign of the slope changes, the upper bound is decreased together with the incremental size; it is also preferable to keep the upper bound small near the MPP once the MPP has been reached.

Note that the implementation of the algorithm uses a nominal incremental step size. When the test condition is considered satisfied and the duty ratio is no longer changed, the incremental step size might have become very small; if not corrected, this would cause a slow response at the beginning of tracking the next MPP under a different operating condition. A simple solution is to reset the step size to its nominal value, without adjusting the duty ratio, whenever the MPP is considered reached, i.e., when the magnitude of the slope is small enough (not greater than a preselected constant) and the MPP condition is still satisfied. Denoting the initial incremental size of the duty ratio by \Delta D(0) and the initial upper bound of the incremental size by \Delta D_max(0), \Delta D and \Delta D_max are updated based on the conditions discussed above. The flow chart of the proposed modified IncCond algorithm is shown in Fig. 1.
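For concreteness, the following is a minimal sketch of one tracking period of the revised algorithm as described above. The factor values, nominal step, and tolerance are illustrative placeholders rather than the tuned values used in the simulations, and the update acts directly on the voltage command for simplicity (in the actual system the duty ratio is adjusted).

```python
# Hypothetical sketch of the revised IncCond update, assuming the
# measured terminal voltage/current (v, i) and their increments (dv, di)
# since the previous tracking step are available.
ACC, DEACC = 1.5, 0.5        # accelerating / decelerating factors (placeholders)
STEP_NOM, EPS = 0.5, 1e-3    # nominal step and MPP tolerance on dP/dV

class RevisedIncCond:
    def __init__(self, step_max=5.0):
        self.step = STEP_NOM
        self.step_max = step_max  # adaptive upper bound on the step
        self.prev_sign = 0

    def update(self, v, i, dv, di):
        """Return the voltage adjustment for one tracking period."""
        if dv == 0:                      # no voltage change: fall back to dI sign
            return 0.0 if di == 0 else (self.step if di > 0 else -self.step)
        slope = (i * dv + v * di) / dv   # dP/dV = I + V * dI/dV
        if abs(slope) < EPS:             # test condition satisfied: at the MPP
            self.step = STEP_NOM         # reset to nominal for the next change
            self.prev_sign = 0
            return 0.0
        sign = 1 if slope > 0 else -1
        if self.prev_sign == sign:       # same side of the MPP: accelerate
            self.step = min(self.step * ACC, self.step_max)
        elif self.prev_sign != 0:        # crossed the MPP: decelerate, and
            self.step *= DEACC           # tighten the upper bound as well
            self.step_max = max(self.step_max * DEACC, STEP_NOM)
        self.prev_sign = sign
        return sign * self.step          # raise V left of the MPP, lower it right
```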
III. MODELING OF PV ENERGY SYSTEMS

A. Solar array

In general, a solar array consists of many solar modules connected in series and in parallel, and a module is manufactured by serially connecting a certain number of solar cells. A solar cell can essentially be represented by the equivalent electrical circuit shown in Fig. 2. For illustration purposes, the modeling of the solar cell is briefly summarized here; interested readers can find the details in the references.

From the solar cell model of Fig. 2, the following equation can be derived:

    I_{pv} = I_{ph} - I_d - (V_{pv} + I_{pv} R_s)/R_{sh},

where I_{pv} and V_{pv} represent the solar cell terminal output current and voltage, respectively, I_{ph} is the photon current source, I_d is the diode current, R_s is the series resistance, and R_{sh} is the shunt resistance used to represent power losses; the latter is generally neglected. Note that in this equation both the photon current and the diode current are temperature and irradiance dependent. For a given cell temperature T (in Kelvin) and irradiance level G, they can be calculated using the following equations:

    I_{ph} = (G/G_{ref}) [ I_{sc,ref} + \mu_{Isc} (T - T_{ref}) ],
    I_d = I_0 [ \exp( q (V_{pv} + I_{pv} R_s) / (A k T) ) - 1 ],
    I_0 = I_{0,ref} (T/T_{ref})^3 \exp( (q E_g / (A k)) (1/T_{ref} - 1/T) ),
    I_{0,ref} = I_{sc,ref} / [ \exp( q V_{oc,ref} / (A k T_{ref}) ) - 1 ],

where T_{ref} and G_{ref} are the reference cell temperature and irradiance at the standard test condition, I_{sc,ref} and V_{oc,ref} are the short-circuit current and open-circuit voltage of the cell at the reference condition (obtained from the manufacturer's data sheet), \mu_{Isc} is the temperature coefficient of the short-circuit current, I_0 is the reverse saturation current of the diode, A is the diode ideality factor, q is the Coulomb constant, k is the Boltzmann constant, and E_g is the band-gap energy. Substituting these parameters, the I-V characteristics of the solar cell can be numerically computed for a given cell temperature and irradiance level, and scaled up to represent a solar array consisting of interconnected modules and cells.

B. DC-DC converter

A boost DC-DC converter is used to step up the output voltage of the solar array so that a bulky transformer can be avoided, and MPPT is performed by controlling the duty ratio of the converter. Note also that the terminal output voltage may be very sensitive to the duty ratio, especially when the duty ratio drives the converter input and output voltages close to each other; the selected duty ratio is therefore kept in the middle of its range.
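As an illustration, the sketch below evaluates the I-V and P-V curves numerically from the single-diode equations above. All parameter values are generic placeholders rather than data-sheet values for a particular panel, and the implicit equation (I appears on both sides) is solved by damped fixed-point iteration.

```python
# Hypothetical numerical evaluation of the single-diode model (shunt
# branch neglected), per cell.
import numpy as np

q, k = 1.602e-19, 1.381e-23      # electron charge (C), Boltzmann constant (J/K)
A, Eg = 1.3, 1.12                # diode ideality factor, band gap (eV)
Tref, Gref = 298.15, 1000.0      # reference temperature (K), irradiance (W/m^2)
Isc_ref, Voc_ref = 8.0, 0.6      # reference short-circuit current (A), Voc (V)
mu_Isc, Rs = 3e-3, 5e-3          # temperature coefficient (A/K), series R (ohm)

def iv_curve(G, T, n_pts=200):
    Iph = (G / Gref) * (Isc_ref + mu_Isc * (T - Tref))
    I0_ref = Isc_ref / (np.exp(q * Voc_ref / (A * k * Tref)) - 1.0)
    I0 = I0_ref * (T / Tref) ** 3 * np.exp(q * Eg / (A * k)
                                           * (1.0 / Tref - 1.0 / T))
    V = np.linspace(0.0, Voc_ref, n_pts)
    I = np.full_like(V, Iph)
    for _ in range(200):         # damped fixed-point iteration
        I_new = Iph - I0 * (np.exp(q * (V + I * Rs) / (A * k * T)) - 1.0)
        I = 0.5 * I + 0.5 * np.clip(I_new, 0.0, None)
    return V, I

V, I = iv_curve(G=800.0, T=300.0)
P = V * I
print(f"estimated MPP: V = {V[P.argmax()]:.3f} V, P = {P.max():.3f} W")
```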
IV. SIMULATION RESULTS

The revised IncCond algorithm was implemented and integrated into the power system simulation software EPTOOL, developed based on the Power System Toolbox; EPTOOL can be used to perform transient analysis of the grid under faulted conditions as well as under solar irradiance and temperature changes, for MPPT studies. Simulation results are presented here to validate the effectiveness of the revised algorithm, using a hypothetical solar irradiance profile, tabulated in Table I, as the input to the solar plant; the panel temperature is assumed constant during the cloud transients. A centralized solar PV plant consisting of many panels, with the plant capacity under standard environmental conditions taken as the base MVA, is used as the example system for the simulation.

TABLE I. Variation of the irradiance input for the solar plant during the cloud transient (pairs of time and irradiance level stepping through the simulation horizon).

The conventional IncCond algorithm with a fixed incremental step size of the duty ratio was applied first, with MPPT performed at a fixed period. The simulation results for the solar array terminal output voltage and for the deviation of the actual output power from the calculated maximum power points are shown in Fig. 3. One can observe persisting oscillations around the MPPs: at no time are the MPPs truly achieved. The reason for the oscillations, as implied by the algorithm description in Section II, is that the terminal voltage keeps being adjusted. Fig. 3 also indicates that the conventional IncCond algorithm is unable to track the MPP under rapid variation of irradiance: the output power deviations are significant during large changes of irradiance, although slow variations yield acceptable performance. A significant power deficiency, caused by the inability to adjust the panel voltage rapidly enough to compensate for a large decrease in the irradiance level, can be seen by comparing the two curves of Fig. 3. This highlights the inefficiency of selecting control parameters such as the incremental step size in accordance with the conventional MPPT algorithms; these inefficiencies of the conventional IncCond method are addressed by the modified algorithm proposed in this paper.

The modified MPPT algorithm proposed in Section II was then simulated with the deceleration and acceleration factors deacc and acc applied; all other parameters remain the same as in the first experiment. In the first scenario, the upper bound of the incremental step size of the duty ratio is fixed. As shown in Fig. 4, the oscillations are eliminated quickly after each irradiance change, and the revised algorithm quickly moves the operating point to the MPP: the voltage level stabilizes quickly even under a sudden change of irradiance. However, a relatively large terminal voltage overshoot is introduced at the changing points of irradiance, which must be addressed. Fig. 5 shows the improved tracking performance of the second scenario, in which the adaptive upper bound of the incremental step is used: the overshoot at the change points of irradiance is significantly decreased, and the output power deviation is also reduced. The simulations also show that the tracking performance is not sensitive to the associated parameters, which makes the parameter tuning easy and the modified IncCond algorithm robust.
V. CONCLUSIONS

A revised IncCond algorithm was presented in this paper for PV generation systems. Compared to the traditional IncCond methods, the voltage step change is adaptively determined based on the slope of the P-V curve and the locations of the operating points in two consecutive tracking steps, so that the system can track rapid changes of environmental conditions while oscillation of the operating points around the MPP is avoided. In addition, an upper bound on the voltage step change is maintained and multiplied by a factor deacc (less than one) to constrain the step change whenever a change of the sign of the slope is detected. Simulation results demonstrate the effectiveness of the proposed algorithm. The robustness of the MPPT algorithm is also enhanced, since the parameters can be tuned easily regardless of the PV system and the method does not require knowledge of the characteristics of specific panels.

REFERENCES

[1] "Trends in photovoltaic applications," IEA report.
[2] T. Esram and P. L. Chapman, "Comparison of photovoltaic array maximum power point tracking techniques," IEEE Transactions on Energy Conversion.
[3] N. Femia, G. Petrone, G. Spagnuolo, and M. Vitelli, "Optimization of perturb and observe maximum power point tracking method," IEEE Transactions on Power Electronics.
[4] O. Wasynczuk, "Dynamic behavior of a class of photovoltaic power systems," IEEE Transactions on Power Apparatus and Systems.
[5] L. Lopes and L. Xuejun, "An intelligent maximum power point tracker using peak current control," in IEEE Power Electronics Specialists Conference (PESC).
[6] N. Kasa, T. Iida, and L. Chen, "Flyback inverter controlled by sensorless current MPPT for photovoltaic power system," IEEE Transactions on Industrial Electronics.
[7] J. J. Schoeman and J. D. van Wyk, "A simplified maximal power controller for terrestrial photovoltaic panel arrays," in Annu. IEEE Power Electronics Specialists Conference.
[8] K. Kobayashi, H. Matsuo, and Y. Sekine, "A novel optimum operating point tracker of the solar cell power supply system," in IEEE Power Electronics Specialists Conference (PESC).
[9] N. Mutoh, T. Matuo, K. Okada, and M. Sakai, "Prediction-data-based maximum-power-point-tracking method for photovoltaic power generation systems," in Annu. IEEE Power Electronics Specialists Conference (PESC).
[10] K. H. Hussein, I. Muta, T. Hoshino, and M. Osakada, "Maximum photovoltaic power tracking: An algorithm for rapidly changing atmospheric conditions," IEE Proceedings on Generation, Transmission and Distribution.
[11] Seung Kyu et al., "A novel maximum power point tracking control for photovoltaic power systems under rapidly changing solar radiation," in Proc. IEEE International Symposium on Industrial Electronics (ISIE).
[12] "A novel controller for photovoltaic energy conversion systems," IEEE Transactions on Industrial Electronics.
[13] W. Wenkai, N. Pongratananukul, Q. Weihong, K. Rustom, T. Kasparis, and I. Batarseh, "Multiple peak power tracking for an expandable power system," in Eighteenth Annu. IEEE Applied Power Electronics Conference and Exposition (APEC).
[14] H. Koizumi and K. Kurokawa, "A novel maximum power point tracking method for a PV module integrated converter," in IEEE Power Electronics Specialists Conference (PESC).
[15] K. Harada and G. Zhao, "Controlled power interface between solar cells and AC source," IEEE Transactions on Power Electronics.
[16] K. Irisawa, T. Saito, I. Takano, and Y. Sawada, "Maximum power point tracking control of photovoltaic generation system under non-uniform insolation by means of monitoring cells," in Conf. Record of the Twenty-Eighth IEEE Photovoltaic Specialists Conference.
[17] K. Kobayashi, I. Takano, and Y. Sawada, "A study on a two stage maximum power point tracking control of a photovoltaic system under partially shaded insolation conditions," in IEEE Power Engineering Society General Meeting.
[18] S.-K. Kim, J.-H. Jeon, C.-H. Cho, E.-S. Kim, and J.-B. Ahn, "Modeling and simulation of a grid-connected PV generation system for electromagnetic transient analysis," Solar Energy.
[19] N. Mohan, T. M. Undeland, and W. P. Robbins, Power Electronics: Converters, Applications, and Design. John Wiley & Sons.
[20] Power System Toolbox webpage. Online.
Focus: Querying Large Video Datasets with Low Latency and Low Cost

Kevin Hsieh (Carnegie Mellon University), Ganesh Ananthanarayanan, Peter Bodik, Paramvir Bahl, Matthai Philipose (Microsoft), Phillip B. Gibbons (Carnegie Mellon University), Onur Mutlu (ETH)

Abstract. Large volumes of video are continuously recorded by cameras deployed for traffic control and surveillance, with a goal of answering "after-the-fact" queries: identify video frames with objects of certain classes (cars, bags) from many days of recorded video. While advancements in convolutional neural networks (CNNs) have enabled answering such queries with high accuracy, they are too expensive and slow. We build Focus, a system for low-latency and low-cost querying on large video datasets. Focus uses cheap ingestion techniques to index the videos by the objects occurring in them; at ingest-time it uses compression and video-specific specialization of CNNs. Focus handles the lower accuracy of the cheap CNNs by judiciously leveraging expensive CNNs at query-time, and to reduce query-time latency it clusters similar objects and hence avoids redundant processing. Using experiments on video streams from traffic, surveillance, and news channels, we see that Focus uses far fewer GPU cycles than running expensive CNNs at ingest and is far faster than processing all the video at query time.

[Figure 1: Effectiveness of Focus in reducing both ingest cost and query latency for an example traffic video, compared to two baselines: one that runs the expensive CNN on all video frames at ingest, and one that runs it on all frames at query time. Zooming in, Focus is simultaneously much cheaper in GPU consumption and much faster in query latency while achieving at least the target precision and recall; two alternative settings offering slightly different trade-offs are also shown.]

1 Introduction

Cameras are ubiquitous, with millions deployed by government and private entities at traffic intersections, enterprise offices, and retail stores. Videos from these cameras are continuously recorded. One of the main purposes for recording the videos is answering "after-the-fact" queries: identify video frames with objects of certain classes (like cars or bags) over many days of recorded video. The results of these queries are used by analysts and investigators, so achieving low query latencies is crucial.

Advances in convolutional neural networks (CNNs), backed by copious training data and hardware accelerators (GPUs), have led to high accuracy in computer vision tasks like object detection and object classification. For instance, the ResNet152 object classifier CNN won the ImageNet challenge, which evaluates classification accuracy on 1,000 classes using a public image dataset with labeled ground truths. Image classifiers return a ranked list of classes in decreasing order of confidence.

Despite their accuracy, using these CNNs for video analytics queries is both expensive and slow. Using ResNet152 to identify video frames with cars in a month-long traffic video requires a very large number of GPU hours and a correspondingly large cost in the Azure cloud. The latency of running such queries is also high: to achieve a query latency of one minute on that much GPU work would require tens of thousands of GPUs classifying the frames of the video in parallel, many orders of magnitude more than the tens or hundreds typically provisioned by traffic jurisdictions or retail stores. Note that these cost and latency values already assume the use of motion detection techniques to exclude frames with no moving objects. We believe that enabling low-latency and low-cost querying over large video datasets will make video analytics much more useful and open up many new opportunities.

A natural approach to enabling low-latency querying is to run the classifications on the live videos and store the results in an index of object classes to video frames; queries for specific classes (e.g., cars) would then involve only a simple index lookup. There are, however, at least two problems with this approach. First, the cost of indexing the video by running the expensive CNN on every frame at ingest is prohibitively high. Second, this cost is wasteful: typically, only a small fraction of recorded videos ever gets queried; following a theft, the police would query only a few days of video from a handful of surveillance cameras.

We present Focus, a system to support low-latency, low-cost querying on large video datasets. Focus has the following goals: low cost of indexing the video, high accuracy and low latency for queries, and allowing trade-offs between the cost and the latency. As input, the user specifies a ground-truth CNN classifier (GT-CNN) and the desired accuracy of results that Focus needs to achieve relative to the GT-CNN.
Focus uses four key techniques: cheap CNNs at ingest, using the top-K results of the cheap CNN, clustering similar objects, and judicious selection of system and model parameters.

First, to make video ingestion cheap, Focus uses compressed and specialized versions of CNNs to create an index from object classes to frames. CNN compression creates new CNNs with fewer convolutional layers and smaller input images, while specialization trains CNNs on the smaller set of object classes specific to each video stream, which lets cheaper CNNs classify these objects accurately. Together, these techniques result in highly efficient CNNs for video indexing.

Second, the cheap ingest CNNs are, however, less accurate than the expensive GT-CNN (like ResNet152), measured in terms of recall and precision. Recall is the fraction of frames in the video containing objects of the queried class that are actually returned in the query's results; precision is the fraction of frames in the query's results that contain objects of the queried class. To increase recall, Focus relies on an empirical observation: while the most confident classification results of the cheap and expensive CNNs may not always match, the result of the expensive CNN often falls within the top-K results of the cheap CNN. Therefore, Focus indexes each object with the top-K results of the cheap CNN instead of just its top result. To increase precision, at query time Focus first filters the objects from the index and then classifies the filtered objects with the expensive GT-CNN.

Third, to reduce the latency of using the expensive GT-CNN at query time, Focus relies on the significant similarity between objects in videos; for example, a car moving across an intersection looks similar in consecutive frames. Focus leverages this similarity by clustering the objects, classifying only the cluster centroids with the expensive GT-CNN, and assigning the class of the centroid to all objects in the cluster, thus considerably reducing query latency.

In a nutshell, Focus's operations are as follows. At ingest-time, it classifies the detected objects using the cheap CNN, clusters similar objects, and indexes each cluster centroid using the top-K classification results. At query-time, when the user queries for class X, Focus looks up the ingest index for centroids that match class X, classifies them using the GT-CNN, and returns to the user the objects of the corresponding clusters whose centroids were classified as X.

Finally, Focus smartly chooses the CNN and its parameters to meet the targets on precision and recall. Among the choices that meet the accuracy targets, it allows the user to trade off between ingest cost and query latency. For example, using a cheaper ingest CNN reduces the ingest cost but increases the query latency, because Focus needs to use a larger K for the index to retain the accuracy targets. Focus identifies "sweet spots" in the parameters that sharply improve one of ingest cost and query latency with only a small worsening of the other.

We built Focus and evaluated it on thirteen videos from three domains: traffic cameras, surveillance cameras, and news channels. We compare against two baselines: Ingest-all, which runs the GT-CNN on all video frames at ingest, and Query-all, which runs the GT-CNN on all queried video frames at query time; we augment both baselines with motion detection to remove frames with no objects, one of the core techniques in the recent prior work NoScope. Figure 1 shows a representative result for the traffic video of a commercial intersection: on average, Focus is substantially cheaper than Ingest-all and substantially faster than Query-all, with the cost of ingestion coming down sharply and the latency to query an hour of video dropping from one hour to minutes (see Section 6 for full details).

We make the following contributions:
- We formulate the problem of querying large video datasets as one of trading off between query latency, ingest cost, and accuracy (precision and recall) of results.
- We propose techniques to ingest videos with low cost, leveraging compressed and specialized CNNs, while retaining high accuracy targets by creating approximate indexes.
- We identify and leverage the similarity between objects in a video to cluster them using CNN features, significantly speeding up queries.
- We propose and build a new system to support low-latency, low-cost querying on large video datasets, and show that the system offers new options between the extremes: it is significantly cheaper than analyzing all video frames at ingest time and significantly faster than analyzing all the queried video frames at query time.

2 Background and Motivation

Processing every frame with a large CNN, even on a modern NVIDIA GPU, makes querying large video datasets slow and costly. At least two recent techniques have been designed to reduce the cost of CNNs. First, compression is a set of techniques aiming to reduce the cost of CNN inference (classification) at the expense of reduced accuracy; these techniques include removing expensive convolutional layers, matrix pruning, and others, and they can dramatically reduce the classification cost of a CNN.
For example, a ResNet variant with far fewer layers is several times cheaper than the full model. Second, a more recent technique is CNN specialization, in which CNNs are trained on a subset of a dataset specific to a particular context, also making them much cheaper. Using a combination of cheap and expensive CNNs is a key facet of our solution. In the rest of this section, we first provide a brief overview of convolutional neural networks, the state-of-the-art approach to detecting and classifying objects in images, and then discuss new observations we made about real-world videos that motivate the design of our techniques.

2.1 Convolutional Neural Networks

A convolutional neural network (CNN) is a specific class of neural networks that works by extracting visual features from images. During image classification, or "inference," a CNN takes an input image and outputs the probability of each class (dog, flower, car, and so on). CNNs are the state-of-the-art method for many computer vision tasks, such as image classification and face recognition.

[Figure 2: Architecture of an image classification CNN: an input image passes through convolutional and rectification layers and pooling layers to fully-connected layers that output per-class probabilities (e.g., apple, car, orange, cat, flower, dog).]

Broadly, almost all CNNs consist of three key types of network layers: (1) convolutional and rectification layers, which detect visual features from input pixels; (2) pooling layers, which down-sample the input by merging neighboring pixel values; and (3) fully-connected layers, which provide the reasoning to classify the input object based on the outputs from the previous layers. The outputs of an image classification CNN are the probabilities of all object classes, and the class with the highest probability is the predicted class for the input image.

The output of the penultimate layer can be considered as representative features of the input image. The features form a real-valued vector, with lengths in the range of a few hundred to a few thousand in state-of-the-art classifier CNNs. It has been shown that images with similar feature vectors, i.e., with small Euclidean distances between them, are visually similar.

The high accuracy of CNNs comes at a cost: classifying objects in images with CNNs requires significant computational resources, because higher accuracy comes from using deeper architectures (more layers) to obtain better visual features. For instance, ResNet152, a winner of the ImageNet competition, was trained to classify across the 1,000 classes of the ImageNet dataset using 152 layers.

Since the features extracted by classifier CNNs are specifically trained for classification, we verify the robustness of the feature vectors using the following analysis: for each object in a video, we find its nearest neighbor using the feature vectors of a cheap CNN and compute the fraction of object pairs that belong to the same class. This fraction is high across our videos, which shows that using feature vectors from cheap CNNs can help identify duplicate objects. (A sketch of this analysis appears below.)
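The following sketch shows the nearest-neighbor check just described. The inputs are assumptions: `features` is an N x D array of penultimate-layer vectors from the cheap CNN, and `labels` holds the ground-truth class of each object from the expensive CNN.

```python
# Hypothetical sketch of the feature-vector robustness analysis.
import numpy as np

def nn_same_class_fraction(features: np.ndarray, labels: np.ndarray) -> float:
    # pairwise squared Euclidean distances via (a-b)^2 = a^2 + b^2 - 2ab
    sq = (features ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    np.fill_diagonal(d2, np.inf)      # exclude self-matches
    nn = d2.argmin(axis=1)            # index of each object's nearest neighbor
    return float((labels == labels[nn]).mean())
```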
2.2 Characterizing Real-World Videos

We aim to support queries of the form: find all frames in the video that contain objects of class X. We identify some key characteristics of real-world videos towards supporting these queries: (1) large portions of videos can be excluded, (2) only a limited set of object classes occurs in each video, and (3) objects of the same class have similar feature vectors. The design of Focus is based on these characteristics.

We analyzed many hours of video from six video streams spanning traffic cameras, surveillance cameras, and news channels (auburn, jackson hole, lausanne, sittard, cnn, msnbc; Section 6 gives details). We detect the objects in each frame using background subtraction and classify each object with the expensive ResNet152 CNN among the supported 1,000 object classes; in this paper, we use the results of this costly CNN as the ground truth.

Excluding large portions of videos. We find considerable potential to avoid processing large portions of videos. Significant portions of the video streams either contain no objects at all (e.g., a garage camera at night) or contain only stationary objects, like parked cars; across our video sets, a large share of frames fall into these categories. Therefore, queries for any object class would benefit from filters that exclude these portions of the videos. Even among the frames that do contain objects, not all are relevant to a query, because each query looks only for a specific class of objects: in our video sets, an object class on average occurs in only a small fraction of the frames, and even the most frequent object classes occur in a minority of frames. Different videos usually have different dominant classes (cars in a traffic camera, people in a news channel), and most other classes are rare.

Limited set of object classes in each video. Figure 3 shows the cumulative distribution function (CDF) of the frequency of object classes in the videos, as classified by the ground-truth CNN. We make two observations. First, most object classes occur rarely or not at all: even in the busier videos, only a small fraction of the roughly 1,000 classes recognized by the classifier CNNs appears, and there is little overlap between the classes of different videos (the average Jaccard index, i.e., intersection over union based on object classes, is low across video pairs). Second, even among the classes that do occur, a small fraction disproportionately dominates: the most frequent object classes cover the bulk of the objects in each stream. This suggests that for each video stream we can automatically determine the frequently occurring classes and train efficient CNNs specialized for classifying them.

3 Overview of Focus

The goal of Focus is to index live video streams by the object classes occurring in them and to enable answering "after-the-fact" queries on the stored videos of the form: find all frames that contain objects of class X; optionally, the query can be restricted to a subset of cameras and a time range. This query formulation is the basis for many widespread applications and could be used either on its own (such as for detecting all cars or bicycles in a video) or as a basis for further processing (such as finding all collisions between cars and bicycles).

Focus is designed to work with a wide variety of current and future CNNs. At system configuration time, the user (e.g., a system administrator) provides a ground-truth CNN (GT-CNN), which serves as the accuracy baseline for Focus but is far too costly to run on every video frame. Through a sequence of techniques, Focus provides nearly-comparable accuracy at greatly reduced cost; by default, and throughout this paper, we use the ResNet152 image classifier as the GT-CNN.

Since the acceptable target accuracy is application-dependent, Focus permits the user to specify the target while providing reasonable defaults. Accuracy is specified in terms of precision, the fraction of frames output by the query that actually contain an object of class X according to the GT-CNN, and recall, the fraction of frames that contain objects of class X according to the GT-CNN that are actually returned by the query. The lower the target, the greater the cost savings provided by Focus; even for high targets, Focus achieves significant savings.

Figure 4 presents the design of Focus. At ingest-time (left part of the figure), Focus classifies objects from the incoming video frames and extracts their feature vectors; to make this step cheap, it uses a highly compressed and specialized version of the GT-CNN model. Focus then clusters objects based on their feature vectors and assigns to each cluster the top-K most likely classes the objects belong to, based on the classification confidence of the ingest CNN; it creates a top-K index that maps each class to the set of object clusters. The top-K index is the output of Focus's ingest-time processing of videos.

At query-time (right part of the figure), when the user queries for a certain class X, Focus retrieves the matching clusters from the index, runs the GT-CNN on the centroids of those clusters, and returns the frames of the clusters whose centroids were classified as class X. More specifically, the ingest index maps each object class to its clusters, each cluster to its centroid object and its member objects, and each object to its frame IDs.
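A minimal in-memory sketch of this index and the query-time lookup is shown below. All names are illustrative, and `gt_classify` stands in for running the expensive GT-CNN on a centroid object's image; the actual system persists the index in a database.

```python
# Hypothetical sketch of the top-K ingest index described above.
from collections import defaultdict

class TopKIndex:
    def __init__(self):
        self.class_to_clusters = defaultdict(set)  # class -> {cluster id}
        self.centroid = {}                         # cluster id -> centroid object id
        self.objects = defaultdict(list)           # cluster id -> [object id]
        self.frames = defaultdict(list)            # object id -> [frame id]

    def add_object(self, cluster_id, obj_id, frame_id, top_k_classes):
        self.centroid.setdefault(cluster_id, obj_id)  # first object is centroid
        self.objects[cluster_id].append(obj_id)
        self.frames[obj_id].append(frame_id)
        for cls in top_k_classes:                     # index all top-K classes
            self.class_to_clusters[cls].add(cluster_id)

    def query(self, cls, gt_classify):
        """Return frames of clusters whose centroid GT-classifies as `cls`."""
        result = set()
        for cid in self.class_to_clusters.get(cls, ()):
            if gt_classify(self.centroid[cid]) == cls:
                for obj in self.objects[cid]:
                    result.update(self.frames[obj])
        return sorted(result)
```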
In the following sections we explain the main techniques Focus uses to keep ingest cost and query latency low while meeting the user-specified accuracy targets.

4 Video Ingest and Querying Techniques

This section describes the main techniques used in Focus: using cheap CNN models at ingest-time (Section 4.1), identifying similar objects and frames to save redundant CNN processing (Section 4.2), and specializing the CNNs to the specific videos being analyzed (Section 4.3). Section 4.4 describes how Focus sets its parameters.

4.1 Cheap Ingestion

Focus indexes the live videos at ingest-time to reduce the query-time latency. It performs object detection on each frame, typically an inexpensive operation, and then classifies the extracted objects using ingest-time CNNs that are far cheaper than the ground-truth GT-CNN, using these classifications to index objects by class.

Cheap ingest-time CNN. As noted earlier, the user provides the GT-CNN; optionally, the user can also provide other classifier architectures (such as AlexNet and VGG) to be used in Focus's search for cheap CNNs. Starting from these CNNs, Focus applies various levels of compression, such as removing convolutional layers and reducing the input image resolution. This results in a large set of CNN options for ingestion, {CheapCNN_1, ..., CheapCNN_n}, with a wide range of costs and accuracies.

Top-K ingest index. To keep recall high, Focus indexes each object using the top K object classes of the output of the chosen CheapCNN_i, instead of using just the top class. Recall from Section 2.1 that the output of a CNN is a list of object classes in descending order of confidence; we empirically observe that the top result of the expensive GT-CNN is often contained within the top-K classes output by the cheap CNN, for a small K relative to the 1,000 classes recognized by the CNNs.

The selection of the cheap CNN model CheapCNN_i and of the value K has a significant influence on the recall of the outputs produced. Lower values of K reduce recall, i.e., Focus will miss frames that contain the queried objects; at the same time, higher values of K increase the number of objects to classify with the GT-CNN at query time to keep precision high, and hence add latency. We defer to Section 4.4 how Focus sets these parameters jointly.

Figure 5 plots the effect of K on recall for one of the video streams (lausanne) with three cheap CNNs of increasing compression, obtained by removing convolutional layers and rescaling the input images to smaller resolutions; the models are retrained on their original training data (ImageNet). We make two observations. First, recall steadily increases with K for all three CheapCNNs, and the models reach high recall at K values that are a small fraction of all the classes they recognize. Second, the cheaper the model, the lower its recall at a given K. Overall, we conclude that by selecting the appropriate CheapCNN_i and K, Focus can achieve the target recall.
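The K-vs-recall analysis above amounts to a simple computation over labeled samples; the sketch below makes it concrete. The inputs are assumptions: `topk` is an N x K_max array of class IDs ranked by the cheap CNN's confidence, and `gt` is the ground-truth CNN's class for the same N objects.

```python
# Hypothetical estimation of the ingest index's recall at a given K.
import numpy as np

def recall_at_k(topk: np.ndarray, gt: np.ndarray, k: int) -> float:
    """Fraction of objects whose ground-truth class appears within the
    cheap CNN's top-k results, i.e., the recall of the top-k index."""
    hits = (topk[:, :k] == gt[:, None]).any(axis=1)
    return float(hits.mean())

# Selecting the smallest K that meets a recall target:
#   K* = min { k : recall_at_k(topk, gt, k) >= target }
```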
Focus creates the ingest index from the top-K object classes output by CheapCNN_i. While filtering objects of the queried class using this index keeps recall high, the precision of the index alone is low: since each object is associated with K classes while it has only one true class, the average precision of the raw index would be poor. Thus, at query time, to keep precision high, Focus determines the actual class of the objects from the index using the expensive GT-CNN and returns only the objects that match the queried class.

4.2 Redundancy Elimination

At query time, Focus retrieves the objects likely matching the user-specified class from the top-K index and infers their actual class using the GT-CNN. This ensures high precision but could cause significant query latency: even if the inference were parallelized across many GPUs, it would still incur a large cost. Focus uses the following observation to reduce this cost: if two objects are visually similar, their feature vectors are closely aligned and they will likely be classified as the same class (e.g., "car") by the GT-CNN. Focus therefore clusters similar objects, invokes the expensive GT-CNN only on the cluster centroids, and assigns the centroid's label to all objects in each cluster, dramatically reducing the work done by the GT-CNN classifier at query time. Focus uses the feature vector output by the penultimate layer of the cheap ingest CNN (see Section 2.1) for clustering. Note that Focus clusters the objects in the frames, not the frames as a whole. The key questions regarding clustering are how to cluster and when to cluster, which we discuss next.

Clustering heuristic. We require two properties of our clustering technique. First, given the high volume of video data, it should keep the overhead low; the complexities of most clustering algorithms are quadratic. Second, it should make no assumption on the number of clusters and should adapt to outliers in the data points on the fly. Given these requirements, we use the following simple approach for incremental clustering, well known in the literature: put the first object into the first cluster; to cluster a new object with feature vector f, assign it to the closest cluster if that cluster is at most distance T away, where T is the clustering distance threshold; if no cluster is within distance T, create a new cluster with centroid f. We measure distance as the L2 norm between the cluster centroid and the object's feature vector. To keep the number of clusters bounded, we remove the smallest clusters and store their data directly in the index. With this algorithm, popular clusters (e.g., similar-looking cars) keep growing while the complexity stays linear in the total number of objects.

Clustering can reduce precision and recall depending on the parameter T: if the centroid is classified as the queried class X but the cluster contains an object of a different class, precision is reduced; if the centroid is classified as a class other than X but the cluster contains an object of class X, recall is reduced. We discuss setting T in Section 4.4.

Clustering at ingest vs. query time. Focus clusters the objects at ingest-time rather than at query-time. Clustering at query time would involve storing all feature vectors, loading the objects filtered from the ingest index, and clustering them on each query; clustering at ingest time instead creates the clusters right when the feature vectors are created and stores only the cluster centroids in the top-K index. This makes query latency much lower and also reduces the size of the index. We observe that the ordering of the indexing and clustering operations is mostly commutative in practice and has little impact on result accuracy (we omit these results due to space constraints); we therefore cluster at ingest time for its latency and storage benefits.

Pixel differencing of objects. While clustering primarily reduces the query-time work (the number of objects classified by the GT-CNN), Focus also employs pixel differencing among objects in adjacent incoming frames to reduce the ingest cost: if two objects have very similar pixel values, it runs the cheap CNN on only one of them and assigns both to the same cluster in the index.
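A minimal sketch of the single-pass incremental clustering described above follows. The eviction of the smallest clusters is omitted for brevity, and the threshold and the running-mean centroid update are illustrative choices.

```python
# Hypothetical sketch of the ingest-time incremental clusterer.
import numpy as np

class IncrementalClusterer:
    def __init__(self, threshold: float):
        self.T = threshold
        self.centroids = []   # list of feature vectors (np.ndarray)
        self.members = []     # parallel list of [object id]

    def add(self, obj_id, feature: np.ndarray) -> int:
        """Assign the object to the nearest cluster within distance T
        (L2 norm), or start a new cluster; returns the cluster id."""
        if self.centroids:
            dists = np.linalg.norm(np.stack(self.centroids) - feature, axis=1)
            j = int(dists.argmin())
            if dists[j] <= self.T:
                n = len(self.members[j])  # running-mean centroid update
                self.centroids[j] = (self.centroids[j] * n + feature) / (n + 1)
                self.members[j].append(obj_id)
                return j
        self.centroids.append(feature.astype(float))
        self.members.append([obj_id])
        return len(self.centroids) - 1
```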
4.3 Video-Specific Specialization of CNNs

Recall from Section 4.1 that Focus uses a cheap ingest CNN, CheapCNN_i, to index object classes. Focus further reduces its ingest cost by specializing the CNN model to each video stream. Model specialization benefits from two properties of the objects in each stream. First, while object classification models are trained to differentiate between thousands of object classes, many video streams contain only a small number of classes. Second, objects in a specific stream are often visually more constrained than objects in general (say, compared to the ImageNet dataset): the cars and buses that occur in a specific traffic camera have much less variability, i.e., similar angles, distortions, and sizes, than a generic set of vehicles. Instead of differentiating among thousands of object classes, differentiating among, say, fifty classes specific to one camera's video is a much simpler task, requiring simpler image features and smaller image resolutions. As a result, specialized models are smaller and more accurate: removing convolutional layers and shrinking the input image resolution leads to a specialized CheapCNN_i that is cheaper than even the generic CheapCNN_i, and since the specialized CNN classifies across fewer classes, it is more accurate. The higher accuracy allows Focus to select a much smaller K for the ingest index while meeting the desired recall; we find that specialized models can use a K that is much smaller than that of typical generic cheap CNNs, and a smaller K directly translates to fewer objects classified by the GT-CNN at query time, thus reducing latency.

Model retraining. On each video stream, Focus periodically obtains a small sample of video frames and classifies their objects using the GT-CNN to estimate the ground-truth distribution of object classes for the video (similar to Figure 3). From this distribution, Focus selects the most frequently occurring L_s object classes and retrains new specialized models on them. As we saw in Section 2.2, the class distribution usually follows a power law, with a handful of classes accounting for the dominant majority of objects, so low values of L_s usually suffice. Specialized CNNs can be retrained quickly on a small dataset; retraining is relatively infrequent, e.g., once every several days. Since there are considerably fewer objects in the video belonging to some of the selected classes, the training data is balanced to contain an equal number of objects per class. Specialization is also applied across a family of CNN architectures (such as ResNet, AlexNet, and VGG) with different numbers of convolutional layers, similar to Section 4.1; specialization therefore adds to the set of options available for the ingest CNNs {CheapCNN_1, ..., CheapCNN_n}, and Focus picks the best model CheapCNN_i and the corresponding K for the index.

The OTHER class. While Focus specializes the CNN towards the most frequently occurring L_s classes, it also wants to support querying the less frequent classes. For this purpose, Focus includes an additional class, called "OTHER", in the specialized model; being classified as OTHER simply means not belonging to one of the L_s classes. At query time, if the queried class falls under OTHER classes of the ingest CNN's index, Focus extracts all clusters that match the OTHER class and classifies their centroids with the GT-CNN. The parameter L_s thus exposes the following trade-off for each video stream: a small L_s allows a simpler model with cheaper ingest cost and lower query latency for the popular classes, but leads to a larger fraction of objects falling into the OTHER class, making queries for those classes expensive (all those objects must be classified by the GT-CNN); a larger L_s leads to more expensive ingest models but cheaper querying for the less popular classes.

4.4 Balancing Accuracy, Latency, and Cost

Focus's goals of high accuracy, low ingest cost, and low query latency are impacted by the parameters of its techniques: K, the number of top results of the ingest-time CNN used to index each object; L_s, the number of popular object classes used to create the specialized model; CheapCNN_i, the specialized and compressed cheap CNN; and T, the distance threshold for clustering objects. The effects of these four parameters are intertwined: all four impact ingest cost, query latency, and recall, while precision is impacted only by T, because Focus applies the cluster centroid's GT-CNN classification to all the objects in the cluster; if the clustering is not tight (a high value of T), precision is lost.

Parameter selection. Focus selects parameter values per video stream. It samples a representative fraction of frames of the stream and classifies them using the GT-CNN as ground truth. For each combination of parameter values, Focus computes the expected precision and recall (against the GT-CNN ground truths) that would be achieved for each object class. To navigate the combinatorial space of options, we adopt a two-step approach: in the first step, Focus chooses CheapCNN_i, K, and L_s using only the recall target; in the next step, it iterates over values of the clustering threshold T and selects values that also meet the precision target, while trading off between ingest cost and query latency. Among all combinations of values that meet the precision and recall targets, the selection is based on balancing the two costs: for example, picking a more accurate model CheapCNN_i incurs a higher ingest cost but a lower query cost, since it can use a smaller K, while a less accurate CheapCNN_i has the opposite effect. Focus identifies intelligent defaults that sharply improve one of the two costs for only a small worsening of the other.

Figure 6 illustrates the parameter selection based on the trade-off between ingest cost and query latency for one of the video streams. The figure plots all "viable configurations", i.e., sets of parameters that meet the precision and recall targets, by their ingest cost (normalized cost of CheapCNN_i to ingest the objects) and query latency (normalized time to query the objects, driven by the number of clusters classified by the GT-CNN). Focus first draws the Pareto boundary (the dashed line): the set of configurations that cannot improve on one metric without worsening the other; all configurations dominated on both metrics by some point on the boundary can be discarded. Focus's default policy, Balance, selects the configuration that minimizes the sum of ingest cost and query cost, measured in total GPU cycles. Focus also allows policies based on application preferences and query rates: I-opt minimizes the ingest cost, which is appropriate when most video streams will never be queried (e.g., surveillance cameras), and also minimizes the amount of wasted ingest work; Q-opt minimizes query latency even at a heavy ingest cost. This flexibility allows Focus to fit different applications.
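The final selection step reduces to a Pareto filter plus a policy-specific objective; the sketch below assumes `configs` is a list of dicts with estimated fields (`ingest_cost`, `query_latency`, `precision`, `recall`) produced by the profiling pass described above.

```python
# Hypothetical sketch of Focus's configuration selection.
def choose_config(configs, p_target, r_target, policy="balance"):
    viable = [c for c in configs
              if c["precision"] >= p_target and c["recall"] >= r_target]
    # Pareto boundary: keep configs no other viable config beats on both metrics
    pareto = [c for c in viable
              if not any(o["ingest_cost"] < c["ingest_cost"] and
                         o["query_latency"] < c["query_latency"]
                         for o in viable)]
    key = {"balance": lambda c: c["ingest_cost"] + c["query_latency"],
           "i_opt":   lambda c: c["ingest_cost"],
           "q_opt":   lambda c: c["query_latency"]}[policy]
    return min(pareto, key=key)
```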
5 Implementation Details

We describe the key aspects of Focus's implementation.

Worker processes. Focus's work is distributed across many machines, each running one worker process per video stream for ingestion. The ingest worker receives the live video stream and extracts the moving objects using background subtraction (Focus can also extensibly plug in any object detector); the detected objects are sent to the ingest CNN to infer the top-K classes and the feature vectors. The ingest worker uses the features to cluster the objects in its video stream and stores the top-K index in MongoDB for efficient retrieval. Worker processes also serve queries, fetching the relevant frames off the index database and classifying the objects with the GT-CNN; we parallelize a query's work across many worker processes when resources are idle.

GPUs for CNN classification. The cheap CNNs and the GT-CNN execute on GPUs (or other hardware accelerators for CNNs), which can be either local on the worker machines or disaggregated in a remote cluster; this detail is abstracted away from the worker processes, which work seamlessly with both designs.

Dynamically adjusting K at query time. An enhanced technique is to select a new, smaller K' < K at query time and extract only the clusters where the queried class appears among the top-K' classes of the index; this results in fewer clusters and thus lower latency. The technique is useful in two scenarios: (1) some classes are very accurately classified by the cheap CNN, so using a lower K' still meets the accuracy target yet yields much lower latency; (2) when a user wants to quickly retrieve some (rather than all) objects of class X, Focus can use a very low K' to return results quickly and increase K' to extract a new batch of results only if the user requests more.

TABLE 1. Video dataset characteristics (thirteen streams).
  Traffic: commercial-area intersection, city of Auburn, USA; residential-area intersection, city of Auburn, USA; downtown intersection, a US city (name masked); residential-area intersection, the same US city (name masked); road camera, city of Bend, USA; busy town-square intersection, Jackson Hole, USA (jacksonh).
  Surveillance: a stream that rotates among cameras at a shopping mall and the Church Street Marketplace pedestrian plaza, USA; Place de la Palud, Lausanne, Switzerland; a bookshop street at the University of Oxford, England; the market square in Sittard, Netherlands.
  News: CNN (USA), Fox News (USA), MSNBC (USA).

6 Evaluation

We evaluate the Focus prototype with many hours of video from thirteen real video streams that span traffic cameras, surveillance cameras, and news channels. The highlights:
- On average, Focus is simultaneously much cheaper than the Ingest-all baseline in GPU consumption and much faster than the Query-all baseline in query latency, all while achieving at least the target precision and recall.
- Focus provides a rich trade-off space between ingest cost and query latency: among our video streams, the ingest cost can be made even cheaper than the Ingest-all baseline while still reducing query latency (optimizing for ingest), or the query latency can be reduced even further while remaining cheaper at ingest (optimizing for queries).
- Focus is effective under broad conditions: with higher accuracy targets the savings shrink but remain substantial, and the results hold across various frame sampling rates (fps).
cost query latency among video streams ingest cost cheaper ingestall baseline reduces query latency optimizing ingest query latency reduced cheaper ingest optimizing query latency focus effective broad conditions high accuracy targets one savings even accuracy target various frame sampling rates fps name location usa segment video reports class frames segment use criteria ground truth sometimes gives different answers exact object consecutive frames criteria effectively eliminate random erroneous results set default accuracy target recall precision also evaluate results accuracy targets note practical cases one two metrics recall accuracy needs high example investigator cares high recall looking irrelevant results acceptable setting targets high lower bounding performance improvements focus achieve setup software tools use opencv decode videos frames use background subtraction algorithm opencv extract moving objects video frames use background subtraction instead object detector cnns faster detect objects running background subtraction orders magnitude faster running cnns background subtraction detect moving objects reliably object detector cnns usually difficulties small objects nonetheless system seamlessly use object detector cnns well run train cnns microsoft cognitive toolkit deep learning system video datasets evaluate live video streams span across traffic cameras surveillance cameras news channels evaluate video stream hours evenly cover day time night time table summarizes video characteristics default evaluate video fps also evaluate sensitivity frame rates figures show representative sample cameras improve legibility accuracy target use cnn cnn evaluate extracted objects use results correct answers define class present baselines use two baselines comparisons baseline system uses analyze objects ingest time stores inverted index query baseline system simply extracts objects ingest time uses analyze objects fall query interval query time note strengthen baselines basic motion detection background subtraction therefore baselines run frames moving objects note running gtcnn frames moving objects one core techniques recent noscope work metrics use two performance metrics first metric ingest cost gpu time ingest video second metric query latency latency object class query specifically video stream evaluate dominant object classes take average latencies querying classes much cheaper querying popular classes would skew results far classes thus focus video streams obtained real operational traffic cameras city mask city name anonymity traffic surveillance news avg surveillance msnbc cnn foxnews sittard oxford jacksonh normal intersections roads bend rotating cameras busy plazas lausanne sittard university street oxford different news channels cnn foxnews msnbc among videos gains query latency smaller relatively less busy videos bend lausanne oxford videos dominated fewer object classes focus work analysis using query time classes conclude core techniques focus general effective variety videos msnbc lausanne jacksonh bend cnn sittard oxford lausanne jacksonh faster factor traffic bend foxnews cheaper factor news effect different focus components figure shows breakdown cost query latency across different design points focus compressed model applies generic compressed model indexing ingest time compressed specialized model uses specialized compressed model indexing compressed specialized model clustering adds clustering ingest time reduce redundant work query time include index using 
achieve accuracy three main observations order first generic compressed models provide benefits ingest cost query latency major source improvement accuracy generic compressed model degrades significantly remove convolutional layers order retain accuracy target need choose relatively expensive compressed models cheapcnni larger incur higher ingest cost query latency second specializing model addition compressing greatly reduces ingest cost query latency fewer convolutional layers smaller input resolution specialized models cheaper retaining accuracy target video streams running specialized model ingest time speeds query latency figure third clustering effective technique reduce query latency unnoticeable costs ingest time figure shows using clustering top specialized compressed model reduces query latency significantly better running specialized model ingest time gain comes negligible cost figure run clustering algorithm cpus ingest machine fully pipelined gpus run specialized cnn model avg figure top focus ingest cost compared bottom focus query latency compared popular ones metrics include gpu time spent classifying images exclude cpu time spent decoding video frames detecting moving objects recording loading video reading writing index focus solely gpu time gpu involved bottleneck resource query latency ingest cost experiment platform run experiments local cluster machine cluster equipped gpu nvidia gtx titan intel xeon cpu ram gbe nic runs ubuntu lts performance first show performance focus showing ingest cost query latency focus aims balance two metrics figure compares ingest cost focus query latency focus make two main observations first focus significantly improves query latency small ingest cost focus makes queries average faster small cost ingest time average cheaper cluster query latency video goes one hour less two minutes processing cost video stream also goes shows focus strike good balance two competing goals effectively second focus effective across different video streams various characteristics makes queries faster small ingest time cost cheaper across busy intersections ingest cost query latency one interesting features focus flexibility tune system parameters achieve different application ingest cost cnn jacksonh lausanne sittard ingest cheaper query faster improvements factor compressed model specialized model clustering jacksonh lausanne sittard cnn foxnews msnbc avg jacksonh lausanne sittard cnn foxnews msnbc avg faster factor cheaper factor compressed model specialized model clustering foxnews msnbc figure ingest cost query latency three higher targets figures show higher accuracy targets ingest costs improvement query latency decreases focus keeps ingest cost similar cheaper baseline still runs specialized compressed cnn ingest time however accuracy targets higher focus needs select classification results increases work query time average query latency focus faster respect accuracy targets conclude techniques focus achieve higher accuracy targets significant improvements ingest cost query latency query latency figure effect different focus components ingest cheaper factor goals figure depicted three alternative settings focus illustrate space ingest cost query latency using video stream optimizes query latency increasing ingest cost default option balances two metrics opposite results shown relative two baselines chart right figure region covers three settings focus data label indicates ingest cost cheaper query latency faster figure shows focus offers good options 
space ingest cost query latency achieves cheaper cost ingestall ingest video stream makes query faster nothing ingest hand reduces query latency relatively higher ingest cost still cheaper good options compared baselines flexibility allows user tailor focus different contexts example traffic camera requires fast turnaround time queries use surveillance video stream queried rarely would choose reduce amount wasted ingest cost figure shows values representative videos figure show flexibility exists among videos average spends cheaper ingest cost provide query latency reduction hand makes queries faster higher ingest cost cheaper conclude focus provides good flexibility ingest cost query latency makes better fit different contexts query faster factor figure ingest cost sensitivity accuracy target figure query latency sensitivity accuracy target sensitivity frame sampling common approach reduce video processing time use frame sampling periodically select frame process however applications use frame sampling miss objects show disappear within frame sampling window frame sampling rate application dependent choice study sensitivity focus performance different frame rates figures show ingest cost query latency focus different frame rates fps fps fps fps compared respectively make two observations first ingest cost reduction roughly across different frame rates average ingest sensitivity accuracy target figures illustrate improvements ingest cost query latency focus compared baselines different accuracy targets default accuracy target recall precision evaluate ingest cheaper factor related work best knowledge focus first system offers video queries balancing cost query latency discuss work related key techniques cascaded classification various works vision research propose speeding classification cascading series classifiers viola earliest work cascades series classifiers simplest complicated quickly disregard regions image many improvements followed cnns also cascaded reduce object detection latency work different two major ways first decouple compressed cnn allows choose wider range cnns allows better ingest cost query latency key aspect work second cluster similar objects using cnn features eliminate redundant work new effective technique video streams neural network compression recent work proposes various techniques reduce running time cnns techniques include shallow models predicting weights matrix pruning model quantization others work largely orthogonal system tied specific model compression technique employ techniques model specialization contextspecific specialization models improve accuracy reduce running time among closest work kang proposal noscope aims optimize video queries key differences stand first noscope applies optimizations focus adopts different architecture splitting work thus focus trades higher ingest cost even lower query latency second noscope optimizes cnns single class optimize ingest cnns frequent classes stream allow queries even rare classes finally use object feature vectors cluster similar objects create index map classes clusters allows efficiently query across classes noscope redo work including training specialized cnns query stream processing systems systems general stream data processing specific video analytics mainly focus general stream processing challenges load shedding fault tolerance distributed execution limited network bandwidth contrast work specific querying recorded video data ingest query thus mostly orthogonal query faster factor figure ingest 
cost sensitivity frame sampling figure query latency sensitivity frame sampling cost focus cheaper fps cheaper lower frame rates major ingest cost saving comes specialized compressed cnn models orthogonal frame sampling rates second query latency improvement focus degrades lower frame rates expected one key techniques reduce query latency redundancy elimination especially clustering similar objects using cnn feature vectors lower frame rates benefit technique reduces fewer redundancies nonetheless average focus still one order magnitude faster low frame rate fps applicability different query rate two factors affect applicability focus number classes get queried time fraction videos get queried first extreme case classes videos queried could good option cost amortized among queries study even extreme case overall cost focus still cheaper average cheaper run cheap cnn ingest time run per object cluster overall cost still cheaper second extreme case tiny fraction videos gets queried focus save ingest cost costly fraction videos gets queried less case choose nothing ingest time run techniques focus query time know fraction videos get queried approach increases query latency still reduces query latency average evaluation conclude focus still better baselines even extreme query rates integrate focus one general stream processing system build fault tolerable system video indexing retrieval large body works multimedia information retrieval research propose various video indexing retrieval techniques facilitate queries videos among works focus indexing videos different types queries shot boundary detection semantic video search video classification video retrieval works focus query interface enable query keywords concepts examples works largely orthogonal work focus cost latency video queries query types interfaces believe idea splitting work generic videos queries extended different types queries city auburn north ross east magnolia online available https cjuskmmylla city auburn toomer corner online available https fozami genetec https greenwood avenue bend online available https jackson hole wyoming usa town online available https online available http lausanne place online available https online available https nvidia tesla online available http opencv online available http oxford martin school webcam broad street online available https top video surveillance trends online available https wikipedia pareto online available https abadi ahmad balazinska cherniack hwang lindner maskey rasin ryvkina tatbul xing zdonik design borealis stream processing engine cidr amini andrade bhagwan eskesen king selo park venkatramani spc distributed scalable platform data mining anwar hwang sung fixed point optimization deep convolutional neural networks object recognition icassp caruana deep nets really need deep nips babenko lempitsky aggregating deep convolutional features image retrieval iccv babenko slesarev chigorin lempitsky neural codes image retrieval eccv bailis gan madden narayanan rong suri macrobase prioritizing attention fast data sigmod brezeale cook automatic video classification survey literature ieee trans systems man cybernetics part cai saberian vasconcelos learning cascades deep pedestrian detection iccv cao ester qian zhou clustering evolving data stream noise siam international conference data mining carney cherniack convey lee seidman stonebraker tatbul conclusion answering queries form find frames contain objects class important workload recorded video datasets queries used analysts 
investigators crucial answer low latency low cost present focus system performs low cost analytics live video later facilitates queries recorded videos focus uses compressed specialized cnns substantially reduces cost also clusters similar objects reduce work done hence latency focus selects cnn parameters smartly ingesttime cost latency evaluations using hours video traffic surveillance news domains show focus reduces gpu consumption makes queries faster compared current baselines conclude focus promising approach querying large video datasets hope focus enable future works better determining video querying systems next steps include training specialized highly accurate cnn stream object reduce query latency references apache online available http avigilon http church street market online available https city cam webcamsittard town square sittard online available https zdonik monitoring streams new class data management applications vldb chandrasekaran cooper deshpande franklin hellerstein hong krishnamurthy madden reiss shah telegraphcq continuous dataflow processing sigmod chang smeulders recent advances challenges semantic search icassp chen wilson tyree weinberger chen compressing neural networks hashing trick corr vol christel conescu mining novice user activity trecvid interactive retrieval tasks civr denil shakibi dinh ranzato freitas predicting parameters deep learning nips denton zaremba bruna lecun fergus exploiting linear structure within convolutional networks efficient evaluation nips han shen philipose agarwal wolman krishnamurthy mcdnn approximationbased execution framework deep stream processing resource constraints mobisys han mao dally deep compression compressing deep neural network pruning trained quantization huffman coding iclr han pool tran dally learning weights connections efficient neural network nips zhang ren sun deep residual learning image recognition cvpr hinton vinyals dean distilling knowledge neural network corr vol xie zeng maybank survey visual video indexing retrieval ieee trans systems man cybernetics part hwang sung feedforward deep neural network design using weights sips jaderberg vedaldi zisserman speeding convolutional neural networks low rank expansions corr vol kaewtrakulpong bowden improved adaptive background mixture model tracking shadow detection avss kang emmons abuzaid bailis zaharia noscope optimizing deep queries video streams scale pvldb krizhevsky sutskever hinton imagenet classification deep convolutional neural networks nips lawrence giles tsoi back face recognition convolutional approach ieee trans neural networks lecun boser denker henderson howard hubbard jackel backpropagation applied handwritten zip code recognition neural computation lew sebe djeraba jain contentbased multimedia information retrieval state art challenges tomccap lin shen brandt hua convolutional neural network cascade face detection cvpr lienhart maydt extended set features rapid object detection icip lin fan qian yang zhou zhou streamscope continuous reliable distributed processing big data streams nsdi liu anguelov erhan szegedy reed berg ssd single shot multibox detector eccv mhalla chateau gazzah amara faster scene specialization sequential framework dicta microsoft microsoft cognitive online available https callaghan mishra meyerson guha algorithms clustering icde rabkin arye sen pai freedman aggregation degradation jetstream streaming analytics wide area nsdi rastegari ordonez redmon farhadi imagenet classification using binary convolutional neural 
networks eccv razavian azizpour sullivan carlsson cnn features astounding baseline recognition cvpr workshops redmon farhadi better faster stronger corr vol ren girshick sun faster towards object detection region proposal networks nips ren singh singh zhu video retrieval pattern recognition romero ballas kahou chassang gatta bengio fitnets hints thin deep nets corr vol russakovsky deng krause satheesh huang karpathy khosla bernstein berg imagenet large scale visual recognition challenge ijcv schroff kalenichenko philbin facenet unified embedding face recognition clustering cvpr shen han philipose krishnamurthy fast video classification via adaptive cascading deep models cvpr simonyan zisserman deep convolutional networks image recognition iclr snoek van sande rooij huurnink gavves odijk rijke gevers worring koelma smeulders mediamill trecvid semantic video search engine trecvid workshop participants notebook papers snoek worring multimodal video indexing review multimedia tools appl snoek worring video retrieval foundations trends information retrieval sun wang tang deep convolutional network cascade facial point detection cvpr szegedy liu jia sermanet reed anguelov erhan vanhoucke rabinovich going deeper convolutions cvpr tan steinbach kumar introduction data mining first edition boston usa longman publishing tatbul zdonik staying fit efficient load shedding techniques distributed stream processing vldb liu prabhakar yao load shedding stream databases approach vldb viola jones rapid object detection using boosted cascade simple features cvpr kusner weinberger chen tree classifiers icml yang ling chai pan sensitive classification data missing values ieee trans knowl data yuan wang xiao zheng lin zhang formal study shot boundary detection ieee trans circuits syst video techn zaharia das hunter shenker stoica discretized streams streaming computation scale sosp zhang ananthanarayanan philipose bahl freedman live video analytics scale approximation nsdi zivkovic improved adaptive gaussian mixture model background subtraction icpr
| 1 |
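The parameter-selection step described in the row above, keeping only configurations on the Pareto boundary of (ingest cost, query latency) and then picking one according to an application policy such as minimizing the sum of the two costs in total GPU cycles, is simple to state in code. A minimal sketch in Python: the `Config` record, the configuration names, and the policy labels are illustrative assumptions based on the description, not the authors' actual code.

```python
from dataclasses import dataclass

@dataclass
class Config:
    name: str             # e.g. which cheap CNN, top-K classes kept, cluster count
    ingest_cost: float    # normalized GPU cycles spent at ingest time
    query_latency: float  # normalized GPU time spent answering a query

def pareto_boundary(configs):
    """Keep only configurations that no other configuration dominates,
    i.e. none is at least as good on both metrics and strictly better on one."""
    return [c for c in configs
            if not any(o.ingest_cost <= c.ingest_cost
                       and o.query_latency <= c.query_latency
                       and (o.ingest_cost < c.ingest_cost
                            or o.query_latency < c.query_latency)
                       for o in configs)]

def pick(frontier, policy="balance"):
    """Select a point on the boundary according to the application policy."""
    if policy == "ingest":   # streams that are rarely queried, e.g. surveillance
        return min(frontier, key=lambda c: c.ingest_cost)
    if policy == "query":    # minimize latency even at heavy ingest cost
        return min(frontier, key=lambda c: c.query_latency)
    # default: balance by minimizing total GPU cycles across ingest and query
    return min(frontier, key=lambda c: c.ingest_cost + c.query_latency)

# hypothetical configurations that all meet the precision/recall target
configs = [Config("cheap-cnn-1, K=2, 50 clusters", 0.10, 0.40),
           Config("cheap-cnn-3, K=4, 200 clusters", 0.25, 0.08),
           Config("cheap-cnn-2, K=3, 100 clusters", 0.15, 0.12)]
best = pick(pareto_boundary(configs))
```

Any dominated configuration is discarded before the policy is applied, so the three policies only trade off among points that are already optimal in the Pareto sense.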
jan subspace perspective canonical correlation analysis dimension reduction minimax rates zhuang xiaodong abstract canonical correlation analysis cca fundamental statistical tool exploring correlation structure two sets random variables paper motivated recent success applying cca learn low dimensional representations high dimensional objects propose two losses based principal angles model spaces spanned sample canonical variates population correspondents respectively characterize error bounds estimation risks proposed error metrics reveal performance sample cca depends adaptively key quantities including dimensions sample size condition number covariance matrices particularly population canonical correlation coefficients optimality uniform upper bounds also justified analysis based stringent localized parameter spaces best knowledge first time paper separates first order term upper bounds without assuming residual correlations zeros significantly paper derives first time nonasymptotic cca estimation convergence rates essential understand behavior cca leading canonical correlation coefficients close introduction canonical correlation analysis cca first introduced hotelling fundamental statistical tool characterize relationship two groups random variables finds wide range applications across many different fields example association study gwas cca used discover genetic associations genotype data single nucleotide polymorphisms snps phenotype data gene expression levels witten chen information retrieval cca used embed search space images query space text shared low dimensional latent space similarity queries candidates quantified rasiwasia gong natural language processing cca applied word matrix learns vector representations words capture semantics dhillon faruqui dyer applications name include fmri data analysis friman computer vision kim speech recognition arora livescu wang enormous empirical success motivates revisit estimation problem canonical correlation analysis two theoretical questions naturally posed proper error metrics quantify discrepancy population cca sample estimates metrics quantities characterize fundamental statistical limits justification loss functions context cca seldom appeared literature first principles proper metric quantify estimation loss depend specific purpose using cca find applications discussed mainly fall two categories identifying variables interest dimension reduction first category mostly genomic research witten chen treats one group variables responses group variables covariates goal discover specific subset covariates correlated responses applications featured low ratio interpretability results major concern contrast second category investigated extensively statistical machine learning engineering community cca used learn low dimensional latent representations complex objects images rasiwasia text dhillon speeches arora livescu scenarios usually accompanied relatively high ratio prediction accuracy using learned low dimensional embeddings new set predictors primary interest recent years series publications establishing fundamental theoretical guarantees cca achieve sufficient dimension reduction kakade foster foster sridharan kakade fukumizu chaudhuri many others paper aim address problems raised treating cca tool dimension reduction population sample cca suppose two sets variates joint covariance matrix cov simplicity assume epxi epyj population level cca designed extract correlated linear combinations two sets random variables sequentially ith pair 
canonical variables maximizes corrpui unit variances uncorrelated previous pairs canonical variables called ith pair canonical loadings ith canonical correlation well known multivariate statistical analysis canonical loadings found recursively following criterion arg max subject although criterion nonconvex optimization obtained easily spectral methods define singular values actually left right singular vectors respectively canonical variables versus canonical loadings quk given estimates leading canonical loadings denoted corresponding estimates canonical variables represented vpi quantify estimation loss generally speaking either focus measuring difference quk measuring difference canonical loadings quki vpi quk definition tpui quk canonical variables tpui quki tpu quk constructed vpi quk independent samples based tpu therefore discrepancy canonical variables extra layer randomness discussed modern machine learning applications natural language processing information retrieval leading sample canonical loadings used dimension reduction new observation ideally hope use corresponding values canonical variables pui pvi represent observation low dimension space empirically actual low dimensional representations therefore discrepancy ideal dimension reduction vpi quk approximate tpui quk actual dimension reduction explained well tpu consequently choose quantify difference sample population canonical variables instead canonical loadings linear span however still many options quantify well sample canonical variables approximate population correspondents choose suitable losses convenient come back specific applications get inspiration motivated applications natural language processing information retrieval model sufficient dimension reduction studied foster roughly speaking statistical model proposed foster study predict using two sets predictors denoted joint covariance cov proven foster certain assumptions leading canonical variables sufficient dimension reduction linear prediction best linear predictor based best linear predictor based similarly best linear predictor based best linear predictor based notice best linear predictor actually determined set linear combinations referred model space literature linear regression prediction denote inspired foster propose quantify discrepancy corresponding subspaces discrepancy tui uki similarly measure difference tvi spanpu tvpi uki distance vpk hilbert spaces principal angles xpu spanpu mpu section define discrepancy introducing hilbert space noting given sample tpxi quni xpu mpu composed linear combinations denote set possible linear combinations moreover define bilinear function easy show inner product hilbert space isomorphic xpu mpu natural inner product know subspaces natural define discrepancy based principal angles literature statistics linear algebra two loss functions usually used lmax pspanpu spite somewhat abstract definition following clean formula two losses lave pspanpu theorem suppose matrix represents orthogonal projector onto column span assume observed sample fixed lave pspanpu min lave lmax pspanpu max min gprk lmax matrix consisting leading population canonical estimate based given sample moreover loadings uniform upper bounds minimax rates important contribution paper establish sharp upper bounds cca based proposed subspace losses lmax lave noteworthy upper bounds hold uniformly invertible provided numerical constant furthermore order justify sharpness bounds also establish minimax lower bounds family stringent localized parameter 
spaces results detailed section notations organization throughout paper use letters represent fixed random variables respectively also use bold letters represent vectors could either deterministic random matrices respectively matrix vector denotes operator spectral norm frobenius norm respectively denotes vector norm denotes submatrix consisting first columns stands projection matrix onto column space moreover use represent largest smallest singular value respectively denote condition number matrix use identity matrix dimension submatrix composed first columns opm simply opnq stands set matrices orthonormal columns denotes set strictly positive definite matrices random vector spanpxj txj denotes subspace linear combinations notations specified within corresponding context following introduce main upper lower bound results section highlight contributions new loss functions theoretical results compare results existing work literature section proofs deferred section theory section introduce main results upper lower bounds estimating cca proposed loss functions worth recalling singular values natural estimate population cca sample counterparts similar equation sample canonical loadings defined recursively arg max subject sample covariance matrices sample canonical variables defined following linear combinations sample canonical loadings vpi prove following upper bound estimate based sample cca theorem upper bound suppose defined assume invertible moreover assume predetermined exist universal positive constants sample canonical satisfies coefficients matrix lmax lave obtained switching upper bounds since pursue nonasymptotic theoretical framework cca estimates loss functions propose nonstandard literature standard minimax lower bound results parametric maximum likelihood estimates apply straightforwardly instead turn nonparametric minimax lower bound frameworks particularly pca cca see cai gao compared existing works technical novelties results proofs summarized sections define parameter space collection joint covariance matrices satisfying deliberately set demonstrate lower bound independent condition number rest paper use shorthand represent parameter space simplicity theorem lower bound exists universal constant independent inf sup lmax inf sup lave obtained replacing lower bounds corollary cplog universal positive constant minimax rates characterized inf sup lmax inf sup lave related work contributions recently rate convergence cca studied gao sparse setup cai zhang usual setup cai zhang appeared arxiv almost time first version paper posted section state contributions detailed comparison works novel loss funcitons proposed new loss functions based principal angles subspace spanned population canonical variates subspace spanned estimated canonical variates contrast gao proposed studied loss lave cai zhang proposed lmax studied lave lmax min lave qpopk max lmax min gprk qpopk lave lmax resemble loss functions lave lmax respectively theorem also min lave lmax max min gprk two expressions easily obtain lave lmax lmax lave equivalent constant neither however lave lmax lmax fact prove long lmax lave lmax almost surely lave illustrate comparison simple simulation suppose consider following setup know population canonical leading correlation coefficients simulation generated following data canonical loadings matrices furthermore obtain sample canonical correlations well leading sample canonical loadings lave lave lmax lmax numerical example clearly shows sample cca exactly identify among linear 
combinations linear combinations mostly correlated loss functions lave lmax characterize exact identification whereas lave lmax moreover following joint loss studied gao ljoint almost surely special case similarly ljoint sharper upper bounds regardless loss functions explain following theorem implies sharper upper bounds existing rates gao gao cai zhang nonsparse case discussion focused lave following discussion discussion lmax similar notice apply wedin law replacing fine bound lemma rough bound lemma also see gao similar ideas obtain following rough bound lave gao order decouple estimation error bound cai zhang assume residual canonical correlations zero assumption essential proofs gao cai zhang certain sample size conditions got rid assumption developing new proof techniques techniques actually work lave lmax well detailed comparison result cai zhang summarized table results gao regime implied cai zhang milder sample size conditions loss function sample size cai zhang lave upper bound rates work lave yes perhaps striking contribution upper bound first derive factors literature nonasymptotic cca estimate explain factors essential leading canonical correlation coefficients close example consider example log bound rates actually imply elave rates gao cai zhang imply elave result could shows even condition loss lave imply sharper convergence rates gao cai zhang notice aforementioned actually prove elave separate argument improve theorem imply result open problem future research example close consider example log bound rates actually imply elave rough rates wedin law implies elave shows upper bound rates could much sharper rough rates close new proof techniques connection asymptotic theory best knowledge none analysis gao gao cai zhang used obtain multiplicative factor first order term upper bound even strong condition following different path careful perturbation analysis estimating equations cca avoid loss precision caused applying matrix inequalities early stage proof main challenge analyze properties matrix hardmard products especially derive tight operator norm bounds certain hardmard products particularly luckily find approach proof lemma decompose target matrices matrices apply tools developed lemma studied asymptotic distribution canonical loadings anderson assumption canonical correlations distinct since focus subspaces require given anderson work based analyzing estimating equations cca analysis involved completely novel techniques required obtain factor nonasymptotic framework sharper lower bounds parameter spaces fixed minimax lower bounds estimation rates cca first established gao losses ljoint lave however parameter space discussed gao requires moreover parameter space gao parameterized satisfying specified fact also constructed hypothesis class resulting minimax lower bound proportional however minimax lower bound sharp close suppose minimax lower bound theorem leads inf sup lave contrast capture fundamental limit cca estimates scenario framework gao one needs choose capture hence resulting minimax lower bound rate much looser technically speaking follow analytical framework gao gao hypothesis classes construction requires given instead brings new technical challenges detailed technical discussions deferred section proof theorem suppose observed sample fixed consider correlation let two subspaces defined spanpu pwk first second kth pair canonical variates xwi xwi varpwi varpw actually ith principal angle definition principal angles know implies lave sin linear combinations 
denote since definition covpwq basis matrices moreover bjp similarly moreover xwi covpw notice nonsingular matrix implies column space since basis matrix similarly straightforward calculation gives trace qqq trace tracepb klave equalities yield first two equalities notice orthonormal bases orthonormal bases spanpu similarly orthogonal matrix min min min min min prk qrj min pwi min prk epwi min epwi prk notice minqi prk epwi obtained best linear predictor min epwi varpwi prk therefore min klave implies third equality similarly max min gprk max min max min gprk gprk max gprk max gprk max gprk sin min min prk pwi finally prove wedin lmax implies equalities proof upper bound throughout proof denote linear invariance without loss generality assume definition canonical variables know determined words invertible canonical pairs still therefore consider following orthonormal bases orthonormal extension therefore know also canonical pairs similarly fixed sample variables sample canonical pairs vpp also sample canonical pairs corresponding sample easily seen concept sample respectively linear combinations canonical variables example corresponding sample variance sample correlation maximized replace respectively seek first sample canonical pair constraints linear combinations two sets variables unit sample variances objective still answer similarly sample correlation maximized sample canonical pairs particular sample canonical pairs argument gives following convenient fact order bound lave max pspanpu replace words assume satisfy standard form moreover implies upper bound standard form standard form lave pspanpu lmax denote upper lower sub respectively matrices trace since trace well lower bound therefore suffices give upper bounds basic bounds recall cov cov left upper principal submatrix moreover define similarly define lemma exist universal constants probability least following inequalities hold proof obvious lemma exist constants probability least holds moreover submatrices pip qpip pip implies lemma exist universal constants probability least following inequalities hold proof deferred section estimating equations upper bound notice already section aim give sharp upper bound established upper bound lemma wedin sin law plays essential role however bound actually loose purpose therefore need develop new techniques sharpen results consist sample canonical coefficients definition recall sample canonical coefficients satisfy following two estimating equations left right singular vectors respectively define define diagonal matrices diagonal matrices imply divide matrices blocks matrices finally define blocks rewritten way define equations imply following lemma lemma equality gives following result proof deferred section lemma one easily obtain recall lemma similarly therefore get combined lemma proof following lemma deferred section lemma probability upper bounds risks notice inequality yields lemma lemma know event probability least moreover since elmax since previous inequality elave fact factor main term reduced similar arguments done operator norm frobenius norm version lemma actually much simpler omit proof avoid unnecessary redundancy repetition supporting lemmas linear algebra probability definition hadamard operator norm define hadamard operator norm sup let arbitrary positive numbers lower bounded positive constant lemma let two sequences positive numbers hold proof proof found norm bounds hadamard products mean inequality unitarily invariant norms horn denote proof relies following two 
results lemma theorem hom johnson positive semidefinite max aii operator norm lemma theorem mathias symmetric matrix minpa positive semidefinite define define rpm mij lemma also positive semidefinite apply lemma notice lower left easy obtain finally since implies lemma covariance matrix estimation remark vershynin assume independent random rows second moment matrix exists universal constant every following inequality holds probability least lemma bernstein inequality proposition vershynin let independent centered random variables maxi every every min lemma inequality theorem rudelson vershynin let random vector independent components satisfy exi let exists universal constant every exj min lemma covering number sphere lemma vershynin unit euclidean sphere equipped euclidean metric satisfies every minimal cardinality following variant wedin sin law wedin proved proposition cai define singular value decompositions lemma following perturbation bound holds pup pup paq paq paq paq kth singular values proofs key lemmas proof lemma proof exactly lemma observe pip pip singular values respectively hence notice famous weyl inequality singular values pjp since left singular vectors submatrix long sufficiently large case argument recall last inequality lemma relies fact leading singular vectors respectively variant wedin sin law stated lemma hand second equality due fact orthonormal columns moreover denotes lower triangle inequality last inequality due let proof done proof lemma equality implies similarly implies equality equivalent written apply argument obtain consider combined finishes proof plug get finishes proof proof lemma first discuss two quite different cases case let define matrices aij lemma holds lemma diag diag recall obvious moreover previous section also shown suffices bound end apply standard covering argument step reduction denote psd unit ball surface pair vectors choose max maximize obtain max therefore max let suffices give upper bound max high probability step concentration bsi let way mutually independent standard gaussian random variables given pair vectors rxj symmetric determined corresponding quadratic form yields max max max max second last inequality due facts moreover define wnj max therefore classic inequality lemma holds exp min numerical constant without loss generality also assume let straightforward calculation gives step union bound lemma choose max words probability least max summary long probability last inequality due absolute constants case submatrices lemma notice moreover lemma holds similar submatrices rip similarly argument lemma holds finally diag since summary since diag lemma holds lower bound proof theorem establish minimax lower bounds cca estimates proposed losses follow analytical frameworks literature pca cca cai gao calculation focused construction hypothesis class packing lemma fano inequality applied however since fix localized parameter spaces new technical challenges arise consequently construct hypothesis classes based equality section also denote divergence following lemma viewed extension lemma gao arbitrary proof lemma found section lemma let upiq wpiq vpiq zpiq upiq vpiq let define upiq vpiq piq piq vpiq upiq piq piq let ppiq denote distribution random sample size assume one show remark conditon crucial obtaining factor lower bound key insight behind construction hypothesis class proof gao similar lemma deals case residual canonical correlations zero best knowledge proof techniques gao directly used obtain results packing number fano lemma 
following result packing number based metric entropy grassmannian manifold gpk due szarek use version adapted lemma cai also used gao fixed opp opp kqs define exists universal constant packing number satisfies following corollary used prove lower bound corollary change set lemma opp still proof apply lemma exists uij uij ujj arg define min ptui qpopkqu lemma riu therefore implies riu uij ujj lemma matrices opp inf qpopk proof definition let singular value decomposition opk inf qpopk hand since opp therefore diagonal elements less implies trpdq trpd inf qpopk lemma fano lemma let semi metric space collection probability measures totally bounded denote mpt number respect metric maximal number points whoese pairwise minimum distance least define diameter dkl sup sup sup dkl log inf sup log mpt proof lower bound fixed define fixed consider parametrization define straightforward verify yield parametrization upiq vpiq wpiq zpiq vpiq upiq zpiq wpiq piq piq upiq wpiq vpiq zpiq canonical vectors upiq vpiq define lemma definition dkl dkl sup bound diameter definition implies singular value decompositions matrix therefore exists decompose four blocks substitute second equality due fact orthogonal column space third equality valid argument notice substitute dkl let claim packing number lower bounded packing number prove claim suffices show exists corresponding first definition let orthogonal complement therefore exists set implies let depends chosen small enough kqs corollary apply lemma inf sup sup sup kqlog choose small enough kqlog lower bound reduced inf sup symmetry inf sup lower bound operator norm error immediately obtained noticing rank proof lemma simple algebra divergence two multivariate gaussian distributions satisfies log notice upiq vpiq piq piq vpiq upiq piq piq also notice log upiq piq upiq piq piq vpiq wpiq wpiq wpiq zpiq wpiq zpiq therefore share set eigenvalues multiplicity multiplicity multiplicity multiplicity multiplicity implies log hand block inversion formula compute divide blocks spell algebra computed exactly fashion similarly assumption argument sum equations repeat argument one show therefore references anderson asymptotic theory canonical correlation analysis multivariate analysis journal arora livescu acoustic features phonetic recognition across speakers domains acoustics speech signal processing icassp ieee international conference ieee cai optimal estimation rank detection sparse spiked covariance matrices probability theory related fields cai sparse pca optimal rates adaptive estimation annals statistics cai zhang perturbation bounds singular subspaces applications statistics annals statistics appear chaudhuri kakade livescu sridharan clustering via canonical correlation analysis proceedings annual international conference machine learning acm chen liu carbonell structured sparse canonical correlation analysis international conference artificial intelligence statistics dhillon foster ungar learning word embeddings via cca advances neural information processing systems nips volume faruqui dyer improving vector space word representations using multilingual correlation association computational linguistics foster johnson kakade zhang dimensionality reduction via canonical correlation analysis technical report friman borga lundberg knutsson adaptive analysis fmri data neuroimage fukumizu bach jordan kernel dimension reduction regression annals statistics gao ren zhou minimax estimation sparse canonical correlation analysis annals statistics gao zhou sparse cca adaptive 
estimation computational barriers annals statistics appear gong isard lazebnik embedding space modeling internet images tags semantics international journal computer vision hom johnson topics matrix analysis cambridge new york hotelling relations two sets variables biometrika kakade foster regression via canonical correlation analysis proc conference learning theory kim wong cipolla tensor canonical correlation analysis action classification computer vision pattern recognition cvpr ieee conference ieee mathias hadamard operator norm circulant applications siam journal matrix analysis applications rasiwasia costa pereira coviello doyle lanckriet levy vasconcelos new approach multimedia retrieval proceedings acm international conference multimedia acm rudelson vershynin inequality concentration electron commun probab sridharan kakade information theoretic framework learning servedio zhang eds colt omnipress szarek nets grassmann manifold orthogonal group proceedings research workshop banach space theory iowa city iowa volume vershynin introduction analysis random matrices arxiv preprint lei minimax sparse principal subspace estimation high dimensions annals statistics wang arora livescu bilmes deep representation learning proceedings international conference machine learning wedin perturbation bounds connection singular value decomposition bit numerical mathematics wedin angles subspaces finite dimensional inner product space matrix pencils springer witten tibshirani hastie penalized matrix decomposition applications sparse principal components canonical correlation analysis biostatistics assouad fano cam festschrift lucien cam springer
| 10 |
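For the CCA paper in the row above, both the sample estimate and the proposed subspace losses are short computations: the leading canonical loadings come from the SVD of the whitened cross-covariance Sxx^{-1/2} Sxy Syy^{-1/2}, and L_ave, L_max are functions of the principal angles between the spans of the population and sample canonical variates under the inner product <a, b> = a' Sigma_xx b. A minimal NumPy sketch under that reading of the definitions (L_ave averaging sin^2 of the k angles, L_max taking the largest angle); it assumes strictly positive-definite covariances and is not the authors' code.

```python
import numpy as np

def _sym_power(S, p):
    # power of a symmetric positive-definite matrix via eigendecomposition
    w, V = np.linalg.eigh(S)
    return (V * w ** p) @ V.T

def sample_cca(X, Y, k):
    """Leading-k sample canonical loadings from the SVD of
    Sxx^{-1/2} Sxy Syy^{-1/2}; rows of X (n x p) and Y (n x q) are centered."""
    n = X.shape[0]
    Sxx, Syy, Sxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n
    Rx, Ry = _sym_power(Sxx, -0.5), _sym_power(Syy, -0.5)
    Phi, s, PsiT = np.linalg.svd(Rx @ Sxy @ Ry)
    return Rx @ Phi[:, :k], Ry @ PsiT.T[:, :k], s[:k]

def subspace_losses(U, U_hat, Sigma_xx):
    """L_ave and L_max between span(U'X) and span(U_hat'X): principal angles
    of the two column spans after mapping through Sigma_xx^{1/2}, which turns
    the <a, b> = a' Sigma_xx b inner product into the Euclidean one."""
    S_half = _sym_power(Sigma_xx, 0.5)
    Q1, _ = np.linalg.qr(S_half @ U)
    Q2, _ = np.linalg.qr(S_half @ U_hat)
    cos_t = np.clip(np.linalg.svd(Q1.T @ Q2, compute_uv=False), 0.0, 1.0)
    sin2 = 1.0 - cos_t ** 2
    return np.sqrt(sin2.mean()), np.sqrt(sin2.max())  # (L_ave, L_max)
```

Here the population loadings `U` would come from applying the same SVD to the true covariance blocks, so the loss compares the fitted canonical-variate subspace against its population counterpart rather than comparing loading vectors directly.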
vivekkan

Abstract. In the realm of multimodal communication, sign language continues to be one of the most understudied areas. In line with recent advances in the field of deep learning, there are far-reaching implications for the applications that neural networks can have in sign language interpretation. In this paper we present a method for using deep convolutional networks to classify images of the letters and digits of ASL.

Convolutional neural networks have been extremely successful in image recognition and classification problems, and have been successfully implemented for human gesture recognition in recent years. In particular, there has been work in the realm of sign language recognition using deep CNNs with inputs that are sensitive to more than the raw pixels of the images. With the use of cameras that sense depth and contour, the process is made much easier by developing characteristic depth maps; the use of this technology has quickly grown in popularity, and the tools incorporated into the process have proven successful. Developments such as color gloves have been used to facilitate the recognition process and to make the feature-extraction step more efficient by making certain features easier to detect. Until recently, however, methods of automatic sign language recognition were not able to make use of the technology that is widely available today: previous works made use of basic camera technology to generate datasets of simple images, with no depth or contour information available beyond the pixels. There have been attempts at using CNNs to handle the task of classifying images of ASL letter gestures, with some success; implementations of the task have been attempted both via transfer learning and with networks trained from scratch.

Our general architecture is a fairly common CNN architecture consisting of multiple convolutional and dense layers: two groups of convolutional layers, each followed by a pooling layer and a dropout layer, and after the two groups a fully connected layer followed by a dropout layer and one final output layer. We initially trained and tested on a dataset of images we took ourselves, a collection of images of several people signing the alphabet and the digits. Since this dataset was constructed in a controlled setting, it is especially prone to differences in lighting and skin color across the environments in which the images were captured. We also used a premade dataset in order to compare performance across datasets. Additionally, a pipeline was developed with which users are able to generate new images and continue adding them to the dataset; for the captured images of each sign we removed the backgrounds using image-processing techniques.

We initially split our dataset in two, training and validation, and the validation accuracy was high. However, when we used the datasets from two different sources for training and testing (ours and the premade one, or vice versa), the validation accuracy decreased drastically, since training on one dataset and validating on another does not yield accurate results. When we used the premade dataset of the different gestures to train, we saw performance improve differently on the two datasets via data augmentation: transforming the images' pixels by rotating them a few degrees and translating them along both axes increased accuracy appreciably. We also flipped the images horizontally, which is valid because a sign can be made using either hand; this was extremely effective, as we obtained a better, more representative spread of the initial training data, and augmenting improved performance drastically. With augmentation on the premade dataset we observed high accuracy on the alphabet gestures on the validation set, and higher accuracy still on the digits. Using our own ASL dataset we observed much lower accuracy measures, which is expected since our data is less uniform than data collected in studio settings with better equipment; we saw lower accuracy on both the letters of the alphabet and the digits. In terms of time complexity, training on the gestures for the letters converged within minutes. We trained with the categorical cross-entropy loss function on both datasets, a fairly common loss function. The low accuracy measures we initially observed when testing on our own validation data are accounted for largely by the lighting and skin-tone variations in the images. The higher accuracy measure on the digits is expected, since the gestures for the digits are much more distinguishable and easier to classify. Compared to previous methods working on this task, our network performed quite well, considering that we did not use a color glove or a Kinect camera, either of which would cause higher accuracy; the gap to the Stanford method is likely due to our lack of images, since that method used a large dataset that was part of a competition.

This paper described a deep learning approach to a classification algorithm for American Sign Language. Our results in this process were severely affected and hindered by the skin color and lighting variations in our data, which led us to resort to a professionally constructed dataset. With a camera like the Microsoft Kinect and its depth sensor this problem is easy to solve; however, such cameras and technology are not widely accessible and can be costly. Our method shows potential for solving the problem using a simple camera, provided enough substantial training data, which can continuously be added via the aforementioned processing pipeline. Since most people have access to simple camera technologies, this could contribute to a scalable solution for recognition and classification not limited to our goal. We plan on incorporating structured PGMs in future implementations: a classification schema would describe probability distributions of the occurrences of different letters based on sequential contexts. We think that by accounting for how individual letters interact directly, for instance the likelihood of a vowel proceeding a given letter, the accuracy of classification would increase. An HMM approach with sequential pattern boosting has been done with actual gesture units that occur in certain gestures and contexts; by capturing which movements precede a certain letter, we can incorporate a probability weight for the next unit class, processing sequential phonological information in tandem with gesture recognition and tagging. We also recognize that representation makes a huge difference in the performance of algorithms like ours, and we hope to find the best representation of our data. Building on the results of this research, we will incorporate such learning into the process and see its potential to facilitate the translation process from American Sign Language to English. Implementing learning for translating the alphabet and numbers of American Sign Language into written English, and comparing pure deep learning with heuristic approaches, could be successful, with the potential benefit of error correction via language models. Recent implementations of adaptation have also had success in solving real-world computer vision tasks, effectively training deep convolutional neural networks using little data and even limited datasets. Ultimately, we aim to create a holistic and comprehensive representation-learning system designed around the set of features recognized from a simple gesture.

References: Barczak, Reyes, Abastillas, Piccio, and Susnjak, "A new static hand gesture colour image dataset for ASL gestures (letters)". Kim, Taehwan; Livescu; Shakhnarovich, Greg, "American sign language fingerspelling recognition with phonological tandem models", SLT. Agarwal, Anant, and Thakur, Manish, "Sign language recognition using Microsoft Kinect", international conference. Cooper, Ong, Pugeault, and Bowden, "Sign language recognition using sub-units", journal. Garcia, Brandon, and Viesca, Sigberto, "American sign language recognition with convolutional neural networks". Cao Dong, Ming Leu, and Zhaozheng Yin, "American Sign Language alphabet recognition using Microsoft Kinect", international conference on computer …
| 1 |
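The architecture and augmentation described in the sign-language row above translate directly into Keras. A minimal sketch: the layer widths, kernel sizes, dropout rates, input resolution, class count, and augmentation magnitudes are placeholders (the paper's exact numbers were lost in extraction); only the overall shape, two groups of convolutional layers each followed by pooling and dropout, a fully connected layer with dropout, a final softmax output, categorical cross-entropy loss, and rotation/translation/horizontal-flip augmentation, comes from the text.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

NUM_CLASSES = 36  # letters plus digits; hypothetical, adjust to the gesture set

def build_model(input_shape=(64, 64, 3)):
    model = models.Sequential([
        # first group of convolutional layers, then pooling and dropout
        layers.Conv2D(32, 3, activation="relu", padding="same",
                      input_shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        # second group
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        # fully connected layer with dropout, then the softmax output
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# augmentation as described: small rotations, shifts along both axes, and
# horizontal flips (valid because a sign can be made with either hand)
augment = ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
                             height_shift_range=0.1, horizontal_flip=True)
# model.fit(augment.flow(x_train, y_train, batch_size=32),
#           validation_data=(x_val, y_val), epochs=20)
```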
may rate systematic mds convolutional codes barbero universidad valladolid valladolid spain email angbar ytrehus simula uib university bergen bergen norway email oyvindy abstract systematic convolutional encoder rate maximum degree generates code free distance best column distance profile cdp code maximum distance separable mds possesses cdp applied communication channel packets transmitted sequentially loses erases packets randomly code allows recovery pattern erasures first blocks delay blocks counting first erasure paper addresses problem finding largest systematic rate code exists given particular constructions rates presented provide optimum values equal respectively search algorithm also developed produces new codes field sizes using complete search version algorithm maximum value codes achieve determined code rates every field size rates ntroduction many practical communication applications multimedia transmission packet erasure channels delivery important criterion traditional arq systems example one used tcp transport layer unicast service suffer long delays due erasures time large led increased interest design analysis systems based error correcting codes coded schemes also known beneficial transport layer models example case two main approaches coding problem discussed literature deterministic approach send packets using fixed convolutional code good column distance profile approach discussed subsection random coding proposed solution schemes sender transmits uncoded information packets followed parity check packets formed random linear combinations information packets acknowledged receiver far subsection describes approach also discusses hybrid approach combines deterministic random coding contributions present new codes section iii section present two new general optimum constructions mds convolutional codes literature exist general constructions convolutional codes far know code binary generalizations thms present simple far see previously described literature construction code rate viterbi complexity binary code better column distance profile also present much interesting algebraic construction proposition section describe search algorithm section present codes found algorithm parameters codes better sense made precise previously known codes present simple upper bounds section convention call convolutional code systematic systematic encoder one preserves information symbols obtains redundancy extra parity symbols systematic rate convolutional encoders useful order obtain fast recovery packet erasures common case channels moderate erasure rates focus class codes work supported ministerio industria competitividad gobierno project estonian research council project norwegian research council sards project background notation thorough introduction convolutional codes please see following describe concept mds convolutional code way convenient purposes paper let integers define matrices vectors space row vectors denotes space matrices rows columns define integer let matrix rows columns parity check matrix lth truncated block code systematic convolutional code thus vector length codeword syndrome systematic encoder code represented identity zero matrices respectively straightforward verify example let primitive element defined parity generator matrices define truncated code rate block code note matrices completely determined parity check coefficients conventional polynomial notation convolutional codes parity check matrix described example similarly corresponding polynomial 
generator matrix mds convolutional codes constructed superregular matrices deterministic approach goal design codes optimum column distance profile define column distance convolutional code minimum hamming weight truncated codeword first block nonzero column distance profile cdp sequence free distance code index cdp reaches cdp originally studied significance performance sequential decoding please see recently cdp received renewed attention context codes due importance fast recovery losses symbols erasure channel recall consider convolutional codes rate systematic encoder case singleton bound truncated block codes similar linear algebra arguments moreover best column distance profile one hope find code systematic encoder mds convolutional code paper mean code cdp remark concept codes introduced concept takes account codes possess systematic encoder free distance may grow beyond memory minimal encoder order complicate notation since viterbi complexity issue paper omit details definition consider lower triangular matrix element consider square submatrix size formed entries rows indices columns indices corresponding minor proper superregular proper minors non singular matrix upper triangular definition proper submatrices analogous superregular matrix used construct rate code two ways systematic mds convolutional code cdp code general nonsystematic parity check matrix max degree cdp systematic codes case superregular matrices known exist dimensions field large enough general efficient constructions known minimum field size superregular matrix exists known another problem deterministic approach existing design methods allow simple construction codes high rate high degree codes higher rates desirable many practical cases also constructed superregular matrices involves deleting columns conditions superregular matrix strict means practice simple codes constructed way since superregular matrices hard construct reduction superregular matrix problem blocks code construction therefore generalize definition follows definition consider triangular matrix positive integer ssr consider square submatrix size ssr formed entries ssr rows indices columns indices corresponding minor proper matrix ssr called iff proper minors nonsingular rate field size table description superregular superregular hoc superregular superregular ome rate mds codes necessarily systematic described literature following lemma restatement theorem using terminology section lemma let parity check matrix truncation systematic convolutional code given let matrix obtained removing columns positions cdp convolutional code given matrix theorem stated without proof reference include formal proof appendix definition let largest free distance exists rate systematic mds convolutional code column distance profile main problem address paper determine exact values constructive lower bounds please note restriction degree definition known code constructions literature beyond based superregular matrices table contains current world records respect rate mds codes best knowledge describe new codes section iii although paper focuses rate mds codes observe following lemma follows directly theorem implies results also provide rate mds codes lemma systematic rate mds code memory free distance exists dual code equivalent systematic rate mds code memory free distance random convolutional codes terminology paper random approach consists selecting coefficients independently random advantage one pick codes large degrees large fields expected performance 
B. Random convolutional codes

In the terminology of this paper, the random approach consists of selecting the coefficients independently at random. The advantage is that one can pick codes of large degrees over large fields, and the expected performance is reasonably good, although the exact loss compared to the optimum, in average performance as well as in the no longer guaranteed worst case performance, remains to be determined. The coefficients need to be transmitted in the headers of the data packets, which represents a small rate loss when large packets are transmitted.

Proposition 1: Consider a hybrid scheme in which, for the first blocks, the coefficients at each time are selected to be fixed, and subsequently random coefficients are selected at random; thus the parity check equation has the form h_cdp + h_random, where h_cdp has nonzero preselected coefficients and h_random has randomly selected coefficients, and the degrees of the random polynomials need not be fixed except by the application protocol. Then the initial CDP is not affected by the random part of the code construction.

Proof: Obvious, since the first component h_cdp of the parity check matrix determines the initial part of the CDP.

Our suggestion is to use hybrid codes: codes in which the low degree terms of the parity check polynomials are preselected constants yielding an optimum initial column distance profile, with subsequent random parity checks added as needed. This guarantees optimum recovery from the simplest and most likely erasure patterns, hence better performance than purely random codes under light and moderate erasure patterns, while still allowing the degree to grow as required by the application.

III. Codes

Some prior constructions use superregular matrices to design codes; however, those authors also give examples of codes better than the ones constructed from superregular matrices. We note that this abundance of small examples suggests that a better construction might be possible and might lead to smaller alphabets for given parameters; we leave this open question for future research. In this section we present constructions which, in combination with the new search algorithm of Section IV, improve knowledge for almost all sets of parameters: with respect to what we find in the literature, there were no codes of these free distances. We present two optimum constructions. The first is so simple that we have not seen it presented in prior literature, where it may have been tacitly assumed; the following fact about constant terms comes with its justification.

Lemma 3: We may assume that the constant terms of the parity check polynomials are equal to one.

Proof: A constant term cannot equal zero, and if we want to assume it is one when it is some other nonzero value, we multiply the corresponding column, obtaining a new code with the same CDP and weight structure.

Proposition 2: For a prime p, the first construction achieves the optimum value of D over GF(p).

Proof: Select distinct nonzero elements. Without loss of generality the parity check matrix then takes a form in which it is obvious that the proper minors of the relevant sizes are nonsingular, and clearly this cannot be improved.

Remark: It is instructive to compare the construction of Proposition 2 with the binary codes considered for digital media transmission, where a code of this length already appears: a binary code with polynomial parity check matrix due to Hwa. It is easy to see from its CDP that that code is not an MDS code. The construction of Proposition 2 can be considered a generalization of that code; for the same memory, our code is an MDS code with a better CDP. We now present our second optimum construction; our complete computer searches indicate that this construction is essentially unique, in the sense that nothing much better can be achieved by other choices of the set of first degree coefficients.

Lemma 4: A code has the CDP (2, 3, ..., d) iff its parity check matrix satisfies conditions (i)-(iii) below.

Proof: By Lemma 1 we need the relevant proper minors to be nonsingular. The proper minors of the smaller size come in the following types: those of the first type are trivially nonzero; those of the second type are nonzero when condition (i) is satisfied; those of the third type are nonsingular by a condition equivalent to (ii); those of the fourth type are guaranteed nonzero when condition (iii) is satisfied; and those of the fifth type are nonsingular by an equivalent condition. Finally, the proper minors of the larger size come in four different types: the first type is trivially nonsingular; condition (i) takes care of the second type; the third type is nonsingular when condition (ii) is satisfied; and the last type is nonsingular when condition (iii) is satisfied.

Example: Consider the code of the earlier example. Checking the conditions of Lemma 4, we observe that the code has CDP equal to the optimum.

Proposition 3: The following construction gives a code that meets the requirements of Lemma 4. Consider the trace function Tr_m defined on the field, and the set of elements of vanishing trace, regarded as a vector space; this set is a hyperplane, i.e., a linear subspace. Select an arbitrary nonzero field element, an arbitrary constant, and distinct nonzero elements of the hyperplane, and set the coefficients accordingly.

Proof: We verify the conditions of Lemma 4. Condition (i) holds because each expression is a product of two nonzero factors, the chosen elements being distinct. For condition (ii), assume the contrary: the first factor is nonzero, and since the second factor is also nonzero and the hyperplane is closed under addition, we obtain a contradiction. For condition (iii), assume the contrary: since the elements are distinct, we may again assume a contradiction, the quantity in question being a product of nonzero
factors and hence nonzero. The claim then follows from the theorem of the earlier section.

Remark: By that theorem and the later construction, Proposition 3 is optimum in several senses: it offers the maximum distance for a given field size and code rate, the minimum field size for a code of given rate and distance, and the maximum code rate for a given field size and distance. Moreover, our complete computer search over small field sizes shows that the construction is essentially unique for these parameters.

IV. Computer search algorithm

The goal of the search algorithm is to select the coefficients successively, ordered first by degree and reversely within the last row, in such a way that the conditions on the minors are met.

A. Useful facts

As in the first constructions of Section III, we use Lemma 3 in order to set the constant terms to one. In order to simplify the search we apply the following results.

Lemma 5: We may assume a fixed choice of ordering of the coefficients.

Lemma 6: Consider an MDS convolutional code with polynomial parity check matrix; the reciprocal code, whose parity check matrix has the coefficients in reverse order, is also MDS.

Proof: Under reversal, a word is a codeword of one code iff its reversal is a codeword of the other.

Corollary 1: If a systematic MDS convolutional code exists, we may assume that its parity check matrix is normalized as above.

Proof: Apply Lemma 5 and Lemma 6 to the given parity check matrix.

Lemma 7: Let T be a superregular matrix over a field of prime characteristic p. Raising each element to the p-th power yields another superregular matrix.

Proof: For a square matrix, by definition det(A) is a signed sum over the group of permutations of products of entries. In characteristic p, the p-th power of a sum is the sum of p-th powers, since every cross term carries a binomial coefficient divisible by p; going back to the definition of the determinant, in characteristic p we therefore get det(A)^p = det(A'), where A' denotes the Hadamard (Schur) p-th power of A, the matrix whose entries are the entries of A raised to the p-th power (the signs of the permutations are unaffected, in even or odd characteristic alike). It is then clear that, for a given proper minor of the original matrix, the corresponding proper minor of the new matrix is nonsingular iff the original one is.

Corollary 2: In particular, over a field of characteristic two, squaring each element of a superregular matrix yields another superregular matrix.

Proof: The case p = 2 of Lemma 7.

Corollary 3: Assume the values fixed so far are allowed by Lemma 5 and Corollary 1. Then it suffices to consider one representative of each cyclotomic coset.

Proof: Consider a proper minor; squaring the coefficients does not change whether it vanishes, so a superregular matrix maps to a superregular matrix.

The search is simplified by constant factors through Lemma 5, Corollary 1 and Lemma 6, while Corollary 3 reduces the complexity by an extra factor of approximately the coset size; this reduction is entirely independent of the other reductions. In summary, the search algorithm has highly exponential complexity, but these tricks allow a deeper search than would otherwise be possible. The search algorithm is sketched as Algorithm 1; the trickier steps are explained in detail in Remarks (i)-(iii) below. In essence, the algorithm runs over a search tree in which the depth points to one of the variables (abusing notation, we also say the variable points to the current depth). Throughout its course the algorithm goes back and forth along the values of the last row in reverse order; since the constant terms are assumed equal to one by Lemma 3 and Corollary 1, in this context we use "ordering" to refer to the reverse order on the last row, with addition and subtraction moving left and right respectively along the row.

    Algorithm 1: Computer search algorithm.
    Result: finds good MDS codes of rate (n-1)/n.
    Input: field size q, target distance, code length.
    Data: pointer to the current position.
    Initialization: set the first value; precompute the set of proper
    submatrices; precompute the set of legal values.
    while there are coefficient values to check do
        if the coefficient values check out then
            assign the next value to the coefficient;
            update the determinants as needed;
            if this is the deepest level so far then
                record the selected values of the coefficients
            end
        else
            step back
        end
    end
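As a self-contained illustration of the backtracking idea in Algorithm 1 (our own toy reconstruction, not the paper's exact algorithm), the following sketch extends a coefficient list one value at a time for a rate-1/2 code over a prime field, keeping only extensions whose truncated code is still MDS; it reuses column_distances from the earlier sketch and fixes h_0 = 1 by Lemma 3.

    # Toy depth-first search for MDS coefficient lists over GF(p), p prime.
    def search(p, target_depth, h=None):
        h = h or [1]                          # h_0 = 1 w.l.o.g. (Lemma 3)
        d = len(h) - 1
        if d == target_depth:
            return h
        for v in range(1, p):                 # degree coefficients are nonzero
            cand = h + [v]
            # appending a coefficient cannot change earlier column distances,
            # so only the newest index d+1 needs checking (MDS means d_j = j+2)
            if column_distances(cand, p, d + 2)[d + 1] == d + 3:
                found = search(p, target_depth, cand)
                if found:
                    return found
        return None

    print(search(5, 3))   # a list giving CDP [2,3,4,5] over GF(5), or None if the field is too small

The real Algorithm 1 gains its speed from incremental determinant updates over the precomputed proper submatrices rather than recomputing column distances, as Remarks (i)-(iii) below explain.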
Remarks: (i) The set of values that would make a determinant zero can be obtained in constant time; in other words, going forward we can identify the set of illegal values for the coefficient at each line. (ii) The complete search version of the algorithm successively tries all values; a faster, incomplete search version may instead skip an arbitrary subset of values at each depth. The target distance is an input parameter of the algorithm; in order to determine a code of maximum distance, it is necessary to verify that the complete search version of the algorithm cannot pass beyond the corresponding depth. (iii) At each depth, using the set of values currently assigned to the coefficients at lower depths, we compute the subdeterminants useful for computing later determinants and initialize the set of legal values for the next depth.

B. Complexity

The assumptions enabled by the lemmas of this section, together with efficient computation of determinants, allow a deeper search than would otherwise be possible. However, for the depths of the search tree at which the algorithm finds a code of large degree, at many early depths a complete search needs to try almost all values, and the size of the set of proper submatrices also grows exponentially, so the overall complexity is at least the number of proper submatrices.

V. Codes found by computer search

We present the codes found by our computer search over field sizes of characteristic two in a moderate range, with free distances whose exact values are provided in the propositions and tables. Each row of a table summarizes the discovered rate (n-1)/n codes. One column lists the maximum value of the distance found for a code with the optimum CDP; the absence of a star in this column indicates that it is established by exhaustive search that the value is indeed the maximum for this rate and field size. The coefficients column presents one encoder possessing the CDP, in terms of the coefficients as powers of a primitive element of the field (degree zero terms are suppressed, since they are assumed identically one). Another column contains the rareness of the code, explained in Section VI, and in the reference column we include references to cases where similar codes, meaning codes over the same field with the same CDP but not necessarily possessing a systematic encoder, have been previously described in the literature. We list only the encoders found by our search; we also found codes for the same sets of parameters and rates whose CDP holds over a smaller field. Also, due to Lemma 2, we do not separately list codes of rate 1/n: they exist iff the corresponding rate (n-1)/n codes with the same CDP exist.

Lemma 8: If a systematic MDS code of free distance d and rate (n-1)/n exists, then a systematic MDS code of free distance d and rate (n-2)/(n-1) also exists.

Proof: Shorten the code: the parity check matrix is obtained by selecting and removing the columns of coefficients corresponding to one information symbol.

Remark: Tables II-XIII give, for each rate considered, the bounds for each field, defined by the stated polynomial, together with the coefficients of an encoder achieving them. Please also see the following example.

Example: According to Table III, over the stated finite field there exists a systematic MDS code of the given rate; the example code is represented implicitly by its coefficient exponents, and thus the code has a polynomial parity check matrix whose form is then obvious. The absence of a star symbol in the relevant column of Table III indicates that a complete search for systematic MDS codes of this rate reveals this to be the maximum. The rareness column, explained later, indicates that roughly one in seventy random assignments of nonzero values gives a code with this CDP, so codes with these parameters are not rare; a nonsystematic code of the same degree was presented in the literature.

VI. Upper bounds and code assessment

It would be useful to determine upper bounds in order to assess how good the codes found by random search are with respect to the optimum. The Heller bound relates convolutional codes of a given free distance to truncated block codes, and uses known bounds on block codes to determine which convolutional code parameters can be achieved. Unfortunately the Heller bound is of limited use in our case, since the truncated code actually has a much lower minimum distance when viewed as a block code, and also since exact bounds for block codes in the range of parameters we are interested in are not well known. Moreover, the sphere packing approach for binary codes is not easily adapted to the current case, since the structure of optimum nonbinary codes turns out to be quite different from that of optimum binary codes: optimum binary convolutional codes tend to require parity check matrices with many vanishing coefficients, whereas, as we have seen, in the nonbinary case all degree one coefficients are nonzero, and these differences impose different combinatorial constraints in the binary and nonbinary cases. A simple bound is described in the next subsection; the subsection after it presents an alternative way of describing great codes, the concept of rareness.

[Tables II-XIII: for each rate and each field, defined by the stated polynomial, the bounds on the achievable free distance together with the coefficients of an encoder achieving them.]

A. Simple bound

The following simple bound is tight in some cases.

Theorem 2: For rate (n-1)/n MDS codes with CDP (2, 3, ..., d), the field size must satisfy the bound below.

Proof:
The result follows the structure of Proposition 2. Assume the code is MDS and recall that the degree one coefficients are nonzero. Consider the minors of the first type among the conditions on proper minors: since these minors are nonzero, it follows that the values in the corresponding sets must be distinct. Now consider a longer code: the minors being nonzero implies that each new set takes values different from one another and from those of the previous sets, so in order we need at least that many different nonzero elements of the field. Generalizing the argument, the required number of distinct nonzero values follows, which gives the bound.

B. Rareness

In this section we address the probability that a randomly generated convolutional code of rate (n-1)/n is an MDS code with the stated CDP. By a randomly generated code we mean one generated by a random systematic encoder in which each coding coefficient is selected independently and uniformly. Define this probability to be the rareness parameter of the pair of rate and distance. For small values the exact rareness can be determined by a complete code search; since for large parameters this quickly becomes intractable (determining the best codes also quickly turns difficult), we cannot compute exact results for the rareness in general. However, it is possible to obtain estimates, as described next. First assume that a complete search has been applied to determine the set of distinct coefficient sequences for which all proper submatrices are nonsingular; the probability that a randomly selected sequence corresponds to a path in the search tree satisfying the conditions up to depth j can then be computed, and we define avg(j) as the average, computed over the complete search, of the conditional probability that a random generator satisfying the conditions at depth j-1 also satisfies them at depth j. For large parameters we are not able to carry out the complete search; however, we can perform deep incomplete searches, which also provide estimates of these conditional probabilities. The estimates are quite accurate, especially at the first depths, and hence, chained together, they give an estimate of the rareness: as long as a substantial number of different search tree paths lead to depth j, the estimate of avg(j), now a weighted average computed over the incomplete search, is reasonably good. In the tables we include the exact rareness in the cases where we could perform a complete search, and otherwise we include the estimate. We concede that this approach is not foolproof. For example, the construction of Proposition 3 is unique, at least for small field sizes and the choices of first layer coefficients indicated in its proof, and it appears in a part of the search tree where everything else ends considerably shallower; the rareness of the construction of Proposition 3, i.e., the probability that a random sequence matches the construction exactly, is already less than our resolution. Hence for an arbitrary set of search parameters there may exist a rare construction not caught by the incomplete search, and the estimates at the deepest values may be imprecise. However, we believe that the estimates provide intuition about the difficulty of reaching a certain depth of the search tree along a random path, and in the cases where we were able to carry out a complete search, the estimates are pretty accurate already at modest search effort. Figure 1 contains exact values and estimates; please see the figure caption for explanations. We also include the rareness estimates in the tables.
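A direct Monte Carlo estimate of rareness is also possible for small parameters. The following sketch (ours; parameters illustrative, coefficients drawn uniformly from the nonzero field elements as in the definition above) reuses column_distances from the earlier sketch for a rate-1/2 code over a prime field.

    # Monte Carlo estimate of rareness for a rate-1/2 code over GF(p), p prime.
    import random

    def rareness_estimate(p, memory, trials=2000):
        hits = 0
        for _ in range(trials):
            h = [1] + [random.randrange(1, p) for _ in range(memory)]
            d = column_distances(h, p, memory + 1)
            if all(d[j] == j + 2 for j in range(memory + 1)):   # full MDS profile
                hits += 1
        return hits / trials

    print(rareness_estimate(11, 2))

Unlike the chained avg(j) estimator described above, this naive sampler wastes work on sequences that fail early, which is exactly why the search-tree-based estimates scale to deeper parameters.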
VII. Conclusion and open problems

Motivated by the practical problem of fast recovery on a coded channel, we have studied systematic MDS convolutional codes, characterized in terms of a certain matrix. We presented new optimum constructions for small free distances, and tables of new codes found by computer search. Our combinatorial upper bound is tight in the case of small free distances; in order to assess how good a code is, we also introduced the concept of rareness. It would be interesting to establish upper bounds that are tight also for larger free distances. Another issue would be to study whether there exist general algebraic constructions, similar to the one in Proposition 3, of systematic MDS codes of larger free distance. It would also be of theoretical interest to optimize the CDP of codes under the additional constraint that the degree of the minimal encoders is small; we have not considered this problem, since the complexity of Viterbi decoding of our codes is prohibitive except for small values of the relevant product, and since it seems difficult.

[Figure 1: Rareness of codes, exact rareness and estimates, for a random code and for MDS codes of the rates considered, plotted against the search depth measured in terms of the number of coefficients: in order to construct a rate (n-1)/n encoder of a given distance it is necessary to find a sequence of coefficients of the corresponding length, while for an encoder of the next smaller distance fewer coefficients suffice; similar remarks apply in the other cases.]

References

[1] E. Gabidulin, "Convolutional codes over large alphabets," Proc. Int. Workshop on Algebraic and Combinatorial Coding Theory, Varna, Bulgaria.
[2] H. Gluesing-Luerssen, J. Rosenthal, and R. Smarandache, "Strongly-MDS convolutional codes," IEEE Transactions on Information Theory, February 2006.
[3] P. Almeida, D. Napp, and R. Pinto, "A new class of superregular matrices and MDP convolutional codes."
[4] P.-U. Tournoux, E. Lochin, J. Lacan, A. Bouabdallah, and V. Roca, "On-the-fly erasure coding for real-time video applications," IEEE Transactions on Multimedia.
[5] M. Kim, J. Cloud, A. ParandehGheibi, L. Urbina, K. Fouli, and D. Leith, "Network coded TCP (CTCP)," http://...
[6] A. Wyner and R. Ash, "Analysis of recurrent codes," IEEE Transactions on Information Theory, Jul. 1963.
[7] O. Ytrehus, "Ascetic convolutional codes," Proc. Allerton Conference on Communications, Control, and Computing, October.
[8] R. McEliece, "The algebraic theory of convolutional codes," in Handbook of Coding Theory, eds. Pless and Huffman.
[9] S. Lin and D. Costello, Error Control Coding.
[10] Stott, Oliphant, and Osborne, "Digital video: error correcting codes and a practical study of an error corrector," Tech. Report, British Broadcasting Corporation, December.
[11] J. Justesen and Hughes, "On ... convolutional codes (Corresp.)," IEEE Transactions on Information Theory, Mar.
[12] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, Elsevier.
[13] J. Heller, "Sequential decoding: short constraint length convolutional codes," Space Programs Summary, JPL, Pasadena.
[14] E. Rosnes and O. Ytrehus, "Bounds on ... convolutional codes," IEEE Transactions on Information Theory.

Appendix: Proof of Lemma 1

We start by setting up notation. Taking into account the way H^[l] is constructed, it is clear that the submatrix formed by its first rows and columns is itself a truncation, and that the submatrix formed by the last rows and columns can be described analogously via a set of column indices whose last index corresponds to a column of the identity; we name these sets of column indices accordingly, and use the same name for a square submatrix and its corresponding minor, since this creates no confusion. To begin, assume the code has the stated CDP; in particular this implies that certain entries are nonzero. Let a proper minor of size k be formed by entries with row and column indices as in the definition. Since the minor is proper, we use its set of row indices to construct a new minor, defining for each row index a column index as follows: there exists a unique corresponding index, and the resulting map is an increasing function, which also yields the required inequalities; clearly the corresponding columns are identical to those of the last block of column indices. The column indices so defined are guaranteed to be ordered increasingly, and the added columns form a submatrix on the same rows; therefore the value of the new minor equals the value of the original one. In order to see this we check, proceeding recursively using the truncation: to provide the minimum distance, at least one column index must occur in the first block, for otherwise the columns would satisfy a relation contradicting the CDP; hence there is exactly one column with index in the first position, and the submatrix formed by the last rows and last columns can be handled by working our way down. If at least two columns occur, suppose the first index contributes at least two; then a column relation would follow, and considering the index at which it holds, this case cannot occur, since it would produce a column contradicting those already treated, the indices being ordered increasingly. On the other hand, the minor can be decomposed into a part corresponding to the first rows and columns, contained in the submatrix formed by the first rows and first columns and already shown to be a nonzero minor satisfying the condition, and a part with sufficiently many columns among the later ones; the minor formed by the last rows and columns is contained in the submatrix formed by the last rows and last columns, and the argument used so far proves it nonzero by decomposing into blocks, each nonzero. Finally, note that if one column index contributed at least two columns, this would also force at least two columns in a position contradicting properness. Conversely, consider a minor of size k formed by columns in given positions, and construct a minor by removing the column in the position corresponding to a row; the number of removed columns matches, the size of the remaining minor is as required, and a careful analysis similar to the one done
in the direct part of the proof shows that the corresponding minor is proper, hence nonzero. For the reciprocal part of the proof we continue using the notation of the demonstration above, in reverse. Consider the rows remaining, and call the set of indices of rows corresponding to suppressed identity columns; the corresponding column indices give columns that are copies of columns of the original matrix. With this notation it is clear that the hypothesis implies the claim for each block of column indices. Since the column indices at multiples of the block length are removed and never reappear among the columns, we first observe that within the first block of indices the last column is removed, and no row remains whose corresponding column is a copy of a removed column; hence the properness condition is satisfied at the last index. In general, consider a row index at some position: the number of identity columns removed before it equals the number of columns that are copies of columns already considered and removed, which implies the condition at that index. As a final observation, every block contains at least two columns, so even if one is removed there always remains at least one column in the first block. The first and last observations are not strictly necessary, but they help in understanding the general case. We have proven that the minor is proper, and therefore it is nonsingular.
| 7 |
Fusion systems with some sporadic J-components

Justin Lynd and Julianne Rainbolt

Abstract. The Aschbacher program for the classification of simple fusion systems of odd type at the prime 2 has two main stages: the classification of the systems of subintrinsic component type, and the classification of the systems of J-component type. We make a contribution to the latter stage by classifying systems with an involution centralizer having a component isomorphic to the 2-fusion systems of several sporadic groups, under the assumption that the centralizer of the component is cyclic.

1. Introduction

The dichotomy theorem for saturated fusion systems partitions the class of saturated 2-fusion systems into the fusion systems of characteristic 2-type and the fusion systems of component type; it has both a much cleaner statement than the corresponding statement for finite simple groups and a much shorter proof. In the last few years, Aschbacher has begun work on a program to give a classification of a large subclass of the systems of component type, with a memoir setting down an outline of the first steps of the program forthcoming; see the survey for its contents. The immediate goal is to give a simpler proof of roughly half of the classification of the finite simple groups by carrying out the work in the category of saturated 2-fusion systems.

Let F be a saturated fusion system over a finite 2-group S; the standard example is the 2-fusion system of a finite group with S a Sylow 2-subgroup. A component is a subnormal quasisimple subsystem, and the system is said to be of component type if some involution centralizer has a component. The systems of odd type consist of the systems of subintrinsic component type and the systems of J-component type, a proper subclass of the systems of component type. By focusing attention on this restricted class, one expects to avoid several difficulties in the treatment of standard form problems like the ones considered in this paper; in carrying out this work in fusion systems, it is expected that certain difficulties within the classification of the finite simple groups of component type can be avoided, including the necessity of proving Thompson's B-conjecture. We refer elsewhere for the definition of a fusion system of subintrinsic component type, as it is not needed in this paper. A fusion system is said to be of J-component type if it is not of subintrinsic component type and there is a fully centralized involution x whose centralizer has a component satisfying the defining rank condition; we shall call such a component a J-component of the involution centralizer.
exponent write image element subgroup morphism fusion system analogy standard exponential notation conjugation group terminology basic properties throughout section fix saturated fusion system sometimes refer sylow subgroup subgroup write autf homf outf autf inn whenever two subgroups elements isomorphic say conjugate write set subsystem subgroup morphism conjugate subsystem morphisms morphism first recall terminology subgroups common subsystems fusion system definition fix saturated fusion system let fully fully outf weakly centralizer fusion system morphisms homf extension homf restricts identity normalizer fusion system morphisms homf extension write collections fully respectively write intersection two collections sometimes refer element fully actually mean group hxi fully especially involution example done statement theorem introduction whenever write set homf fully lemma empty moreover proof applied aut result puig centralizer saturated fully normalizer saturated fully write unique largest subgroup satisfying unique largest subgroup satisfying note finite group sylow normal converse hold general model theorem subgroup fully fully normalizer fusion system constrained model theorem proposition unique finite group isomorphism sylow fns said model case tame fusion systems main hypothesis theorem generalized fitting subsystem involution centralizer fusion system finite group cyclic simple situation fusion system finite group mcl since simple groups tamely realizes system roughly finite group tamely realizes fusion system every automorphism fusion system induced automorphism group moreover fusion system said tame finite group tamely realizes refer details importance tameness context standard form problems pointed discussion centered around notion strong tameness needed proofs results contents imply fusion system tame strongly tame recently oliver established following useful corollary results state setup theorem corollary let saturated fusion system assume simple tamely realized finite simple group tamely realized finite group note upon application theorem involution centralizer theorem indeed since normal one sees combining lemma lemma however normal properties generalized fitting subgroup group outer automorphisms cyclic follows normal since hence thus effect theorem purposes may work group normal subgroup particular setup theorem quotient isomorphic subgroup aut containing inn one simple groups appearing theorem structure components section recall properties simple systems appearing theorem required remainder lemma let faithful dimension acts transitively nonzero vectors cgl acts homocyclic proof case irreducible unique module namely natural considered module thus holds case module unique taking duals clearly points independent choice two modules independent choice note acts transitively seen noting sylow acts exactly one fixed point sylow acts fixed points point holds absolute irreducibility similarly one cgl follows case point holds coprime action fixed point module containing submodule see point holds example applying indeed satisfy hypotheses theorem sylow nontrivial fixed point using similar argument via coprime action turn follows special case result higman theorem says acts faithfully homocyclic element order acts without fixed points elementary abelian case natural module certainly respect appropriate basis diagonal element order acting without fixed points case action restriction natural dual action restriction either one shows embedded moving points natural permutation action 
The next lemma examines, under rather strong hypotheses, the structure of extensions of certain subgroups by the stabilizer of a hyperplane.

Lemma 2.5. Let A be a subgroup of the automorphism group with a complement acting decomposably with a fixed point, let an extension be given by this action, and let W be the preimage under the quotient map. Assume the group acts transitively and that the centralizer in GL(V) is as in Lemma 2.4. Then the subgroup W is elementary abelian or homocyclic of the stated order, with a complement containing <x>.

Proof. Since the commutator map determines a linear isomorphism and the action is transitive on nonzero vectors, the subgroup is elementary abelian or extraspecial; as the center is preserved, so is the squaring map, and in the transitive case the subgroup is therefore elementary abelian, and the assumption yields a complement. Let W be the preimage; we claim that W is abelian. Assume the contrary; the commutator subgroup is then contained in the center, and since the quotient is elementary abelian by assumption, neither relevant factor is trivial; similarly the commutator subgroup is not contained where it would need to be, and therefore the squaring map is a linear isomorphism. Taking its inverse, the composition is a linear isomorphism commuting with the action, and the structure map then forces a contradiction; thus W is abelian, as claimed, which completes the proof.

2.4. The Sylow 2-subgroups. We fix notation for the Sylow 2-subgroups of the groups occupying the role of the component. Let T be a Sylow 2-subgroup of McL, which is isomorphic to the corresponding Sylow 2-subgroups of the other groups under consideration, generated by involutions subject to additional defining relations that we fix for the remainder. A Sylow 2-subgroup of the extension by a field automorphism is the corresponding semidirect product; a Sylow 2-subgroup of the extension by a unitary automorphism u is the semidirect product T<u>; and a Sylow 2-subgroup of the full automorphism group is the semidirect product with the stated relations. We denote by T0 a group isomorphic to one of these.

Recall that the Thompson subgroup J(P) of a finite 2-group P is the subgroup generated by the elementary abelian subgroups of largest order.

Lemma 2.6. Let K0 be McL or one of the other groups under consideration, with T a Sylow 2-subgroup of the stated order. Then, with a suitable choice of notation, one of the following holds: in the McL case, J(T) together with its automizer Aut_K(J(T)) is as listed; in the remaining cases the analogous statements hold, and the pair (J(T), Aut_K(J(T))) satisfies the assumptions of Lemma 2.5, with the listed involutions in the role of x.

Proof. The first point holds by inspection of the relations and the elementary abelian subgroups of maximal rank; to prove it, it suffices to show that every elementary abelian subgroup of maximal rank is contained in the stated set. Identify the inner automorphism group with the simple group and write Inndiag for the group of inner-diagonal automorphisms; Inndiag contains the inner automorphisms with index corresponding to the size of the center of the universal version, and by the cited theorems the automorphism group is a split extension of Inndiag by the group generated by the graph and field automorphisms. By those theorems, the involutions in the automorphism group outside the inner ones are, up to conjugacy, those whose centralizers are isomorphic to the listed groups; since the relations used in defining our involutions place them in one class, we conclude. The description of the automizers follows from the tables for McL. The second point follows from Lemma 2.4, and the third follows from Burnside's fusion theorem, the statement about the automizer holding because a weakly closed subgroup in this case controls fusion in its center.

Lemma 2.7. Let K0 be one of the sporadic groups under consideration (McL and the others), with T a Sylow 2-subgroup; then every involution of the automorphism group satisfying the listed condition is inner, and in the McL case an automorphism centralizing the stated member is inner.

Proof. The points follow by inspection of the tables of centralizers.

3. Preliminary lemmas

We begin the proof of Theorem 1.1 by fixing notation and hypotheses that hold throughout the remainder of the paper. Let F be a saturated fusion system over the 2-group S and let x be an involution; assume <x> is fully centralized with the centralizer of the component cyclic, and set C = C_F(x), a saturated fusion system by the remark after Lemma 2.2. Let T be a Sylow group of the component K, and assume K is the 2-fusion system of one of the sporadic groups McL, etc.; since the group tamely realizes K, in each case the quotient induces the outer automorphisms by the theorem cited above. Arguing for a contradiction, we assume the configuration of Theorem 1.1 and fix the presentation of Section 2.4, whichever case is applicable; note that x lies in the centralizer of T by assumption.

Lemma 3.1. The notation may be chosen so that the relevant subgroups are fully normalized.

Proof. We repeatedly use Lemma 2.2: choosing a fully normalized conjugate and replacing by conjugates, the stated equalities hold, and hence the conjugate is still fully centralized and the component is still as hypothesized.

Lemma 3.2. <x> is not weakly F-closed.

Proof. Assume the contrary. In one case, and otherwise as well, the centralizer contains T properly or moves it; it follows that x is contained in the center of every relevant subgroup, and hence, by Alperin's fusion theorem, x lies in the center of F. We conclude that K is a component of F itself, contrary to hypothesis.

Lemma 3.3. The listed properties of the centralizers hold.

Proof. Suppose the first claim fails and choose a counterexample; by the structure of the extension, the element acts nontrivially; in particular, by Lemma 2.6, this contradicts the choice, establishing the first claim.
The second claim follows from the first together with Lemma 2.7, as also established by the previous lemma.

Lemma 3.4. The stated normalization holds.

Proof. Note first that by the preceding lemmas the claim holds in one case; in the other case, the given element normalizes but does not centralize by Lemma 2.6; thus the claim holds.

Lemma 3.5. The following hold: (a) the relevant subgroups are F-conjugate; (b) the analogous statement in the remaining case.

Proof. For part (a), let a morphism be given and assume first the stated hypothesis; by the extension axiom it extends to a morphism whose restriction is an automorphism, and Lemma 2.6 together with Lemma 2.7 gives the claim in this case. Now assume the contrary; then by Lemma 3.2, and since <x> is fully centralized by assumption, we conclude in either case, which completes the proof of (a). Burnside's fusion theorem and (a) imply the weak closure statement, so (b) holds as well.

In the remainder of this section it is shown that conjugation behaves as required; we continue the notation set at the beginning of the section.

Lemma 3.6. <x> is not weakly closed in the stated subgroup.

Proof. Assume the contrary. By Lemma 3.5, using Burnside's fusion theorem and the assumption, we see via Lemma 3.2 that by inspection of the table there is one class of involutions; thus there are exactly three involutions, namely the listed ones, and by Lemma 3.5 the claim holds.

Lemma 3.7. In the particular case, the fusion system is determined.

Proof. Assume otherwise; then by Lemma 3.6 the subgroup <x> is fixed by each automorphism of the relevant subgroup, and therefore the product with K is a component, contrary to assumption. The last statement follows from the lemmas above.

Lemma 3.8. Assume K is the fusion system of McL. Then the stated conclusion holds.

Proof. Fix the extension K<x> inside C, a system whose automizer data lie in Aut(McL) by the cited theorem. By Lemma 2.2, conjugating if necessary, we may assume this subsystem is fully normalized. By the structure results, the involutions in it have centralizers involving semidihedral and dihedral groups, respectively, of the relevant orders and center. Fix a four subgroup whose conjugates are as in the example, so that an element of the normalizer has the stated structure, and fix a morphism in F on <x> and this subgroup; as <x> is fully centralized, by the extension axiom the morphism extends to one, also so denoted, defined on the larger subgroup. Therefore an element of the relevant intersection is nontrivial, and we obtain distinct conjugates, which contradicts Lemma 3.6 and completes the proof.

4. Proof of Theorem 1.1

We continue the notation and hypotheses set at the beginning of Section 3; in addition, fix F0 satisfying the assumptions of Lemma 2.5, as guaranteed by Lemma 2.6, and set E = <x>F0. In this section we finish the proof of Theorem 1.1 by showing that the hypotheses of Lemma 2.5 hold for a model of the normalizer of the appropriate subgroup; via the lemma, this forces the rank of S to be at least twice the rank of F0, contrary to hypothesis.

Lemma 4.1. Aut_F(E) is determined by Aut_C(E).

Proof. Represent Aut_F(E) and apply Lemma 2.5 together with the earlier lemmas.

Lemma 4.2. The following hold: (a) the orbit of x under Aut_F(E) has the stated size; (b) the corresponding index statement.

Proof. Represent the groups: Aut_C(E) maps onto the centralizer of x in the automizer, the former transitively by Lemma 2.4; since the conjugates coincide, we conclude by Lemma 2.6 that the orbit of x meets the stated coset in a set of the stated size, so (a) holds. Similarly, by the choice made in Lemma 3.1 and part (a), representing the groups we see that the kernel has the stated index; an element normalizes, so Aut_F(E) has a member of the required kind, and another appeal to Lemma 2.4 shows that the orbit of x has the stated size, establishing (b).

Lemma 4.3. The following hold: (a) the normalizer of <x>F0 is as described; (b) E contains the two elementary abelian subgroups of maximal rank; (c) the further property.

Proof. Suppose (a) fails for <x>F0 and choose a counterexample; then the fixed subgroup is also normal, and by Lemma 2.4 it would follow that <x> is centralized, a contradiction, establishing (a). For (b), let the two elementary abelian subgroups of maximal rank be given; E contains both by Lemma 2.6, giving (b). To prove (c), fix a morphism; since <x> is fully centralized, the restriction of an extension is defined, and, arguing as in the previous paragraph, the fully centralized conjugate contains the centralizer, which gives the claim. Since we work in the normalizer system for the remainder, we may assume, replacing if necessary, that E is fully normalized; hence by the model theorem we may fix a model G of N_F(E).

Lemma 4.4. G satisfies the hypotheses of Lemma 2.5.

Proof. Set A = Aut_C(E) and observe that Aut_F(E) contains it by Lemma 4.1; thus it contains a subgroup of index two which acts transitively and centralizes x. It follows from Lemma 2.4 that the orbits of Aut_F(E) are as required, and hence Aut_F(E) contains the relevant normalizer, a nontrivial split extension of an elementary abelian group of the stated order with the standard action. We claim Aut_F(E) is no larger. Suppose it were; then it acts transitively, and the commutator map defines an isomorphism; as the action is transitive and normalized by Aut_F(E), we see that Aut_F(E) embeds in the groups under consideration, and by Lemma 2.4 it is therefore a subgroup containing one of the stated index. However, a subgroup of that index is contained in a unique maximal subgroup, a contradiction. Therefore Aut_F(E) is as claimed. Thus Aut_F(E) contains a subgroup of index two which is a split extension as above, so G has a subgroup of index two which is an extension meeting the assumptions of Lemma 2.5, via Lemma 2.5 and the choice of F0.

Proof of Theorem 1.1. Keep the notation above. By Lemmas 4.3 and 4.4 together with Lemma 2.5, the preimage in G is homocyclic of the stated order or elementary abelian of the stated order, with the faithful action of Lemma 2.5; the former case is impossible by Lemma 2.4. Hence the rank of S is at least twice the rank of F0, contrary to hypothesis.
References

[1] J. L. Alperin and D. Gorenstein, "A vanishing theorem for cohomology," Proc. Amer. Math. Soc.
[2] M. Aschbacher, R. Kessar, and B. Oliver, Fusion Systems in Algebra and Topology, London Mathematical Society Lecture Note Series, Cambridge University Press, Cambridge.
[3] K. Andersen, B. Oliver, and J. Ventura, "Reduced, tame and exotic fusion systems," Proc. Lond. Math. Soc.
[4] M. Aschbacher, Finite Group Theory, second edition, Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge.
[5] M. Aschbacher, The Generalized Fitting Subsystem of a Fusion System, Mem. Amer. Math. Soc.
[6] M. Aschbacher, "Classifying finite simple groups and 2-fusion systems," ICCM Notices.
[7] M. Aschbacher, "On fusion systems of component type," preprint.
[8] C. Broto, R. Levi, and B. Oliver, "The homotopy theory of fusion systems," J. Amer. Math. Soc. (electronic).
[9] D. Craven, The Theory of Fusion Systems: An Algebraic Approach, Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge.
[10] L. Finkelstein, "Finite groups with a standard component isomorphic to M23," J. Algebra.
[11] L. Finkelstein, "Finite groups with a standard component isomorphic to HJ or HJM," J. Algebra.
[12] G. Glauberman and J. Lynd, "Control of fixed points and existence and uniqueness of centric linking systems," Invent. Math.
[13] D. Gorenstein, R. Lyons, and R. Solomon, The Classification of the Finite Simple Groups (Number 3, Part I, Chapter A: almost simple K-groups), Mathematical Surveys and Monographs, American Mathematical Society, Providence.
[14] G. Higman, Odd Characterizations of Finite Simple Groups, lecture notes, University of Michigan, Ann Arbor.
[15] J. Lynd, "A characterization of the 2-fusion system of L4(q)," J. Algebra.
[16] B. Oliver, "Existence and uniqueness of linking systems: Chermak's proof via obstruction theory," Acta Math.
[17] B. Oliver, "Reductions to simple fusion systems," Bulletin of the London Mathematical Society.
[18] B. Oliver, "Tameness of fusion systems of sporadic simple groups," preprint.
[19] R. Wilson, "The subgroup structure of the Lyons group," Math. Proc. Cambridge Philos. Soc.

Institute of Mathematics, University of Aberdeen, Fraser Noble Building, Aberdeen. Email address: ...
Department of Mathematics and Statistics, Saint Louis University, North Grand Blvd., Saint Louis. Email address: rainbolt@...
| 4 |
Prediction with a Short Memory

Sham Kakade, University of Washington (sham@...); Percy Liang, Stanford University (pliang@...); Vatsal Sharan, Stanford University (vsharan@...); Gregory Valiant, Stanford University (valiant@...)

Abstract. We consider the problem of predicting the next observation given a sequence of past observations, and consider the extent to which accurate prediction requires complex algorithms that explicitly leverage long-range dependencies. Perhaps surprisingly, our positive results show that for a broad class of sequences, there is an algorithm that predicts well on average, and bases its predictions only on the most recent observations together with a set of simple summary statistics of the past observations. Specifically, we show that for any distribution over observations, if the mutual information between past observations and future observations is upper bounded by I, then a simple Markov model over the most recent I/epsilon observations obtains expected KL error epsilon, and hence L1 error sqrt(epsilon), with respect to the optimal predictor that has access to the entire past and knows the data generating distribution. For a Hidden Markov Model with n hidden states, I is bounded by log n, a quantity that does not depend on the mixing time. We also show that a trivial prediction algorithm based on the empirical frequencies of length O(log n / epsilon) windows of observations achieves this error, provided the length of the sequence is d^{O(log n / epsilon)}, where d is the size of the observation alphabet. We also establish that this result cannot be improved upon, even for the class of HMMs, in the following two senses. First, for HMMs with n hidden states, a window length of log n / epsilon is necessary to achieve expected KL error epsilon or L1 error sqrt(epsilon). Second, the d^{Theta(log n / epsilon)} samples required to accurately estimate the Markov model when observations are drawn from an alphabet of size d are necessary for any computationally tractable algorithm, assuming the hardness of strongly refuting a certain class of CSPs.

1. Memory, modeling, and prediction

We consider the problem of predicting the next observation given a sequence of past observations, which could have complex and long-range dependencies. This sequential prediction problem is one of the most basic learning tasks and is encountered throughout natural language modeling, speech synthesis, financial forecasting, and a number of other domains with a sequential or chronological element. The abstract problem has received much attention over the last half century from multiple communities, including TCS, machine learning, and coding theory.

The fundamental question is: how must one consolidate and reference memories of the past in order to effectively predict the future? Given the immense practical importance of the prediction problem, there has been enormous effort to explore different algorithms for storing and referencing information about the sequence, which has led to recurrent neural networks, which encode the past in a real vector of fixed length that is updated at every time step. Specific classes of such networks with long short-term memory (LSTM) have recently become popular, and models with more explicit notions of memory include neural Turing machines, memory networks, the differentiable neural computer, etc. These models have been quite successful (see the references); nevertheless, they seem largely unable to consistently learn long-range dependencies, which are crucial in many settings, including language. In parallel with these efforts to design systems that explicitly use memory, there has been much effort in the neuroscience community to understand how humans and animals are able to make accurate predictions about their environment; many of these efforts also attempt to understand the computational mechanisms behind the formation of memories (memory consolidation) and retrieval.

Despite this long history of studying sequential prediction, many fundamental questions remain: How much memory is necessary to accurately predict future observations, and what properties of the underlying sequence determine this requirement? Must one remember significant information about the distant past, or is a short-term memory sufficient? What is the computational complexity of accurate prediction? The answers to these questions depend on the metric used to evaluate prediction accuracy. Aside from their intrinsic theoretical value, the answers could serve to guide the construction of effective practical prediction systems, as well as informing the discussion of the computational machinery of cognition and prediction in nature. In this work we provide insights into the first three questions. We begin by establishing the following proposition, which addresses the first two questions with respect to the pervasively used metric of average prediction error.
Proposition 1. Let M be any distribution over sequences with mutual information I(M) between the past observations and future observations. The best L-th order Markov model, which makes predictions based only on the most recent L observations, predicts the distribution of the next observation with average KL error at most I(M)/L, or average L1 error at most sqrt(2 I(M)/L), with respect to the actual conditional distribution of the next observation given all past observations.

The intuition behind this statement, and its proof, is the following: at each time, either we predict accurately and are unsurprised when the next observation is revealed, or, if we predict poorly and are surprised by its value, then that value must contain a significant amount of information about the history of the sequence, which can then be leveraged in subsequent predictions. In this sense, in every timestep where our prediction is bad, we learn information about the past. Because the mutual information between the history of the sequence and the future is bounded by I(M), if we make many consecutive bad predictions we will have captured nearly that amount of information about the history; hence, going forward over a long window spanning L observations, we expect to predict well on average.

This general proposition, framed in terms of the mutual information between past and future, has immediate implications for a number of models of sequential data. For hidden Markov models (HMMs) with n hidden states, the mutual information of the generated sequence is trivially bounded by log n, which yields the following corollary to Proposition 1. We state the corollary because it provides a helpful reference point in the discussion of the general proposition.

Corollary 1. Suppose the observations are generated by a hidden Markov model with at most n hidden states. The best Markov model of order log(n)/epsilon, which makes predictions based only on the most recent log(n)/epsilon observations, predicts the distribution of the next observation with average KL error at most epsilon, or L1 error at most sqrt(2 epsilon), with respect to the optimal predictor that knows the underlying HMM and has access to all past observations.

In the setting where the observations are generated according to an HMM with n hidden states, the best L-th order Markov model is easy to learn given sufficient data: it corresponds to the naive empirical model based on the previous L observations. Specifically, this model, given the most recent L observations, outputs the observed (empirical) distribution of the observation that has followed this length-L sequence in the past. To predict what comes next after a phrase (we defer the details), look at all previous occurrences of that subsequence and predict according to the empirical frequency of the subsequent word. The following theorem makes this claim precise.

Theorem 1. Suppose the observations are generated by a hidden Markov model with at most n hidden states and output alphabet of size d. There exists a window length L = O(log(n)/epsilon) and an absolute constant c such that, for any sequence of length at least d^{cL}, if the time t is chosen uniformly at random, then the expected L1 distance between the true distribution of the next observation given the entire history (and knowledge of the HMM) and the distribution predicted by the naive empirical L-th order Markov model based on the observations so far is bounded by sqrt(epsilon).

Theorem 1 states that the window length necessary to predict well is independent of the mixing time of the HMM in question: the result holds even if the model does not mix. The amount of data required before one can make accurate predictions using length-L windows scales exponentially in L; the condition in Theorem 1 that t is chosen uniformly, together with the lower bounds discussed below, argues that this exponential dependency is unavoidable.

Interpretation of the mutual information between past and future. The mutual information between past observations and future observations is an intuitive parameterization of the complexity of a distribution over sequences, though in fact the right quantity is a bit subtle. It is tempting to hope that this mutual information bounds the amount of memory that would be required to store all information about past observations relevant to the distribution of future observations. Consider the following setting: given a joint distribution over a past A and future B, suppose we wish to define a function that maps A to a binary advice string s(A), possibly of variable length, such that B is independent of A given s(A). As was shown by Harsha et al., there are joint distributions over (A, B) for which, even on average, the minimum length of the advice string necessary for this task is exponential in the mutual information between A and B. This setting can also be interpreted as a communication game in which one player generates A, the other must generate B, and the players have limited ability to communicate about A. So the mutual information is not even an upper bound on the amount of memory that an optimal algorithm, computationally unbounded and with complete knowledge of the distribution, would require. (It is worth noting that if the string B is sampled only after A is first mapped to s(A), with s defined via shared random functions, the advice length is related to the mutual information; that latter setting, where the summary is generated first, corresponds to allowing shared randomness in the communication game, but it is not the setting relevant to the sequential prediction problem.)

Given this, Proposition 1 might be surprising. The implication of the proposition and corollary is that a Markov model, a model oblivious to the long-range structure of the dependencies, can capture those dependencies well enough to predict accurately, provided that the order of the Markov model scales with the complexity of the distribution as parameterized by the mutual information between the past and future.
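Returning to the naive empirical model of Theorem 1, the following minimal sketch (ours) shows the predictor in code: to predict the next symbol, look up all previous occurrences of the most recent window of length L and output the empirical distribution of the symbol that followed. Practical systems would add smoothing, which is omitted here.

    # Naive empirical L-th order Markov predictor.
    from collections import Counter, defaultdict

    class EmpiricalMarkov:
        def __init__(self, ell):
            self.ell, self.counts = ell, defaultdict(Counter)

        def update(self, seq):
            """Record every (window, next-symbol) pair in seq."""
            for t in range(self.ell, len(seq)):
                self.counts[tuple(seq[t - self.ell:t])][seq[t]] += 1

        def predict(self, history):
            """Empirical distribution of the symbol following the last window."""
            c = self.counts[tuple(history[-self.ell:])]
            total = sum(c.values())
            if total == 0:
                return {}                     # unseen window: no estimate
            return {sym: k / total for sym, k in c.items()}

    m = EmpiricalMarkov(ell=2)
    m.update("abcabdabcabd")
    print(m.predict("ab"))                    # {'c': 0.5, 'd': 0.5}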
Strikingly, this parameterization is indifferent to whether the dependencies in the sequence are short- or long-range, to whether the HMM mixes quickly, slowly, or not at all, and to the nature of those dependencies: provided the mutual information is small, accurate prediction is possible based only on the most recent observations. See Figure 1 for a concrete illustration of this result in the setting of an HMM that does not mix and has long-range dependencies.

[Figure 1: Depiction of an HMM whose states repeat a given length-k binary sequence of outputs, and which hence does not mix. Corollary 1 and Theorem 1 imply that accurate prediction is possible based on short windows of O(log k) observations.]

At a time when increasingly complex models such as recurrent neural networks and neural Turing machines are in vogue, these results serve as a baseline theoretical result; they also help explain the practical success of simple Markov models with smoothing, which have been crucial components of machine translation and speech recognition systems, although recent recurrent neural networks have yielded empirical gains (see the references). Still, current models seem largely incapable of successfully capturing long-range dependencies. (One amusing example is the recent short film Sunspring, whose script was automatically generated by an LSTM: locally, each sentence of the dialogue mostly makes sense, though it lacks cohesion over longer time frames and an overarching plot trajectory, despite the brilliant acting.) In settings such as natural language, capturing long-range dependencies seems crucial to achieving strong results; indeed, the main message of a narrative is rarely conveyed in any single short segment. More generally, intelligence seems to involve the ability to judiciously decide which aspects of the observation sequence are worth remembering and to update a model of the world based on those aspects. Thus, in such settings, Proposition 1 can be interpreted as a negative result: average error is not a good metric for training and evaluating models meant to capture long-range dependencies. It is important to note that average prediction error is the metric ubiquitously used in practice, in the natural language processing domain and elsewhere; our results suggest that a different metric might be essential to driving progress towards systems that capture long-range dependencies and leverage memory in meaningful ways. We discuss the possibility of alternate prediction metrics in Section 1.1.

In many settings, such as financial prediction and lower level language prediction tasks as used in OCR and speech recognition, average prediction error is a meaningful metric, and there the result of Proposition 1 is extremely positive: no matter the nature of the dependencies in, say, financial markets, it is sufficient to learn a Markov model, and as one obtains more data, one can learn a higher and higher order Markov model, with average prediction accuracy continuing to improve. For such applications, the question becomes computational: the naive approach to learning an L-th order Markov model over a domain of alphabet size d might require d^L space to store the data and learn. From a computational standpoint, is there a better algorithm, and what properties of the underlying sequence imply that such models can be learned or approximated efficiently and with less data?

The computational lower bounds described below provide some perspective on these computational considerations. Our positive results show that accurate prediction is possible via an algorithmically simple Markov model that depends only on the most recent observations and is learned in an algorithmically straightforward fashion, simply by using the empirical statistics of short sequences of examples compiled over a sufficient amount of data. Nevertheless, such a Markov model has d^L parameters and hence requires an amount of data that scales accordingly, where d is the size of the observation alphabet. This prompts the question of whether it is possible to learn a successful predictor based on significantly less data. We show that this is not possible in general, even in the special case where the data sequence is generated by an HMM with n hidden states, assuming a natural hardness assumption: even though HMMs with n hidden states and output alphabet of size d are defined via a polynomial number of parameters, and that many samples are sufficient from an information theoretic standpoint to learn a
model that predicts accurately, learning an HMM is computationally hard (see the references). This begs the question of whether accurate average prediction can be achieved via a computationally efficient algorithm with an amount of data significantly less than the d^{log n / epsilon} that the naive Markov model approach would require. Our main lower bound shows that there exists a family of HMMs for which this sample complexity requirement is necessary for any computationally efficient algorithm that predicts accurately on average. Specifically, we show that this hardness holds provided that the problem of strongly refuting a certain class of CSPs is hard, as has been conjectured and studied in related works; see the lower bounds section for a description of this class and a discussion of the conjectured hardness.

Theorem 2. Assuming the hardness of strongly refuting a certain class of CSPs, for all sufficiently large n and a fixed constant error, there exists a family of HMMs with n hidden states and output alphabet of size d such that any polynomial time algorithm that achieves the stated average error with respect to the optimal predictor for a random HMM in the family must observe d^{Omega(log n / epsilon)} observations from the HMM, while the mutual information of the sequence generated by any HMM in the family is bounded by O(log n).

Theorem 2 directly implies that there are families of distributions over sequences with mutual information I for which any computationally efficient algorithm requires a number of samples exponential in I/epsilon to achieve small average error; this bound holds when d is large compared to log n / epsilon. A different but equally relevant regime is where the alphabet size d is small compared to the scale of the dependencies in the sequence, for example when predicting characters. We show lower bounds in this regime of the same flavor as Theorem 2, except based on the problem of learning a noisy parity function; because of the slightly subexponential algorithm of Blum et al. for this task, we lose at least a superconstant factor in the exponent in comparison to the positive results of Proposition 1.

Proposition 2. Let f(k) denote a lower bound on the amount of time and samples required to learn parity with noise on uniformly random length-k inputs. For all sufficiently large n and a fixed constant error, there exists a family of HMMs with n hidden states such that any algorithm achieving the stated average prediction error with respect to the optimal predictor for a random HMM in the family requires at least f(Omega(log n)) time or samples.

Finally, we also establish the information theoretic optimality of the results of Proposition 1, in the sense that, even among computationally unbounded prediction algorithms that predict based only on the most recent observations, the stated average prediction error is necessary.

Proposition 3. There is an absolute constant c such that, for all sufficiently small epsilon and sufficiently large n, there exists an HMM with n hidden states such that it is not possible to obtain average KL prediction error less than epsilon, or L1 error less than sqrt(epsilon), with respect to the optimal predictor, while using only the most recent c log(n)/epsilon observations to make each prediction.

1.1. Future directions

As mentioned above, in settings where capturing long-range dependencies seems essential, it is worth reconsidering the choice of average prediction error as the metric used to train and evaluate models. One possibility is to evaluate an algorithm on a chosen subset of time steps instead of all time steps; a naive Markov model would then no longer do well, since it does well only on the time steps where prediction is easy. In the context of natural language processing, learning with respect to such a metric intuitively corresponds to training a model to do well on, say, a question answering task instead of a language modeling task. A fertile middle ground between average error, which gives much reward for correctly guessing common words like "the", and such all-or-nothing error might be a prediction error that provides more reward for correctly guessing less common observations. It seems possible, however, that the techniques used to prove Proposition 1 can be extended to yield analogous statements for such error metrics.

Given the many settings where average error is a natural metric and the upper bounds of Proposition 1 apply, it is natural to consider what additional structure might be present that avoids the conditional computational lower bounds of Theorem 2. One possibility is robustness: for example, the property that a Markov model would continue to predict well even if each observation were obscured or corrupted with some small probability. The lower bound instances of Theorem 2 and Proposition 2 rely on parity-based constructions and are hence sensitive to noise and corruptions. For learning over product distributions, there are well known connections between noise stability and approximation by low-degree polynomials; additionally,
low-degree polynomials can be learned agnostically over arbitrary distributions via polynomial regression. It is tempting to hope that this thread could be made rigorous by establishing a connection between natural notions of noise stability over arbitrary distributions and accurate polynomial approximations; such a connection could lead to significantly better sample complexity requirements for prediction on robust distributions over sequences, perhaps requiring only polynomial data. Additionally, such approaches to learning succinct representations of large Markov models may inform the many practical prediction systems that currently rely on Markov models.

1.2. Related work

Parameter estimation. It is interesting to compare using a Markov model for prediction with methods that attempt to properly learn an underlying model. For example, method of moments algorithms allow one to estimate a certain class of hidden Markov models with polynomial sample and computational complexity; these ideas have been extended to learning neural networks and RNNs using different methods, and Arora et al. showed how to learn certain random deep neural networks. Learning the model directly can result in better sample efficiency and also provides insights into the structure of the data; the major drawback of these approaches is that they usually require the true distribution to be extremely close to the model family being learned, a strong assumption that often does not hold in practice.

Universal prediction and coding theory. At the other end of the spectrum is the class of online learning methods, which assume nothing about the data generating distribution, which may even be adversarial. However, the nature of those results is fundamentally different from ours: whereas we compare against the perfect model that can look at the infinite past, online learning methods typically compare against a fixed set of experts, which is much weaker. There is also much work on sequential prediction in the information theory and statistics communities. The philosophy of those approaches is often adversarial, with perspectives ranging from minimum description length to individual sequence settings, where no model of the data distribution process is assumed and worst case guarantees over the data generation process are sought, regret being the notion of optimality. In this line of work on minimax rates, Bayesian algorithms feature prominently, having favorable guarantees in the sequential setting. With regards to minimax rates, there are exact characterizations of the minimax strategy, though the applicability of this approach is often limited to settings where the strategies available to the learner are relatively few, as a certain normalizing constant must exist. More generally, there is considerable work on regret in statistical settings, and, with regards to log loss more broadly, on information consistency (convergence in distribution) and minimax rates for statistical estimation in parametric families; in those settings the minimax risk also admits characterizations in terms of mutual information. There is also work on universal lossless data compression, such as the celebrated Lempel-Ziv algorithm; that setting is rather different from ours, being one of coding the entire sequence in blocks rather than incurring prediction loss.

Sequential prediction in practice. This work was initiated by a desire to understand the role of memory in sequential prediction and the belief that modeling long-range dependencies is important for complex tasks such as understanding natural language. There are many proposed models with explicit notions of memory, including recurrent neural networks, long short-term memory (LSTM) networks, neural Turing machines, memory networks, differentiable neural computers, etc. While these models have been quite successful in practice, they still largely fail to capture many long-range dependencies; in the case of LSTMs, for example, one can show that, to be stable, they must forget the past exponentially quickly. To gain insight into this problem we began by analyzing the simplest Markov predictor and found, to our surprise, that it performed nearly as well as one could hope.

2. Proof sketch of Theorem 1

We provide a sketch of the proof of Theorem 1, which is stronger than Proposition 1 in that it applies specifically to sequences generated by a hidden Markov model. The core of the proof is the following lemma, which guarantees that the Markov model that knows the true marginal probabilities of short sequences will end up predicting well, and, additionally, that the good expected prediction holds with respect to the randomness of the HMM within the short window, as opposed to the randomness over where the window begins, as in our more general results. In settings such as
financial forecasting, this additional guarantee is particularly pertinent: one need not worry about the possibility of choosing an unlucky time to begin trading, as long as one plans to trade for a duration that spans an entire short window. Beyond the extra strength of this result for HMMs, the proof approach is intuitive and pleasing in comparison to the more direct proof of the general proposition. We first state the lemma and sketch its proof, and conclude the section by describing how it yields Theorem 1.

Lemma 1. Consider an HMM with at most n hidden states, let the hidden state at time 0 be chosen according to an arbitrary distribution, and denote the observation at time t by x_t. Let OPT_t denote the conditional distribution of x_t given the observations so far together with knowledge of the hidden state at time 0, and let M_t denote the conditional distribution of x_t given only the observations so far, which corresponds to the naive s-th order Markov model that knows the joint probabilities of sequences of the first s observations. Then, with the stated probability over the choice of initial state, the average over the window of the expected distance between OPT_t and M_t is small, where the expectation is with respect to the randomness in the outputs.

The proof of the lemma hinges on establishing a connection between the Bayes optimal model, which knows the HMM and the initial hidden state at time 0 and at each time predicts the true distribution of the next observation given the history, and the naive s-th order Markov model, which knows only the joint probabilities of sequences of s observations given that the initial state is drawn according to the prior, and predicts accordingly; the latter model is precisely the model that knows the HMM and the prior but not the realized hidden state at time 0. To relate the two models, we proceed via a martingale argument that leverages the following intuition: at each time step, either the two predictive distributions differ significantly, in which case we expect the next observation to contain a significant amount of information about the hidden state at time zero, improving subsequent predictions, or they are already close. The submartingale we construct precisely captures the sense in which any significant deviation between the predictions makes it likely that the posterior probability of the initial state increases significantly relative to its prior probability. Formally, letting pi_t denote the distribution of the hidden state at time 0 conditioned on the first t observations and h the true hidden state at time 0, we show that the sum of log(pi_t(h)/pi_0(h)) and the accumulated gap between the expected and realized per-step divergences of OPT from M is a submartingale. The fact that it is a submartingale is not difficult: defining the conditional distribution of the next output when the initial state is drawn from pi_t, so that M_t is the corresponding convex combination, one verifies the submartingale property by noting that, by Bayes rule, the change in the first term at time step t is the log ratio of the probability of observing the output x_t according to OPT_t versus according to M_t, whose expectation is related to the prediction error via Pinsker's inequality.

From a high level, the proof then proceeds via concentration bounds (Azuma's inequality) to show that, with high probability, if the prediction error in the first s = O(log(n)/epsilon) timesteps is large, then the first term of the submartingale is likely also large; in that case the posterior distribution of the hidden state at time 0 is sharply peaked at the true hidden state, unless an event of negligible mass occurred, since the first term is bounded by roughly log n. There are several slight complications to this approach, including the fact that the submartingale we construct is not necessarily nicely concentrated or of bounded differences, as the first term could change arbitrarily. We address this by noting that the first term cannot decrease by much except with tiny probability, a large decrease corresponding to the posterior probability of the true hidden state sharply dropping; in the other direction we simply clip the deviations, preventing the first term from exceeding roughly log n per timestep, and show that the submartingale property continues to hold despite the clipping by proving a modified version of Pinsker's inequality.

Lemma 2 (modified Pinsker's inequality). For two distributions, define a truncated KL divergence in which the log-likelihood ratio of each outcome is clipped below at a fixed level; for a suitable fixed clipping level, the truncated divergence still dominates a constant times the squared L1 distance.

Given Lemma 1, the proof of Theorem 1 follows relatively easily. Recall that Theorem 1 concerns the expected prediction error at a uniformly random timestep of the model M_emp corresponding to the empirical distribution of the length-s windows that have occurred so far. The connection between Lemma 1 and Theorem 1 is established by showing that, with high probability, M_emp is close to the model defined by the empirical distribution of the unobserved hidden states, i.e., the distribution corresponding to drawing the hidden state from that empirical distribution and generating the window accordingly. We provide the full proof in the Appendix.
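A small simulation in the spirit of Theorem 1 (ours; all parameters are illustrative assumptions) compares the Bayes optimal predictor, which knows the HMM but not the initial state and runs the forward algorithm over the full history, against the naive empirical Markov predictor from the earlier sketch, trained in-sample on the same sequence.

    # Compare optimal vs. empirical L-th order predictions on a toy 2-state HMM.
    import random

    T = [[0.85, 0.15], [0.2, 0.8]]        # hypothetical transition matrix
    O = [[0.9, 0.1], [0.3, 0.7]]          # hypothetical emission matrix

    def sample(length):
        h, out = 0, []
        for _ in range(length):
            h = 0 if random.random() < T[h][0] else 1
            out.append(0 if random.random() < O[h][0] else 1)
        return out

    def optimal_predictions(x):
        """P(x_t = 0 | x_1..x_{t-1}) for each t, via the forward recursion."""
        bel, preds = [0.5, 0.5], []       # uniform prior over the hidden state
        for sym in x:
            pred = [sum(bel[g] * T[g][h] for g in (0, 1)) for h in (0, 1)]
            preds.append(pred[0] * O[0][0] + pred[1] * O[1][0])
            bel = [pred[h] * O[h][sym] for h in (0, 1)]
            z = sum(bel)
            bel = [b / z for b in bel]
        return preds

    x, ell = sample(200000), 4
    m = EmpiricalMarkov(ell)              # from the earlier sketch
    m.update(x)
    opt = optimal_predictions(x)
    errs = [2 * abs(m.predict(x[t - ell:t]).get(0, 0.0) - opt[t])   # L1, binary alphabet
            for t in range(ell, len(x))]
    print(sum(errs) / len(errs))          # small: short windows already suffice here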
3. Definitions and notation

Before proving the general Proposition 1, we introduce the necessary notation. For a random variable X we denote its distribution by P(X). The mutual information between two random variables X and Y is defined as I(X; Y) = H(X) - H(X | Y), where H(X) is the entropy of X and H(X | Y) is the conditional entropy of X given Y. The conditional mutual information is defined as I(X; Y | Z) = H(X | Z) - H(X | Y, Z) = E[ log( P(x | y, z) / P(x | z) ) ], and KL(P || Q) = sum_x P(x) log( P(x)/Q(x) ) is the KL divergence between the distributions P and Q. Note that, slightly abusing notation, we write KL(X || Y) for the divergence between the corresponding distributions, and we ignore the assignment of the conditioning variables when it is clear from context. Mutual information obeys the chain rule I(X1, X2; Y) = I(X1; Y) + I(X2; Y | X1).

Given a distribution M over infinite sequences generated by some model, let x_t be the random variable denoting the output at time t, and use the shorthand x_i^j to denote the collection of random variables for the subsequence of outputs from time i to time j. The distribution is stationary if the joint distribution of any subset of the sequence of random variables is invariant with respect to shifts in the time index.

We are interested in studying how well the output at time t can be predicted by an algorithm that looks only at the past L outputs. A predictor maps a sequence of observations to a predicted distribution of the next observation. We denote by Q_t the predictive distribution at time t of the Bayes optimal predictor using windows of length L, hence the prediction of the best L-th order Markov predictor provided the true distribution of the data, and we let P_t be the prediction of the Bayes optimal predictor looking at the entire history of the model. We evaluate predictions with respect to a long time window of length T. The crucial property of the distribution relevant to our results is the mutual information between past and future observations: for a stochastic process generated by a model M, define the mutual information I(M) of the model as the mutual information between the past and the future, averaged across the window. If the process is stationary the terms are equal for all time steps, and hence I(M) equals the mutual information between the infinite past and the infinite future at any fixed time.

We compare the prediction of the L-th order predictor against the optimal predictor. Let F measure the distance between two predictive distributions; in this work we consider F to be the L1 distance, the relative zero-one loss, or the KL divergence. The L1 distance between two distributions is defined in the standard way, and we define the relative zero-one loss as the difference between the zero-one loss of the optimal predictor and that of the algorithm's predictor. We define the expected loss of the L-th order predictor at time t, with respect to the optimal predictor and the loss function F, as the expectation of F(P_t, Q_t), and we define the loss of any algorithm's predictor in the same fashion; the latter combines the error in estimating the true conditional distribution of the model with the error of predicting with short windows. We now establish the general Proposition 1, which applies beyond the HMM setting, with an elementary and purely information theoretic proof.

Proposition 1 (restated). For any distribution M with mutual information I(M) between past and future observations, the best L-th order Markov model obtains average KL error at most I(M)/L with respect to the optimal predictor with access to the infinite history; moreover, any predictor whose average error in estimating the joint probabilities over windows of length L+1 is small gets average error larger by only that estimation error.

Proof. We bound the expected error by splitting the time interval into blocks of length L; we consider a block starting at time s, find the average error of the predictor over the times in the block, and average across blocks. To begin, note that we can decompose the error as the sum of the error due to not knowing the past history beyond the most recent L observations and the error in estimating the true joint distribution of the data over a length-(L+1) block. Consider a time t in the block; recalling the definition of conditional mutual information, it is easy to verify the relation

    E[ KL( P_t || Q_t ) ] = I( x_t ; x_{-inf}^{t-L-1} | x_{t-L}^{t-1} ),

which expresses the intuition that if the current output carries a lot of extra information about the distant past, then we cannot predict it as well using the recent L observations as could be done using the entire past. We upper bound the total error over the window by expanding with the chain rule:

    sum over t in the block of I( x_t ; x_{-inf}^{t-L-1} | x_{t-L}^{t-1} )
        <= I( x_{-inf}^{s-1} ; x_s^{s+L-1} ) <= I(M)-type bound;

the proposition follows by averaging the error across the L time steps of the block and over the blocks in the time window.

Proposition 1 also directly gives guarantees for the scenario where the task is to predict the distribution of the next block of outputs instead of the next immediate output: as the KL divergence obeys a chain rule, an easy corollary relates the error in the two settings. The following statement also trivially applies to the zero-one loss relative to the optimal predictor, as the expected relative zero-one loss at any time step is at most the L1 loss at that time step.

Corollary 2. For any distribution M with mutual information I(M) between past and future observations, the best L-th order Markov model obtains average L1 error at most sqrt(2 I(M)/L) with respect to the optimal predictor with access to the infinite history; moreover, any predictor whose average L1 error in estimating the joint probabilities is small gets average error larger by only that estimation error.

Proof. Decompose the error as the sum of the estimation error and the error due to not knowing the past history, using the triangle inequality for the L1 distance. By Pinsker's inequality and Jensen's inequality, using Proposition 1,

    (1/T) sum_t E[ || P_t - Q_t ||_1 ]
        <= (1/T) sum_t sqrt( 2 E[ KL( P_t || Q_t ) ] ) <= sqrt( 2 I(M)/L ),

where the last step again uses Jensen's inequality.
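To make the central quantity I(M) concrete, the following brute-force computation (ours, not from the paper) enumerates a tiny stationary HMM exactly and evaluates the mutual information between a length-w past block and a length-w future block; the point of the printout is the bound I(past; future) <= log(number of hidden states).

    # Exact I(past; future) for a toy 2-state, 2-symbol stationary HMM.
    import itertools, math

    T = [[0.9, 0.1], [0.1, 0.9]]       # hypothetical transition matrix
    O = [[0.8, 0.2], [0.2, 0.8]]       # hypothetical emission matrix
    pi = [0.5, 0.5]                    # stationary distribution of T

    def block_prob(x, start):          # P(x | h_0 = start) via the forward pass
        f = [1.0 if h == start else 0.0 for h in range(2)]
        for sym in x:
            f = [sum(f[g] * T[g][h] for g in range(2)) * O[h][sym] for h in range(2)]
        return sum(f)

    w = 3
    joint, px, py = {}, {}, {}
    for x in itertools.product(range(2), repeat=2 * w):
        p = sum(pi[h] * block_prob(x, h) for h in range(2))
        a, b = x[:w], x[w:]
        joint[(a, b)] = joint.get((a, b), 0) + p
        px[a] = px.get(a, 0) + p
        py[b] = py.get(b, 0) + p

    I = sum(p * math.log2(p / (px[a] * py[b])) for (a, b), p in joint.items() if p > 0)
    print(I, "<=", math.log2(2))       # bounded by log2(# hidden states) = 1 bit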
variables collection sets clauses size whose elements consist variables negations instance satisfiable exists assignment variables predicate evaluates every clause generally value instance maximum assignments ratio number satisfied clauses total number clauses lower bounds based presumed hardness distinguishing random instances certain class csp versus instances csp high value much work attempting characterize difficulty notion leverage complexity class csps first defined studied definition complexity class defined predicate largest exists distribution supported support independent uniform independent distribution exists example classes corresponding respectively predicates pxor xor boolean inputs psat inputs predicates support uniform distributions uniform distributions hence complexity case uniform distribution restricted support pxor uniform distribution also supported random instance csp predicate instance clauses chosen uniformly random selecting variables uniformly independently negating variable probability random instance value close expectation uniform distribution contrast planted instance generated first fixing satisfying assignment sampling clauses satisfied uniformly choosing variables picking negations according independent distribution associated predicate hence planted instance always value noisy planted instance planted assignment noise level generated sampling consistent clauses probability random clauses probability hence high probability value hardness results based distinguishing whether csp instance random high value one would expect difficulty distinguishing random instances noisy planted instances decreases number sampled clauses grows following conjecture feldman asserts sharp boundary number clauses problem becomes computationally intractable remaining information theoretically easy notation made explicit appendix conjectured csp hardness conjecture let distribution variables complexity randomized algorithm given access distribution equals either uniform distribution noisy planted distribution planted distribution decides correctly whether probability least needs clauses feldman proved conjecture class statistical algorithms recently kothari showed polynomial time sos algorithm requires clauses refute random instances csp complexity hence proving conjecture semidefinite programming relaxation refutation note tight allen give sos algorithm refuting random csps beyond regime recent papers daniely daniely also used presumed hardness strongly refuting random random instances small number clauses derive conditional hardness learning results first attempt encode sequential model construct model outputs randomly chosen literals first time steps noisy predicate value final time step clauses csp correspond samples model algorithm would need solve csp predict final time step however outputs final time step random trivial prediction algorithm guesses randomly try predict output time would near optimal get strong lower bounds statistical algorithms extension statistical query model algorithms access samples distribution instead access estimates expectation bounded function sample oracle feldman point almost algorithms work random data also work limited access samples refer feldman details examples output functions literals time steps still ensuring functions remain collectively hard invert without large number samples use elementary results theory error correcting codes achieve prove hardness due reduction specific family csps conjecture applies choosing carefully obtain 
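As a hedged illustration of the two distributions being distinguished, the sketch below (ours) generates uniformly random k-XOR clauses versus noisy planted ones. XOR serves only as a concrete high-complexity predicate; the parameter names (n, k, m, eta) are ours.

```python
import random

def random_clause(n, k, rng):
    """k distinct variables with uniformly random negations: ((var, neg), ...)."""
    vars_ = rng.sample(range(n), k)
    return tuple((v, rng.randint(0, 1)) for v in vars_)

def planted_xor_clause(n, k, x, rng):
    """Clause satisfied by planted assignment x: literal values XOR to 1."""
    vars_ = rng.sample(range(n), k)
    negs = [rng.randint(0, 1) for _ in range(k - 1)]
    partial = 0
    for v, neg in zip(vars_[:-1], negs):
        partial ^= x[v] ^ neg
    # pick the last negation so the XOR of all literal values is 1
    negs.append(partial ^ x[vars_[-1]] ^ 1)
    return tuple(zip(vars_, negs))

def noisy_planted_instance(n, k, m, x, eta, rng):
    """With prob 1-eta a planted (satisfied) clause, else a uniform random one."""
    return [planted_xor_clause(n, k, x, rng) if rng.random() > eta
            else random_clause(n, k, rng) for _ in range(m)]

rng = random.Random(0)
x = [rng.randint(0, 1) for _ in range(20)]
print(noisy_planted_instance(n=20, k=3, m=5, x=x, eta=0.1, rng=rng)[0])
```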
dependence mutual information error upper bounds implied proposition provide short outline argument followed detailed proof appendix sketch construction proof construct sequential model making good predictions model requires distinguishing random instances variables instances high value output alphabet size choose mapping characters variables negations clause planted assignment csp let string values assigned literals model randomly uniformly outputs characters time correspond literals csp hence outputs correspond clause csp specified later construct binary matrix correspond good code time steps probability model outputs mod clause associated outputs first time steps remaining probability model outputs uniformly random bits note mutual information outputs time predicted claim simulated hmm hidden states done follows every time step maintain hidden states corresponding hidden states corresponding states stores current value bits takes total hidden states use hidden states time step output bits finally need additional hidden states output uniform random bits time probability accounts total hidden states note larger respect higher cost terms average prediction error failing correctly predict outputs time tuning allows control number hidden states mutual information average error incurred computationally constrained predictor define csp terms collection predicates conjecture directly apply defined collection predicates instead single one later show reduction related csp defined single predicate conjecture holds predicate set satisfy mod hence clause additional label determines satisfying assignments label output sequential model time hence planted assignment set satisfying clauses csp clauses mod label clause define noisy planted distribution clauses first uniformly randomly sampling label sampling consistent clause probability otherwise probability sample uniformly random clause let uniform distribution uniformly chosen labels show conjecture implies distinguishing distributions hard without sufficiently many clauses gives hardness results desire sequential model algorithm obtains low prediction error outputs time used distinguish instances csp high value random instances algorithm obtains low prediction error random instances hence hardness strongly refuting csp implies hardness making good predictions sketch argument conjecture implies hardness strongly refuting csp define another csp show reduces predicate csp set mod hence planted assignment set satisfying clauses csp clauses nullspace planted distribution clauses uniform satisfying clauses probability probability add uniformly random construct set satisfying assignments vectors nullspace supports uniform distribution conjecture polynomial time algorithm distinguish planted distribution uniformly randomly chosen clauses less clauses show choosing matrix whose null space uniform corresponds finding binary linear code rate least relative distance existence guaranteed bound next sketch reduction key idea csps defined linear equations clause satisfied assignment variables clause mod therefore mod mod satisfies mod clause assignment mod variables obtained clause switching literal retaining hence label efficiently convert clause clause desired label satisfied particular assignment variables satisfied assignment variables also hard ensure uniformly sample consistent clause original clause uniformly sampled consistent clause provide small example illustrate sequential model constructed let let output alphabet model letter maps variable maps 
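A toy rendering of the sequential model sketched above (ours, with an arbitrary placeholder G rather than the good code the construction requires): k characters encoding uniformly random literals, followed by m bits equal to G times the literal values mod 2 with probability 1 - eta, and uniform noise otherwise.

```python
import numpy as np

def sample_sequence(n, k, m, G, x, eta, rng):
    """One 'clause' worth of output: k literal characters, then m (noisy) parity bits.

    A character is a pair (variable index, negation bit); x is the hidden
    planted assignment; G is an m-by-k binary matrix (ideally a good code).
    """
    vars_ = rng.choice(n, size=k, replace=False)
    negs = rng.integers(0, 2, size=k)
    z = x[vars_] ^ negs                        # literal values under x
    if rng.random() > eta:
        bits = (G @ z) % 2                     # structured output
    else:
        bits = rng.integers(0, 2, size=m)      # pure noise
    return list(zip(vars_, negs)), bits

rng = np.random.default_rng(1)
n, k, m = 30, 4, 3
G = rng.integers(0, 2, size=(m, k))            # placeholder, not a good code
x = rng.integers(0, 2, size=n)
chars, bits = sample_sequence(n, k, m, G, x, eta=0.1, rng=rng)
print(chars, bits)
```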
similarly let planted assignment defines particular model output model first three time steps corresponds clause literals final time step probability model outputs mod clause planted assignment probability outputs uniform random bit algorithm make good prediction final time step needs able distinguish output final time step always random bit dependent clause hence needs distinguish random instances csp planted instances theorem deferring proof appendix theorem assuming conjecture sufficiently large fixed constant exists family hmms hidden states output alphabet size polynomial time prediction algorithm achieves average error relative error less probability greater randomly chosen hmm family needs requires log samples hmm window length algorithm uses prediction lower bound small alphabets lower bounds sample complexity binary alphabet case based average case hardness decision version parity noise problem reduction straightforward parity noise problem bit inputs given examples drawn uniformly along noisy labels mod unknown support parity function classification noise noise level let distribution examples parity noise instance support parity function noise level let distribution examples labels label chosen uniformly independent example strength lower bounds depends level hardness parity noise currently fastest algorithm problem due blum runs time samples log define function definition define function uniformly random support probability least choice randomized algorithm distinguish success probability greater randomness examples algorithm requires time samples model natural sequential version parity noise problem example coupled several parity bits denote model time outputs model uniform let vector outputs time outputs next time steps given mod random noise entry random variable noise level note full chosen uniformly random distribution uniform also binary bits time predicted using past inputs higher alphabet case simulated hmm hidden states see section define set matrices specifies family sequential models let set matrices corresponding rows first columns full row rank need restriction lower bound otherwise could small dependence parity bits inputs time denote family models lemma shows high probability choice distinguishing outputs model random examples requires time examples lemma let chosen uniformly random set probability least choice randomized algorithm distinguish outputs model distribution random examples success probability greater randomness examples algorithm needs time examples proof proposition follows lemma similar proof theorem proposition defined definition sufficiently large fixed constant exists family hmms hidden states algorithm achieves average relative loss average loss average loss less probability greater randomly chosen hmm family needs requires log time samples samples hmm window length algorithm uses prediction information theoretic lower bounds show information theoretically windows length necessary get expected relative loss less expected relative loss loss bounded square automatically implies window length requirement also tight loss loss fact easy show tightness loss choose simple model emits uniform random bits time repeats bits time time one choose get desired error mutual information get lower bound loss use probabilistic method argue exists hmm long windows required perform optimally respect loss hmm state lower bound rough proof idea deferring details appendix proposition absolute constant sufficiently large exits hmm states information theoretically 
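A minimal sketch, assuming the setup above, of one sample from the sequential parity-with-noise model: n uniform input bits followed by s noisy parities of them. The matrix A indexes the model family; all names here are ours.

```python
import numpy as np

def parity_sequence(A, eta, rng):
    """Emit n uniform input bits, then the noisy parities A @ x mod 2."""
    s, n = A.shape
    x = rng.integers(0, 2, size=n)
    noise = (rng.random(s) < eta).astype(int)   # each parity flipped w.p. eta
    y = (A @ x + noise) % 2
    return np.concatenate([x, y])

rng = np.random.default_rng(2)
n, s = 16, 4
A = rng.integers(0, 2, size=(s, n))             # the family is indexed by A
print(parity_sequence(A, eta=0.05, rng=rng))
```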
possible get average relative loss loss less using windows length smaller log loss less using windows length smaller log illustrate construction fig provide proof idea respect fig figure lower bound construction want show predictor using windows length make good prediction transition matrix hmm permutation output alphabet binary state assigned label determines output distribution states labeled emit probability states labeled emit probability randomly uniformly choose labels hidden states randomness choosing labels permutation show expected error predictor large means must exist permutation predictor incurs high error rough proof idea follows say markov model hidden state time unknown predictor outputs first three time steps predictor looks outputs time making prediction time show high probability choice labels hidden states outputs output hidden states close hamming distance label segment hidden states say hence predictor using past outputs distinguish whether string emitted hence make good prediction time actually need show many segments like whose label close proof proceeds via simple concentration bounds proof theorem theorem suppose observations generated hidden markov model hidden states output alphabet size exists window length absolute constant chosen uniformly random expected distance true distribution given entire history knowledge hmm distribution predicted naive empirical order markov model based bounded proof let distribution hidden states probability ith hidden state empirical frequency ith hidden state time normalized consider predictor makes prediction distribution observation given observations based true distribution hmm conditioned observations distribution hidden state time show expectation gets small error averaged across time steps respect optimal prediction distribution knows hidden state time order show need first establish true hidden state time small probability high probability choice lemma probability choice hidden state time probability least proof consider ordered set time indices hidden state sets corresponding hidden states probability less cardinality sum cardinality small sets hence probability uniformly random lies one sets consider set time indices corresponding hidden state probability least among first time indices set hidden state probability least fraction bad time steps corresponding hidden state probability least total fraction bad time steps therefore using union bound failure probability hidden state time probability least consider time index simplicity assume let denote conditional distribution given observations knowledge hidden state time let denote conditional distribution given given hidden state time distribution lemma true hidden state time probability least log kop expectation respect randomness outputs time lemma randomly chosen probability hidden state time probability less prior distribution hence using lemma expected average error predictor across log consider predictor predicts given according empirical distribution given based observations time argue predictions close expectation predictions recall prediction time true distribution hmm conditioned observations distribution hidden state time drawn let refer prediction time refer prediction time show small expectation using martingale concentration argument consider string length let empirical probability string time true probability string given hidden state time distributed aim show small define random variable denotes indicator function defined claim martingale respect filtration 
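An illustrative simulation (ours) of the lower-bound construction in the figure: the hidden chain is a single n-cycle, each state carries a uniformly random binary label, and a state emits its label with probability 1 - delta.

```python
import numpy as np

def cycle_hmm_outputs(n, T, delta, rng):
    """Deterministic cycle over n states; state i emits label[i] w.p. 1 - delta."""
    label = rng.integers(0, 2, size=n)          # random labelling of hidden states
    state = rng.integers(n)                     # uniform start state
    out = []
    for _ in range(T):
        flip = rng.random() < delta
        out.append(int(label[state]) ^ int(flip))
        state = (state + 1) % n                 # permutation = one fixed n-cycle
    return out

rng = np.random.default_rng(3)
print(cycle_hmm_outputs(n=32, T=20, delta=0.1, rng=rng))
```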
verify note therefore hence martingale also note hence using azuma inequality lemma note azuma inequality union bound strings length failure probability similarly strings length estimated probability string error failure probability conditional distribution given observations ratio joint distributions therefore long empirical distributions length length strings estimated error string probability least conditional distributions satisfy union bound strings total probability mass strings occur probability less therefore overall failure probability hence expected distance using triangle inequality fact expected average error log follows expected average error note expected average error average expected errors empirical markov models hence log must exist least markov model gets expected error proof lemma let prior distribution hidden states time let true hidden state time without loss generality refer output time let posterior probability ith hidden state time seeing observations time prior distribution hidden states time convenience denote define pis distribution output time conditioned hidden state time observations note define conditional distribution given observations initial distribution hidden state time pis note convex combination hence kop kop define kop proof relies martingale concentration argument order ensure martingale bounded differences ignore outputs cause significant drop posterior true hidden state time let thep set outputs time clog clog hence union clog note bound failure probability output clog emitted window length clog hence concern sequences outputs output emitted step satisfies clog let set outputs note let expectation random variable conditioned output sequence set consider sequence random variables log log defining log log let change seeing output time let output time first find expression posterior probabilities seeing output get updated according bayes rule let note output time write therefore write expectation log log min log log keep martingale differences bounded define equals truncated version define follows definition hfor two distributions define truncated log min fixed ready define martingale consider sequence random variables define note respect lemma expectation output time hence sequence random variables submartingale respect outputs log taking proof definition expectation respect sequences instead possible sequences removing hence events negative contribution apply lemma lemma modified pinsker inequality two distributions nand define divergence log min fixed log kop hence hence claim submartingale bounded differences lemma log log proof note definition log log log clog restrict sequences set hence also log clog log clog apply lemma let submartingale exp applying lemma show exp clog bound average error window failure probability randomness outputs let set sequences satisfy note log consider last point decreases remains every subsequent step window let point point define total contribution error every step step average error term error step note log log hence sequences log log log log log log log log log jensen inequality log total probability sequences outside whenever hidden state time probability least prior distribution proof modified pinsker inequality lemma lemma modified pinsker inequality two distributions nand define divergence log min fixed log proof rely following lemma bounds binary distributionslemma every log log log proof second result first observe log results follow standard calculus let let note log log log log log case log log log case 
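The concentration step is Azuma-Hoeffding for bounded-difference (sub)martingales. As a hedged numerical check on the simplest case, a +/-1 random walk (a martingale with differences bounded by 1): the empirical tail should sit below exp(-t^2 / 2T).

```python
import numpy as np

def azuma_tail_estimate(T, t, trials, rng):
    """Empirical P(S_T >= t) for a +/-1 walk vs the Azuma bound exp(-t^2 / (2T))."""
    steps = 2 * rng.integers(0, 2, size=(trials, T), dtype=np.int8) - 1
    S = steps.sum(axis=1, dtype=np.int64)
    return (S >= t).mean(), np.exp(-t * t / (2.0 * T))

rng = np.random.default_rng(4)
emp, bound = azuma_tail_estimate(T=400, t=50, trials=100_000, rng=rng)
print(f"empirical {emp:.5f} <= Azuma bound {bound:.5f}")
```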
log log log log log log log log log log log log log log log log log log log proof lower bound large alphabets csp formulation first notation use csp problems follow notation setup feldman consider following model generating random csp instance variables satisfying assignment defined predicate represent ordered literals repetition variables let set let string values assigned literals value literal assignment planted model draw clauses probabilities depend value let distribution satisfying assignments distribution defined followsq recall distribution satisfying assignments define complexity largest distribution uniform also referred independent literature uniform consider csp defined collection predicates let matrix full row rank binary field later choose ensure csp high complexity predicate set solutions system mod define uniform distribution consistent assignments satisfying mod planted distribution defined based according clause chosen first picking uniformly random clause distribution planted define distribution consistent clauses along labels let uniform distribution clause assigned uniformly chosen label define fixed noise level consider small constant less corresponds adding noise problem mixing planted uniform clauses problem gets harder becomes larger efficiently solved using gaussian elimination define another csp show reduces obtain hardness using conjecture label fixed zero vector hence distribution satisfying assignments uniform distribution vectors null space binary field refer planted distribution case let uniform distribution clause label planted assignment denote distribution consistent clauses define let problem distinguishing randomly uniformly chosen success probability least similarly let problem distinguishing randomly uniformly chosen success probability least thought problem distinguishing random instances csps instances high value note least hard problem refuting random csp instances corresponds case claim algorithm implies algorithm lemma solved time clauses solved time clauses let complexity demonstrate achieve next conjecture distinguishing requires least clauses discuss chosen ensure complexity ensuring high complexity csp let null space note rank subspace let randomly chosen vector ensure complexity suffices show random variables uniform use theory error correcting codes find matrix binary linear code length rank linear subspace notation different standard notation coding theory literature suit setting rate code defined generator matrix code matrix parity check matrix code matrix distance code weight minimum weight codeword relative distance defined codeword define dual codeword codeword generator matrix parity check matrix note rank dual codeword code rank use following standard result linear fact distance uniform hence job finding reduces finding dual code distance rank use bound argue existence code let binary entropy lemma bound every exists code rank relative distance taking hence exists code whenever setting interested choose generator matrix hence null space uniform hence complexity hence find ensure complexity sequential model csp sample complexity lower bound construct sequential model derives hardness hardness slightly differ outline presented beginning section base sequential model directly generating random without repetition increases mutual information formulate slight variation show least hard define csp instance allowing repetition different setting examined feldman hardness setting repetition follow hardness setting allowing repetition though 
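Existence of the required code follows from the Gilbert-Varshamov bound invoked above: a binary linear code of relative distance delta and rate r exists whenever r <= 1 - H(delta), with H the binary entropy. A small numeric sketch (ours):

```python
import math

def H(p):
    """Binary entropy in bits; H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def gv_rate(delta):
    """Largest rate the Gilbert-Varshamov bound guarantees at distance delta."""
    return max(0.0, 1.0 - H(delta))

for delta in (0.05, 0.1, 0.25, 0.5):
    print(f"delta={delta}: a code of rate up to {gv_rate(delta):.3f} exists")
```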
converse true constructing sequential model consider following family sequential models chosen defined previously output alphabet models family size even choose subset size choice corresponds model family letter output alphabet encoded represents whether letter included set let vector stores encoding whenever letter let determine subset entry choose uniformly random choice represents subset hence model partition output alphabet subsets size first letters first subset next next subset let ith subset let set elements belong set time chooses uniformly random time model chooses letter uniformly random set otherwise chooses letter uniformly random probability outputs next time steps mod probability uniform random bits model resets time repeats process recall simulated hmm hidden states see section reducing sequential model csp instance reveal matrix algorithm corresponds revealing transition matrix underlying hmm encoding kept secret task finding encoding given samples naturally seen csp sample clause literal corresponding output letter whenever odd even refer reader outline beginning section example denote csp modification ith literal clause literal corresponding letter define distribution consistent clauses csp define uniform distribution additional constraint ith literal clause literal corresponding letter define note samples model equivalent clauses show hardness follows hardness lemma solved time clauses solved time clauses hence conjecture true solved polynomial time less clauses prove theorem using lemma theorem assuming conjecture sufficiently large fixed constant exists family hmms hidden states output alphabet size polynomial time prediction algorithm achieves average error relative error less probability greater randomly chosen hmm family needs requires log samples hmm window length algorithm uses prediction proof describe choose family sequential models value recall hmm hidden states let note let log choose log log solution log hence note let claim verify note therefore log log sufficiently large fixed constant hence proving hardness obtaining error implies hardness obtaining error choose matrix outlined earlier vector define family sequential models earlier let randomly chosen model family first show result relative loss idea algorithm good job predicting outputs time used distinguish instances csp high value uniformly random clauses possible make good predictions uniformly random clauses relate error time relative error time average error time steps get required lower bounds let average loss polynomial time algorithm output average relative loss outtime steps put time steps respect optimal predictions distribution possible get clauses label independent chosen uniformly random information theoretically possible get hence algorithm gets error used distinguish tween therefore lemma polynomial time algorithm gets probability greater choice needs least samples note optimal predictor gets therefore note average error time steps contribution error time steps also therefore hence polynomial time algorithm gets average relative loss less probability greater needs least samples result loss follows directly result relative loss next consider loss average error algorithm time steps let application jensen inequality pinsker inequality needs samtherefore previous argument algorithm gets hence polynomial time algorithm ples succeeds probability greater gets average loss less needs least samples lower bound linear function log express result directly terms log claim log follows log hence 
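Since the planted assignments live in the nullspace of the binary matrix over GF(2), here is a hedged helper sketch (ours) that computes a nullspace basis by Gaussian elimination and samples uniform solutions of G x = 0 mod 2.

```python
import numpy as np

def gf2_nullspace_basis(G):
    """Basis of {x : G x = 0 mod 2}, via row reduction over GF(2)."""
    A = (np.array(G) % 2).astype(int)
    m, n = A.shape
    pivots, r = [], 0
    for c in range(n):
        rows = [i for i in range(r, m) if A[i, c]]
        if not rows:
            continue
        A[[r, rows[0]]] = A[[rows[0], r]]       # swap pivot row into place
        for i in range(m):
            if i != r and A[i, c]:
                A[i] ^= A[r]                    # eliminate column c elsewhere
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for f in free:
        x = np.zeros(n, dtype=int)
        x[f] = 1
        for i, p in enumerate(pivots):
            x[p] = A[i, f]                      # back-substitute pivot values
        basis.append(x)
    return basis

def sample_nullspace(basis, rng):
    """Uniform solution of G x = 0: random GF(2) combination of basis vectors."""
    x = np.zeros(len(basis[0]), dtype=int)      # assumes a nonempty basis
    for b in basis:
        if rng.integers(0, 2):
            x ^= b
    return x

G = [[1, 1, 0, 1], [0, 1, 1, 1]]
basis = gf2_nullspace_basis(G)
x = sample_nullspace(basis, np.random.default_rng(5))
print(x, (np.array(G) @ x) % 2)                 # second vector is all zeros
```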
polynomial time algorithm needs log samples get average relative loss loss loss less proof lemma lemma solved time clauses solved time clauses proof show random instance transformed random instance time independently transforming every clause clause satisfied original csp assignment corresponding clause satisfied assignment every store random solution system mod let solution given clause choose uniformly random generate clause clause choosing literal linearity system clause consistent clause assignment clause consistent clause assignment next claim randomly generated clause distribution drawn randomly generated clause distribution drawn construction label clause chosen uniformly random note choosing clause uniformly random equivalent first uniformly choosing unnegated literals choosing negation pattern literals uniformly random clear clause still uniformly random adding another negation pattern uniformly random hence original clause drawn uniform distribution distributed according similarly choosing clause uniformly random equivalent first uniformly choosing unnegated literals choosing negation pattern uniformly random makes clause consistent original negation pattern corresponds randomly chosen null space final negation pattern adding corresponds negation pattern uniformly random chosen solution mod chosen therefore clause uniformly random chosen clause uniformly random chosen clause hence possible distinguish randomly chosen success probability least time clauses possible distinguish randomly chosen success probability least time clauses proof lemma lemma solved time clauses solved time clauses hence conjecture true solved polynomial time less clauses proof define event clause generated distribution csp property ith literal belongs set also refer property clause notational ease easy verify probability event claim conditioned event csp equivalent verified follows note uniform consistent clauses let set clauses probability set clauses probability furthermore satisfies constraint mod let set clauses similarly let set clauses note subset clauses satisfy set holds every consistent distributions uniform consistent clauses distribution clauses identical distribution clauses conditioned event equivalence conditioned also follows argument note chosen uniformly random satisfying high probability tuples property clauses problems equivalent conditioned event solved time clauses solved time clauses lemma conjecture solved polynomial time less clauses hence solved polynomial time less clauses constant respect solved polynomial time less clauses proof lower bound small alphabets proof lemma lemma let chosen uniformly random set probability least choice randomized algorithm distinguish outputs model distribution random examples success probability greater randomness examples algorithm needs time examples proof suppose chosen random entry distribution uniform let corresponding first columns rows recall set matrices full claim verify consider addition row one one probability ith row linearly dependent previous rows hence union bound full failure probability definition union bound parities algorithm distinguish outputs model uniformly chosen distribution random examples probability least choice needs time examples uniformly randomly chosen probability least choice algorithm distinguish outputs model distribution random examples success probability greater randomness examples algorithm needs time examples proof proposition proposition defined definition sufficiently large fixed constant exists family hmms 
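The proof of the lemma below needs a uniformly random binary matrix to have full row rank except with small probability. A Monte Carlo check (ours) against the exact product formula prod_{i<k}(1 - 2^{i-n}):

```python
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    A = M % 2
    m, n = A.shape
    r = 0
    for c in range(n):
        rows = np.nonzero(A[r:, c])[0]
        if rows.size == 0:
            continue
        A[[r, r + rows[0]]] = A[[r + rows[0], r]]
        for i in range(m):
            if i != r and A[i, c]:
                A[i] ^= A[r]
        r += 1
        if r == m:
            break
    return r

k, n, trials = 8, 12, 20_000
rng = np.random.default_rng(6)
hits = sum(gf2_rank(rng.integers(0, 2, size=(k, n))) == k for _ in range(trials))
exact = np.prod([1 - 2.0 ** (i - n) for i in range(k)])
print(f"empirical {hits / trials:.4f} vs exact {exact:.4f}")
```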
hidden states algorithm achieves average relative loss average loss average loss less probability greater randomly chosen hmm family needs requires log time samples samples hmm window length algorithm uses prediction proof describe choose family sequential models value recall hmm hidden states let note let log choose log log solution log hence note let claim verify note therefore log log sufficiently large fixed constant hence proving hardness obtaining error implies hardness obtaining error choose matrix outlined earlier family defined model defined previously matrix chosen uniformly random set let average loss algorithm output time steps average relative loss output time steps respect optimal predictions distribution possible get clauses label independent chosen uniformly random information theoretically possible get hence algorithm gets error used distinguish therefore lemma algorithm gets probability greater choice optimal needs least time samples note note predictor gets therefore average error time steps contribution error time steps also therefore hence algorithm gets average relative loss less probability greater choice needs time samples result loss follows directly result relative loss next consider loss average error algorithm time steps let application jensen inequality pinsker inequality needs samples therefore previous argument algorithm gets hence algorithm gets average loss less needs time samples lower bound linear function log express result directly terms log claim log follows log hence algorithm needs log samples time get average relative loss loss loss less probability greater choice proof information theoretic lower bound proposition absolute constant sufficiently large exits hmm states information theoretically possible get average relative loss loss less using windows length smaller log loss less using windows length smaller log proof consider hidden markov model markov chain permutation states output alphabet hidden state binary state marked label let mapping hidden state label states labeled emit probability probability similarly states labeled emit probability probability fig illustrates construction provides proof idea figure lower bound construction note notation used rest proof respect example corresponds label case similarly case segments shaded nodes comprise set possible sequences states last outputs could come shaded nodes correspond states possible predictions next time step example assume multiple log constant regard constant respect let refer hidden states hji refers sequence hidden states show model looking past outputs get average loss less optimal prediction looking past outputs gets average loss hidden state time step determined arbitrarily high probability allowed look arbitrarily long past proves windows length suffice get average error less respect optimal predictions note bayes optimal prediction time minimize expected loss given outputs sequence time predict mode distribution outputs time also note hidden state time hence predictor weighted average prediction hidden state weight probability hidden state index state permutation tuple mod hence help predictor make prediction time providing index mod true hidden state time hence narrows set possible hidden states time fig set possible states given side information hidden states shaded states bayes optimal prediction time given outputs time index predict mode note definition bayes optimality average loss prediction using worse average loss prediction using hence need show predictor access side 
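The argument below rests on a random label segment rarely landing close in Hamming distance to a fixed string. As a small sketch (ours), the exact binomial tail P(dist <= (1/2 - eps) * l) for a uniformly random length-l string, next to the Hoeffding bound exp(-2 * eps^2 * l):

```python
import math

def hamming_close_prob(l, eps):
    """P(Hamming distance <= (1/2 - eps) * l) for a uniform random string.

    The distance to any fixed string is Binomial(l, 1/2), so this is an
    exact binomial tail.
    """
    t = int((0.5 - eps) * l)
    return sum(math.comb(l, d) for d in range(t + 1)) / 2 ** l

eps = 0.2
for l in (20, 50, 100):
    print(l, hamming_close_prob(l, eps), math.exp(-2 * eps * eps * l))
```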
information poor refer predictor using show exists permutation average loss predictor argue using probabilistic method choose permutation uniformly random set permutations show expected average loss predictor randomness choosing permutation means must exist permutation average loss predictor permutation find expected average loss predictor randomness choosing permutation find expected average loss predictor given state time without loss generality let hence hidden state time fix sequence labels hidden states emitted hidden states time let string expected average error randomness rest predictor permutation also let expected error averaged across outputs argue set hidden states defines segment permutation let label segment excluding last bit corresponds predictions let set labels excluding first label set predicted bits refer fig example consider assignment begin show high probability output hamming distance output least set hidden states follows directly hoeffding inequality outputs independent conditioned hidden show decent probability label segment closer argue high probability hamming distance output hamming distance hence many segments closer segments assigned much weight predicting next output means output predicted high accuracy output bits corresponding different segments independent first find probabilityp segment corresponding label hamming distance less log fixed binary string length let probability getting least heads trails trial probability giving head bounded following standard exp mdkl dkl log log use lower bound log log log independent random variables lying interval case exp dkl log note dkl using inequality log simplify using log let set log fixed argue high probability randomness permutation large follows chernoff bound labels segments chosen note therefore fixed probability segments randomly chosen permutation hamming disthere tance less log note construction log log log hence segments closer hamming distance output therefore high probability randomly choosing segments subset segments segments hamming distance less pick consider set segments subset respect string permutations predictor places least much weight hidden states true hidden state prediction hidden state corresponding bit notice bits independent uniform used argument far average correlation equally weighted average independent uniform random bits one random bits hence randomness expected loss predictor least hence writep using equation assignment true assignments choices hidden states time using linearity expectations averaging hidden states expected average loss independent random variables lying interval exp case predictor randomness choosing permutation means must exist permutation average loss predictor permutation hence exists hmm states information theoretically possible get average error respect optimal predictions less using windows length smaller log fixed constant therefore sufficiently large exits hmm states information theoretically possible get average relative loss less using windows length smaller log result relative loss follows replacing setting result follows immediately expected relative loss less expected loss use pinsker inequality jensen inequality references yoshua bengio patrice simard paolo frasconi learning dependencies gradient descent difficult ieee transactions neural networks hochreiter schmidhuber long memory neural computation felix gers schmidhuber fred cummins learning forget continual prediction lstm neural computation alex graves greg wayne ivo danihelka neural turing 
machines arxiv preprint
weston chopra bordes memory networks international conference learning representations iclr
alex graves greg wayne malcolm reynolds tim harley ivo danihelka agnieszka sergio colmenarejo edward grefenstette tiago ramalho john agapiou hybrid computing using neural network dynamic external memory nature
luong pham manning effective approaches neural machine translation empirical methods natural language processing emnlp pages
schuster chen norouzi macherey krikun cao gao macherey google neural machine translation system bridging gap human machine translation arxiv preprint
zhe chen matthew wilson deciphering neural codes memory sleep trends neurosciences
zhe chen andres grosmark hector penagos matthew wilson uncovering representations hippocampal ensemble spike activity scientific reports
matthew wilson bruce mcnaughton reactivation hippocampal ensemble memories sleep science
prahladh harsha rahul jain david mcallester jaikumar radhakrishnan communication complexity correlation annual ieee conference computational complexity ccc pages ieee
kneser ney improved language modeling international conference acoustics speech signal processing icassp volume pages
chen goodman empirical study smoothing techniques language modeling association computational linguistics acl
mossel roch learning nonsingular phylogenies hidden markov models theory computing pages
vitaly feldman perkins santosh vempala complexity random satisfiability problems planted solutions proceedings annual acm symposium theory computing pages acm
sarah allen ryan donnell david witmer refute random csp foundations computer science focs ieee annual symposium pages ieee
pravesh kothari ryuhei mori ryan donnell david witmer sum squares lower bounds refuting csp arxiv preprint
kim jernite sontag rush neural language models arxiv preprint
avrim blum adam kalai hal wasserman learning parity problem statistical query model journal acm jacm
ryan donnell analysis boolean functions cambridge university press
eric blais ryan odonnell karl wimmer polynomial regression arbitrary product distributions machine learning
adam tauman kalai adam klivans yishay mansour rocco servedio agnostically learning halfspaces siam journal computing
hsu kakade zhang spectral algorithm learning hidden markov models conference learning theory colt
anandkumar hsu kakade method moments mixture models hidden markov models conference learning theory colt
sedghi anandkumar training recurrent neural networks spectral methods arxiv preprint
janzamin sedghi anandkumar beating perils guaranteed training neural networks using tensor methods arxiv preprint
arora bhaskara provable bounds learning deep representations international conference machine learning icml pages
lugosi prediction learning games cambridge university press
barron rissanen minimum description length principle coding modeling ieee trans information theory
grunwald tutorial introduction minimum description length principle advances mdl theory applications
dawid statistical theory prequential approach royal statistical society
shtarkov universal sequential coding single messages problems information transmission
azoury warmuth relative loss bounds density estimation exponential family distributions machine learning
foster prediction worst case annals statistics
opper haussler worst case prediction sequences log loss mathematics information coding extraction distribution
nicolo gabor lugosi bounds logarithmic loss predictors machine learning
vovk competitive statistics international statistical review
kakade online bounds bayesian algorithms proceedings neural information processing systems
seeger kakade foster bounds bayesian methods
clarke barron asymptotics bayes methods ieee transactions information theory
david haussler manfred opper mutual information metric entropy cumulative relative entropy risk annals statistics
barron characterization bayes performance choice priors parametric nonparametric problems bernardo berger dawid smith editors bayesian statistics pages
barron schervish wasserman consistency posterior distributions nonparametric problems annals statistics
diaconis freedman consistency bayes estimates annals statistics
zhang learning bounds generalized family bayesian posterior distributions proceedings neural information processing systems
ziv lempel compression individual sequences via coding ieee transactions information theory
rumelhart hinton williams learning representations errors nature
bahdanau cho bengio neural machine translation jointly learning align translate arxiv preprint
vitaly feldman elena grigorescu lev reyzin santosh vempala ying xiao statistical algorithms lower bound detecting planted cliques proceedings annual acm symposium theory computing pages acm
amit daniely shai complexity theoretic limitations learning dnf annual conference learning theory pages
amit daniely complexity theoretic limitations learning halfspaces proceedings annual acm sigact symposium theory computing pages acm
| 2 |
optimal algorithm range search multidimensional points department computer science engineering anna university chennai india jul abstract paper proposes efficient novel method address range search multidimensional points time number points reported space accomplished introducing new data structure called bits structure also supports fast updation takes time insertion log time deletion earlier best known algorithm problem logk time pointer machine model keywords bits threaded trie range search introduction introduced multidimensional binary search trees commonly used storing dimensional points also used perform search operations exact match partial match range queries range queries mostly used gis applications locate cities within certain region map similarly geometrical view database one use orthogonal range search perform query generally nodes height hence complexity insertion search high although many search structures found literature differ standard mainly methods used recall tree stores point data form tree splits primarily coordinate point even level corresponding coordinate odd level hence trees unbalanced efficient search operations also worst case time complexity range search tree number points reported dimensions general variants get unbalanced data clustered thereby affecting query operations tree bucket tree trees path level compressed trees trees used store point data however trees always balanced especially data clustered one dynamic versions tree divided trees range query time best known dynamically balanced tree uses bitwise interlaced data mapping dimensions one dimension although search time log reporting points bitwise interlacing leads discarded areas range search case squarish discriminant based longest side rectangle enclosing problem space instead alternating keys recently hybrid versions squarish relaxed median overcome problem height balancing amortized worst case efficiency range search email hema easwara corresponding hybrid squarish relaxed median trees partial match queries respectively experimental results match aforementioned theoretical results show hybrid median trees outperform variants however far query handling concerned structures perform partial match queries two dimensions efficiently recent work pointer machine model orthogonal range reporting data structure log log logn space address range queries log log log time range trees bentley maurer yet another class balanced binary search trees used rectangular range search showed improvement query time logk dimension set points number reported points later improved using fractional cascading layered range trees space requirements relatively high performs range search logk time proposed recently chan proposed two data structures orthogonal range search word ram model first structure takes space query time show improved performance previous results space query time space query time second data strucure based space answers queries time outperforms previous space data structure answers queries time furthermore also propose efficient data structure orthogonal range reporting space query time points rank space improves previous results space query time space query time points reported finally extended range search higher dimensions also since range queries common among queries database applications mainly considered orthogonal range search points contributions work make use bit segment tree variant performs stabbing range queries segments efficiently logarithmic time importantly distribution data points uniform 
skewed affect height bit turn facilitates faster search time actually use bit structure store points related dimension thereby form tree called bit addition certain nodes bit associate variant trie data structure called threaded trie facilitate fetching required node constant time unlike trees associate axis level wise comparison locate insert point instead tree first level nodes key distinct values first points therefore tree corresponds one dimensional data tree augmented another tree second level key values nodes associated distinct first two points general ith tree corresponds distinct first set points given moreover tree inorder sequence provides sorted sequence bit trees tree construction illustrated subsequent sections bit originally bit balanced inorder threaded segment tree dynamic structure stores segments also answers stabbing range queries efficiently unlike segment trees also permits insertion segment interval range figure set segments bit given segments definition bit height balanced binary tree satisfies following properties node represented range associated node list segments containing range given otherwise ranges either overlap end points overlap suppose appears inorder sequence special node called dummy node denoted range list empty suppose first last nodes inorder sequence respectively inp red insucc range say node contained functions inpred insucc respectively returns inorder predecessor successor sample bit shown figure note dangling threads actually point dummy node shown figure bit originally developed storing segments use different purpose storing points thus modify structure suit requirement described node replaced point pointer list collinear points dimension first however list maintained tree next level described section either null pointer threaded trie elaborated next section two points stored tree suppose appears inorder sequence per following definition definition let two points kdimensional space implies implies head head subsequent sections better clarity use hyphen certain parameter node denote particular parameter irrelevant respect context instance denotes list contents irrelevant point time threaded trie figure sample threaded trie threaded tries variants tries consists two types nodes viz trie node data node instance figure trie nodes rest data nodes unlike tries trie node field blank however trie nodes contain two segments one index pointer tag value either denotes corresponding index point thread otherwise null pointers replaced threaded pointers point next valid node one exists instance thread pointers node points node next valid node similarly thread pointers points data node note ordering nodes provides sorted sequence also data nodes appear level accomplished uniform width data instance data treated figure construction bit bit constructed using collection bit one level interlinking trees two consecutive levels specified manner due following definitions bit termed bit trees definition given point integer head tail defined respectively head tail also head leads head tail definition given set points set defined head set distinct values points general head definition point term said dimensional value set points used construct level bit figure bit spatial representation points bit tree points shown definition bit tree constructed follows create separate bit trees let node say list points node min head head term links cross links node cross link node one cross link node first node order sequence tree pointer always points node node pointer threaded 
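The core query a BIT answers is interval stabbing: report every stored segment containing a query point. Below is a brute-force reference implementation (ours) of just the query semantics; the BIT itself achieves this in logarithmic time via balanced, inorder-threaded nodes.

```python
def stab(segments, q):
    """All segments [a, b] that contain the query point q (brute force, O(n))."""
    return [(a, b) for (a, b) in segments if a <= q <= b]

segments = [(1, 4), (2, 9), (5, 7), (6, 12)]
print(stab(segments, 6))   # -> [(2, 9), (5, 7), (6, 12)]
```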
tree cross link node otherwise set null every cross link node data node key say points node provides links nodes links termed trie links bit sample points figure shown figure since bit trees binary inorder threaded search trees level height trees level log also node cross link node least value ith respect head value node note least one point exists link useful locate list collinear points ith dimension associated point also trie links useful locate point given range window constant time cross link trie links also make structure much suitable address range queries efficiently normally perform insertion simple comparison respective coordinates level however deletion tedious due candidate replacement candidate replacement anywhere subtree also requires little work right subtree empty find candidate replacement required find smallest element left subtree avoid violation basic rules required perform swap left right subtrees many possible candidate keys exist left subtree handle situation make use collection bit one dimension deleting point may may require replacement inorder successor located time inorder links exist node also cross links exist two consecutive levels practically provide faster search next level trees another advantage structure node pruned particular level need considered subsequent levels nodes head values ignored subsequent levels best knowledge structure using balanced binary search trees threads introduced work storing point data perform range search efficiently search window query given rectangular range form window range query finds points lying within window let given query range first use trie stored first node cross link node tree level find smallest point larger equal point determined trie hand point located subsequent points fall within determined using inorder threads inorder sequence sorted order let say reported set points however dimensional value point greater implies absence required candidates using cross links node corresponds point search performed similar fashion note cross link node trie structure supports quick access node dimensional value case dimensional value cross link node within respective trie structure need looked instead inorder threads used find remaining candidates example instance let consider figure search range first use cross link node present dimension value lies within range use respective trie instead use inorder threads identify candidate points candidates search continued respectively corresponding cross link nodes looking tries cross link nodes find point whose dimensional value smallest one thus tries yields yields yields nothing performing inorder traversal final reported points also points reported notice one stop search without traversing candidate node within given range also applicable trees candidate node higher tree lower level trees need searched thus structure prunes search cases thereby practically reduces time reporting query range search range search points performed extending search similar case range search however need perform search described range search take query range search performed find candidates within range finally points reported important note search requires comparison keys within given range particular dimension simplifies subsequent searches next level implementation details two dimensions given set two dimensional points tree bit constructed time point may require two insertions one position insertion made could determined constant time described proof lemma thus insert nodes requires time also may 
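A plain (unthreaded) sketch, ours, of the trie operation used later: over fixed-width keys, return the smallest stored key >= q. The threaded pointers of the actual structure make this constant-time for fixed key width, which this simple recursive version does not attempt.

```python
class TrieNode:
    def __init__(self):
        self.child = {}                  # digit -> TrieNode

def insert(root, key):
    """key: tuple of digits of fixed width (e.g. zero-padded decimal)."""
    node = root
    for d in key:
        node = node.child.setdefault(d, TrieNode())

def successor(node, key, i=0):
    """Smallest stored key >= key; None if none exists. Fixed-width keys."""
    if i == len(key):
        return ()                        # exact match found
    for d in sorted(node.child):
        if d < key[i]:
            continue
        if d == key[i]:
            rest = successor(node.child[d], key, i + 1)
            if rest is not None:
                return (d,) + rest
        else:                            # d > key[i]: smallest completion under d
            n, suffix = node.child[d], (d,)
            while n.child:
                m = min(n.child)
                suffix += (m,)
                n = n.child[m]
            return suffix
    return None

root = TrieNode()
for k in [(0, 3), (1, 2), (2, 7)]:
    insert(root, k)
print(successor(root, (1, 4)))           # -> (2, 7)
```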
required create cross link node case bit since points number cross links created exceed also number trie links created exceed number nodes moreover construction trie requires constant time height trie constant due fixed size key thus factors lie within log insertion regarding space requirements bit second tree one contains points fewer equal number points first tree also number trie nodes height trie constant due size number digits key thus obtain following lemma lemma construction bit points requires time space searching candidate node done trie requires constant time height trie fixed point identified subsequent points identified inorder threads thus identifying candidate points takes time candidate points using cross links nodes locate required tries constant time search done similar fashion described earlier thus leads following lemma lemma range search window query using bit addressed time stands number points reported higher dimensions straight forward extension bit dimensions made easy connecting cross links corresponding nodes tree next level unlike range trees build another range tree given node main tree maintain trees inorder traversal provides ordered sequence points stored tree definitely reduces overall time taken range search across dimensions described previous section time required find candidate point constant thus leads following lemma lemma let set points space range search bit reports points lie within rectangular query range time number points reported lemma given set points bit constructed time space proof since construct level nodes follows number nodes note levels correspond dimensions hence may used interchangeably also number trie nodes height constant therefore levels bit uses storage worst case constant construction bit considered sequence insertions insertion may may alter bit tree particular level however bit altered due insertion trees altered let least index table theoretical comparison divided trees range trees range dsltrees layered range trees proposed bit description divided trees range trees layered range trees bits storage construction log log logk update logk logk logk ins del log range search logk logk points points reported tree altered thus trie links cross links one determine required values already stored trees within constant time particular cross link followed trie link one find position new value requires constant time inserting value tree unbalanced atmost one rotation required balance tree requires constant time let new node inserted taking cross link inorder successor one determine position new node inorder predecessor cross link node inorder successor new node need trie created constant time process continued updation takes constant time hence insertion takes time construction bit points requires time lemma insertion deletion point bit respectively done log time proof per description given proof lemma insertion point bits takes time deletion finding node removed bit stree requires constant time however node leaf node cascading replacement inorder successor required reaching leaf node removed physically certainly number replacements done exceed log may require sequence rotations path physically removed leaf root log rotations deletion point bit requires log time performance table summarizes performance divided trees range trees bit tree proposed work furthermore theoretical comparison bit made adapted internal memory pointer machine model bulk loading ram model results give query time using bit shows reduction time compared existing bounds 
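To fix the window-query semantics, here is a naive baseline (ours): points sorted by x, a binary search for the left boundary, then a scan filtered on y. The BITS structure replaces the scan with cross links and trie links so that only the K reported points are visited.

```python
import bisect

def build(points):
    return sorted(points)                          # sort by x, then y

def range_query(pts, x1, x2, y1, y2):
    """Report points inside the window [x1, x2] x [y1, y2]."""
    lo = bisect.bisect_left(pts, (x1, float("-inf")))
    out = []
    for i in range(lo, len(pts)):
        x, y = pts[i]
        if x > x2:
            break                                  # past the window in x
        if y1 <= y <= y2:
            out.append((x, y))
    return out

pts = build([(2, 3), (4, 7), (5, 1), (8, 6), (9, 9)])
print(range_query(pts, 3, 8, 2, 8))                # -> [(4, 7), (8, 6)]
```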
since try capitalize efficiency balanced search trees levels using cross links trie links ensure number nodes visited range query considerably reduced bit observe storage increased range trees bit still maintains notice update time bitsk reduced considerably summarize although storage requirements bit dtree comparable trees divided trees construction update time improved considerably moreover overall query time improved time number points reported prunes points falling outside query region dimension conclusion bit storing points update query operations efficiently proposed main advantage tree effectively handles collinear points result number nodes visited search much less compared variants either height balanced update operation complex case height balanced better search efficiency insertion tedious range dsl tree gives logarithmic amortized worst case search time efficient updates mainly partial match queries window queries bit overall insertion time moreover points dynamically updated level since dimensions level distributed using threaded tries quickly find points falling within query range also points falling search range pruned efficiently using cross links next level inorder threads similar bit addition threaded tries introduced work link node cross link means trie links find points within given range constant time therefore range search points rectangular region using bit tree takes time number points reported therefore logarithmic factor earlier worst case bounds reduced hence definitely remarkable improvement logk time range dsl trees references afshani arge larsen orthogonal range reporting rectangle stabbing pointer machine model proceedings twentyeighth annual symposium computational geometry pages acm agarwal range searching goodman orourke editors crc handbook discrete computational geometry crc press inc alstrup brodal rauhe new data structures orthogonal range searching foundations computer science proceedings annual symposium pages ieee bentley multidimensional binary search tress used associative searching communications acm bentley decomposable search problems information processing letters june bentley multidimensional binary search trees database applications ieee transactions software engineering bentley multidimensional divide conquer communications acm april berg cheong kreveld overmars computational geometry algorithms applications new york usa third edition chan persistent predecessor search orthogonal point location word ram proceedings annual symposium discrete algorithms soda pages siam chan larsen orthogonal range searching ram revisited proceedings annual symposium computational geometry socg pages new york usa acm crespo design analysis implementation new variants master thesis universitat politecnica catalunya departament llenguatges sistemes informatics devroye jabbour squarish siam journal computing new data structures orthogonal queries harvard university easwarakumar hema efficient data structure segment storage query processing international journal computers technology december lamoureux nicolson determinisitic skip lists range search technical report pages novemeber lee wong analysis region partial region searches multidimensional binary search trees balanced quad trees acta informatica pages nekrich orthogonal range searching linear space computational geometry nilsson experimental study compression methods dynamic tries algorithmica orienstein multidimensional tries used associative searching information processing letters june preparata shamos 
computational geometry introduction springerverlag new york consistent hierarchical representation vector data proceedings siggraph conference dallas volume pages august samet fundamentals metric data structures academic press new york usa samet design analysis spatial data structures addison wesley tropf multidimensional range search dynamically balanced trees applied informatics vieweg verlag germany van kreveld overmars divided trees algorithmica
slow links fast links cost gossip dec suman sourav national university singapore sourav peter robinson royal holloway university london seth gilbert national university singapore abstract consider classical problem information dissemination one nodes network information want distribute remainder network paper study cost information dissemination networks edges latencies sending message one node another takes amount time first generalize idea conductance weighted graphs defining critical conductance critical latency one goal paper argue characterizes connectivity weighted graph latencies much way conductance characterizes connectivity unweighted graphs give near tight lower upper bounds problem information dissemination polylogarithmic factors specifically show graph weighted diameter latencies weights maximum degree information dissemination algorithm requires least min time worst case show several variants lower bound graphs small diameter graphs small etc reduction simple combinatorial game give nearly matching algorithms showing information dissemination solved min log time achieved combining two cases show classical algorithm near optimal diameter maximum degree large case diameter maximum degree small give alternative strategy first discover latencies use algorithm known latencies based weighted spanner construction algorithms within polylogarithmic factors tight known unknown latencies easiest express bounds terms cases provide convenient definition conductance weighted graphs therefore give second nearly equivalent characterization namely average conductance introduction consider problem disseminating information distributed system nodes network information want others real world network communication often time delay model edges latencies latency edge captures long communication takes many rounds takes two neighbors exchange information low latency links imply faster message transmission whereas higher latency implies longer delays case unweighted graphs edges considered said unit latencies however true real life link latencies vary greatly fact even nodes connected directly might fastest route communication due large latency link might arise due poor connection quality hardware software restrictions etc often choosing lower latency path leads faster distribution information unweighted graphs without latencies exists significant amount literature characterizing connectivity graph referred conductance graph exactly indicates efficient information dissemination would like graphs latencies however due presence latencies edges regarded therefore connectivity alone longer enough thus introduce new notion critical conductance generalizes notion classical conductance using give nearly tight lower upper bounds information dissemination cases might convenient definition conductance weighted graphs alternatively give nearly equivalent characterization namely average conductance model model network connected undirected graph nodes node knows identities neighbors polynomial upper bound size network nodes communicate bidirectionally graph edges communication proceeds synchronous rounds edge said activated whenever node sends message edge latencies occur communication channel nodes simplicity assume edge latency integer latencies scaled rounded nearest integer also edge latencies symmetric problems arbitrarily large latencies least hard directed unweighted networks many tasks impossible achieve efficiently let weighted diameter graph latencies weights let max maximum edge latency consider cases 
nodes know latencies adjacent edges section cases nodes know latencies adjacent edges rest paper nodes know max round node choose one neighbor exchange information sends message neighbor automatically receives edge latency exchange takes time model within constant factors equivalent standard model involves first sending message latency receiving end sending response cost latency notice node initiate new exchange every round even previous messages yet delivered communication information dissemination focus paper information dissemination designated source node begins message rumor protocol completes every node received message classic examples include distributed database replication sensor network data aggregation systems fundamental problem widely studied various names information dissemination rumor spreading global broadcast real world settings nodes often aware neighbors however due fluctuations network quality hence latency node necessarily predict latency connection notice model communication essentially equivalent traditional node either push data neighbor pull data neighbor assume node always simultaneously without ability pull data easy see information exchange takes time star simple flooding matches lower bound multicast information spreading building block look local broadcast problem every node distributing message neighbors conductance weighted graphs goal paper determine long takes disseminate information graph latencies clearly running time depend weighted diameter graph typically algorithms also depend well connected graph normally captured conductance unfortunately conductance longer good indicator connectivity graph latencies slow edges large weights much worse fast begin generalizing idea conductance weighted graphs give two nearly equivalent definitions conductance weighted graphs refer critical conductance definition average conductance definition give approximately value every graph times one definition convenient fact show values closely related dlog max theorem compare definitions section use determining lower upper bounds information dissemination makes analysis simpler use relation determine bounds core goal paper argue notion defined herein well captures connectivity weighted graphs may useful understanding performance algorithms lower bounds constitute key technical contributions paper graph diameter maximum degree critical conductance critical latency show information dissemination algorithm requires min rounds worst case may take time distribute information however graph well connected may better time characterized critical conductance show lower bound holds even various special cases graphs small diameter small etc relation provided theorem determine lower bound terms average conductance min main technique use showing lower bounds reduction simpler combinatorial guessing game see demonstration variants guessing games used prove lower bounds radio networks first show guessing game takes large number rounds thereafter reduce problem solving game solving information dissemination via simulation upper bounds show nearly matching upper bounds algorithms solving information dissemination regard differentiate model two cases case nodes aware adjacent edge latencies show classical random phone call algorithm node initiates connection randomly chosen neighbor round completes log rounds using relationship give log max log upper bound terms case nodes know latencies incident edges obtain nearly tight bounds independent give algorithm within polylogarithmic factors 
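The communication model just described, in which every round a node may initiate a bidirectional exchange with one neighbour, the exchange completing after the edge latency, and new exchanges may be initiated before earlier ones deliver, can be made concrete with a small simulator. The following Python sketch is illustrative only: `adj` is an adjacency list, `lat` a symmetric latency map, and the uniform random choice of partner corresponds to the classical push-pull protocol analysed later.

```python
import random

def push_pull_with_latencies(adj, lat, source, max_rounds=10000):
    """Round-based sketch of the model: every round each node initiates an
    exchange with a uniformly random neighbour; the bidirectional exchange
    completes after the edge's latency and merges knowledge snapshotted at
    initiation.  Returns the round when all nodes are informed."""
    informed = {source}
    pending = []  # (completion_round, u, v, u_knew, v_knew)
    for t in range(1, max_rounds + 1):
        for u in adj:  # non-blocking: a node may initiate every round
            v = random.choice(adj[u])
            key = (min(u, v), max(u, v))
            pending.append((t + lat[key], u, v, u in informed, v in informed))
        still = []
        for done, u, v, uk, vk in pending:  # deliver exchanges finishing now
            if done <= t:
                if uk or vk:
                    informed.add(u)
                    informed.add(v)
            else:
                still.append((done, u, v, uk, vk))
        pending = still
        if len(informed) == len(adj):
            return t
    return None

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
lat = {(0, 1): 1, (0, 2): 4, (1, 2): 1}
print(push_pull_with_latencies(adj, lat, source=0))
```

On the triangle example the rumor typically avoids the slow (0, 2) edge and travels over the two fast edges, which is exactly the effect the conductance definitions below are designed to capture.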
trivial lower bound key idea algorithm build weighted spanner based spanner used distribute information algorithm however requires knowledge polynomial upper bound hence completeness also provide alternate algorithm appendix require knowledge takes additional log factor instead log making unsuitable graphs large diameters finally observe always discover latencies important adjacent edges notice might model edge weight path edges weight calculate conductance resulting graph get good characterization connectivity original graph different reasons consider ability imaginary nodes edge pull data endpoints use algorithm works latencies known hence even latencies unknown combining various algorithms always solve information dissemination min log time min log max log time matching lower bounds polylogarithmic factors respect critical conductance summary contributions best knowledge work provides first ever characterization conductance graphs latencies regard provide two different parameters namely note provide summary terms however case exists alternate version terms lower bounds show exists graphs log diameter maximum degree local broadcast requires rounds diameter critical conductance local broadcast requires rounds diameter information dissemination requires min rounds showing among various parameters affecting information dissemination upper bounds information dissemination show algorithm takes log rounds algorithm takes rounds view results step towards accurate characterization connectivity networks delays believe metrics prove useful solving graph problems prior work long history studying time message complexity disseminating information links latency interesting contrast achieved weighted case achieved unweighted case classic model studying information dissemination random phone call model introduced round node communicates single randomly selected neighbor knows rumor pushes information neighbor know rumor pulls neighbor see important special case graph clique pair nodes communicate directly seminal paper karp show rumor disseminated complete graph log rounds log log message complexity fraigniaud giakkoupis show simultaneously achieve optimal communication complexity except extremely small rumor sizes graph clique performance classical protocol wherein node exchanges information random neighbor round typically depends topology graph specifically well connected graph exciting sequence papers see references therein eventually showed rumor spreading manner takes time conductance graph question remained open whether careful choice neighbors lead faster information dissemination breakthrough result gave randomized algorithm solving information dissemination unweighted graph time polylogn nonweighted diameter graph note protocol dependence conductance graph diameter unavoidable two key ingredients solution first gave local broadcast protocol node exchanges information neighbors time second protocol obtain spanner use conjunction simulator defined therein achieve information dissemination polylogn time haeupler showed local broadcast could achieved time using simple deterministic algorithm conclusion unweighted graph unit latency edges information dissemination achieved time polylogn time log notation hides polylogarithmic factors arise due unknown related works problem well researched several settings well graphs modeling social networks doerr show log time bound solving broadcast case direct addressing haeupler malkhi show broadcast performed optimally log log rounds information dissemination 
random geometric graphs studied wireless sensor networks adhoc networks boyd sarwate dimakis gandhi giakkoupis study problem dynamic graphs conductance weighted graphs section provide two different approaches characterize conductance weighted graphs namely critical conductance average conductance show relationship sections follow determine bounds information dissemination use critical conductance makes analysis simpler corresponding bounds average conductance obtained application given relationship theorem critical conductance define critical conductance graph generalizing classical notion conductance given graph set edges define subset edges latency set nodes cut define subset edges across cut latency define volume vol degv degv refers degree node first define critical conductance cut given latency define conductance minimum critical conductance across cuts definition conductance consider graph cut set possible cuts graph integer define min vol vol conductance given min definition critical conductance define critical conductance maximum max call critical latency simply write instead graph clear context edges latency exactly equal classical graph conductance average conductance given graph first define dlog max different latency classes first class contains edges latency subsequent ith latency class consists edges latency range set nodes cut define subset edges across cut belonging latency class cut edges latency cut first define average cut conductance define average conductance minimum average cut conductance across cuts definition average cut conductance consider graph set nodes cut let min vol vol dlog max definition average conductance let set possible cuts graph define average conductance min simply write instead graph clear context edges latency exactly equal classical graph conductance comparing critical average conductances conductance general characterization bottleneck communication graph unweighted graphs bottleneck communication connectivity graph however weighted graphs bottleneck arise either due graph connectivity due edge latency even nodes directly connected slow edge might exist different faster path aim capture aspects bottleneck communication good connectivity facilitates faster communication whereas large latencies result ideally would want best connectivity along least slowdown faster communication obtain definition directly optimizing orthogonal parameters connectivity maximizes ratio defined critical conductance corresponding latency defined critical latency words captures bottleneck due connectivity whereas captures bottleneck due latency definition average conductance inspired classical notion conductance cut edge contribution towards overall connectivity normalized dividing latency rounded upper bound latency class account surprisingly see closely related show relationship first define number latency classes given graph latency class said least one edge graph latency maximum value take dlog max total number possible latency classes theorem proof consider weighted graph critical conductance critical latency first show upper bound let cut obtained let minimum volume among either side cut definition conductance definition know dlog max implies note definition terms corresponding empty latency classes becomes zero replace remaining term definition using inequality get combining fact minimum average cut conductance obtain next show lower bound consider cut determines let minimum volume among either side cut cut consider latency class critical latency say lies latency 
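The symbols in these definitions were stripped in extraction; the following LaTeX is one consistent reconstruction from the surrounding prose. Here $E_l(S,\bar S)$ denotes the cut edges of latency at most $l$; the exact latency-class boundaries and the constants in the comparison theorem are presumed choices rather than certainties.

```latex
% Critical conductance: conductance of the latency-<= l subgraph, per unit latency.
\phi(S,l) \;=\; \frac{|E_l(S,\bar S)|}{\min(\mathrm{vol}(S),\,\mathrm{vol}(\bar S))},
\qquad
\phi_l \;=\; \min_{S}\ \phi(S,l),
\qquad
\phi^{*} \;=\; \max_{1\le l\le \ell_{\max}} \frac{\phi_l}{l},
\]
with the maximizing latency $l^{*}$ called the critical latency.

% Average conductance: each cut edge contributes inversely to (the rounded-up
% upper bound of) its latency class B_i = { e : 2^{i-1} <= l(e) < 2^i }.
\[
\phi_{\mathrm{avg}}
 \;=\; \min_{S}\ \frac{1}{\min(\mathrm{vol}(S),\,\mathrm{vol}(\bar S))}
        \sum_{i=1}^{L} \frac{|B_i(S,\bar S)|}{2^{i}},
\qquad L = \lceil \log \ell_{\max}\rceil + 1.
\]

% One consistent reading of the comparison theorem (constants lost here):
\[
\phi^{*} \;\lesssim\; \phi_{\mathrm{avg}} \;\lesssim\; L\,\phi^{*}.
```

This matches the surrounding claims that the two notions agree on every graph up to a factor governed by the number of latency classes $L$.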
class implies definition conductance get rewriting definition max max comparing first terms observe term expression least large corresponding term upper bound also additional positive terms combining fact definition chosen minimum value among possible cuts obtain proves lower bound completes proof lower bounds proceed lower bound time completing information dissemination main goal section found theorems show every gossip algorithm requires min graphs diameter critical conductance critical latency throughout section assume nodes know latencies adjacent links nodes know latencies trivial lower bound sufficient begin defining combinatorial guessing game similar approach show lower bound construct several different graphs reduce guessing game solving information dissemination graphs thereby showing lower bound guessing game define guessing game played alice oracle conceptually game played bipartite graph nodes oracle selects subset edges target round alice guesses set edges oracle reveals target edges hit time edge target set guessed alice adjacent edges target set removed target set fix integer let two disjoint sets integers left right group nodes bipartite graph winning condition game depends predicate returns subset edges example randomp returns subset contains elements element chosen probability discarded probability results apply directly setting proposal set player must intersect target set exactly element contrast guessing game requires discover sufficiently many target elements every element target set occurs least define game guessing begins alice receives two disjoint sets oracle chooses target set returned predicate throughout assume alice access source unbiased random bits alice goal eliminate elements target set round alice submits set size round guesses oracle oracle replies revealing items guessed correctly oracle computes round target set removing items alice hit items item tra trb xrb concludes round next round begins game solved first round alice guesses result empty target set point oracle answers halt words game ends round every alice aim minimize number rounds target set becomes empty say protocol solves guessing probability rounds always terminates within rounds probability target set case call protocol guessing gossiping lower bound results use variants distributed network guessing game gadget nodes embedded subgraph gadget construction use predicate specify set hidden low latency edges call fast edges show execution gossip algorithm network simulated alice playing guessing game guessing use notation denote vertex construction unique given instance guessing game alice creates set nodes similarly maps integers ids vertex set fashion next alice creates complete bipartite graph sets adding cross edges adds clique vertices clique edges considered latency given integer parameters construct network way cross edges target set useful algorithm giving low latency whereas cross edges assigned large latency value formally latencies cross edge iff otherwise latency denote constructed gadget parameters refer size gadget low latency value high latency value predicate respectively also consider symmetric variant called gsym alice creates clique addition one see figure since alice know target set advance also know cross edge latency latency nevertheless implicitly latency assignments fixed priori target set unknown alice turn depends predicate whenever cross edge activated simulation alice submits pair vertices guess oracle whose answer reveals target set membership hence also latency 
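A small simulation makes the game mechanics concrete: the oracle samples the target set with the randomp(p) predicate, Alice submits k uniformly random guesses per round, and target pairs adjacent to a hit are removed. In this hedged Python sketch "adjacent" is read as sharing an endpoint, which is an assumption where the flattened text is ambiguous, and all names are illustrative.

```python
import random

def play_guessing_game(m, n, k, p, seed=0):
    """Sketch of the guessing game with the randomp(p) predicate: each cross
    pair (a, b) is in the target set independently with probability p; each
    round Alice guesses k uniformly random pairs, and every target pair that
    shares an endpoint with a correct guess is removed.  Returns the number
    of rounds until the target set is empty."""
    rng = random.Random(seed)
    target = {(a, b) for a in range(m) for b in range(n) if rng.random() < p}
    rounds = 0
    while target:
        rounds += 1
        guesses = {(rng.randrange(m), rng.randrange(n)) for _ in range(k)}
        hits = guesses & target
        hit_a = {a for a, _ in hits}
        hit_b = {b for _, b in hits}
        target = {(a, b) for (a, b) in target
                  if a not in hit_a and b not in hit_b}
    return rounds

print(play_guessing_game(m=32, n=32, k=8, p=0.25))
```

Running this with k much smaller than mn illustrates why the lower bounds below scale like mn/k: uniform guessing wastes almost every round once the remaining targets are sparse.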
lemma gossip protocol simulation suppose algorithm solves local broadcast given network contains gsym cross edges gadget form cut predicate protocol guessing terminates rounds proof argue alice simulate execution network particular subgraph gossip algorithm terminates oracle answers halt straightforward figure guessing game gadgets red edges correspond fast links whereas blue edges slow links high latency extend argument subgraph gsym time alice use behavior subgraph derive protocol guessing given instance guessing game alice creates network first assigning edges subgraph latency moreover creates edges subgraph described section see latency cross edges set first activated edge clique edge edge activated algorithm alice locally simulates bidirectional message exchange updating state nodes accordingly round gossip algorithm set cross edges activated vertices simulated alice activated cross edge alice uses one round guesses consider round suppose oracle returns empty set one alice submitted round guess contained oracle answer alice sets latency updating local state chosen round follows simple inductive argument state every vertex simulation equivalent executing algorithm network argue simulation gossip algorithm local broadcast solves game guessing rounds probability predicate recall guessing game ends becomes empty happens alice correct guesses included every least premise lemma cross edges form cut tells solve local broadcast without using cross edges since every neighbor node way receive local broadcast message via fast hence local broadcast algorithm terminates know hit one alice guesses guessing game lower bounds following lemma instrumental showing lower bound theorem holds assumptions critical conductance graph lemma let guessing guessing game target set single pair chosen uniformly random protocol protocol guessing number rounds terminates least proof sake contradiction suppose solves guessing rounds define time random variable number rounds termination given execution consider round protocol suppose game yet ended alice yet guessed correctly made incorrect guesses previous rounds let denote pairs chosen alice round since alice point view adversary chosen single element uniformly random elements probability alice guesses element round let correct denote event protocol correctly solves game follows time correct correct remainder proof lower bound probability event time observe time time correct correct time none rounds guesses alice successful time correct time correct applying round get time since running time never exceeds rounds time get contradiction next lemma bounds number guesses required target set less restricted edges form random subset cross edges allows derive lower bound local broadcast time complexity terms critical conductance theorem lemma guessing game input sets let randomp predicate defines target set adding element probability protocol solves guessing randomp requires rounds expectation hand alice uses protocol submits guesses round choosing uniformly random uniformly random logpm rounds required expectation proof recall game ends guesses alice hit element least whereas random variable let maximum number guesses required alice protocol sake analysis consider alice guesses occurring sequentially hence assume elements discovered one one define denote number guesses required guess element already guessed elements first consider general protocols considering edge target set probability assume target membership edge determined point alice submits guess recalling alice 
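The expectation bound for a single uniformly random target pair follows a standard argument that matches the proof outline above: after $r$ rounds Alice has covered at most $rk$ of the $mn$ pairs. In LaTeX (a reconstruction; the particular constant is ours):

```latex
% Any protocol submits at most k pairs per round, so after r rounds at most rk
% distinct pairs have been covered; for a target uniform over all mn pairs:
\Pr[T > r] \;\ge\; 1 - \frac{rk}{mn}.
\]
Summing the tail,
\[
\mathbb{E}[T] \;=\; \sum_{r \ge 0} \Pr[T > r]
 \;\ge\; \sum_{r=0}^{\lceil mn/k\rceil - 1} \Bigl(1 - \frac{rk}{mn}\Bigr)
 \;\ge\; \frac{mn}{2k},
\]
so every protocol needs \(\Omega(mn/k)\) rounds in expectation.
```

Note the bound holds even against adaptive protocols, since the counting of covered pairs does not depend on how the guesses are chosen.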
full knowledge remaining elements still needs guess assume guess successful probability guess edges potentially discover new element guessing strategy remains true independently current target set set previously discovered elements denote formally hence note part target edge probability since therefore follows considering alice guess elements per round follows time completes proof general algorithms consider case alice uses protocol submits guesses round choosing element uniformly random uniformly random note process selecting guesses done obliviously correct incorrect guesses far observe depends random variable size successful guess since number times protocol needs guess new element discovered distribution corresponds geometric distribution according alice protocol probability guessing new element given hence let number elements part edge initially last inequality follows positive random variable due jensen inequality since alice already correctly guessed elements discard elements intersect successful guesses updating target set end round according happen protocol discovers multiple elements using round guesses assumed happen sequentially analysis case target set updated guesses however easy see increase probability guessing new element get thus sum harmonic number log sufficiently large hence log law total expectation follows finally standard probability calculation shows happens large probability assuming sufficiently large constant time bound follows since alice submit guesses per round lower bounds information dissemination section show three different lower bounds together show properties cause poor performance information dissemination protocols graphs high degree cause poor performance theorem graphs poor connectivity cause poor performance theorem finally give family graphs see theorem begin result showing lower bound theorem network weighted diameter log maximum node degree algorithm requires rounds solve local broadcast constant probability proof consider network nodes consists guessing game gadget gsym predicate returns arbitrary singleton target set combined constant degree regular expander vertices one node connected vertices left side gadget edges connected expander latency latencies edges gadget assigned lemma clearly weighted diameter log diameter expander know guessing game protocol guessing requires rounds predicate returns exactly pair target set lemma tells gossip algorithm solves local broadcast must require rounds next show every local broadcast algorithm requires time least note get lower bound local broadcast information dissemination contrast results unweighted case following result given terms conductance thus also holds proof construct network corresponds bipartite guessing game graph target set edge fast probability way obtain network critical conductance hop diameter weighted diameter guessing game lower bound lemma tells cost information dissemination still depends theorem log network nodes weighted diameter critical conductance gossip algorithm requires rounds solving local broadcast expectation also solving local broadcast using requires log rounds expectation proof goal reduce game guessing local broadcast hence consider graph random guessing game gadget defined section since want show time bound log rounds high latency edges use value log log assign cross edge latency independently probability latency probability fast cross edges distribution target set implied predicate used show lower bound general protocols guessing lemma also stronger lower bound log 
random guessing protocols choose random edge vertex guesses straightforward see gossip corresponds exactly random guessing game strategy applying lemma means local broadcast requires expectation time general algorithms log time additional term theorem statement required actually send broadcast latency edge discovered since edge assigned latency probability log follows connected latency edge node high probability hence weighted diameter high probability remainder proof show conductance high probability point several previous works prove bounds network expansion however results shown random graphs employ results directly thus need adapt proof techniques show conductance guessing game gadget assume function noting assumption change asymptotic behavior bounds readability consider note extension general case straightforward construction randomf consists edges latencies last inequality follows assumption logn thus know hence need prove consider set vertices let first assume since number latency cross edges symmetric vertices subsequently remove assumption union bound argument vertex sets let set randomly sampled latency edges cut define given set goal show many latency edges originating endpoint assuming sufficiently many latency cross edges begin words need bound probability event conditioned sufficiently many latency cross edges claim sufficiently many latency cross edges exist constants events occur high probability proof according construction randomf latency cross edges chosen independently probability note cross edges assigned latency independently probability logn thus node expected number cross edges log standard chernoff bound know number latency cross edges high probability suitable constants taking union bound nodes conclude claim holds set conditioning equivalent choosing subset least edges among possible edges cut uniformly random assigning latency consider edge follows hence probability need exclude event bad subsets latency edges incident vertices addition need bound probability bad happens chosen ways choosing satisfy claim bad proof claim combining observations get bad first assume large sufficiently small positive constant apply stirling approximation form log binary entropy function thus sufficiently large get bad derive second inequality used facts since premise theorem log implies log together fact means term dominates exponent hence bad log next consider case applying upper bound form tells bad since get bad exp log log log exp log log assumption hence term log exponent negative moreover recall log thus assume log log sufficiently large constant term dominates terms exponent thereby completing proof claim considering bound implies least latency edges incident connected nodes outside probability least taking union bound possible choices values adhering shows observe latency cross edges constructed symmetrically left right side bipartite graph thus apply argument similar manner set conditioned thus conclude remove conditioning virtue claim since upper bound vol set take account cross edges node also need account incident clique edges yielding vol considering upper bound number latency cross edges given min min min vol inequality true high probability see bound observe know hence high probability required completes proof theorem finally give family graphs illustrate among parameters intuitively edge latencies larger makes sense search best possible path lower bound edge latencies smaller simply rely connectivity lower bound note individually obtain lower bound log using 
technique show exists graph diameter log unlike lower bound simply theorem given integer class networks nodes critical conductance maximum degree weighted diameter gossip algorithm solves broadcast least constant probability requires min rounds proof create network consisting series node layers wired together ring using guessing game gadgets introduced define implies layer consists nodes change asymptotic bounds simplify notation assuming integers figure guessing game gadgets wired together ring pair mod construct symmetric guessing game gadget gsym section simulating gossip algorithm solve game guessing create complete bipartite graph mod form cliques mod see figure assign latency every cross edge mod except uniformly random chosen edge forms singleton target set assign latency observe conductance maximal observation let graph proof layer call mod predecessor layer mod successor layer size layer node edges neighbors predecessor resp successor layer edges nodes layer means graph define cut divides ring two equal halves none internal clique edges cut edges slight abuse notation also use denote set vertices present smaller side partition created cut ties broken arbitrarily lemma proof since partitions two sets identical size volume determined considering either partition size thus focus node set also observation know volume calculated number cut edges latency construction according definition conductance given plugging value verify exactly equal using conductance bound lemma cut know proof next lemma show lemma conductance constructed ring network proof lemma know actual graph conductance always cut conductance show well observation know therefore set nodes volume vol exactly equal clearly implies two sets vol vol consider arbitrary cut suppose contains half nodes since nodes least cut edges using fact get done remainder proof show cut edges distinguish two cases classify node either good least adjacent edges across cut bad otherwise thus goal identify good nodes turn implies cut edges let arbitrary subset nodes nodes good done otherwise let bad node important note following properties true every bad node node layer contains least nodes inside successor layer least nodes inside see holds assume true would least neighbors layer across cut contradicting assumption bad similarly false would connected least nodes successor layer outside true predecessor layer let successor layer layer containing run following procedure invariant contains least nodes least half nodes good done terminate claim cut edges otherwise let bad node let successor layer layer start step assertion contains least nodes procedure ever terminates step done otherwise continues around every layer explored case invariant implies every layer contains least nodes implies nodes contradicts choice thus procedure terminate means must least cut edges implying let number nodes since volume node contains least neighbors outside since neighbors nodes cut size least thus conductance graph since clearly case wanted prove combining lemmas using cut argue critical latency lemma proof prove fact lemma need show end let consider cut defined show since conductance definition maximal get two latency cross edges cut volume calculated proof lemma thus need show constant inequality true long ensured premise theorem weighted diameter network since pair adjacent node layers connected latency edge internally layer forms latency clique using fact shown implying lemma consider source node layer initiates broadcast rumor node either spend time finding 
required fast edge assume done parallel instead instantly use edge latency forward rumor lemma tells finding single latency cross edge constant probability guessing game gadget corresponding pair node layers requires rounds forwarding rumor takes additional rounds alternatively algorithm forward rumor along latency edges across node layers spread rumor using latency edges within clique follows required time broadcast min obtain following corollary gives lower bound information dissemination terms either similar analysis application theorem corollary given integer class networks nodes average conductance maximum degree weighted diameter gossip algorithm solves broadcast least constant probability requires min rounds proof observe given graph exists edges latency either number latency classes theorem reduces implies case alternatively case replacing value theorem gives required corollary algorithms unknown latencies divide upper bounds information dissemination two later combine obtain unified result first analyze classical showing completes time optimal large alternatively graphs small give algorithm wherein node first spends time discovering neighboring latencies nodes use local information build spanner across data distributed time show time required information dissemination weighted graph using define set edges latency set incident edges vertex rounds network theorem protocol achieves broadcast critical conductance corresponding critical latency proof construct strongly graph generalization strongly vertex induced subgraph defined vertex set edges defined edge multiplicity function given otherwise easy see unweighted conductance corresponds node counted edges computing volume also define another unweighted graph derived dropping edge latencies consider markov chain process describing informed node set vertex set possession message originating vertex running formally state space markov chain consists possible informed node sets paths correspond monotonically growing informed node sets nonzero probability argue process resp dominates respective process graph observe node selects incident edge protocol probability probability choosing edge self loop case graphs clearly choosing self loop node help propagation message choosing corresponding edge might follows markov process reaching informed node set dominates one probability reaching informed node set using markov chain least large probability reaching set using markov chain translate result back actual network weighted edges charge round rounds similar arguments follows markov process informed node set given considering consecutive rounds time dominates one multiplicity edge called edge weight use different terminology avoid confusion latencies edges consider edge weight synonym edge latency instead known log rounds suffice solve broadcast hence achieving broadcast requires log rounds since analysis applies particular critical latency theorem follows combine theorem theorem obtain following corollary gives upper bound information dissemination using terms corollary protocol achieves broadcast rounds network avg average conductance number latency classes algorithm section provide algorithm solves information dissemination node knows latencies adjacent edges algorithm naturally extended case nodes know adjacent latencies first discovering edge latencies running algorithm known rounds node broadcasts request neighbor sequentially waits rounds response determine adjacent edge latency either values unknown guess double strategy described 
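The protocol described here for simulating an unweighted-graph algorithm on the bounded-latency subgraph is simple to phrase as code: drop every edge of latency above W, then pay W real rounds per simulated round so that every activated edge has time to deliver. A minimal sketch, with `step` standing in for one round of the unweighted protocol (for instance one DTG round); all names are illustrative.

```python
def run_bounded_latency(adj, lat, W, rounds, step):
    """Simulate `rounds` rounds of an unweighted-graph protocol on the
    subgraph of edges with latency <= W, charging W real rounds per simulated
    round.  `step(G, t)` runs round t of the unweighted protocol on G;
    the total cost in the weighted network is W * rounds."""
    G = {u: [v for v in nbrs if lat[min(u, v), max(u, v)] <= W]
         for u, nbrs in adj.items()}
    cost = 0
    for t in range(rounds):
        step(G, t)   # one unweighted round on the latency-<=W subgraph
        cost += W    # charged W real rounds in the weighted network
    return cost
```

With W doubled across invocations, this is the primitive the spanner construction below uses to collect small-radius neighbourhood information.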
section used efficiently detect information dissemination completed correctly similar arguments section obtain algorithm solves information dissemination time algorithms known latencies section discuss case node knows latencies adjacent edges focus problem information dissemination instead information dissemination simplify certain issues solve seemingly harder problem course information dissemination also solves information dissemination information dissemination algorithms used solve information dissemination using collect disseminate data section use fact nodes know polynomial upper bound network size place rely assumption edge latencies known spanner algorithm described solves information dissemination differs trivial lower bound polylog factors spanner algorithm preliminaries initially assume weighted diameter known nodes later section away assumption via technique assumed every edge latency clearly want use edges latency local broadcast important building block algorithms local broadcast unweighted graphs randomized superstep algorithm deterministic tree gossip dtg algorithm haeupler solve problem make use dtg algorithm runs rounds unweighted graphs see appendix details observe unweighted case algorithm solves local broadcast rounds obtains direct consequence thereafter used propagating information however graphs latencies solving local broadcast might take time resulting leading solution information dissemination recall subgraph graph called two nodes distance distance weighted graphs mainly interested broadcast problem node disseminates information neighbors connected edges latency dtg assumes edges unweighted uniform weight execute protocol graph latencies simply ignoring edges latency larger simulating round dtg protocol rounds network refer protocol protocol follows immediately within time protocol ensures node disseminated information neighbors connected edges latency note trivially solve information dissemination problem time using protocol known simply repeating times challenge given restriction finding neighbors direct edge might costly somehow find sufficiently short paths show sufficient exploration local neighborhood log steps using favorable weights able obtain global spanner intermediate goal algorithm construct log obtain orientation edges node small log structure achieve information dissemination using flooding algorithm repeatedly activates order spanner construction broadcast seminal work baswana sen provide spanner construction algorithm weighted graphs weights correspond latency local model communication goal find low stretch low spanner modify algorithm carefully associating direction every edge added spanner node log deal latencies choose locally simulate algorithm individual nodes obtaining log neighborhood information using protocol show log neighborhood information sufficient obtaining required spanner algorithm also assumes distinct edge weights ensure using unique node ids break ties first show size obtained spanner increase significantly running algorithm estimate namely spanner construction algorithm node executes set rules adding edges explained time one rules triggered adds incident edges spanner assigning outgoing direction way obtain low stretch spanner undirected stretch nodes also low leverage subsequent phases algorithm given parameter algorithm computes performing iterations beginning iteration every node cluster center previous iteration chooses become active cluster probability poly note every node counts previously active center every active 
center broadcasts information cluster members cluster grows hop round message needs disseminated throughout every cluster member broadcasts membership information neighbors ensure every node aware adjacent active clusters adding edges spanner nodes also remember set incident clusters active iteration information hand every node adds incident edges set spanner edges also permanently discards edges follows clearly impossible guarantee small degree undirected sense example original graph star slight abuse notation use denote cluster centres cluster distinction clear context rule none adjacent clusters sampled iteration adds least weight edge cluster outgoing edge discards edges nodes every rule active adjacent clusters add edge cluster minimum weight among clusters adjacent cluster weight less node also adds one outgoing edge respective node edges nodes clusters discarded iteration every vertex adds least weight edge adjacent cluster lemma consider synchronous network nodes nodes know constant distributed algorithm based computes spanner terminates rounds local model node log proof note running time algorithm rounds used restricted message size log inspecting algorithm reveals computation node depends neighborhood graph also decision remove edge taken either node node needs simulate running algorithm neighbors know remove edge consideration hence simulate execution algorithm locally first collecting information regarding neighborhood rounds local model analyse difference running algorithm instead first observe sampling clusters probability affect stretch guarantee sake analysis assume spanner directed count every incident edge adds set spanner edges outgoing edge degree bound follow showing upper bound number outgoing edges node consider iteration phase algorithm call cluster sampled iteration among sampled clusters iterations every cluster sampled previous iteration sampled probability first iteration every node counts previously sampled cluster bound number edges contribute node consider clusters adjacent sampled iteration order increasing order weight least weight edge incident let event adds least edges outdegree iteration note occurs none clusters sampled iteration least active clusters iteration description phase first iterations algorithm add edge node cluster iteration happen taking union bound first iterations nodes follows probability node adding edges spanner first iterations exp log log choosing log log probability required phase final iteration every vertex adds least weight outgoing edge every cluster sampled iteration let indicator random variable vertex center cluster sampled iteration incident setting follows since since cluster sampled independently independent apply standard chernoff bound show sufficiently large constant depending holds log log taking union bound vertices see number edges vertex adds spanner phase log high probability combining bound derived phase completes proof theorem time algorithm gossip model yields log log edges moreover also computes orientation edges guarantees node log proof convert classic synchronous algorithm local model assumed lemma algorithm works gossip model latencies use protocol simulate log iterations spanner algorithm first discovering log neighborhood neighborhood discovery takes rounds model computations done locally broadcast directed spanner use broadcast algorithm deterministic exchange information among nodes node sends rumors known neighbors one one round robin fashion algorithm parameter run directed spanner graph without 
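For reference, here is a compact centralized sketch of the Baswana-Sen construction that the text adapts, including the orientation obtained by recording which endpoint added each edge. Physical edge discarding is omitted for brevity (this can only add extra spanner edges), and the clustering rules follow the standard algorithm rather than the paper's distributed simulation; all identifiers are illustrative.

```python
import random
from collections import defaultdict

def baswana_sen_spanner(n, lat, k, seed=0):
    """Centralized sketch: k-1 clustering iterations with sampling
    probability n**(-1/k), then a final per-cluster edge selection.
    `lat[(u, v)] = latency` with u < v; weights are made distinct by
    tie-breaking on node ids.  Returns directed edges (tail, head), the
    tail being the node that added the edge (the out-degree orientation)."""
    rng = random.Random(seed)
    nbrs = defaultdict(dict)
    for (u, v), l in lat.items():
        w = (l, min(u, v), max(u, v))  # distinct weights via ids
        nbrs[u][v] = w
        nbrs[v][u] = w
    center = {u: u for u in range(n)}  # every node starts as a cluster center
    spanner = set()

    def lightest_per_cluster(u):
        """Lightest edge from u into each currently adjacent cluster."""
        best = {}
        for v, w in nbrs[u].items():
            c = center.get(v)
            if c is not None and (c not in best or w < nbrs[u][best[c]]):
                best[c] = v
        return best

    for _ in range(max(k - 1, 0)):
        sampled = {c for c in set(center.values())
                   if rng.random() < n ** (-1.0 / k)}
        new_center = {}
        for u in list(center):
            if center[u] in sampled:      # u's own cluster stays active
                new_center[u] = center[u]
                continue
            best = lightest_per_cluster(u)
            live = {c: v for c, v in best.items() if c in sampled}
            if not live:                  # rule 1: leave the clustering,
                for v in best.values():   # keep one edge per adjacent cluster
                    spanner.add((u, v))
            else:                         # rule 2: join the lightest sampled
                c, v = min(live.items(), key=lambda cv: nbrs[u][cv[1]])
                spanner.add((u, v))
                new_center[u] = c
                for v2 in best.values():  # plus strictly lighter clusters
                    if nbrs[u][v2] < nbrs[u][v]:
                        spanner.add((u, v2))
        center = new_center
    for u in range(n):                    # final phase: one edge per cluster
        for v in lightest_per_cluster(u).values():
            spanner.add((u, v))
    return spanner
```

Since every decision depends only on a node's bounded-radius neighbourhood, each node can replay this computation locally after the neighbourhood-discovery phase, which is exactly the simulation argument made in the lemma above.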
edges latency broadcast vertex parallel iteration equals propagate rumor set along length round robin fashion add received rumors algorithm broadcast edges edges edges edges figure example message propagation node node lemma execution broadcast algorithm parameter directed spanner graph two nodes distance exchanged rumors one another rounds maximum node proof consider path node another node distance less clearly edges path would weight therefore work without edges latency well without affecting correctness algorithm also let assume number hops would since fractional weights let latency hop denoted shown figure messages reach next node either nodes initiate bidirectional exchange example rumor could reach node either request initiated node depending upon direction edge worst case nodes try links initiating connection along required edge maximum node connection initialized takes time exchange rumors generalization observe model delay incurred rumor exchange among two adjacent nodes worst case way rumor proceeds towards individual steps step incurring maximum cost node might receive multiple rumors propagate next round adds rumor set forwards neighbors round robin fashion total worst case delay rumor exchange among node would represented know maximum value equal therefore conclude two nodes rumor would reached rumor would reached nodes forward rumors round robin fashion rounds created spanner stretch log maximum distance two nodes log since maximum log get following corollary corollary broadcast algorithm constructed spanner takes time solves information dissemination combine previously defined techniques single algorithm called efficient information dissemination eid eid vertex parallel iteration log perform call spanner construction algorithm call algorithm broadcast log gain neighborhood information executed locally algorithm efficient information dissemination lemma graph diameter efficient information dissemination eid algorithm takes time solving information dissemination known nodes unknown diameter unknown diameter apply standard strategy begin initial guess try algorithm see succeeds terminate otherwise double estimate repeat challenge correctly determine termination condition particular node determine whether information dissemination achieved nodes early termination might lead partial dissemination whereas late termination might cause time complexity increase critical observation follows two nodes communicate one execution information dissemination protocol broadcast given estimate diameter must edge path one execution able communicate two cases able communicate aware unreachable neighbor flag issue next time communicate node learns problem otherwise communicate next time communicate node learns node hear previously either case knows estimate correct continue node also checks whether heard neighbors raises error flag repeat broadcast nodes check everyone rumor set one raised error flag total checking termination asymptotic complexity algorithm checks every node contacts contacted either directly indirectly whether node exactly rumor set value flag bit flag bit node set neighbor node present rumor set node yet exchanged rumors known presently neighbors distance current estimate say condition easily checked either additional affect complexity checked parallel execution broadcast conditions met node sets status failed uses broadcast algorithm propagating failed message broadcast algorithm given parameter able broadcast collect back information nodes distance used easily seen broadcast 
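The round-robin broadcast over the directed spanner can be sketched as a round-based simulation: each node cycles through its (few) out-neighbours, and each bidirectional exchange merges rumor sets once the edge latency elapses. The following hedged sketch uses illustrative names and the same symmetric latency map as before.

```python
def spanner_broadcast(out_edges, lat, rumors, T):
    """Round-robin dissemination along the directed spanner: every round each
    node initiates an exchange with its next out-neighbour in cyclic order;
    the exchange completes after the edge latency and merges both rumor
    sets.  `out_edges[u]` is u's out-neighbour list (small by the spanner's
    orientation), `rumors[u]` a set of rumors; runs for T rounds."""
    ptr = {u: 0 for u in out_edges}
    pending = []  # (completion_round, u, v)
    for t in range(T):
        for u, nbrs in out_edges.items():
            if nbrs:
                v = nbrs[ptr[u] % len(nbrs)]
                ptr[u] += 1
                pending.append((t + lat[min(u, v), max(u, v)], u, v))
        remaining = []
        for done, u, v in pending:
            if done <= t:                      # exchange delivered this round
                merged = rumors[u] | rumors[v]
                rumors[u], rumors[v] = set(merged), set(merged)
            else:
                remaining.append((done, u, v))
        pending = remaining
    return rumors
```

The per-hop delay analysed in the lemma shows up here as the wait until `done <= t`, multiplied by the out-degree because a node serves its out-neighbours one at a time.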
satisfies criteria used case note broadcast achieved algorithm execution broadcast however algorithm described later invokes broadcast achieved execution sequence also described later rumor set known particular vertex denoted represents neighbors whereas refers nodes connected edge latency less also initially nodes set default node node exchanged rumors set flag bit lag else set flag bit lag broadcast gather responses node neighborhood lag set failed broadcast failed message neighborhood received message failed set failed algorithm prove following regarding termination detection lemma node terminates exchanged rumors nodes moreover nodes terminate exact round proof suppose node terminates without exchanged rumors node considering path node node let farthest node hop distance exchanged rumors let next node path case exchanged rumors implies also exchanged rumors condition nodes exchange rumors one another rumor set thus contradicting fact farthest node path exchanged rumors case exchanged rumors exchanged rumors would set flag bit would detected broadcast would terminated also gives contradiction thus node exists terminates exchanged rumors nodes second part proof let consider nodes set termination set status failed algorithm whereas iteration node set status failed hence set continue show two nodes round node set status failed implying nodes exchanged rumors exactly set rumors none nodes set flag bit addition receive failed message node first part know set nodes exchanged rumors entire vertex set graph implies also exchanged rumors node also exact set rumors essentially rumors nodes set flag bit current iteration node broadcasted failed message would received resulting nodes set status failed since rumor sets nodes identical nodes would observe flag bits nodes node also satisfy termination condition set status failed gives contradiction completes proof repeat call algorithm eid call algorithm failed set default else terminate algorithm code vertex combining dissemination protocol termination detection get following theorem exists randomized gossip algorithm solves information dissemination problem terminates rounds alternative information dissemination algorithm propose alternate algorithm solve information dissemination without global knowledge polynomial upper bound need known takes log time algorithm works even nodes initiate new exchange every round wait till acknowledgement previous message communication blocking algorithm involves repeatedly invoking algorithm different parameters determined particular pattern intuition behind choice pattern make minimal use heavier latency edges collecting much information possible near heavier latencies making use edge pattern derived according sequence recursively defined follows show sequence run particular pattern length guarantees node graph distance exchanged rumors one another overall pattern values parameter value perform protocol sequence calls varying parameters according known pattern lemma execution node weighted graph exchanged rumors nodes distance less proof proceed induction path length base case recall running subgraph induced edges latency node exchanged rumors distance neighbors inductive step suppose claim true running sequence node exchanged rumors nodes weighted distance prove claim consider various possibilities forming path length case path consists edges latencies distinguish two case exists node equidistant end points see figure induction hypothesis nodes would exchanged rumors node initial next node propagates rumors 
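Putting the pieces together, the guess-and-double wrapper with the flag-bit termination check reduces to a short loop. In the sketch below the dissemination attempt and the check are abstracted as callables, since their internals are spelled out above; the names are illustrative.

```python
def eid_unknown_diameter(run_eid, check_round):
    """Guess-and-double wrapper: try the dissemination routine with diameter
    estimate D_hat, run the flag-bit termination check, and double the
    estimate on failure.  `run_eid(D_hat)` performs one attempt and returns
    the per-node state; `check_round(state)` returns True iff no node ended
    with status `failed` (i.e. no flag bit was raised and no `failed`
    message was received during the verification broadcast)."""
    D_hat = 1
    while True:
        state = run_eid(D_hat)
        if check_round(state):   # everyone saw everyone: estimate sufficed
            return state, D_hat
        D_hat *= 2               # estimate too small: retry doubled
```

Because the estimate doubles, the failed attempts cost only a constant factor more than the final successful one, which is why the doubling strategy preserves the asymptotic bound.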
received path length path length figure case case node middle exists depicted figure initial node must exchanged rumors node due induction hypothesis invocation node propagates rumors gained also propagates rumors gained information travels final path length less edge length less path length less figure case case exists one edge latency value situation yield one following two case edge located one end path see figure induction hypothesis node would exchanged rumors initial gets know rumors also gets know rumors next node propagates rumors gained path length less edge length figure case case edge located two inner nodes path see figure case induction hypothesis node exchanged rumors whereas node exchanged rumors node initial node propagates rumors gained moreover propagates rumors gained rumors propagate final path length less path length less figure case lemma known diameter solving information dissemination executing sequence takes log time proof way sequence constructed observe recurrence relation using standard methods solve recurrence completes proof graph diameter known nodes nodes invoke solve information dissemination completeness also present algorithm called uses sequence invocations solve information dissemination graph diameter unknown algorithm similar flavour algorithm described section also makes use algorithm albeit different broadcasting technique calling rather broadcast repeat execute sequence call algorithm failed set default else terminate algorithm code vertex lemma algorithm takes log time solve information dissemination applying techniques similar section complexity easily shown case unknown diameter well unified upper bounds combining results run spanner algorithm parallel obtain unified upper bounds known unknown latencies cases however point single source broadcast works small message sizes whereas spanner algorithm reliance dtg also exchanging messages help spanner good robustness properties whereas inherently quite robust theorem exists randomized gossip algorithms solves information dissemination problem min log time latencies unknown min log time latencies known corollary exists randomized gossip algorithms solves information dissemination problem min log time latencies unknown min log time latencies known conclusion presented two different new concepts namely critical conductance average conductance characterize bottlenecks communication weighted graphs believe parameters useful variety applications depend connectivity question remains whether running time information dissemination improved using better spanner constructions efficient local broadcast save polylogarithmic factors recall unweighted case information dissemination protocols run polylogn time another interesting direction would development reliable robust algorithms regard another issue whether reduce number incoming messages round recently daum considered restricted model yielding interesting results would also interesting look bounds node allowed connections per round whether initiated node neighbor acknowledgment thank george giakkoupis helpful conversations useful ideas appendix dtg local broadcast protocol section describe detail dtg protocol originally developed well algorithm clear algorithm solves local broadcast keeps contacting new neighbors exchanged rumors neighbors author makes use binomial trees derive time complexity better explain working algorithm key idea used deriving time complexity show information propagated pipelined manner along binomial trees created node still active ith 
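The recursive definition of the sequence used by the alternative algorithm was lost in extraction; the generator below is one plausible reconstruction, consistent with the inductive proof cases above (two light half-paths meeting at a midpoint, or a single heavy edge bridged by the middle call) and with a recurrence of the form T(2t) = 2 T(t) + cost(B(2t)), which is the source of the extra logarithmic factor mentioned in the lemma.

```python
def schedule(tau):
    """Hypothetical reconstruction of the call pattern: executing B(w) for
    each w in schedule(tau) should let any two nodes within weighted
    distance tau exchange rumors.  The two recursive halves handle the two
    light half-paths; the middle B(tau) call bridges one heavy edge of
    latency up to tau.  Length satisfies L(tau) = 2 L(tau/2) + 1."""
    if tau <= 1:
        return [1]
    half = schedule(tau // 2)
    return half + [tau] + half

print(schedule(8))  # [1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, 4, 1, 2, 1]
```

Summing the costs of the calls in schedule(D) under this reading gives O(D log D)-type behaviour, matching the remark that this variant is unsuitable for graphs with large diameter.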
iteration binomial tree order depth see figure rooted furthermore shown two different nodes still active iteration vertex disjoint since formed joining two growth rate exponential limits number iterations log also node average needs contact log nodes nodes ith round thus overall complexity algorithm becomes case additional waiting time increases time complexity figure seen witness structures provides explanation node active particular iteration rooted particular node built recursively rounds progress essentially store information nodes communicated one another particular round viewed root node example figure labels edges denote time node higher level contacted lower level node observed root node root contacts nodes first level rounds according label nodes first level similarly contact nodes second level rounds according label observation also helps realization key idea node active ith round rooted nodes first level contact root previously busy contacting nodes second level nodes second level contact nodes first level busy contacting nodes third level figure edge labels shown pseudo code initial push sequence message propagated decreasing order connection round number observed root node given labels edges figure helping roots message nodes similarly initial pull sequence message nodes pipelined root subsequent sequence helps maintaining symmetry algorithm node learns node node also learns node finally collection rumors updated union rumors collected aforementioned sequences integer run modified dtg algorithm rather contains edges length lets denote algorithm algorithm presented node belonging runs parallel considered neighborhood comprising set nodes node neighbors link new neighbor push downto send rumors wait time receive rumors add received rumors pull send rumors wait time receive rumors add received rumors perform pull push algorithm references john augustine gopal pandurangan peter robinson scott roche eli upfal enabling robust efficient distributed computation dynamic networks ieee annual symposium foundations computer science focs berkeley usa pages surender baswana sandeep simple linear time randomized algorithm computing sparse spanners weighted graphs random structures algorithms stephen boyd arpita ghosh balaji prabhakar devavrat shah randomized gossip algorithms trans june milan robert tobias friedrich thomas sauerwald alexandre stauffer efficient broadcast random geometric graphs proceedings annual symposium discrete algorithms soda pages philadelphia usa keren bernhard haeupler jonathan kelner petar maymounkov global computation poorly connected world fast rumor spreading dependence conductance proceedings annual acm symposium theory computing stoc pages new york usa acm keren hadas shachnai fast information spreading graphs large weak conductance proceedings annual symposium discrete algorithms soda pages siam flavio chierichetti silvio lattanzi alessandro panconesi almost tight bounds rumour spreading conductance proceedings acm symposium theory computing stoc pages usa acm flavio chierichetti silvio lattanzi alessandro panconesi rumour spreading graph conductance proceedings annual symposium discrete algorithms soda pages usa siam sebastian daum fabian kuhn yannic maus rumor spreading bounded pages springer international alan demers dan greene carl hauser wes irish john larson scott shenker howard sturgis dan swinehart doug terry epidemic algorithms replicated database maintenance proceedings annual acm symposium principles distributed computing podc pages new york usa acm 
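The flattened DTG pseudocode above resists direct reconstruction, but its driving invariant, namely that every still-active node keeps linking to a neighbour it has not yet exchanged rumors with while merging full rumor sets, can be captured in a toy round-based sketch. The binomial-tree pipelining that yields the actual running-time bound is deliberately not modelled here, and all names are illustrative.

```python
def dtg_local_broadcast(adj, rumors, max_iters=64):
    """Toy sketch of the DTG idea: a node stays active while some neighbour
    has not been heard from (directly or indirectly); each active node links
    to one such neighbour per sweep and the pair merge both their rumor sets
    and their heard-from sets.  Terminates when every node has heard from
    all of its neighbours, i.e. local broadcast is complete."""
    heard = {u: {u} for u in adj}   # who u has (possibly indirectly) heard from
    for _ in range(max_iters):
        active = [u for u in adj if not set(adj[u]) <= heard[u]]
        if not active:
            break
        for u in active:
            v = next((x for x in adj[u] if x not in heard[u]), None)
            if v is None:
                continue            # covered earlier in this same sweep
            rumors[u] |= rumors[v]
            rumors[v] |= rumors[u]
            heard[u] |= heard[v] | {v}
            heard[v] |= heard[u] | {u}
    return rumors
```

The merging of heard-from sets is what models the "indirect exchange" that lets DTG terminate well before every node has contacted every neighbour directly.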
benjamin doerr mahmoud fouz tobias friedrich social networks spread rumors sublogarithmic time proceedings annual acm symposium theory computing stoc pages new york usa acm benjamin doerr mahmoud fouz tobias friedrich rumors spread quickly social networks commun acm june uriel feige david peleg prabhakar raghavan eli upfal randomized broadcast networks algorithms volume lecture notes computer science pages springer berlin heidelberg pierre fraigniaud george giakkoupis bit communication complexity randomized rumor spreading proceedings annual acm symposium parallelism algorithms architectures spaa pages usa acm gandhi mishra parthasarathy minimizing broadcast latency redundancy hoc networks networking transactions aug george giakkoupis tight bounds rumor spreading graphs given conductance proceedings international symposium theoretical aspects computer science stacs pages march george giakkoupis thomas sauerwald alexandre stauffer randomized rumor spreading dynamic graphs pages springer berlin heidelberg berlin heidelberg bernhard haeupler simple fast deterministic gossip rumor spreading proceedings annual symposium discrete algorithms soda pages siam bernhard haeupler dahlia malkhi optimal gossip direct addressing proceedings acm symposium principles distributed computing podc pages new york usa acm shlomo hoory nathan linial avi wigderson expander graphs applications bull amer math mark jerrum alistair sinclair conductance rapid mixing property markov chains approximation permanent resolved proceedings annual acm symposium theory computing stoc pages new york usa acm karp schindelhauer shenker vocking randomized rumor spreading foundations computer science proceedings annual symposium pages david kempe jon kleinberg alan demers spatial gossip resource location protocols proceedings annual acm symposium theory computing stoc pages new york usa acm damon devavrat shah computing separable functions via gossip proceedings annual acm symposium principles distributed computing podc pages new york usa acm calvin newport radio network lower bounds made easy distributed computing international symposium disc austin usa october proceedings pages sarwate dimakis impact mobility gossip algorithms infocom ieee pages april salil vadhan pseudorandomness foundations trends theoretical computer science vol
| 8 |
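The record above analyzes round-based push-pull rumor spreading, where the set of informed nodes grows like a binomial tree and full dissemination takes on the order of log n rounds, each node contacting one random neighbor per round. Below is a minimal simulation sketch of that round structure, assuming a synchronous model on the complete graph and uniform random neighbor choice; the function name `push_pull_rounds` and the parameter defaults are mine, not from the source.

```python
import math
import random

def push_pull_rounds(n: int, seed: int = 0) -> int:
    """Simulate synchronous push-pull rumor spreading on the complete
    graph K_n and return the number of rounds until all n nodes are
    informed."""
    rng = random.Random(seed)
    informed = {0}                       # node 0 initially holds the rumor
    rounds = 0
    while len(informed) < n:
        new_informed = set(informed)
        for v in range(n):
            u = rng.randrange(n - 1)     # uniform random neighbor of v
            if u >= v:                   # shift to exclude v itself
                u += 1
            if v in informed:            # push: v tells its chosen neighbor
                new_informed.add(u)
            elif u in informed:          # pull: v learns from its neighbor
                new_informed.add(v)
        informed = new_informed
        rounds += 1
    return rounds

if __name__ == "__main__":
    for n in (10**2, 10**3, 10**4):
        print(n, push_pull_rounds(n), round(math.log2(n), 1))
```

On the complete graph the informed set roughly doubles during the push-dominated phase and the uninformed set shrinks geometrically once pull dominates, which is why the printed round counts track log2(n).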
cooperative control systems locate source odor nov abhinav sinha rishemjit kaur ritesh kumar amol bhondekar work targets problem odor source localization systems hierarchical cooperative control put forward solve problem locating source odor driving agents consensus least one agent obtains information location source synthesis proposed controller carried hierarchical manner group decision making path planning control decision making utilizes information agents using conventional particle swarm algorithm information movement filaments predict location odor source predicted source location decision level utilized map trajectory pass information control level distributed control layer uses sliding mode controllers known inherent robustness ability reject matched disturbances completely two cases movement agents towards source consensus formation discussed herein finally numerical simulations demonstrate efficacy proposed hierarchical distributed control index source localization systems mas sliding mode control smc homogeneous agents cooperative control ntroduction overview inspiration odor source localization problem stems behavior biological entities mate seeking moths foraging lobsters prey tracking mosquitoes blue crabs aimed locating source volatile chemical behaviors long mimicked autonomous robot chemical source tracking attracted researchers around globe due applications civilian military domains plethora applications possible include detection forest fire oil spills release toxic gases tunnels mines gas leaks industrial setup search rescue victims clearing leftover mine armed conflict plume containing filaments odor molecules generally referred downwind trail formed consequence mixing contaminant molecules kind movement air dynamical optimization problem odor source localization effectively solved using multiple robots working cooperation obvious advantages leveraging multiagent systems mas increased probability success sinha school mechatronics robotics indian institute engineering science technology central scientific instruments organization csio india email kaur kumar bhondekar csio emails riteshkr amolbhondekar redundancy improved overall operational efficiency spatial diversity distributed sensing actuation motivation odor source localization three stage sensing maneuvering control reported literature odor source localization date back larcombe discussed applications nuclear industry considering chemical gradient based approach works relied heavily sensing part using techniques chemotaxis infotaxis anemotaxis fluxotaxis efficiency algorithms limited quality sensors manner used techniques also failed consider turbulence dominated flow resulted poor tracking performance algorithms reported maneuver agents include braitenberg style coli algorithm zigzag dung beetle approach silkworm moth style variants tremendous growth research attention towards cooperative control witnessed past decade addressed problem locating source odor hayes proposed distributed cooperative algorithm based swarm intelligence odor source localization experimental results proved multiple robots perform efficiently single autonomous robot particle swarm optimization pso algorithm proposed marques tackle odor source localization problems avoid trapping local maximum concentrations jatmiko proposed modified pso algorithms based electrical charge theory neutral charged robots used proposed distributed coordination control protocol based pso address problem noted simplified pso controllers type controller 
operating region gets limited global local best needs complicated obstacle avoidance algorithms results high energy expenditure also proposed cooperative control scheme coordinate multiple robots locate odor source particle filter used estimate location odor source based wind information movement trajectory planned finally cooperative control scheme proposed coordinate movement robots towards source motivated studies implemented robust powerful hierarchical cooperative control strategy tackle problem first layer group level information source via instantaneous sensing swarm intelligence obtained second layer designed maneuver agents via simplified silkworm moth algorithm third layer based cooperative sliding mode control information obtained first layer passed third layer reference tracking controller contributions major contributions paper summarized opposed existing works cooperative control locate source odor considered general formulation taking nonlinear dynamics mas account uncertain function zero problem reduces stabilizing integrator dynamics control layer designed paradigms sliding mode robust powerful control inherent robustness disturbance rejection capabilities reaching law well sliding manifold study nonlinear novel resulting smoother control faster reachability manifold use sliding mode controller also helps achieving finite time convergence opposed asymptotic convergence equilibrium point proposed control provides stability ensures robustness even presence bounded disturbances matched uncertainties odor propagation odor arrives packets leading wide fluctuations measured concentrations plumes also dynamic turbulent odor tends travel downwind direction wind provides effective information relative position source hence used wind information based measurement model describing movement filaments concentration information swarm intelligence locate source odor formation keeping agents locate source odor also demonstrated work paper organization introduction study section remainder work organized follows section provides insights preliminaries spectral graph theory sliding mode control section iii presents dynamics mas mathematical problem formulation followed hierarchical distributed cooperative control scheme section results discussions carried section followed concluding remarks section reliminaries spectral graph theory systems directed graph also known digraph represented throughout paper nonempty set finite number vertices nodes contained denotes directed edge represented weighted adjacency matrix possibility existence edge occurs iff vertex receives information supplied vertex hence termed neighbours set contains labels vertices neighbours vertex adjacency matrix laplacian matrix central consensus problem given degree matrix diagonal matrix diag whose entries directed path vertex vertex defines sequence comprising edges distinct vertices incidence matrix also diagonal matrix entries entry exists edge leader agent agent otherwise furthermore inferred path two distinct vertices uniquely determined however distinct node contains directed path every distinct node directed graph said spanning tree consequently matrix full rank physically agent modelled vertex node line communication two agents modelled directed edge sliding mode control sliding mode control smc known inherent robustness switching nature control used nullify bounded disturbances matched uncertainties switching happens hypergeometric manifold state space known sliding manifold surface hyperplane control drives 
system monotonically towards sliding surface trajectories emanate move towards hyperplane reaching phase system trajectories reaching hyperplane get constrained future time sliding phase thereby ensuring system dynamics remains independent bounded disturbances matched uncertainties order push state trajectories onto surface proper discontinuous control effort usm needs synthesized satisfying following inequality positive referred reachability constant usm usm motion state trajectories confined manifold known sliding sliding mode exists state velocity vectors directed towards manifold neighbourhood consideration manifold called attractive trajectories starting remain future time trajectories starting outside tend asymptotic manner hence sliding motion usm usm ueq solution generally referred equivalent control actual control applied system thought control must applied average maintain sliding motion mainly used analysis sliding motion iii dynamics ulti ystems roblem ormulation consider first order homogeneous mas interacting among environment directed topology interconnection information predicted location source odor instantaneous plume sensing available globally however local information obtained communication among agents whenever least one agent attains information interest governing dynamics first order homogeneous mas consisting agents described nonlinear differential equations usmi assumed locally lipschitz fairly large domain lipschitz constant denotes uncertain nonlinear dynamics agent also domain origin contained usmi state ith agent associated control respectively represents bounded exogenous disturbances enter system input channel problem odor source localization viewed cooperative control problem control laws usmi need designed conditions kxi kxi satisfied represents probable location odor source accuracy parameter ierarchical istributed ooperative ontrol cheme order drive agents towards consensus locate source odor propose following hierarchy group decision making layer utilizes concentration wind information predict location odor source final probable position source described oscillation centre according simple particle swarm optimization pso algorithm captures information wind denotes additional weighting coefficient remark arguments represent data captured instants sensors equipped agents receive data discrete instants noted tracking reference fed controller present detailed description obtaining simple pso algorithm commonly used practice following form upso inertia factor represent respective velocity position ith agent commonly used form pso also used type controller however disadvantages mentioned earlier use pso final controller pso control law upso described upso denotes previous best position denotes global best position neighbours ith agent time acceleration coefficients since every agent mas get information magnitude concentration via local communication position agent global best easily known idea pso compute oscillation centre arg arg max max max aij thus upso clearly controller proportional gain highlighted earlier order compute movement process single filament consists several order molecules modelled denotes position filament time represent mean airflow velocity random process model described without loss generality shall regard start time experiment denotes real position odor source assumption assume presence single stationary odor source thus implications remark require implemented instants hence remark accumulated average also considered possible filament 
releasing time distributed control control layer design robust powerful controller paradigms sliding mode worthy mention based instantaneous sensing swarm information different times agent take role virtual leader whose opinion needs kept agents provided controller reference tracked tracking error formulated terms graph theory reformulate error variable relationship viewed information noise hence point onward shall denote next formulate sliding manifold tanh therefore constructed path planning since detection information interest tied threshold value defined sensors next state updated taking threshold value account thus blueprints path planning described terms three types behavior surging ith agent receives data well threshold say clues location source detected predicted position source seen ith agent given xsi next state agent given mathematically xsi casting ith agent fails detect information particular instant next state obtained using following relation kxi xsi xsi nonlinear sliding manifold offering faster reachability surface represents speed convergence surface denotes slope nonlinear sliding manifold coefficient weighting parameters affect system performance forcing function taken sign small offset argument function remains non zero gain controller parameter facilitates additional gain tuning general novel reaching law contains nonlinear gain provides faster convergence towards manifold moreover reaching law smooth chattering free highly desirable mechatronic systems ensure safe operation theorem given dynamics mas connected directed topology error candidates sliding manifold stabilizing control law ensures accurate reference tracking consensus described usmi sign search exploration agents fail detect odor clues time segment time interval clues detected constraint wait time placed start experiment next state updated xsi random parameter standard deviation mean sup remark mentioned earlier ensures hence non singularity argument tanh always finite satisfies thus also invertible moreover non singularity established directly digraph contains spanning tree leader agent root proof write tanh tanh defined theorem simplified usmi using control brings state trajectories sliding manifold written usmi sign thus derivative lyapunov function candidate negative definite confirming stability sense lyapunov since ksi due nature arguments therefore together provide implications surface globally attractive ends proof esults discussions interaction topology agents represented digraph shown figure associated graph matrices described computer simulation performed assuming agent appears virtual leader agents making topology fixed directed study noted theory developed far extended case switching topologies shall dealt future concludes proof remark control practically implemented contain uncertainty term crucial analyze necessary sufficient conditions existence sliding mode control protocol used regard system sliding mode time system trajectories brought upon manifold constrained time thereafter sliding motion occurs theorem consider system described error candidates sliding manifold control protocol sliding mode said exist vicinity sliding manifold manifold attractive trajectories emanating outside continuously decrease towards stating alternatively reachability surface ensured reachability constant moreover stability guaranteed sense lyapunov gain designed sup proof let take account lyapunov function candidate taking derivative along system trajectories yield usmi fig topology agents connected agents 
following dynamics sin cos sin cos sin cos sin cos called reachability constant sup ksi ksi sin cos substituting control protocol sign study advection model given used simulate plume additive multiplicative disturbances initial conditions simulation taken large values far away equilibrium point time varying disturbance taken sin accuracy parameter maximum mean airflow velocity key design parameters mentioned table agents progressing towards source source info position agents direction movement agents towards source direction movement filaments released source true odor source time progression agents sec fig agents consensus locate source odor agents progressing towards source parallel formation position agents formation gap true odor source source info movement away source odor source location agent initial point agent initial point agent initial point agent initial point agent initial point agent terminal point agent terminal point agent terminal point agent terminal point agent terminal point time progression agents sec fig agents formation locate source odor tracking errors control signals norm error variables time sec fig norm tracking errors time sec fig control signals consensus position source sliding manifolds surface variables time sec fig sliding manifolds consensus table values design parameters used simulation figure shows agents coming consensus finite time locate source odor figure shows agents moving parallel formation locate odor source norm tracking errors depicted figure evident magnitude error small plot control signals consensus shown figure plot sliding manifolds shown figure oncluding remarks problem odor source localization mas dealt hierarchical manner work problem translates cooperative control problem wherein agents driven towards consensus locate true odor source finite time computer simulations confirmed proposed strategy faster provides accurate tracking even presence time varying disturbances eferences larcombe robotics nuclear engineering computer assisted teleoperation hazardous environments particular reference radiation fields united states graham trotman inc rozas morales vega artificial smell detection robotic navigation advanced robotics robots unstructured environments fifth international conference june genovese dario magni odetti self organizing behavior swarm intelligence pack mobile miniature robots search pollutants proceedings international conference intelligent robots systems vol jul buscemi prati sandini cellular robotics behaviour polluted environments proceedings international symposium distributed autonomous robotic systems russell laying sensing odor markings strategy assisting mobile robot navigation tasks ieee robotics automation magazine vol sep russell andrew odour detection mobile robots river edge usa world scientific publishing russell shepherd wallace comparison reactive robot chemotaxis algorithms robotics autonomous systems vol online available http vergassola villermaux shraiman infotaxis strategy searching without gradients nature vol online available https farrell pang plume mapping via hidden markov methods ieee transactions systems man cybernetics part cybernetics vol dec pang farrell chemical plume source localization ieee transactions systems man cybernetics part cybernetics vol oct zarzhitsky approach chemical source localization using mobile robotic swarms dissertation braitenberg vehicles experiments synthetic psychology boston usa mit press lytridis virk rebour kadar odorbased navigational strategies mobile agents 
adapt vol apr online available http ishida suetsugu nakamoto moriizumi study autonomous mobile sensing system localization odor source using gas sensors anemometric sensors sensors actuators physical vol online available http russell chemical source location robomole project australian robotics automation association marques almeida electronic odour source localization international workshop advanced motion control proceedings cat april marques nunes almeida mobile robot navigation thin solid films vol proceedings international school gas sensors conjunction european school nose network online available http ren beard consensus seeking multiagent systems dynamically changing interaction topologies ieee transactions automatic control vol may chen ren kurths zheng distributed higher order consensus protocols multiagent dynamical systems ieee transactions circuits systems regular papers vol aug hayes martinoli goodman swarm robotic odor localization optimization validation real robots robotica vol online available http kennedy eberhart particle swarm optimization neural networks ieee international conference vol nov marques nunes almeida particle swarmbased olfactory guided search autonomous robots vol jun online available https jatmiko sekiyama fukuda mobile robot odor source localization dynamic obstacles environment theory simulation measurement ieee computational intelligence magazine vol may liu qiu distributed architecture two layers odor source localization systems ieee congress evolutionary computation july han system odor source localization iecon annual conference ieee industrial electronics society nov fan chung spectral graph theory ser cbms regional conference series mathematics ams cbms vol online available http david young vadim utkin umit ozguner control engineer guide sliding mode control ieee transactions control systems technology vol may cao meng zeng consensus based distributed summation algorithm gasleakage source localization using wireless sensor network proceedings chinese control conference july
| 3 |
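The second record's group-decision layer runs a simplified particle swarm step over the agents' sensed concentrations to predict an oscillation centre for the odor source. The following is a minimal sketch of that PSO ingredient only, assuming a toy smooth concentration field in place of the paper's filament-based plume and wind model; `SOURCE`, `concentration`, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
SOURCE = np.array([8.0, 3.0])                 # hypothetical odor source

def concentration(p):
    """Toy stand-in for sensed odor concentration (not the paper's
    filament model): a smooth bump peaked at the source."""
    return np.exp(-np.sum((p - SOURCE) ** 2, axis=-1) / 4.0)

n, dim = 5, 2
pos = rng.uniform(0.0, 10.0, (n, dim))        # agent positions
vel = np.zeros((n, dim))
pbest, pval = pos.copy(), concentration(pos)  # personal bests

for _ in range(100):
    gbest = pbest[np.argmax(pval)]            # best position among neighbors
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    c = concentration(pos)
    better = c > pval
    pbest[better], pval[better] = pos[better], c[better]

print("estimated source:", pbest[np.argmax(pval)])
```

Note that, as the record stresses, PSO is not used as the final controller here; the predicted centre is only handed down as the reference trajectory that the sliding-mode layer tracks.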
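The same record's control layer drives first-order agents onto a nonlinear sliding manifold built from a graph-based tracking error, with a discontinuous reaching law rejecting bounded matched disturbances. Below is a sketch of one plausible reading of that scheme, assuming three agents on a line digraph, a scalar state per agent, and a tanh-augmented manifold s = e + lambda*tanh(e); the topology, gains, reference, and disturbance are mine.

```python
import numpy as np

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])            # adjacency of an assumed line digraph
B = np.diag([1., 0., 1.])               # agents 1 and 3 hear the virtual leader
M = np.diag(A.sum(axis=1)) - A + B      # Laplacian plus leader incidence

k, lam, dt = 2.0, 1.0, 1e-3             # k must dominate the disturbance bound
x = np.array([5.0, -3.0, 8.0])          # initial agent states
x_ref = 4.0                             # predicted source coordinate (from PSO)

for step in range(20000):
    t = step * dt
    e = M @ x - B @ np.full(3, x_ref)   # cooperative tracking error
    s = e + lam * np.tanh(e)            # nonlinear sliding manifold
    u = -k * np.sign(s)                 # discontinuous reaching law
    d = 0.5 * np.sin(t)                 # bounded matched disturbance
    x = x + dt * (u + d)                # first-order agent dynamics

print(x)                                # all entries chatter tightly around 4.0
```

With k strictly larger than the disturbance bound, V = (1/2) e^T M^{-1} e decreases at a rate proportional to the l1-norm of e, giving the finite-time convergence the record emphasizes; the tanh term only steepens the manifold near the origin and never changes the sign of s, and e = 0 forces every state to equal the reference because the Laplacian annihilates the all-ones vector.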
coresets dependency networks alejandro molina first last dortmund department dortmund germany alexander munteanu first last dortmund oct department dortmund germany kristian kersting last darmstadt dept centre cogsci darmstadt germany abstract many applications infer structure probabilistic graphical model data elucidate relationships variables train graphical models massive data set paper show construct data sets used proxy original data provably bounded worst case gaussian dependency networks dns cyclic directed graphical models gaussians parents variable markov blanket specifically prove gaussian dns admit coresets size independent size data set unfortunately extend dns members exponential family general prove poisson dns admit small coresets despite result provide argument coreset construction dns still work well practice count data corroborate theoretical results empirically evaluated resulting core dns real data sets results demonstrate significant gains naive even case count data introduction artificial intelligence machine learning achieved considerable successes recent years number disciplines rely data ubiquitous great value understanding data building probabilistic graphical models elucidate relationships variables big data era however scalability become crucial useful machine learning approach paper consider problem training graphical models particular dependency networks heckerman massive data sets cyclic directed graphical models parents variable markov blanket proven successful various tasks collaborative filtering heckerman phylogenetic analysis carlson genetic analysis dobra phatak network inference sequencing data allen liu traffic well topic modeling hadiji specifically show dependency networks one prominent type distribution statistical machine coresets size independent size data set coresets weighted subsets data guarantee models fitting also provide good fit original data set studied clustering badoiu feldman lucic classification reddi regression drineas dasgupta geppert smallest enclosing ball problem badoiu clarkson feldman agarwal sharathkumar refer phillips recent extensive literature overview contribution continues line research generalizes use coresets probabilistic graphical modeling unfortunately coreset result extend dependency networks members exponential family general prove dependency networks poisson random variables allen liu hadiji admit sublinear size coresets every single input point important model needs appear coreset important negative result since count primary target poisson center many scientific endeavors citation counts web page hit counts counts procedures medicine count births deaths census counts words document count gamma rays physics modeling one event number times certain lab test yields particular result provide idea number potentially invasive procedures need performed patient thus elucidating relationships variables yield great insights massive count data therefore despite result provide argument coreset construction dependency networks still work well practice count data corroborate theoretical results empirically evaluated resulting core dependency networks cdns several real data sets results demonstrate significant gains naive even count data proceed follows review dependency networks dns prove gaussian dns admit sublinear size coresets discuss possibility generalize result count data concluding illustrate theoretical results empirically dependency networks existing machine learning literature graphical models dedicated binary 
multinominal certain classes continuous gaussian random variables undirected models aka markov random fields mrfs ising binary random variables potts multinomial random variables models found lot applications various fields robotics computer vision statistical physics among others whereas mrfs allow cycles structures directed models aka bayesian networks bns required acyclic directed relationships among random variables dependency networks dns focus present concepts directed undirected worlds due heckerman specifically like bns dns directed arcs allow networks cycles arcs akin mrfs makes dns quite appealing many applications build multivariate models univariate distributions allen liu yang hadiji still permitting efficient structure learning using local estimtatiors gradient tree boosting generally data fully observed learning done locally level conditional probability distributions variable mixing directed indirected needed based local distributions samples joint distribution obtained via gibbs sampling indeed gibbs sampling neglects question consistent joint probability distribution instead makes use local distributions generated samples however often sufficient answer many probability queries formally let denote random vector instantiation dependency network pair directed possibly cyclic graph node corresponds random variable set directed edges edge models dependency variables edge variables conditionally independent given variables indexed network refer nodes edge pointing parents denoted pai set conditional probability distributions associated variable pai example local model consider poisson conditional probability distributions illustrated fig left pai highlights fact mean functional form dependent parents often refer simply construction local conditional probability distribution similar multinomial bayesian network case however case dns graph necessarily acyclic typically infinite range hence represented using finite table probability values finally full joint distribution simply defined product local distributions also called pseudo likelihood poisson case reads note however guarantee existence consistent joint distribution joint distribution conditionals bengio however recently proven existence consistent distribution per given evidence known closed form long unordered gibbs sampler converges core dependency networks argued learning dependency networks dns amounts determining conditional probability distributions given set training instances representing rows data matrix variables assuming pai parametrized generalized linear model glm mccullagh nelder amounts estimating parameters glm associated variable since completely determines local distributions pai possibly depend variables network dependencies define structure network view training dns fitting glms data allows develop core dependency networks cdns sample coreset train certain members glm family sampled corest relative frequency fit data data fit number goals figure illustration dependency networks dns using poissons left number goals scored soccer games follows poisson distribution plot shows distribution home goals season german bundesliga home team home team scored average goals per game right example structure poisson conditional distribution count variable given neighbors poisson distribution similar bayesian network poisson directed however also contains cycles best viewed color coreset possibly weighted usually considerably smaller subset input data approximates given objective function candidate solutions 
definition let set points universe let set candidate solutions let measurable function set introduce formal framework need towards design coresets learning dependency networks useful structural property based objective loss functions concept embedding definition embedding embedding columnspace matrix construct sampling matrix forms embedding constant probabilty following way let orthonormal basis columnspace basis obtained singular value decomposition svd data matrix let rank rank define leverage scores fix sampling size parameter log sample input points probability min reweight contribution loss function note sum squares loss corresponds defining diagonal sampling matrix sii probability sii otherwise also note expected number samples log also holds constant probability markov inequality moreover give intuition works note fixed significantly stronger property forming embedding according definition follows matrix approximation bound given rudelson vershynin drineas lemma let input matrix rank let sampling matrix constructed stated sampling size parameter log forms embedding columnspace constant probability proof let svd theorem drineas exists absolute constant log log used fact orthonormality last inequality holds choice log large enough absolute constant since log log log log log log log log log log log application markov inequality rescaling assume constant probability show implies embedding property end fix first inequality follows submultiplicativity second rotational invariance spectral norm finally conclude proof inequality question arises whether better log one show reduction coupon collectors theorem lower bound log matching upper bound dependency hard instance orthonormal matrix scaled canonical basis stacked times leverage scores equal implying uniform sampling distribution probability basis vector rank preserving sample must comprise least one exactly coupon collectors theorem coupons lower bound log motwani raghavan fact sampling without replacement change since reduction holds arbitrary large creating sufficient multiple copies element simulate sampling replacement tropp know constant probability randomness construction algorithm satisfies embedding property given input matrix structural key property show actually coreset gaussian linear regression models dependency networks consider gaussian dependency network gdn collection gaussian linear regression models arbitrary digraph structure heckerman logarithm likelihood besag model given maximum likelihood estimate obtained maximizing function respect equivalent minimizing gdn loss function theorem given embedding columnspace constructed gdn loss function proof fix arbitrary consider affine map defined clearly extends argument dimensions inserting entry position leaving entries original order let note vector thus triangle inequality universal quantifier definition guarantee claim follows substituting identity noteworthy computing one single coreset columnspace sufficient rather computing coresets different subspaces spanned theorem straightforward show minimizer found coreset good approximation minimizer original data corollary given gdn loss function let holds min proof let first third inequalities direct applications coreset property second holds optimality coreset last follows moreover coreset affect inference within gdns recently shown bayesian gaussian linear regression models entire multivariate normal distribution parameter space approximately preserved embeddings geppert generalizes implies coreset yields useful pointwise 
approximation markov chain monte carlo inference via random walks like sampler heckerman negative result coresets poisson dns naturally following question arises sublinear size coresets exist dependency networks exponential family general unfortunately answer indeed sublinear size coreset simpler problem poisson regression implies result poisson dns show formally reduction communication complexity problem known indexing end recall negative poisson regression mccullagh nelder winkelmann exp theorem let data structure approximates likelihood queries poisson regression exp requires bits storage proof reduce indexing problem known randomized communication complexity jayram alice given vector produces every points denote nth unit roots plane vertices regular radius cos canonical order corresponding counts set builds sends size bob whose task guess bit chooses query cos note affine hyperplane separates scaled unit roots since passes exactly mod mod also points within distance construction consequently hyperplane thus exist cost exp expensive halfspace distance exactly cos cos cost bounded exp exp exp given bob distinguish two cases based data structure deciding whether strictly smaller larger exp consequently since solves indexing problem note bound given bit complexity restricting data structure sampling based coreset assuming every data point expressed log bits means still lower bound logn samples corollary every sampling based coreset poisson regression approximation factor exp theorem requires least logn samples point seems likely similar argument used rule constant approximation algorithm remains open problem core dns count data still work far quite pessimistic view extending cdns beyond gaussians gaussian setting loss measured squared euclidean distance number important points significantly large leverage scores bounded essentially implicit original early works drineas explicitly formalized later langberg schulman clarkson woodruff crucial understand inherent property norm function thus holds arbitrary data poisson glm contrast shown loss function come properties scratch constructed worst case scenario basically every single input point important model needs appear coreset usually case statistical models data assumed generated generating distribution fits model assumptions consider instance data reduction gaussian linear regression via leverage score sampling uniform sampling shown given data follows model assumptions gaussian distribution two approaches behave similarly put another way leverage scores quite uniform presence outliers generated heavier tails tdistributions leverage scores increasingly outperform uniform sampling poisson model poi exp though standard model count data suffers inherent limitation equidispersed data since exp count data however often overdispersed especially large counts due unobserved variables problem specific heterogeneity poisson model known inferior data specifically follows poisson model turns powerful modeling effects captured simple poisson model wide applications instance econometric elasticity problems review poisson model count data winkelmann poi exp exp natural choice parameters distribution case exp exp exp follows exp exp exp constant independent controls amount overdispersion taking limit arrive simple model since distribution tends deterministic dirac delta distribution puts mass inference might aim poisson model directly zhou performed maximum likelihood estimation simple poisson model latter provides consistent estimator long mean function 
correctly specified even higher moments possess limitations inherent simple poisson model winkelmann summing review count modeling perspective learn preserving loglinear mean function poisson model crucial towards consistency estimator moreover modeling counts model gives intuition leverage score sampling capture underlying linear model accurately poisson model follows distribution thus holds cdn uniform full training data sample size percentage cdn uniform full training data sample size percentage negative log pseudo likelihood cdn uniform full cdn uniform full training data sample size percentage cdn uniform full log time hours log time minutes log rmse root mean square error log rmse root mean square error negative poisson pseudo negative gaussian pseudo training data sample size percentage cdn uniform full training data sample size percentage rmse training data sample size percentage training time figure performance lower better gaussian cdns mnist upper row poisson cnds traffic dataset lower row shown negative log pseudo likelihood left squared error loss middle well training time right different proportions data sampled axis please note jump one see cdns blue quickly approach predictive performance full dataset full black uniform sampling uniform red perform well cdns moreover cdns orders magnitude faster dns full dataset scale similar uniform sampling also supported vertical lines denote mean performances left better top axes best viewed color independence observations implies omitting bias intercept term cast notice yields ordinary least squares problem defined columspace still missing piece argumentation previous section used coreset construction embedding columnspace whole data set including dependent variable face two problems first implicitly given data explicitly available second vector derived setting might different instances fortunately shown via complicated arguments drineas sufficient good approximation sampling done obliviously dependent variable intuition comes fact loss point subspace expressed via projection onto subspace spanned residual projection good approximation subspace implicitly approximates projection fixed vector applied residual vector orthogonal projection solves first problem since necessary subspace embedding second issue addressed increasing sample size factor log boosting error probability taking union bound sample portion mnist gcdn gudn traffic pcdn pudn table comparison empirical relative error lower better best results per dataset bold gaussian gcdns poisson pcdns cdns recover model well fraction training data uniformly sampled dns udns lag behind sample size drops empirical illustration intention corroborate theoretical results investigating empirically following questions performance cdns compare dns access full training data set uniform sample training data set empirical error behave according sample sizes coresets affect structure recovered aim implemented dns python calling experiments ran linux machine cores gpus ram benchmarks mnist traffic data considered two datasets first experiment used data set handwritten labeled digits employed training set consisting images pixels total measurements trained gaussian dns second data set considered contains traffic count measurements selected roads around city cologne germany ide consists timestamped measurements taken sensors total measurements dataset trained poisson dns dataset performed fold training full full using data leverage score sampling coresets cdns uniform samples uniform different 
sample sizes compared predictions made dns time taken train predictions mnist dataset clipped predictions range dns traffic dataset computed predictions bxc every measurement rounded largest integer less equal fig summarizes results one see cdns outperform dns trained full data orders magnitude faster compared uniform sampling coresets competitive actually seen traffic dataset cdns predictive power optimal model using full data line mahoney observed coresets implicitly introduce regularization lead robust output table summarizes empirical relative errors dns dns trained data cdns clearly recover original model fraction training data overall answers affirmatively relationship elucidation investigated performance cdns recovering graph structure word interactions text corpus purpose used http https skill item loss component rotation direction tree cell digit map skill set document item disparity object pca oscillator neuron distance pyramid tangent loss component estimator dialogue routing saliency road iiii policy context fuzzy option wavelet speaker user control call star letter delay building attractor controller subscriber block eeg neural evidence potential classifier spike channel instruction rule stress rules chip net lesion circuit light rotation direction tree cell digit map skill set document item disparity object pca oscillator neuron distance pyramid tangent estimator dialogue routing saliency road iiii policy context fuzzy option wavelet speaker user control call star letter delay building attractor controller subscriber block eeg neural evidence potential classifier spike channel instruction rule stress rules chip net lesion circuit light analog set document loss component disparity object estimator pca oscillator neuron dialogue distance routing pyramid saliency tangent road iiii policy context fuzzy option wavelet speaker user control call star letter delay building attractor controller subscriber block eeg neural evidence potential classifier spike channel instruction rule stress rules chip net lesion circuit light rotation direction tree cell digit map analog analog gaussian cdn poisson cdn skill item loss component rotation direction tree cell road digit map routing policy object pca neuron pyramid skill set document item disparity estimator oscillator distance dialogue saliency tangent context user iiii fuzzy option wavelet speaker control call star letter delay building attractor controller subscriber block eeg neural evidence potential classifier spike channel instruction rule stress rules chip net lesion circuit light analog loss component rotation direction tree cell road digit map routing policy object pca neuron pyramid skill set document item disparity estimator oscillator distance dialogue saliency tangent context user iiii fuzzy option wavelet speaker control call star letter delay building attractor controller subscriber block eeg neural evidence potential classifier spike channel instruction rule stress rules chip net lesion circuit light analog loss component rotation direction tree cell road digit map routing policy set document object pca neuron pyramid disparity estimator oscillator distance dialogue saliency tangent context user iiii fuzzy option wavelet speaker control call star letter delay building attractor controller subscriber block eeg neural evidence potential classifier spike channel instruction rule stress rules chip net lesion circuit light analog figure elucidating relationships random variables shown positive dependency structures gaussian top 
poisson bottom cdns nips different learning sampling sizes using left middle right edges show top thresholded positive coefficients glms colors edges represent modularity one see cdns elucidate relationships among words make semantically sense approach structure learned using full dataset quantitative assessment see tab best viewed color dataset contains documents vocabulary words considered frequent words fig illustrates results qualitatively shows three cdns sampling sizes gaussians top log transformation poissons bottom cdns capture well gist nips corpus table confirms quantitatively shows frobenius norms dns cdns capture gist better naive uniform sampling answers affirmatively summarize empirical results answers questions show benefits cdns conclusions inspired question train graphical models massive dataset studied coresets estimating dependency networks dns established first rigorous guarantees obtaining compressed gaussian dns large data sets proved worstcase impossibility results coresets poisson dns review poisson modeling counts provided deep insights coreset construction still performs well count data practice sample portion udn gaussian cdn poisson gaussian poisson table frobenius norm difference adjacency matrices lower better recovered dns trained full data trained uniform subsample udn resp coresets cdns training data best results per statiscal type bold cdns recover structure better udns experimental results demonstrate resulting core dependency networks cdns achieve significant gains naive even case count data making possible learn models much larger datasets using hardware cdns provide several interesting avenues future work conditional independence assumption opens door explore hybrid multivariate models variable potentially come different glm family link function massive data sets used hint independencies among variables multivariate setting making useful many large data applications generally results may pave way establish coresets deep models using close connection dependency networks deep generative stochastic networks bengio networks poon domingos molina well statistical models build multivariate distributions univariate ones yang acknowledgements work supported deutsche forschungsgemeinschaft dfg within collaborative research center sfb providing information analysis projects references pankaj agarwal sharathkumar streaming algorithms extent problems high dimensions algorithmica doi url https genevera allen zhandong liu local poisson graphical model inferring networks sequencing data ieee transactions nanobioscience issn mihai badoiu kenneth clarkson smaller balls proc soda pages mihai badoiu kenneth clarkson optimal balls computational geometry doi url https mihai badoiu sariel piotr indyk approximate clustering via proceedings stoc pages bengio laufer alain yosinski deep generative stochastic networks trainable backprop proc icml pages julian besag statistical analysis data journal royal statistical society series jonathan carlson zabrina brumme christine rousseau chanson brumme philippa matthews carl myers kadie james mullins bruce walker richard harrigan philip goulder david heckerman phylogenetic dependency networks inferring patterns ctl escape codon covariation gag plos computational biology kenneth clarkson david woodruff low rank approximation regression input sparsity time proc stoc pages anirban dasgupta petros drineas boulos harb ravi kumar michael mahoney sampling algorithms coresets regression siam journal computing doi url https adrian dobra variable 
selection dependency networks genomewide data biostatistics petros drineas michael mahoney muthukrishnan sampling algorithms regression applications proc soda pages url http petros drineas michael mahoney muthukrishnan cur matrix decompositions siam journal matrix analysis applications doi url https dan feldman matthew faulkner andreas krause scalable training mixture models via coresets proc nips dan feldman melanie schmidt christian sohler turning big data tiny data coresets pca projective clustering proc soda pages dan feldman alexander munteanu christian sohler smallest enclosing ball probabilistic data proc socg pages doi url http leo geppert katja ickstadt alexander munteanu jens quedenfeld christian sohler random projections bayesian regression statistics computing fabian hadiji alejandro molina sriraam natarajan kristian kersting poisson dependency networks gradient boosted models multivariate count data mlj sariel simple algorithm maximum margin classification revisited url http arxiv sariel dan roth dav zimak maximum margin coresets active noise tolerant learning proc ijcai pages heckerman chickering meek rounthwaite kadie dependency networks density estimation collaborative filtering data visualization journal machine learning research christoph ide fabian hadiji lars habel alejandro molina thomas zaksek michael schreckenberg kristian kersting christian wietfeld lte connectivity vehicular traffic prediction based machine learning approaches proc ieee vtc fall jayram ravi kumar sivakumar communication complexity hamming distance theory computing doi url https michael langberg leonard schulman universal integrals proc soda mario lucic olivier bachem andreas krause strong coresets hard soft bregman clustering applications exponential family mixtures proc aistats pages ping michael mahoney bin statistical perspective algorithmic leveraging jmlr url http michael mahoney randomized algorithms matrices data foundations trends machine learning doi url https peter mccullagh john nelder generalized linear models chapman hall alejandro molina sriraam natarajan kristian kersting poisson networks deep architecture tractable multivariate poisson distributions proc aaai rajeev motwani prabhakar raghavan randomized algorithms cambridge univ press isbn aloke phatak harri kiiveri line harder clemmensen william wilson netrave constructing dependency networks using sparse linear regression bioinformatics jeff phillips coresets sketches handbook discrete computational geometry hoifung poon pedro domingos networks new deep architecture proc uai sashank reddi alexander smola communication efficient coresets empirical loss minimization proc uai pages mark rudelson roman vershynin sampling large matrices approach geometric functional analysis journal acm doi url http joel tropp improved analysis subsampled randomized hadamard transform advances adaptive data analysis doi url https rainer winkelmann econometric analysis count data springer edition isbn eunho yang pradeep ravikumar genevera allen zhandong liu graphical models via univariate exponential family distributions jmlr mingyuan zhou lingbo david dunson lawrence carin lognormal gamma mixed negative binomial regression proceedings icml url http pdf
| 2 |
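The third record's Gaussian result builds its coreset by sampling rows in proportion to their leverage scores and reweighting kept rows by 1/sqrt(p_i). Here is a compact sketch of that construction with a least-squares sanity check, assuming the dependent column is simply stacked into the sampled matrix (the record notes sampling obliviously to it with an inflated sample size also works); the constants, sizes, and the name `leverage_score_coreset` are illustrative.

```python
import numpy as np

def leverage_score_coreset(A, eps=0.3, c=10.0, rng=None):
    """Keep row i of A with probability p_i = min(1, k * l_i / d), where
    l_i = ||U_i||^2 are leverage scores from an orthonormal basis U of
    A's column space, and rescale kept rows by 1/sqrt(p_i); assumes A
    has full column rank so the scores sum to d."""
    rng = rng or np.random.default_rng(0)
    n, d = A.shape
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    lev = np.sum(U ** 2, axis=1)                 # leverage scores, sum ~ d
    k = int(c * d * np.log(d + 1) / eps ** 2)    # O(d log d / eps^2) samples
    p = np.minimum(1.0, k * lev / d)
    keep = rng.random(n) < p
    return A[keep] / np.sqrt(p[keep])[:, None], keep

rng = np.random.default_rng(1)
X = rng.standard_normal((20000, 6))
y = X @ rng.standard_normal(6) + 0.1 * rng.standard_normal(20000)
C, keep = leverage_score_coreset(np.hstack([X, y[:, None]]))
w_full, *_ = np.linalg.lstsq(X, y, rcond=None)
w_core, *_ = np.linalg.lstsq(C[:, :-1], C[:, -1], rcond=None)
print(keep.sum(), "rows kept;", np.linalg.norm(w_full - w_core))
```

Because one subspace embedding of the shared data matrix suffices, all d local regressions of the dependency network can be fit on the same sampled rows rather than one coreset per variable.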
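For the dependency-network objective itself, the Gaussian pseudo-log-likelihood decomposes into one linear regression per variable with the remaining variables as candidate parents, which is what makes the single shared coreset usable. A minimal estimator sketch follows; the record frames each conditional as a GLM, and this covers only the identity-link Gaussian case.

```python
import numpy as np

def fit_gaussian_dn(X):
    """Fit a Gaussian dependency network by regressing each column j of
    X on all other columns; W[i, j] is the weight of parent i in the
    conditional mean of variable j, with a zero diagonal."""
    n, d = X.shape
    W = np.zeros((d, d))
    for j in range(d):
        rest = np.delete(np.arange(d), j)
        beta, *_ = np.linalg.lstsq(X[:, rest], X[:, j], rcond=None)
        W[rest, j] = beta
    return W

def gdn_loss(X, W):
    """Sum of squared residuals over all d conditionals -- the quantity
    the coreset preserves up to a (1 +/- eps) factor."""
    return sum(np.sum((X[:, j] - X @ W[:, j]) ** 2) for j in range(X.shape[1]))
```

For Poisson conditionals the same decomposition holds with an exp link, but the record's lower bound shows no analogously small coreset can exist in the worst case, so any sampling scheme there is a heuristic justified only under model assumptions on the counts.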
proc eur conf python science euroscipy catos computer aided system jinook apr animal behavioral biology several cases autonomous system would useful observation certain species continuously documenting specific events happen irregularly longterm intensive training animals preparation behavioral experiments training testing animals without human interference eliminate potential cues biases induced humans primary goal study build system named catos computer aided system could used situations proof concept system built tested pilot experiment cats trained press three buttons differently response three different sounds human speech receive food rewards system built use months successfully training two cats one cat learned press particular button three buttons obtain food reward percent correctness index training animal observing automatic device ntroduction often case animal behavioral biology large amount human resources time data storage video recordings required animal observation training representative examples cases observation certain species continuously monitoring specific events occur irregularly behavior certain species time period specific time period nocturnal behaviors investigated certain experiments require prolonged training period sometimes year type experiment requires reliable responses may correspond usual behavior patterns animals tasks therefore training may require long period time subject ready tested additionally long periods human supervised training introduce unintended cues biases animals first case autonomous system observing animals save human resources reduce amount data storage reduced amount data also conserve types human resources investigation maintenance data attempts build autonomous observing surveillance systems fields biology kritzler work corresponding author cognitive biology university vienna jinook article discopyright tributed terms creative commons attribution license permits unrestricted use distribution reproduction medium provided original author source credited http security systems belloto vallejo instance also commercial products surveillance systems various degrees automation incorporating artificial intelligence however intelligence system difficult apply specific systems novel situations without considerable adjustments second case autonomous system prolonged intensive training also save human resources eliminate potential cues biases caused humans training autonomous system extension traditional operant conditioning chambers many modern elaborated versions developed used markham takemoto kangas steurer fagot bonte however many previous devices use commercial software also possess observational features developed current project would useful relatively modularized system could customized observation training experimentation animal subjects various species catos system built present study fulfills necessities difference previous systems catos computer aided system present work animals captured transported separated space specific time order trained disadvantages separating animals primates include stress animals separated group moved usual confines risky catching procedure animal human fagot bonte similar arguments apply animal species especially social automatic learning device monkeys aldm described fagot bonte similar trainer aspect catos described present work catos different following features first aimed opensource based modular easily adjusted adopted different species experiments another feature catos equipped various observational 
features including visual auditory recording recognition video camera microphone make system able interact subjects reacting immediately subject motion detection camera sound recognition microphone catos offer following advantages system flexible terms adjustability extendibility various projects species proc eur conf python science euroscipy software software hardware components modularized much possible thus system reassembly researchers animal behavioral biology practical system various observational features applicable broad range animal species observational purposes system perform continuous monitoring record video sound set particular conditions fulfilled would reduce amount data produced procedure system actuators react certain situations allows act human designs procedure adjusting parameters modules actual performance done system way system could help reducing amount time required training eliminating might induced human interferences system animal transported certain space separated group training animals able choose start trial two catos prototypes built study first build catos pushbuttons main input device cats second build main input device first build initial attempt build test system second build final product study basic structures two builds less differences second version improved functions uses touchscreen instead pushbuttons first build catos tested domestic cats felis catus train press three different buttons differently depending auditory stimuli three different human speech sounds final goal training investigate human speech perception cats doubt many animal species recognize words human speech examples speech perception dogs chimpanzees found work kaminski heimbauer respectively cases animals even properly produce words specific purposes example speech perception production parrot found work pepperberg despite findings ongoing debate whether perceptual mechanisms used speech recognition humans animals fitch investigate issue animals trained show different reliable responses different human speech sounds test features human speech necessary different animal species understand thus final aim training study would obtain cats showing different responses different human speech sounds statistical significance percent reaching final goal several smaller steps goals required fig overall system diagram rief description catos omputer ided raining bserving ystem overall system composed combination software hardware components software components mainly composed python script named version program microcontroller runs necessary processes communicates microcontroller program microcontroller program operates sensors actuators communicates program hardware components composed various devices directly connected computer via usb cables devices gpio general purpose input output pins therefore connected microcontroller microcontroller connected computer via usb cable hardware devices directly connected via usb cables accessed using various software modules imported program access devices using gpio pins performed microcontroller program simply communicates microcontroller program via serial connection sending commands actuators receiving values sensors software system called agent animals software build helps many external libraries opencv starts seven processes launched using multiprocessing package python runs user terminates program multiprocessing used heavy calculation image processing multiple webcams concerned number processes changed turned processes include process camera 
process process process schema process process figure even though processes quite simple tasks separated order prevent interfering becoming bottleneck system process visual auditory sensory motor information simultaneously recognize change environment fig respond properly output data captured video input images recorded wav files csv files trial results log file temporarily stored output folder daily session finished output files archiving process include restricted generating movies generating images movement analysis labeling sound files moving different types files categorized subfolders archiving folder named timestamp besides combining modules implementing common functions one python program implemented facilitate process analyzing recorded data program called dataviewer based wxpython gui toolkit matplotlib drawing graphs figure loads log file result csv comma separated values file containing results trial csv files movie files wav files one folder containing data collected one session day video clip jpeg image showing movements blobs circles image represent positions blobs color represents black corresponding beginning movie white end movie line connecting multiple circles means blobs occurred time another feature program ability generate graph selected sessions archive folder contains data session select sessions button clicked window appears selecting multiple folders result data selected archive folder drawn graph using matplotlib visualizing data certain period helps trainer experimenter quickly assess current status training procedure two feeders used study device mainly comprising arduino microcontroller refer http microcontroller servomotor frame encasing whole feeder feeder variants work similar way rotating servomotor certain number degrees although second feeder shows better performance terms consistent amount fig automatic feeder fig circuit microcontroller food released due usage archimedes screw initially estimate amount food left food container obtained using distance sensor feature discarded second build since distance information sensor accurate enough application second feeder confirms emission food reward via piezoelectric sensor positioned right archimedes screw figure communication arduino chip main computer accomplished using arduino module program circuit figure temperature sensor measures temperature inside protective wooden platform photocell sensor measures ambient light level light bulb turned photocell sensor indicates ambient light level threshold two fans turned temperature sensor indicates temperature high platform piezoelectric sensor read servomotor actuating order confirm occurrence food reward sensor reading required occasionally food dispensing fails due combination short motor activation time seconds shape dry food pieces fit pieces easily fail emerge servomotor responsible food dispense turning archimedes screw back forth results building catos testing domesticated cats hardware software built tested software available https gnu general public license version hardware software currently alpha stage although potential used train test animal cognition tested usage seemed promising save human resources certain situations hardware software developed practically used experimenting animal cognition two observed experimental area hours per day months middle october middle march movement records movie files jpeg image files wav sound files generated period took giga bytes storage obtain
rough idea degree reduction data storage achieved using system number recorded frames video recording assessed data days taken calculate total observation period seconds corresponding hours number frames recorded average fps frame per second therefore approximately video recordings stored seconds hours percent entire observation period specific numbers meaningful since fluctuate increase decrease subject movements point meaningless recordings successfully filtered catos human presence session necessary data transfer one computer another maintenance modification system requires human interaction time effort required concerning training testing sessions one attends sessions periodic analysis animal performance system required simple assessment much food animals took specifically many correct incorrect trials occurred done quickly since information already stored result csv file displaying number correct incorrect trials generated timestamps end session also utility program displays timestamps jpeg image presents brief report movement detected recorded thus simply browsing jpeg images often enough assess session enough one obtain detailed assessment playing recorded around trial times fig recent performance trained cat three human speech discrimination task two domesticated cats trained testing system cats learned approaching feeder playback sound could lead food reward one cat learned pressing one three buttons could lead food reward training association three different sound stimuli three different buttons ongoing process recent performance data figure shows percent overall performance also performance button significantly higher percent chance level references bellotto sommerlade benfold bibby reid roth fernandez gool gonzalez distributed camera system surveillance proc third int conf distributed smart cameras icdsc bradski opencv library dobb journal software tools nov jones oliphant peterson others scipy open source scientific tools python fagot paleressompoulle automatic testing cognitive performance baboons maintained social groups behavior research methods may fagot bonte automated testing cognitive performance monkeys use battery computerized test systems troop baboons papio papio behavior research methods may fitch speech perception chimpanzee weighs current biology july heimbauer beran owren chimpanzee recognizes synthetic speech significantly reduced acoustic cues phonetic content current biology june hunter matplotlib graphics environment computing science engineering kaminski call fischer word learning domestic dog evidence fast mapping science june kangas bergman novel apparatus behavioral studies unrestrained squirrel monkeys journal neuroscience methods august kritzler jabs kegel krüger indoor tracking laboratory mice via framework proc first acm international workshop mobile entity localization tracking environments markham butt dougher computer touchscreen apparatus training visual discriminations rats journal experimental analysis behavior pepperberg evidence conceptual quantitative abilities african grey parrot labeling cardinal sets ethology steurer aust huber vienna comparative cognition technology vcct innovative operant conditioning system various species experimental procedures behavior research methods december takemoto izumi miwa nakamura development compact experimental apparatus screen use evaluating cognitive functions common marmosets journal neuroscience methods july vallejo albusac jimenez gonzalez moreno cognitive surveillance
system detecting incorrect traffic behaviors expert systems applications september
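a minimal python sketch may help make the monitoring loop described above concrete: one opencv frame-differencing step stands in for the motion-detection process, and a pyserial call stands in for the command channel to the arduino feeder. this is a sketch under stated assumptions, not the actual catos source (which splits camera, sound and feeder tasks across multiprocessing workers); the serial port name, the pixel-count threshold and the one-line FEED protocol are illustrative assumptions.

```python
# motion-triggered logging and feeder command, in the spirit of the system above
import time

import cv2      # opencv, the image-processing library the paper uses
import serial   # pyserial, for the serial link to the arduino feeder

MOTION_PIXELS = 1500  # assumed threshold on changed pixels that counts as movement
feeder = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # hypothetical port

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (21, 21), 0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    diff = cv2.absdiff(prev, gray)                      # frame differencing
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > MOTION_PIXELS:
        print(time.time(), "motion detected")           # would go to the log file
        feeder.write(b"FEED\n")                         # assumed one-line protocol
    prev = gray
```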
| 5 |
graphical nonconvex optimization optimal estimation gaussian graphical models qiang kean ming han tong jun abstract consider problem learning gaussian graphical models graphical lasso one popular methods estimating gaussian graphical models however achieve oracle rate convergence paper propose graphical nonconvex optimization optimal estimation gaussian graphical models approximated sequence convex programs proposal computationally tractable produces estimator achieves oracle rate convergence statistical error introduced sequential approximation using convex programs clearly demonstrated via contraction property rate convergence improved using notion sparsity pattern proposed methodology extended semiparametric graphical models show numerical studies proposed estimator outperforms popular methods estimating gaussian graphical models keywords adaptivity graphical nonconvex optimization nonconvexity semiparametric sequential convex approximation introduction consider problem learning undirected graph contains nodes represent random variables edge set describes pairwise conditional dependence relationships among random variables gaussian graphical models widely used represent pairwise conditional dependencies among set variables let random variables gaussian assumption graph encoded sparse concentration matrix sparse inverse correlation matrix correlation matrix diagonal matrix department operations research qiangs department operations research kmtan department operations research usa hanliu tencent lab shen zhen guangdong financial engineering princeton university princeton financial engineering princeton university princeton financial engineering princeton university princeton china tongzhang diagonal elements particular well known jth kth variables conditionally independent given variables element equal zero thus inferring conditional dependency structure gaussian graphical model boils estimating sparse inverse covariance correlation matrix number methods proposed estimate sparse concentration matrix gaussian assumption example meinshausen proposed neighborhood selection approach estimating gaussian graphical models solving collection sparse linear regression problems using lasso penalty addition yuan cai proposed graphical dantzig clime solved efficiently perspective yuan lin friedman proposed graphical lasso methodology penalized likelihood based approach estimate concentration matrix directly various extensions graphical lasso proposed theoretical properties also studied among others banerjee rothman ravikumar gaussian graphical models literature vast refer reader cai drton maathuis recent reviews topic despite large literature using graphical lasso estimate concentration matrices gaussian graphical models graphical lasso achieve oracle rate convergence specifically believed optimal rate convergence spectral norm graphical lasso order log rothman sample size number nodes number edges true graph fact graphical lasso aforementioned methods based lasso penalty well known convex penalties usually introduce estimation bias example linear regression setting fan zhang fan shown nonconvex penalized regression able eliminate estimation bias attain refined statistical rate convergence based insights consider following penalized maximum likelihood estimation nonconvex regularizers argmin log det symmetric definite cone formed sample covariance symmetric positive definite matrices dimensions matrix nonconvex penalty denotes trace however computational perspective minimizing folded concave penalized
problem complicated due intrinsic nonconvex structure indeed shown solving general concave penalty scad fan mcp zhang strongly np-hard words exist fully polynomial time approximation scheme problem unless structures assumed recently loh wainwright proposed algorithm obtain good local optimum additional convex constraint depends unknown true concentration matrix imposed moreover failed provide faster rate convergence statistically due taking signal strength account paper instead directly solving nonconvex problem propose approximate sequence adaptive convex programs even though proposed approach solving sequence convex programs regularity conditions show proposed estimator estimating sparse concentration matrix achieves oracle rate convergence treating locations nonzeros known priori achieved contraction property roughly speaking convex program gradually contracts initial estimator region oracle rate convergence even bad initial estimator used first place oracle rate contraction inverse correlation matrix estimator convex approximation denotes frobenius norm constant referred oracle rate iteration proposed method helps improve accuracy dominates statistical error error caused iteration clearly demonstrated via proven contraction property rescaling inverse correlation matrix using estimated marginal variances obtain estimator concentration matrix spectral norm convergence rate order log max used denote maximum exploiting novel notion called sparsity pattern sharpens rate convergence spectral norm rest paper proceeds follows section propose new methodology implementation section devoted theoretical studies show proposed methodology extended semiparametric graphical models section numerical experiments provided support proposed methodology section conclude paper section proofs technical details collected supplementary material notation summarize notation used regularly throughout paper given vector define kukq set let denote cardinality matrix use indicate positive definite use kakq maxu kaukq denote operator norm index sets define matrix whose entry equal zero otherwise use aij bij denote hadamard product two matrices let diag denote diagonal matrix consisting diagonal elements use sign denote sign sign sign otherwise two scalars use denote case cgn cgn two positive constants say used denote bounded probability use denote constants may vary line line sequential convex approximation let zero mean gaussian random vector density parameterized concentration matrix inverse correlation matrix family gaussian distributions respects edge structure graph sense family known random field respect graph problem estimating edge corresponds parameter estimation problem identifying edge set set corresponds problem model selection given independent identically distributed observations zero mean random vector interested estimating inverse correlation matrix concentration matrix let diag sample covariance matrix let estimate propose adaptively solve following sequence convex programs argmin log det adaptive regularization matrix given tuning parameter weight function indicates number total convex programs needed weight function taken folded concave penalty scad mcp proposed fan zhang respectively obtain estimate concentration matrix estimator rescale convex program rescaling back helps improve rate convergence significantly eliminating bias introduced unpenalized diagonal terms detailed routine summarized algorithm algorithm sequential convex approximation graphical nonconvex optimization regularization parameter input sample covariance
matrix step obtain sample correlation matrix diagonal matrix diagonal elements step solve sequence graphical lasso problem adaptively argmin log det step obtain estimate complexity step algorithm per iteration complexity algorithm solving graphical lasso problem show latter section number iteration chosen log log based theoretical analysis algorithm implemented using existing packages glasso theoretical results section study theoretical properties proposed estimator start assumptions needed theoretical analysis assumptions let support set elements thus also support set elements first assumption need concerns structure true concentration covariance matrices assumption structural assumption assume min max min max max maxj min minj assumption standard existing literature gaussian graphical models see instance meinshausen yuan cai yuan lin ravikumar need min max bounded guarantee reasonable performance concentration matrix estimator rothman throughout section treat constants simplify presentation second assumption need analysis concerns weight functions used adaptively update regularizers step algorithm define following class weight functions nonincreasing assumption weight function exists weight function satisfies constant assumption weight functions easily satisfied example satisfied simply taking folded concave penalty scad mcp fan zhang next impose assumption magnitude nonzero entries inverse correlation matrix assumption minimal signal strength recall true support set minimal signal satisfies min constant appears assumption assumption rather mild design case taken order log diminishes quickly increases analogue minimal signal strength assumption frequently assumed nonconvex penalized regression problems fan zhang taking signal strength account obtain oracle rate convergence main theory present several main theorems concerning rates convergence proposed estimator sparse inverse correlation concentration matrices following theorem concerns rate convergence estimator obtained algorithm proposition estimator let log assumption log probability least proof proposition collect proof proposition appendix supplementary material proposition indicates statistical error frobenius norm estimator order log believed unimprovable convex regularization used rothman ravikumar however sequence convex programs used proposal rate convergence improved significantly demonstrated following theorem theorem contraction property suppose log take log assumptions satisfies following contraction property probability least krl oracle rate contraction moreover log log log proof theorem proof collected appendix supplementary material theorem establishes contraction property convex approximation contracts initial estimator towards true sparse inverse correlation matrix reaches oracle rate convergence achieve oracle rate need solve approximately log log convex programs note log log grows slowly increases thus practice need solve convex programs get better estimator existing method graphical lasso rate convergence better existing literature methods estimating sparse inverse correlation matrices rothman lam fan ravikumar rescaling obtain concentration matrix estimator faster rate convergence theorem faster rate spectral norm conditions theorem log proof theorem proof deferred appendix supplementary material theorem provides optimal statistical rate estimating sparse concentration matrices using likelihood based methods rothman lam fan ravikumar extra log term consequence estimating marginal variances sharpen obtained theory 
using novel notion called sparsity pattern defined definition sparsity pattern matrix aij say asp asp corresponding sparsity pattern matrix aij aij aij otherwise let sparsity pattern matrix next theorem provides improved rate convergence using newly defined notion sparsity pattern theorem improved convergence rate using sparsity pattern suppose log take log let log assumptions log proof theorem proof deferred appendix supplementary material theorem suggests rates convergence bounded using spectral norm sparsity pattern matrix sometimes much sharper provided theorems demonstrate observation consider sequence chain graphs specified following sparsity pattern matrices mck entry otherwise identity matrix let total sparsity mck plot ratio two rates convergence estimating theorems kmc versus figure figure see ratio goes total sparsity increases demonstrates convergence rate theorem indeed much sharper theorem least chain graphs constructed also observe similar less significant improvement graphs figure give geometric illustration star chain graphs chain graph kmck figure convergence rates using sparsity pattern matrix mck total sparsity star graph chain graph figure illustration star chain graphs extension semiparametric graphical models section extend proposed method modeling semiparametric graphical models focus nonparanormal family proposed liu nonparametric extension normal family specifically replace random variable transformation variable assume follows multivariate gaussian distribution definition nonparanormal let set monotone univariate functions let correlation matrix diag random variable nonparanormal distribution npnd aim recover precision matrix main idea behind procedure exploit kendall tau statistics directly estimate without explicitly calculating marginal transformation functions consider following kendall tau statistic sign kendall tau statistic represent nonparametric correlations empirical realizations random variables invariant monotone two independent copies population formations let sign need version kendall tau given corr sign following lemma taken liu connects kendall tau statistics underlying pearson correlation coefficient lemma assuming npnd sin sbjk motivated lemma define following estimators unknown correlation matrix sin sbjk ready prove optimal spectral norm rate gaussian copula graphical model results provided following theorem theorem assume log let log assumptions satisfies following contraction property krl probability least optimal rate contraction log log log proof theorem proof deferred appendix supplementary material numerical experiments compare proposal graphical lasso glasso friedman neighborhood selection meinshausen approaches learns gaussian graphical model via penalty edge evaluate performance across methods define true positive rate proportion correctly identified edges graph false positive rate proportion incorrectly identified edges graph addition calculate difference estimated true concentration matrix frobenius norm compute quantity approach since estimate concentration matrix directly proposal consider iterations scad penalty proposed fan takes following form otherwise simulation studies pick methods involves sparsity tuning parameter applied fine grid tuning parameter values obtain curves shown figure consider cases two adjacency matrix random graph elements set band graph use adjacency matrix create matrix aij eij otherwise set given matrix set equal emin emin smallest eigenvalue standardize matrix diagonals equal one finally generate data 
according present results averaged data sets two simulation settings figure row true false positive rates averaged data sets random band graphs respectively row estimated true inverse covariance matrices frobenius norm curves obtained varying sparsity tuning parameter methods row figure see proposal competitive relative existing proposals estimating gaussian graphical models terms true false positive rates across simulation settings row figure contains estimated true inverse covariance matrices frobenius norm function false positive rate random graph see minimum error frobenius norm proposal smaller graphical lasso increase number observations minimum error two proposals apparent interestingly region proposal lower frobenius norm graphical lasso primary region interest ideal estimator one low false positive rate maintaining high true positive rate low error frobenius norm contrast region graphical lasso better frobenius norm primary region interest due high false positive rate see similar results band graph setting conclusion discussions propose graphical nonconvex optimization approximated sequence convex programs estimating inverse correlation concentration matrices better rates convergence comparing existing approaches proposed methodology sequential convex nature thus computationally tractable yet surprisingly produces estimators oracle rate convergence global optimum penalized nonconvex problem could obtained statistically contraction property established convex program contracts previous estimator optimal statistical error reached work applied many topics low rank matrix completion problems quantile regression many others conjecture aforementioned topics similar sequential convex approximation proposed possibly give faster rate controlled computing resources also interesting see algorithm works distributed systems fundamental statistical efficiency communication algorithmic complexity leave future research projects references banerjee ghaoui aspremont model selection sparse maximum likelihood estimation multivariate gaussian binary data journal machine learning research cai liu luo constrained minimization approach sparse precision matrix estimation journal american statistical association cai ren zhou estimating structured covariance precision matrices optimal rates adaptive estimation electronic journal statistics cai liu zhou estimating sparse precision matrix optimal rates convergence adaptive estimation annals statistics drton maathuis structure learning graphical modeling annual review statistics application fan variable selection via nonconcave penalized likelihood oracle properties journal american statistical association fan liu sun zhang sparse learning simultaneous control algorithmic complexity statistical error annals statistics press friedman hastie tibshirani sparse inverse covariance estimation graphical lasso biostatistics wang yin strong result regularized problems concave penalty functions arxiv preprint lam fan sparsistency rates convergence large
covariance matrix annals statistics sparsistency rates convergence large covariance matrix estimation annals statistics liu han yuan wasserman semiparametric gaussian copula graphical models annals statistics loh wainwright regularized nonconvexity statistical algorithmic theory local optima journal machine learning research meinshausen graphs variable selection lasso annals statistics ravikumar wainwright raskutti covariance estimation minimizing divergence electronic journal statistics rothman bickel levina zhu sparse permutation invariant covariance estimation electronic journal statistics yuan high dimensional inverse covariance matrix estimation via linear programming journal machine learning research yuan lin model selection estimation gaussian graphical model biometrika zhang nearly unbiased variable selection minimax concave penalty annals statistics zhang analysis convex relaxation sparse regularization journal machine learning research supplementary material graphical nonconvex optimization optimal estimation gaussian graphical models qiang sun kean ming tan han liu tong zhang abstract supplementary material collects proofs main theoretical results main text additional technical lemmas proofs proposition theorems collected section section provides proof theorem proofs related semiparametric graphical models given section various concentration inequalities preliminary lemmas postponed sections respectively rate convergence frobenius norm section presents upper bound adaptive estimator frobenius norm turn helps establish scaling conditions needed achieve optimal spectral norm convergence rate proofs proposition theorems section collect proofs proposition theorems order suppress noise step necessary control min high dimensions construct entropy set analyze mag nitude entropy set stage defined min thus constant assumption seen thus entropy set proposition follows slightly general result establishes rate convergence estimator sparse inverse correlation matrix proposition estimator assume assumption holds suppose take log suppose log probability least must satisfy log kmax proof proposition define event event applying lemma taking obtain take log log lemma event hold probability least result follows plugging choice theorems follow form slightly general result chare spectral acterizes rate convergence frobenius norm norm theorem assume assumptions suppose take log probability least satisfies moreover log krl optimal rate contraction log min min max proof theorem conditions theorem combining proposition lemma obtain following contraction property solutions next introduce inequality induction analysis specifically krl krl obtain krl respec sequel bound krl tively proposition moreover let log log log side krl follows lemma therefore combining results obtains apply lemma obtain achieve statistical rate taking bound terms respectively proceed apply lemma union sum bound obtain max exp exp log suppose suppose log take log obtain log log max therefore use assumption maxi max max log since diagonal thus commutative note two event holds therefore log max log max min min using lemma yields log max min min max log taking min max min letting get assumption max min log log thus therefore obtain max min min max log similarly following facts min min applying results terms obtain max log min min min log max min min min therefore combining rate terms obtain final result technical lemmas define symmetrized bregman divergence loss function matrix let diagonal matrix diagonal entries equal diagonal mtrix 
lemma symmetrized bregman divergence defined proof lemma use vec denote vectorized form matrix mean value theory exists min vec vec standard properties kronecker product weyl inequality horn johnson obtain min min finally observing obtain plugging definition obtains final bound using localized following lemma characterizes upper bound analysis lemma suppose take assume kmin krl kmax let solution must satisfy proof lemma start introducing extra local parameter satp isfies possible since assumption based local parameter construct taken mediate estimator otherwise applying lemma obtains bound right hand side inequality use lemma obtain tdl note norm evaluated consists set symmetric matrices sign entry conditions exists plugging adding term sides obtain hrl hrl iii next bound terms iii respectively set let denote complement respect full index set term separating set consisting support diagonal elements using matrix inequality obtain last term equality term separating support obtain plugging applying matrix inequality yields use second equality last inequality term iii using optimality condition iii plugging bounds term iii back find observing facts simplify inequality min dividing sides max krl use krl equality last inequality follows inequality fact kmax assumption kmax therefore definition implies obtain construction thus satisfies desired error bound recall definition bound terms lemma sequential bound assumptions conditions lemma must satisfy proof lemma assume following defined min krl using matrix inequality obtain max krl kmax therefore kmax kmax second inequality due assumption krl kmax error bound given lemma taking last inequality due therefore need prove hold induction thus implies hold assume hold since implies assumption since must therefore induction hypothesis obtain second last inequality follows lemma fact hold implies kmin krl completes induction step next lemma establishes relationship adaptive regularization parameter estimator previous step lemma assume let frobenius norm proof lemma assumption otherwise therefore following inequality always hold applying triangle inequality obtain last technical result concerns contraction property namely sequential approach improves rate convergence adaptively proposition contraction property assume assumptions hold assume kmax satisfies following contraction property krl proof proposition conditions theorem proof lemma yields kmin defined thus applying lemma next bound term separating support using triangle inequality obtain kmax terms plugging bound yields obtain side lemma bound krl moreover following facts first inequality assumption know krl kmax plugging bounds results krl krl following similar argument lemma bound therefore term bounded krl plugging upper bound obtain krl observing kmin thus matrix entry equals defined similarly notice complete proof improved convergence rate using sparsity pattern develop improved spectral norm convergence rate using sparsity pattern section collect proof theorem first give technical lemmas needed proof proof theorem proof theorem let define introij duced let lemma implies thus must therefore applying lemma using fact kmax obtain side implies exploiting fact bound terms induction obtain max since log log must right hand side inequality smaller implies therefore estimator enjoys strong oracle property using lemma obtains applying lemma finishes proof theorem max technical lemmas start definitions constants notational simplicity let define oracle estimator argmin log det supp recall smax maxj 
maximum degree lemma suppose weight function satisfies defined assume smax kmax must proof lemma assume following defined kmin krl kmax using lemma obtain max therefore assumption lemma implies replacing lemma using inequality thus implies hold assume hold since implies assumption since decreasing must obtain therefore induction hypothesis last inequality follows definition hold implies kmin krl kmax completes induction step completes proof inequality abuse notation let following inequality bounds regularization parameter terms functionals lemma let set must satisfy proof triangle inequality bound otherwise since thus implies therefore using cauchy schwartz inequality completes proof define following optimization problem argmin log det lemma let satisfy krl kmax kmin must chosen proof construct intermediate solution otherwise satisfies lemma implies use lemma upper bound right hand side inequality tdl plugging inequality obtain control right hand side inequality exploiting first therefore order optimality condition right hand side using optimality adding subtracting term condition obtains suffices bound separately term therefore bound decomposing support using matrix inequality min vec using optimality condition max vec plugging upper bound back min max vec assumption know kmin krl kmax implies second term right hand side inequality positive thus obtain since construction must thus recall sparsity pattern matrix corresponding kmax sequence lemma max suffices show proof lemma let max show construct intermediate estimator otherwise choose max kmax matrix let matrix agreeing elsewhere using two term taylor expansion know exists vec vec implies vec let vec vec vec define vec vec matrix expansion formula vec reduces vec using triangle inequality obtain vec max etj applying inequality single term right hand side displayed inequality etj max max use fact smax kmax therefore obtain vec max kmax triangle inequality implies kmax vec fact utilizing kkt condition obtain kmax max smax smax kmax smax smax kmax kmax smax smax contradiction thus satisfies desired maximum norm bound spectral norm bound utilize lemma obtain proof finished max semiparametric graphical model proof theorem need follows lemma taken liu provides nonasymptotic probability bound estimating using lemma let constant log probability least log npn sup rest proof adapted theorem thus omitted concentration inequality section establish concentration inequalities key technical tools large probability bounds section lemma tail bound let random vector covariance variance proxy exists constants satisfies following tail probability bound associated sample covariance exp proof lemma definition sample covariance matrix bij therefore dep compose bij applying union sum bound obtain bij sequel bound separately term following argument lemma bickel levina exists constant depending exp satisfying next bound term linear structure random variables obtain therefore applying lemma obtain random variable norm bounded give explicit bounds bound tail probability bounded following exp every random variable integration parts yields identity apply obtain change variables exp indicates gamma function defined therefore obtain similary bound max max max define zij write taylor expansion series let max expoential function obtain exp zij max max use last second inequality exponenting using markov inequalty yields zij zij zij zij exp using result boudn zij exp exp combing bounds taking min min obtain exp completes proof develop large deviation bound marginal 
variances lemma large deviation bound marginal variance let random vector covariance subn gaussian variance proxy samples let log must exp proof write let therefore function next control tail probability respectively tail probability applying lemma obtain exp supt log log similarly obtain tail probability exp supt log algebra obtain log otherwise let min log therefore combing twon inequalities union bound obtain exp note thus obtain exp next results characterizes large deviation bound sample correlation matrix lemma large deviation bound sample correlation let random vector covariance matrix variance proxy independent identically denote sample covariance distributed copies let denote sample correlation matrix diagonal let element matrix diagonal elements respectively define min min min maxi exp proof lemma denote sample correlation prove tail probability bound suffices prove tail probability bound respectively start tail probability bound let assume using basic probability argument thus obtain next bound term simple algebra bounded let mini defined lemma apply lemma better constant lemma defined lemma must exp exp log let min min maxi taking using inequality log obtain exp exp exp similar fashion obtain following tail probability bound continue bound term next take min maxi obtain thus exp exp log exp min min min combining two cases min maxi exp similar fashion obtain tail probability bound min maxi thus proof completed lemma conditions lemma following result hold lim lim sup krl max proof lemma easy check applying lemma union sum bound min maxi defined lemma obtain exp exp log max taking log min maxi inequality obtains lim lim sup max implies lemma concentration inequality sample correlation matrix let defined lemma suppose log take must log log defined lemma satisfy max therefore applying lemma proof easy check union sum bound obtain min maxi defined lemma exp max min min definedqin lemma taking log sufficiently large kmax kmax obtain proof completed lemma conditions lemma lim lim sup max max proof lemma proof similar lemma thus omitted preliminary lemmas section state prove technical lemmas used previous sections following lemma establishes tail bound type product two random variables let defined vershynin lemma two random variables absolute value product random variable kxk proof lemma show suffices prove bounded definition sup need use inequality follows two random functions choose inequality right hand side bounded sup sup sup therefore obtain proof completed lemma let tdl proof lemma let since derivative respect hrl derivative therefore bregman divergence written plugging function equation special case assume convex thus tdl therefore proof completed remains prove convex function property inner product function using linearity property following equality hold side convexity loss function obtain adding together using definition function obtain indicates convex function thus complete proof lemma let square matrices next lemma characterizes upper bound matrix norm terms lemma let invertible matrix norm need following lemma bounding respect kronecker product lemma let matrices dimension min proof lemma carried using definitions thus omitted simplicity matrix aij say asp asp corresponding sparsity pattern matrix aij aij aij otherwise lemma let matrix kakmax let asp corresponding sparsity pattern matrix kasp proof lemma let aij entry matrix entry following definition spectral norm matrix obtain sup sup sup sup aij sgn aij aij kasp thus proof completed definite random matrix lemma let 
positive definite deterministic matrix min min commutative assume min min proof lemma first write property spectral norm min min follows thus min weyl inequality obtain min min thus event min min min hold thus follows min min min proves first desired probability bound assume commutative event min min therefore prove third result following lemma taken dembo zeitouni leads concentration bound empirical means random copies define logarithmic moment generating function associated log log exp lemma large deviation inequality let logarithmic moment generating function defined define dual sup exp exp inf inf references bickel levina regularized estimation large covariance matrices annals statistics dembo zeitouni large deviations techniques applications vol springer science business media horn johnson matrix analysis cambridge university press liu han yuan wasserman semiparametric gaussian copula graphical models annals statistics vershynin introduction analysis random matrices arxiv preprint
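the sequential routine of algorithm 1 above is easy to prototype: each stage is a weighted graphical lasso whose off-diagonal penalties are recomputed from the scad derivative of the previous iterate, and the final precision estimate is rescaled by the marginal standard deviations. the sketch below, under stated assumptions, solves each weighted stage with a standard admm splitting for the graphical lasso (boyd et al.) rather than the glasso package mentioned in the paper; the function names, the fixed rho and the iteration counts are illustrative, not the authors' implementation. the nonparanormal variant of section 4 only changes the input matrix: replace the sample correlation with the rank-based estimate sin(pi/2 * tau_jk).

```python
import numpy as np
from scipy.stats import kendalltau

def scad_deriv(t, lam, a=3.7):
    # derivative of the scad penalty (fan & li), used as the adaptive weight
    t = np.abs(t)
    return lam * ((t <= lam) + np.maximum(a * lam - t, 0) / ((a - 1) * lam) * (t > lam))

def weighted_glasso(S, Lam, rho=1.0, n_iter=200):
    # admm for: min_{Theta > 0} tr(S Theta) - logdet(Theta) + sum_jk Lam_jk |Theta_jk|
    p = S.shape[0]
    Z, U = np.eye(p), np.zeros((p, p))
    for _ in range(n_iter):
        w, Q = np.linalg.eigh(rho * (Z - U) - S)        # Theta-update (eigen step)
        theta = (w + np.sqrt(w ** 2 + 4 * rho)) / (2 * rho)
        Theta = (Q * theta) @ Q.T
        A = Theta + U                                    # Z-update: soft-threshold
        Z = np.sign(A) * np.maximum(np.abs(A) - Lam / rho, 0)
        U += Theta - Z                                   # dual update
    return Z

def sequential_glasso(X, lam, n_stage=5):
    # algorithm 1: correlation -> adaptive weighted glasso stages -> rescale back
    p = X.shape[1]
    R = np.corrcoef(X, rowvar=False)                     # step 1: sample correlation
    Lam = lam * (1 - np.eye(p))                          # stage 0: plain glasso, diag unpenalized
    for _ in range(n_stage):                             # theory suggests O(log log) stages
        Omega = weighted_glasso(R, Lam)
        Lam = scad_deriv(Omega, lam) * (1 - np.eye(p))   # reweight off-diagonals
    D_inv = np.diag(1.0 / X.std(axis=0))
    return D_inv @ Omega @ D_inv                         # step 3: concentration matrix

def npn_correlation(X):
    # rank-based correlation sin(pi/2 * tau) for the nonparanormal extension
    p = X.shape[1]
    R = np.eye(p)
    for j in range(p):
        for k in range(j + 1, p):
            tau = kendalltau(X[:, j], X[:, k])[0]
            R[j, k] = R[k, j] = np.sin(np.pi / 2 * tau)
    return R

# usage sketch: Theta_hat = sequential_glasso(X, lam=np.sqrt(np.log(X.shape[1]) / X.shape[0]))
```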
| 10 |
feb group kernels gaussian process metamodels categorical inputs mines umr cnrs limos france alpestat france france arpajon france london school economics england abstract gaussian processes widely used metamodel emulating computer codes focus problems involving categorical inputs potentially large number levels typically several tens partitioned groups various sizes parsimonious covariance functions kernels defined block covariance matrices constant covariances pairs blocks within blocks however little said positive definiteness matrices may limit practical usage paper exploit hierarchy provide parameterization valid block matrices based nested bayesian linear model model used assumption within blocks relaxed giving flexible parametric family valid covariance matrices constant covariances pairs blocks show positive definiteness equivalent positive definiteness small matrix size obtained averaging block illustrate application nuclear engineering one categorical inputs atomic number mendeleev periodic table levels introduction research motivated analysis computer code nuclear engineering depending continuous categorical inputs one levels final motivation inversion problem however due heavy computational cost direct usage simulator hardly possible realistic approach use statistical emulator metamodel thus first step investigate metamodelling computer code precisely consider gaussian process regression models also called kriging models sacks rasmussen williams successfully used sequential strategies uncertainty quantification see chevalier whereas flourishing literature regression part concerned categorical inputs remains quite limited refer zhang notz review continuous inputs covariance functions kernels usually built combination ones often multiplication rarely addition deng question comes constructing valid kernel finite set positive semidefinite matrix effort spent parameterization general covariance matrices pinheiro bates parsimonious parameterizations smaller classes pinheiro bates block form also proposed qian order deal potential large number levels however validity investigated furthermore best knowledge applications regression limited categorical inputs levels typically less guided application investigate deeply group kernels cited qian defined block covariance matrices constant covariances pairs blocks within blocks exploit hierarchy revisiting nested bayesian linear model response term sum group effect level effect leads parameterization automatically positive definite interestingly assumption within blocks relaxed obtain parameterization wider class valid group kernels positive definiteness condition also explicited equivalent positive definiteness smaller covariance matrix obtained replacing block average mentioned work connections bayesian linear models well linear mixed effect models see lindley smith smith hierarchical view related works concern hierarchical gps tree structure instance particular forms group kernels obtained multiresolution models fox dunson park choi given two resolution levels spatial partition parent corresponding lowest resolution serves trend children gps corresponding highest resolution children gps independent conditionaly parent covariance structure lengthscale parameter decreasing diameter result given resolution covariance matrix block form given sum nested block diagonal covariance matrices comparison corresponding categorical input assume conditional independence children block form covariance matrices general paper structured follows section 
gives background regression mixed categorical continuous inputs section presents new findings group kernels section illustrates synthetic examples section devoted application motivated work section gives conclusions perspectives future research background notations gps continuous categorical variables consider set continuous variables defined hypercubic domain set categorical variables levels without loss generality assume levels numbered denote consider regression models defined product space written respectively trend part noise term exist wide variety trend functions linear models main focus centered characterized kernel cov kernels obtained combining kernels kernels standard valid combinations product sum anova thus kcont denotes kernel continuous variables kcat kernel categorical ones examples valid kernels written product sum anova kcont kcat kcont kcat kcont kcat consiseness denote one operations sum product anova three formula summarized kcont kcat turn kcont kcat defined applying operations kernels continuous variables famous kernels include squared exponential rasmussen williams denote kcont kernels categorical variable notice positive semidefinite function finite space kernel positive semidefinite matrix denote matrix size corresponding kernels thus examples expressions kcont kcat written kcont kcont kcont kcat formulation given equations general one since kernels always obtained combining ones nevertheless encompasses models used literature computer experiments categorical inputs generalizes kernels often used sum used recently deng categorical part also contains heteroscedastic case since matrices assumed constant diagonal contrarily existing works zhang notz useful application section variance material level dependent remark combining kernels needs care obtain identifiable models instance product kernels kernel depending one variance parameter model identifiable new parameter initial parameters kernels categorical variables consider single categorical variable levels recall kernel positive semidefinite matrix kernels ordinal variables categorical variable ordered levels called ordinal case levels viewed discretization continuous variable thus obtained interval using transformation also called warping consequently covariance matrix written depends distance depends distance levels distorted general case defined parameters however parsimonious parameterization may preferred based cdf flexible probability distribution normal beta refer mccullagh examples regression qian illustrations computer experiments remark notice usual continuous kernels values necessary condition valid radial kernels dimensions consequence kernels ordinal variables built warping allow negative correlations levels kernels nominal variables simplicity present homoscedastic case constant diagonal immediately extended situations variance depends level considering correlation matrix general parametric covariance matrices several parameterizations matrices based spectral choleky decompositions spectral decomposition written pdp diagonal orthogonal standard parameterizations involve cayley transform eulerian angles householder transformations givens rotations detailed khuri good shepard another general parameterization provided cholesky decomposition lower triangular variance depend level columns norm represent points sphere spherical parameterization possible one variance term angles representing correlations levels see pinheiro bates parsimonious parameterizations general parametrizations described require 
parameters parsimonious ones used additional model assumptions among simplest forms compound symmetry often called exchangeable covariance matrix assumes common correlation levels see pinheiro bates matrix variance covariance defined generalizes kernel obtained substituting gower distance gower exponential kernel corresponding covariance matrix treats equally pairs levels important limitation especially flexibility obtained considering groups levels assume levels partitioned groups denote group number corresponding level desired parameterization given block matrix see qian terms correlations correlations notice additional conditions necessary ensure valid covariance matrix developed next section block covariance matrices levels grouping consider framework section denotes categorical variable whose levels partitioned groups various sizes without loss generality assume interested parsimonious parameterizations covariance matrix written block form diagonal blocks contain covariances blocks constant matrices containing covariances denote jng denotes matrix ones means betweengroup covariances depends groups levels also consider particular case diagonal blocks covariance matrices variance covariance subclass covariances depends groups levels variance term groups obtain block matrices form special case although block matrices form may covariance matrices positive semidefinite general next section provide proper characterization well parameterization matrices automatically fulfills positive semidefinite conditions use following additional notations given integer identity matrix size matrix ones size vector ones size finally vector matrix denote real number equal average coefficients gaussian model covariance matrices first focus case matrix denote cjl matrix common variance term common covariance term positive definite instance one check eigenvalues multiplicity eigenvector multiplicity eigenspace notice matrix positive definite range negative values correlation term consider following gaussian model random variables assumed independent direct computation shows covariance matrix covariance matrix clearly characterizes subclass positive definite covariance matrices full parameterization including negative values range obtained restricting average level effects zero detailed next proposition proposition related covariance conditional zero average errors matrix variance covariance conversely given covariance matrix variance covariance exists representation covariance conditional zero average errors parameterization centered covariance matrices usage model describe covariance matrices involves gaussian vectors sum zero linked centered covariance matrices covariance matrices detailed next proposition give parameterization centered covariance matrices proposition let covariance matrix size centered iff exists gaussian vector cov case let matrix whose columns form orthonormal basis written unique way ama covariance matrix size particular centered covariance matrix choose vil choice prop free obtained normalizing columns helmert contrast matrix venables ripley hierarchical gaussian model block covariance matrices let return general case levels partitioned groups convenient use hierarchical notation indicating belongs group consider following hierarchical gaussian model random variable represent effect group random variables represent effects levels group assume vector normal vectors normal vectors independent vectors independent extension prop next proposition cor show gives parameterization positive 
semidefinite matrices form diagonal blocks additional assumption average level effects zero group generally obtain large parametric family positive semidefinite matrices form proposition covariance matrix conditional form jng jng centered positive semidefinite matrix equal cov therefore jng positive semidefinite conversely consider positive semidefinite matrix block form jng positive semidefinite diagonal blocks matrix obtained averaging block let exists representation covariance conditional zero average errors cov jng corollary positive semidefinite matrices form diagonal blocks exactly correspond covariance matrices conditional constraints cov ing obtain simple condition validity block covariance matrices form interestingly involves small matrix whose size number groups proposition let matrix block form jng positive semidefinite positive semidefinite positive semidefinite positive definite diagonal positive definite blocks positive definite furthermore diag xtx matrix remark results depend conditional distribution thus flexibility choice since several matrices lead conditional covariance matrix cov remark groups size prop still valid groups size indeed degenerate equal thus positive semidefinite related works model shares similarities bayesian models linear mixed effect models see lindley smith gaussian priors effects centering constraints also standard identifiability conditions models furthermore particular case covariance matrices corresponds exchangeable assumption corresponding random variables typically framework linear modelling model could written additional grand mean errors however framework similar goal different linear modelling aim quantify effects estimating posterior distribution hand aim investigating form covariance matrix response part equivalently covariance matrix likelihood summary comments results previous sections show wide class valid block covariance matrices parameterized family covariance matrices smaller sizes class formed positive definite matrices form jng positive semidefinite contains case diagonal blocks covariance matrices algorithm summarized generate covariance matrix size set else generate covariance matrix size compute centered matrix matrix whose columns form orthonormal basis compute blocks blocks see numpy sketch below steps covariance matrices general obtained one parameterizations however specific form matrices also chosen depending number groups sizes different levels parsimony obtained table summarizes possibilities notice may hard choose parametric setting order account specified constraint block matrix homoscedasticity alternative use economic constraint prop indeed positive definiteness size equivalent positive definiteness small matrix table parameterization details valid matrices form parametric settings resulting forms number parameters examples considering application nuclear engineering consider two toy functions one continuous input one categorical input reproduce two specificities application first function mimics situation output variance depends level categorical input second one investigates level grouping number levels large example heteroscedastic case consider deterministic function cos cos cos expression adapted han scaling three output curves according level thus output variance clearly depends level visible figure three curves strongly dependent positive link negative figure test function black red green design points correspond one realization sliced lhd aim compare accuracy four models
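as announced above, here is a small numpy sketch of the constructive parameterization: draw any psd between-group matrix b of size g, any psd matrices m_g of sizes (n_g - 1), map each m_g to a centered within-group part through an orthonormal basis of the zero-sum hyperplane (e.g. a normalized helmert contrast), and assemble the blocks; positive semidefiniteness of the result then holds by construction (prop. 2 / cor. 1). the random psd generators, group sizes and function names are illustrative only.

```python
import numpy as np

def contrast_basis(n):
    # orthonormal basis of the hyperplane {x : sum(x) = 0} in R^n, shape (n, n-1)
    Q = np.linalg.qr(np.c_[np.ones(n), np.eye(n)[:, :-1]])[0]
    return Q[:, 1:]

def random_psd(k, rng):
    # generic psd matrix, used here only to illustrate the construction
    A = rng.standard_normal((k, k))
    return A @ A.T / k

def group_block_cov(sizes, B, Ms):
    # off-diagonal blocks B[g,h] * J, diagonal blocks B[g,g] * J + A M_g A^T
    idx = np.cumsum([0] + list(sizes))
    T = np.empty((idx[-1], idx[-1]))
    for g, ng in enumerate(sizes):
        for h in range(len(sizes)):
            T[idx[g]:idx[g + 1], idx[h]:idx[h + 1]] = B[g, h]   # constant block
        A = contrast_basis(ng)
        T[idx[g]:idx[g + 1], idx[g]:idx[g + 1]] += A @ Ms[g] @ A.T
    return T

rng = np.random.default_rng(0)
sizes = [3, 4, 5]
B = random_psd(len(sizes), rng)                  # between-group covariance
Ms = [random_psd(n - 1, rng) for n in sizes]     # centered within-group parts
T = group_block_cov(sizes, B, Ms)
print(np.linalg.eigvalsh(T).min() >= -1e-10)     # psd, up to numerical tolerance
```

the final eigenvalue line is only a numerical confirmation; validity is guaranteed by the construction itself.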
reconstructing evaluations kernel rasmussen williams chosen first model ind consists three independent gps corresponding levels ones tensor product kernels three different covariance matrices first two ones assume constant variance defined covariance structure general spherical parameterization sph see section finally consider heteroscedastic spherical parameterization covariance matrix defined general variance vector spherical parameterization correlation matrix order benefit strong link levels use design spreads points levels instance information given may useful estimate without computing precisely used random sliced latin hypercube design slhd qian points level total budget points parameter estimation maximum likelihood likelihood surface may multimodal launched several optimizations different starting points chosen random domain model accuracy measured test set formed regular grid size terms criterion criterion similar expression computed test set denote observations test set mean predictions negative model performs worst mean positive otherwise tends predictions close true values finally process repeated times order assess sensitivity result design observe heteroscedastic model clearly outperforms ones expected estimated variances model whereas constant variance wrongly estimated around sph moreover represented figure estimation correlations levels deduced parameterized covariance matrix representativeness chosen one designs used generate figure corresponding closest median values see strong dependence link levels recovered correctly poor estimation correlation parameters ones example levels grouping second function defined cos figure criterion four models based repetitions design visible figure two groups curves corresponding levels strong correlations strong negative correlations aim reconstructing five models based levels grouping first one uses covariance matrix corresponding single group second one considers two groups third model based five groups two variants correlation constant general case fourth model uses spherical parameterization leading groups last one considers ordinal parameterization design experiments slhd remaining simulation settings example estimated correlation parameters shown figures right correlation structure well recovered two groups five groups different correlations model thirteen groups involves estimation parameters hard achieve especially points visible erratic values estimated correlations values seem meaningful opposite considering one group five groups common correlation oversimplifies fails recovering right correlations figure estimated correlation parameters among levels design experiments corresponding median figure test function kernel recovers two blocks curves detect negative correlation remark figure see best tradeoff prediction accuracy parsimony obtained two groups whereas reduces number observations group notice rather good performance ordinal model cost larger number parameters warping parameterized affine function see section noticeable since negative correlations possible see may due larger number levels combinations reduces influence negative correlations figure estimated correlation parameters among levels based representative design experiments design median application nuclear engineering position problem presented introduction research originally motivated solving inverse problem confronting experimental measurements nuclear engineering numerical simulation precisely
analysis concerns identification mass present particular waste container using nuclear detection technique gamma spectrometry knoll case energy level quantity interest provided gamma transmitter attenuation coefficient depends source group groups groups groups groups ordinal figure six models based repetitions design number parameters used bloxplot order ronment denoted practice discrete values interest corresponding natural energy levels kev based previous studies guillot real source environment parameterized following input variables equivalent geometric shape nuclear waste sphere sph cylinder cyl parallelepiped par equivalent material waste characterized chemical element atomic number bulk density waste distance measurement container measurement device mean width lateral surfaces logarithmic scale crossed gamma ray rotation object normalization characteristics input space summed table name input distance density width surface energy shape chemical element variation domain sph cyl par table description input variables nuclear application recapture notation previous sections let vectors gathering respectively continuous categorical inputs given value monte carlo simulation codes mcnp goorley used model measured scene approach value mass eventually searched solution following optimization problem arg min obs classical euclidian norm obs respectively gather values six values used measurements solve therefore necessary compute high number points however evaluation mcnp code extremely demanding several minutes several hours cpu one evaluation thus surrogate models introduced emulate function investigated frame gaussian process regression refer clement second step namely treatment inversion problem model settings pedagogical purpose dataset large size computed mncp code construction design experiments guided categorical inputs combinations levels appears times completed latin hypercube size define values four continuous inputs full dataset training set size extracted selecting random observations chemical element remaining points serve test set sph function energy cyl par function geometric shape figure function energy geometric shape model settings motivated graphical analysis figure output displayed function energy geometric shape observe successive energy levels correspond close values fact confirms energy ordinal use warped kernel defined influence geometric shape less obvious chosen exchangeable covariance structure figure displayed function chemical elements ordered atomic number two important facts high number levels heteroscedasticity purpose chemical elements divided groups provided expert knowledge represented colors partition suggests use group kernel form blocks covariance matrices order handle heteroscedasticity variance assumed depend group number influence continuous variables observed panels represented reveal useful information purpose kernel set continuous inputs expect output regular function continuous inputs indeed kernel corresponding gaussian process two times differentiable finally three candidate kernels obtained combining kernels input variables defined sum product anova see section figure function chemical elements ordered atomic number results following model settings detailed figure panel presents results obtained random designs size three operations kernels furthermore implemented three kernels chemical element order compare model choices categorical input first panel grouped levels single group second one kept kernel forced covariances common value finally fourth 
panel considered levels ordered atomic number used warped kernel normal transform figure several models based random designs corresponding different model choices chemical element first panel single group second panel groups common covariance third panel groups fourth panel ordered levels total number parameters used panel order prod add prod anova prod first comparing three operations kernels remark panels additive kernels provide worst results suggests existence interactions different inputs simulator second anova combination produces slight improvements compared standard terms accuracy stability respect design choice comparing four panels see gathering levels single group least efficient strategy kernel gives good performances especially covariances vary freely constraining equal degrades result surprisingly ordinal kernel gives best performance indeed application intuitive experts chemical element viewed ordinal variable simply sorted atomic number confirmed correlation plots figure corresponding model median score see estimated correlations levels seems decrease difference levels increases indication levels may ordered atomic number finally report several results first estimated transformation energy levels figure concave flat near high values corresponds behaviour observed figure left panel addition last three levels lead similar results figure corresponds fact energy high gamma ray almost always crosses nuclear waste leading high value output second estimated correlation among sphere cylinder parallelepiped high figure justifies considering covariance structure categorical input rather using three independent models three levels conclusion framework regression continuous categorical inputs focused problems categorical inputs may potentially large number levels partitioned groups various sizes provided new results parsimonious block covariance matrices defined covariance parameters groups common betweengroup covariance groups general figure estimated correlation parameters among chemical element transformation correlations figure estimated correlation parameters energy figure estimated correlation parameters geometric shape revisited nested bayesian linear model response term defined sum group effect level effect obtained flexible parameterization block covariance matrices automatically satisfy positive definiteness conditions particular case recover situations covariance structures compound symmetry possible negative correlations furthermore showed positive definiteness given block covariance matrix checked verifying small matrix size obtained averaging block positive definite criterion useful proposed block matrix desirable constraint homoscedasticity directly handled proposed parameterization applied findings several toy functions well application nuclear engineering continuous inputs categorical inputs one levels corresponding chemical numbers mendeleev table application groups defined experts results measured terms prediction accuracy outperform obtained oversimplifying assumptions gathering levels group hand categorical input viewed ordinal one plugging right order warped kernels lead slightly better results experiments several perspectives work firstly one future direction find technique recover groups levels may easy task due small number observations available context regression similarly order levels infer data secondly trend models fixed constant complex forms based linear models could explored software information acknowledgements implementations done packages mixgp kergp 
deville illustrations use wickham corrplot wei simko research conducted within frame chair applied mathematics oquaido gathering partners technological research brgm cea ifpen irsn safran storengy academia cnrs ecole centrale lyon mines university grenoble university nice university toulouse around advanced methods computer experiments appendix proof proposition vector centered gaussian vector covariance matrix hence conditional distribution knowing centered gaussian vector covariance matrix cov using independence deduce cov recognize covariance matrix covariance matrix positive semidefinite furthemore conditions positive definiteness satisfied conversely let positive definite matrix define direct sense obtain covariance matrix proof proposition first part proposition obtained remarking cov cov thus assuming centered equivalent probability second part notice means orthogonal thus one write expansion orthonormal basis defined denoting coordinates gives cov cov follows cov prove unicity observe definition starting ama multiplying left right get showing unique let since obtain notice resubstituting ama gives finally vil properties conditional gaussian vectors lead immediately cov proof proposition expressions obtained directly using independence assumptions notice covariance matrix knowing centered proposition gives jng positive semidefinite conversely let positive semidefinite matrix form jng positive semidefinite let also matrix obtained averaging block positive semidefinite matrix indeed since positive semidefinite covariance matrix covariance matrix vector vector obtained averaging group thus exists centered gaussian vector whose covariance matrix define jng jng observe assumption positive semidefinite hence proposition exists centered gaussian vector cov assume independent independent finally set direct sense obtain covariance matrix conditional proof corollary let positive semidefinite matrix form diagonal blocks diagonal matrices positive semidefinite leading thus jng ing jng positive semidefinite matrix hence prop obtained model cov ing jng prop last part choose ing conversely ing prop cov covariance matrix result follows prop proof proposition direct sense already derived proof prop furthermore inspecting proof see positive semidefinite admits representation thus covariance matrix positive semidefinite proof available roustant deville notice need add condition positive definite however adding equivalent condition namely positive semidefinite necessary indeed consequence fact positive semidefinite jng positive semidefinite implies finally direct references chevalier bect ginsbourger vazquez picheny richet fast parallel stepwise uncertainty reduction application identification excursion set technometrics clement saurel perrin stochastic approach radionuclides quantification epj web url https deng lin liu rowe additive gaussian process computer models qualitative quantitative factors technometrics deville ginsbourger roustant kergp gaussian process laboratory url https contributors durrande package version fox dunson multiresolution gaussian processes pereira burges bottou weinberger editors advances neural information processing systems pages curran associates url http goorley fensin mckinney users manual version may gower euclidean distance geometry math sci guillot quantification gamma par phd thesis blaise pascal france han santner notz bartel prediction computer experiments quantitative qualitative input variables technometrics issn khuri good parameterization orthogonal matrices 
review mainly statisticians review paper south african statistical journal knoll germanium detectors volume john wiley sons lindley smith bayes estimate linear model discussion part journal royal statistical society ser mccullagh regression models ordinal data journal royal statistical society series methodological park choi hierarchical gaussian process regression sugiyama yang editors proceedings asian conference machine learning volume proceedings machine learning research pages pinheiro bates models statistics computing springer new york pinheiro bates unconstrained parametrizations variancecovariance matrices statistics computing qian sliced latin hypercube designs journal american statistical association qian gaussian process models computer experiments qualitative quantitative factors technical report department statistics university wisconsin rasmussen williams gaussian processes machine learning mit press roustant deville validity parametric block correlation matrices constant within group correlations working paper preprint may url https sacks welch mitchell wynn design analysis computer experiments statistical science shepard brozell gidofalvi representation parametrization orthogonal matrices journal physical chemistry smith bayes estimates models biometrika venables ripley modern applied statistics springer edition wei simko corrplot visualization correlation matrix package version wickham elegant graphics data analysis new york isbn url http zhang notz computer experiments qualitative quantitative variables review reexamination quality engineering
| 10 |
may survey trapping sets stopping sets may aiden price joanne hall member ieee codes used many applications however error correcting capabilities limited presence stopping sets trapping sets trapping sets stopping sets occur specific error patterns cause decoder fail trapping sets first discovered investigation error floor margulis code possible solutions constructions avoid creating trapping sets progressive edge growth peg methods remove trapping sets existing constructions graph covers survey examines trapping sets stopping sets ldpc codes channels bsc bec awgnc index codes trapping sets stopping sets qcldpc codes margulis codes awgnc peg algorithm graph covers introduction technology advances wish communicate longer distances ability stay connected even poor communication channels codes one best ways achieve performance many cases limited presence trapping sets stopping sets trapping sets stopping sets cause iterative decoding methods fail relatively errors finding ways avoid remove trapping sets stopping sets improve already high performance ldpc codes bring performance curves even closer shannon limit performance optimization becoming increasingly crucial world moves digital age increase speed digital communication occurs modern applications wifi drastic implications overall productivity world gallager introduced codes ldpcs ldpc codes class binary linear block codes sparse matrix advantage using ldpc codes able provide error control close capacity many different channels categorizes ldpc codes one capacity-approaching codes error correction methods allow noise channel set close theoretical maximum maintaining ability performance error correction method based upon two properties performance code channel variable noise optimal bit error ratio ber code sufficient ratio snr optimal ber known error floor code price science engineering faculty queensland university technology queensland qld australia hall school science royal melbourne institute technology melbourne manuscript received date year revised date year discussed different papers terms bit error rate ber frame error rate fer block error rate symbol error rate depending application addressed see examples error floor analysis consideration error floor one important aspects constructing ldpc code analysis performance ldpc codes binary erasure channel bec led discovery stopping sets margulis construction improved upon performance gallager codes though weakness construction led high error floor additive white gaussian noise channel awgnc compared performance constructions time high error floor due presence stopping sets stopping sets bec described became well understood problem led definition trapping sets defined awgnc bsc early works trapping sets called words trapping sets stopping sets important topic worthy survey preliminaries notation order engage literature stopping sets trapping sets overview preliminaries necessary provide short review literature surrounding ldpc codes common transmission channels common decoding techniques definition binary linear code subspace vector space used provide structure message vector transmission channel order transmit messages communication channels using error correcting code encode message using generator matrix definition generator matrix code matrix dimensions rows correspond linearly independent code words form basis one important aspects error correction process decoding matrix allows identify whether errors introduced transmission matrix also represented tanner graph definition matrix matrix
generates nullspace code means code word code iff null vector dimensions definition matrix may represented bipartite graph variable node set check node set bipartite graph denoted may columns indicate variable nodes rows indicate check nodes bipartite graph known tanner graph check nodes variable nodes see fig also refer individual variable nodes let variable node check node definition matrix code sparse corresponding code called ldpc code note classification sparse used context ldpc codes fewer ones matrix zeros sparse nature ldpc codes means decoding processes fast fewer operations compute compared matrix two important features tanner graphs neighbours node degrees definition variable node check node say nodes neighbours degree node tanner graph defined number edges connected node degree definition also define regular ldpc codes definition ldpc code called variable node degree check node degree denote ldpc code form code ldpc codes designed used methods variety communication channels three communication channels discussed paper binary erasure channel bec binary symmetric channel bsc additive white gaussian noise channel awgnc though channels handle data transmission different ways encoding decoding goals upper bound error correcting ability ldpc code determined minimum distance code order define minimum distance code first define hamming weight hamming distance definition hamming weight vector number elements hamming weight binary vector therefore number ones vector hamming distance two vectors number places differ written literature hamming weight hamming distance often referred using terms weight distance weight code words affects number operations performed decoding distance code words affects many errors corrected definition minimum distance code defined smallest hamming distance two code words code encoding verification distance codewords given take following error vector process transforming message vector associated code word known encoding every code word expressed mesage vector code word original information bits well additional parity bits give code word length bits matrix nullspace code use verification method test recieved vector code word product denoted syndrome given vector iff must even number ones components product add give known constraint code word transmitted channel party receives vector use error correcting techniques attempt correct recover min following theorem corollary describe code error detection correction abilities using minimum distance theorem code detect errors code word code correct errors code word corollary code minimum distance used either detect errors correct errors code word minimum distance code small provide sufficient error correction demonstrated example example let two code words given added resulting code word identical demonstrating importance minimum distance minimum distance ldpc code also related large code rate lowers upper bound minimum distance code definition code rate ldpc code portion information bits sent comparison entire code vector sent written code rate minimum distance often determine error correcting capability ldpc code though decoding algorithm plays direct part time takes decode messages may fig irregular tanner graph used demonstrate decoding algorithm received vector bec nodes interest highlighted gray shows steps algorithm first iteration shows changes made step step second iteration shows changes made step time revealing erasures exist algorithm terminates step iteration thus successfully correcting received vector 
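The syndrome test described above — a received vector is a codeword iff the syndrome s = H x^T vanishes mod 2 — is a one-liner; the small H below is a toy example of mine, not the matrix from the paper's figure:

import numpy as np

def is_codeword(H, x):
    # x lies in the nullspace of H (i.e. is a codeword) iff H x^T = 0 mod 2
    return not np.any((H @ x) % 2)

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
print(is_codeword(H, np.zeros(6, dtype=int)))  # True: 0 is always a codeword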
iterations communication channels decoding basics communication channel transmission occurs impacts error correction algorithms chosen communication channel modelled triple contains input alphabet output alphabet probability transition symbol input alphabet symbol output alphabet binary erasure channel bec one simplest channel models definition binary erasure channel bec communication channel two input symbols three output symbols erasure symbol bec erasure probability given input output defined probability formulae analysis bec significantly advanced modern understanding error correction example simple decoding process bec edge removal algorithm definition let binary code word transmitted bec received vector edge removal algorithm proceeds follows initial step value received vector bit assigned variable node tanner graph check nodes count number erased bits neighbours tanner graph check node neighbours one symbol even parity constraint uniquely determines original value variable node repeat steps either erasures recovered every check node neighbour erased bit neighbour least two erased bits step latter occurs decoder failed due presence stopping set see section provide example decoding process fig represents variable node check node example found decoding example use irregular paritycheck matrix demonstrate decoding received vector using algorithm bec another communication channel binary symmetric channel bsc definition binary symmetric channel bsc communication channel two input symbols two output symbols also bsc error probability given input output defined probability formulae example decoding process bsc gallager algorithm definition let binary code word transmitted bsc received vector gallager algorithm proceeds follows initial step value received vector bit assigned variable node tanner graph check node sends neighbouring variable nodes sum mod adjacent variable nodes except node degree check node variable node sends following adjacent check nodes messages check nodes target check node message equal sends message back otherwise resends prior value repeat steps either variables nodes send values two consecutive iterations max iteration count reached may fig tanner graph used demonstrate gallager decoding algorithm received vector represent sent along edge dashed line full black edge shows step gallager algorithm well check node calculation taken addition mod incoming message variable nodes adjacent check node denoted shows step lastly shows step due complexity algorithm one full iteration shown though step definition describes decoding continues gallager algorithm offers improved decoding additional step loop within algorithm degree check node loop prechosen threshold value throughout steps involved check node variable node adjacent check node least neighbours excluding sent information previous round sends information otherwise sends received value algorithm special case algorithm independent round throughout decoding procedure max iteration count reached without completion decoder failed due existence trapping set see section example first steps gallager algorithm see fig demonstrates differences decoding considerations made bec bsc complex channel considered binary input additive white gaussian noise channel expressed commonly either awgnc definition let message vector denotes arbitrary length additive white gaussian noise channel awgnc maps input vector vector adds result gaussian white noise give output vector code symbol carries signal noise ratio snr conditional distribution 
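A minimal sketch of the edge-removal (peeling) decoder defined above for the BEC. ERASED is my own marker for an erased position; the loop terminates exactly as in the definition — success once no erasures remain, failure when no check node has a single erased neighbour, i.e. the erasures contain a stopping set:

import numpy as np

ERASED = -1

def peel_bec(H, y, max_iter=100):
    # edge-removal decoding: any check node with exactly one erased
    # neighbour pins that bit down by even parity; H is a 0/1 ndarray
    y = np.array(y)
    for _ in range(max_iter):
        progress = False
        for row in H:
            idx = np.flatnonzero(row)
            erased = [i for i in idx if y[i] == ERASED]
            if len(erased) == 1:
                known = [y[i] for i in idx if y[i] != ERASED]
                y[erased[0]] = sum(known) % 2
                progress = True
        if not np.any(y == ERASED):
            return y, True        # all erasures recovered
        if not progress:
            return y, False       # stuck: remaining erasures form a stopping set
    return y, False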
gives output alphabet awgnc bec bsc would like indication errors channel introducing code word metric used awgnc log likelihood ratio llr describes likelihood positive thus input estimate methods map decoding methods used bsc implemented awgnc however high performing decoding algorithms maximumlikelihood decoders algorithm maxproduct algorithm utilize llr information improve decoding speed bsc used channels llr values defined bsc though awgnc closely models influence communication channels favoured high performance simulations example bsc conditional llr function bit flipping probabilities well understood outputs lbsc noise determines values awgnc llr channel defined decoding algorithms used awgnc far complex bec bsc provide overview various methods rather detailed definitions examples decoding methods used awgnc tend message passing algorithms nodes send information neighbours correct errors based structure matrix original algorithm example flooding may ability code well frame error ratio fer fer ratio frames whole messages transmitted fully corrected versus total number frames transmitted largest contributors error floor stopping sets trapping sets fig tanner graph irregular matrix given example induced subgraph highlighted stopping set consistent labelling schedule iteration variable nodes subsequently check nodes pass new messages neighbours another example flooding schedule algorithm see improved schedule variable nodes check nodes send messages throughout single iteration known many names including serial scheduling layered scheduling sequential scheduling algorithms offer improved decoding performance information moving tanner graph frequently examples decoding algorithms using scheduling include algorithm mpa belief propagation algorithm bpa bpa widely used ldpc code analysis based likelihood node takes value given current value values nearby nodes previous iterations error correction must implemented differently channel edge removal algorithm example deals erasures thus suitable bsc errors occur transmission corrected decoding algorithm form known bit error ratio ber ratio bits corrected versus total number bits transmitted order test performance ldpc codes simulate transmission messages increasing ratio snr calculate ber code varying conditions snr grows larger ber code suddenly decrease depending conditions channel error correcting capability ldpc code use curve known waterfall region best scenario correcting errors probability error transmission channel negligible implemented error correcting code correct many errors definition waterfall region eventually ends ber graph curves anomalous errors cause decoders fail even high snr ratio lowest ber becomes levelling called error floor code ber standard way analyze error correcting iii ycles irth decoding method choose direct implications accuracy efficiency decoding cycles first known negative characteristic ldpc codes extensively studied impacted accuracy high performance ldpc codes cycle graph sequence connected nodes form closed loop initial final node edge used cycle length number edges cycle contains length smallest cycle graph denoted girth cycles exist within tanner graph paritycheck matrix iterative belief propagation decoding technique always successful sufficient iterations however neighbours node conditionally independent belief propagation methods become inaccurate inferred solution construct matrix cycles however discussed section vii unessecary cycles negatively impact decoding efficiency ldpc codes fact restriction 
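The channel LLRs mentioned earlier in this passage take simple closed forms; these are the standard expressions for the BSC and for BPSK over the AWGN channel. The 0 -> +1, 1 -> -1 signal mapping is an assumption of mine, since the paper's convention is not recoverable from the text:

import numpy as np

def llr_bsc(y, p):
    # BSC with crossover probability p: L = log((1-p)/p) for a received 0,
    # and the negative of that for a received 1
    L = np.log((1 - p) / p)
    return np.where(np.asarray(y) == 0, L, -L)

def llr_awgn(y, sigma2):
    # BPSK over AWGN with noise variance sigma2: L = 2 y / sigma2
    return 2.0 * np.asarray(y, dtype=float) / sigma2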
girth lead constraints structure code impedes decoding efficiency cycles negatively impact decoding efficiency ldpc codes combine form known stopping sets trapping sets sets lead high error floor otherwise efficienct ldpc code constructions throughout various communication channels affect high performing decoding algorithms topping ets bec stopping sets collections variable check nodes tanner graph ldpc code greatly reduce error correcting ability sets cause decoding fail certain variable nodes affected errors transmission stopping sets first described researching average erasure probabilities bits blocks bec definition let tanner graph set variable nodes stopping set subset neighbours connected least twice empty set also stopping set space stopping sets closed union stopping sets following lemma describes stopping set performance ldpc code decoding algorithm lemma let generator ldpc code bec denote subset set variable nodes erased channel transmission message set erasures remain may fig effect stopping set decoding process irregular tanner graph right hand side line shows steps algorithm first iteration shows changes made step step second iteration finally shows erasures corrected thus see received vector produces scenario tanner graph algorithm retrieve original code word definition know due presence stopping set decoder stops equal unique maximal stopping set definition widely accepted given bec erasure probability performance code bec completely determined presence stopping sets since stopping sets combinatorial characterization distributions various tanner graphs analyzed rigorously definition let denote collection stopping sets tanner graph stopping number size smallest stopping set stopping number code aids analysis code error floor known performance ldpc code bec dominated small stopping sets graph larger value lower error floor code cases stopping number increases linearly number variable nodes tanner graph seen easily using stopping ratio definition let tanner graph variable nodes stopping number stopping ratio tanner graph defined ratio stopping number number variable nodes stopping set matrix ldpc code shown example example let code following check matrix columns highlighted belong stopping set tanner graph stopping set highlighted shown fig stopping set must either empty least contain two variable nodes stopping number therefore stopping ratio example showing impact stopping set decoder shown fig edge removal decoding algorithm used bec solutions problem stopping sets covered section vii involve either avoiding removing small stopping sets tanner graph leaving ldpc codes large stopping sets stopping sets well defined solutions exist minimize effect error floor ldpc codes terminology support channels without erasure rapping ets bsc awgn trapping sets much like stopping sets also collections variable nodes check nodes impede error correcting ability ldpc code small elementary trapping sets impact error floor ldpc codes bsc awgnc clustering definition trapping sets came shortly stopping sets defined similarly bec decoding bsc awgnc sometimes maximum iteration count reached small set variable nodes error experiments argulis codes lead definition trapping sets definition let tanner graph received vector length define failure set set bits eventually correct using arbitrary iterative decorder decoding successful definition trapping set specifically trapping set variable nodes induced contains check nodes may fig trapping set left critical number trapping set right critical number 
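The stopping-set condition above is directly checkable: a set S of variable-node columns is a stopping set iff every row of H meeting S meets it at least twice. A brute-force sketch of the stopping number follows; the exponential search is only feasible for toy matrices, consistent with the hardness of approximating stopping and trapping sets cited in this survey:

import numpy as np
from itertools import combinations

def is_stopping_set(H, S):
    # every check node with a neighbour in S must be connected at least twice
    rows = H[:, list(S)].sum(axis=1)
    return np.all((rows == 0) | (rows >= 2))

def stopping_number(H):
    # size of the smallest non-empty stopping set
    n = H.shape[1]
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if is_stopping_set(H, S):
                return k
    return None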
values found using gallager decoding algorithm may vary decoding algorithms applied iterative techniques bsc awgnc distinguish trapping sets stopping sets bec iteration decoding algorithm become trapped notion trapping sets becomes irrelevant lemma let code using maximum likelihood decoder trapping sets precisely code words though channel bec iterative decoding failure said due stopping sets making stopping sets trapping sets equivalent bec lemma let code using belief propagation algorithm bec trapping sets precisely stopping sets lemma important bridge trapping sets stopping sets allowing relate bec bsc awgnc decoding failure ldpc code bsc awgnc largely due existence trapping sets trapping sets pose real threat error correcting ability ldpc codes even though may nodes error transmission enough nodes belong trapping set decoder fail definition let trapping set critical number minimal number variable nodes initially error decoder become trapped important note variables nodes initially error necessarily belong trapping set possible iteration trapping set entered causing decoder fail order become trapped decoder must finite number iterations error least one variable node every iteration thereafter trapping sets small number variable nodes check nodes impact ldpc codes definition trapping set code small trapping set small trapping sets contribute larger errorfloor small trapping sets also elementary form definition elementary trapping set code trapping set check nodes induced subgraph either degree one two exactly check nodes check nodes odd degree larger one possible unlikely within small trapping sets techniques find remove elementary trapping sets become crucial constructing high perfoming codes two examples trapping sets shown fig fig trapping set right smaller number variable nodes one left however gallager decoding algorithm larger trapping set smaller critical number thus performance code limited larger trapping set idea quite unintuitive shows depth consideration must made attempting improve error floor ldpc codes problems trapping sets stopping sets introduce ldpc code ares important research solve exist methods constructing ldpc codes avoiding removing trapping sets stopping sets however methods come cost restraining properties code length density error correcting ability influence topping ets rapping ets ldpc ode erformance original ldpc codes proposed gallager construction methods allowed varied code rates definition gallager code ldpc code constructed using matrix uniform row weight uniform column weight code length code words code rate gives matrix columns rows naive analysis indicated failed decoding due received vectors containing many errors decoding algorithm analysis range error patterns determined always case leading definition stopping sets bec variety analyses gallager codes shown high performance construction margulis promised improved performance awgnc prime let special linear group whose elements consist matrices determinant group elements margulis code length code rate rows matrix indexed elements columns indexed two copies detailed following definition definition let generated following matrices index row matrix one placed columns corresponding left hand side matrix also columns corresponding right hand side matrix results matrix margulis code example matrix generated using margulis construction shown fig demonstrate may matrix margulis code frame error rate fer fig matrix generated using margulis construction setting give code blue dots represent ones matrix 
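For illustration, a sketch of a random (dv, dc)-regular parity-check matrix in the spirit of Gallager's ensemble described above: a first band of dc-wide strips, stacked with dv-1 column-permuted copies. This is not the Margulis SL(2, p) construction, whose generator details cannot be recovered reliably from this text:

import numpy as np

def gallager_H(n, dv, dc, rng):
    # n must be divisible by dc; result has column weight dv, row weight dc
    assert n % dc == 0
    rows = n // dc
    base = np.zeros((rows, n), dtype=int)
    for r in range(rows):
        base[r, r * dc:(r + 1) * dc] = 1
    blocks = [base] + [base[:, rng.permutation(n)] for _ in range(dv - 1)]
    return np.vstack(blocks)

rng = np.random.default_rng(1)
H = gallager_H(12, dv=3, dc=6, rng=rng)
print(H.sum(axis=0))   # every column weight 3
print(H.sum(axis=1))   # every row weight 6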
remaining white space representing zeros sparse nature ldpc codes another example margulis matrix found corresponds code code higher performance random gallager code error floor still quite high error floor claimed due words comparison margulis code random gallager code seen fig definition let matrix vector weight hxt weight word words different stopping sets typical words contain check nodes connected variables nodes words margulis code words high error floors margulis code reproduced bit approximation belief propagation algorithm words account error floor performance margulis code near code words trapping sets trapping sets often clustered one trapping set found often contain nodes belong another trapping set makes search trapping sets somewhat simpler finding stopping trapping sets problems makes solutions sets difficult analyse decoding algorithm simple effect stopping sets demonstrated easilly however decoding bsc awgnc much complex see fig iterative decoding methods tend maximum iteration counts termination conditions demonstrating effect trapping sets difficult show lieu example remind reader termination conditions gallager algorithm decoder terminates either variable nodes send values two consecutive iterations maximum iteration count reached latter case decoder failed due existence trapping set fig ber comparison margulis code using example presented fig random gallager code decoded using mpa awgnc graphs also known literature waterfall curves discussed issues margulis code using specific decoding algorithms many code constructions contain trapping stopping sets decoding algorithms terminate reasons reading see reading stopping sets ldpc codes include message passing algorithm maximumlikelihood decoder bec reading trapping sets include finite alphabet iterative decoders faids constructions based latin squares construction offers high structure stopping sets trapping sets analysed constructions avoid trapping stopping sets exist error floors associated ldpc codes lower significantly would improve speed almost digital communication occurs given already high performance ldpc codes modern applications including wifi vii urrent olutions simple goal avoid completely remove every stopping set trapping set ldpc code reasonable given number cycles ldpc construction importantly necessary small elementary trapping sets impact error floor due clustering enough errors transmission decoder get trapped large trapping set highly likely would also trapped small trapping set enough errors decoder become trapped large trapping set received vector either successfully decoded decoder fail due presence least one small trapping set current solutions trapping sets development constructions avoid small trapping sets removal trapping sets existing constructions may fig subgraph tree contained depth neighbourhood spreading variable node note represents variable node represents check node avoiding trapping sets stopping sets pose threats error correction messages sent bec however practice awgnc used discussing proposed solutions focus influence trapping sets awgnc peg construction method constructing tanner graphs high girth many trapping sets include small cycles likelihood small trapping set constructed small graph high girth order give definition peg construction definitions needed peg construction method uses variable check node degree sequences variable node degree sequence denoted dvi degree variable node parity check sequence denoted degree check node construction partitions set edges evi contains 
edges incident symbol node edge incident denoted evki dvi neighbourhood depth variable node nvli defined set check nodes included subgraph tree spreading variable node within depth demonstrated fig complement nvli set check nodes subgraph tree generated way constructed root given parameters define peg construction follows progressive algorithm peg begin dvi begin edge first edge incident check node lowest degree else expand subgraph symbol node depth cardinality nvli stops increasing less evki edge evi edge incident check node chosen set lowest degree end end end presented check nodes degree decision must made selection check node random selection check node according order improved construction construction chooses random though deterministic nature ordered check node process might use example peg construction given fig setting peg construction maximises local girth variable node new edge added node discovery stopping trapping sets peg construction modified peg construction notable ability create high girth ldpc codes however number cycles controlled trapping sets formed combination several cycles peg algorithm higher girth aternate constructions contains trapping sets thus leaving error floor open improvement randpeg construction improves upon peg algorithm minimizing cycles time reducing computational complexity peg algorithm improved adding objective function avoid small trapping sets ldpc qcldpc codes used many applications constructed using improved randpeg algorithm objective function used improved randpeg algorithm detects trapping sets removing trapping sets many trapping sets possible without adversely affecting performance ldpc code characterization trapping sets achieved locations check nodes different levels trees see fig resulting construction follows improved randpeg algorithm rpeg may fig peg construction check nodes chosen based index order first edge chosen variable node chosen check nodes lowest degree random generation subgraphs subsequent edge placement factors highlight construction method edge choices simplistic subgraphs low depth decision making continues considered edge choice restricted due connections subgraph check nodes choices observed remaining figures one notable choice edge decision remaining option chosen becomes gives uniform check node degree sequence begin dvi begin edge first edge incident check node lowest degree else expand subgraph symbol node depth cardinality nvli stops increasing less remove check nodes appear least tree spreading removes check nodes would create cycles size compute number trapping sets would created selected remove check nodes would create trapping sets remove check nodes create smallest number trapping sets evki edge evki edge incident check node chosen remaining nodes else declare design failure end end end end end improved randpeg construction algorithm high computational complexity performs task avoiding trapping sets optimally given dimensions ldpc code possible improvements construction method include lowering computational complexity potentially lowering girth removal cycles unnecessary cycles contribute trapping sets inclusion lower girth construction also contains small trapping sets could lead ldpc code higher decoding performance removing stopping trapping sets performance ldpc codes constrained presence cycles trapping sets within code paritycheck matrix discuss two methods removing trapping sets addition redundant equation use tanner graph covers redundant equations adding redundant equation equivalent adding 
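A compact, simplified sketch of the PEG edge-placement loop given above, for a column-regular code: each new edge goes to a lowest-degree check node outside the variable node's currently reachable set. The full algorithm expands the neighbourhood tree depth by depth until its cardinality stops increasing; this sketch approximates that with a plain reachability search, so it captures the greedy spirit rather than the exact local-girth maximization:

import numpy as np
from collections import deque

def bfs_checks(adj_v, root):
    # all check nodes reachable from variable node `root` in the partial graph
    seen_v, seen_c = {root}, set()
    queue = deque([root])
    while queue:
        v = queue.popleft()
        for c in adj_v[v]:
            if c not in seen_c:
                seen_c.add(c)
                for u in range(len(adj_v)):
                    if c in adj_v[u] and u not in seen_v:
                        seen_v.add(u)
                        queue.append(u)
    return seen_c

def peg(n_v, n_c, dv):
    # place dv edges per variable node, greedily avoiding short cycles
    adj_v = [set() for _ in range(n_v)]
    deg_c = np.zeros(n_c, dtype=int)
    for v in range(n_v):
        for k in range(dv):
            if k == 0:
                c = int(np.argmin(deg_c))      # first edge: lowest-degree check
            else:
                reached = bfs_checks(adj_v, v)
                pool = [c for c in range(n_c) if c not in reached]
                if not pool:                   # saturated: just avoid double edges
                    pool = [c for c in range(n_c) if c not in adj_v[v]]
                c = min(pool, key=lambda c: deg_c[c])
            adj_v[v].add(c)
            deg_c[c] += 1
    return adj_v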
redundant row matrix used attempt remove trapping sets present margulis code trapping sets margulis code elementary point trapping sets point trapping sets subsets variable nodes contain errors ever occur throughout decoding process redundant parity check row identified added matrix potentially disrupts elementary trapping sets row identified random search relies information trapped variables available may decoding random searches used applied error correction structured search considered useful structured search identifies variable check nodes connect trapping sets combines projection involved nodes redundant row added eliminate effect trapping sets way disrupt trapping sets margulis code projection variables variables redundant equation row weight one reliably achieved extending trapping sets trapping sets see fig given margulis code paritycheck matrix elementary trapping set contains fixed number check nodes let denote number check nodes connected two variable nodes within trapping set therefore two variable nodes three check nodes must added extend trapping set trapping set one check node connected added variable nodes created extension avoids creation margulis code possible two configurations two check nodes basic trapping set connected two additional variable nodes one additional check node second configuration additional variable nodes share check node configurations demonstrated fig existing check variable nodes neighbouring additional check variable nodes linearly combined generate redundant equation structured search used ensure projection row weight one trapping set addition redundant equation focuses point elementary trapping sets margulis code structure trapping sets well known method applied ldpc codes location structure trapping sets within code unknown redundant rows computationally inexpensive compute however code rate resulting ldpc code reduced extra row increases number operations per decoding iteration though negligible amount another potential problem success rate solution addition redundant equation guarantee trapping set disrupted tanner graph covers another method capable eliminating trapping sets utilization graph covers method constructs ldpc code length given code length parity check matrix code denoted initialized expansion expansion fig trapping set structure trapping set expansion configurations left right two expansion variables denoted check node connected variables nodes denoted unsatisfied check nodes original trapping set denoted variable nodes connected check nodes denoted check nodes degree one expansion trapping set denoted node labels follow similarly operation changing value termed edge swapping graph covers method requires locations dominant trapping sets known method edge swapping described follows graph covers algorithm take two copies code since codes identical share trapping sets initialize swappede dges frozene dges order trapping sets critical numbers choose trapping set tanner graph minimal critical number let denote set edges swappede dges step else step swap arbitrarily chosen edge frozene dges set swappede dges swappede freeze edges swapped following steps set frozene dges frozene dges repeat steps trapping sets desired size removed possible improvements graph cover method prioritize specific edges swapping freezing avoid creating trapping sets critical number however experimentally trapping sets minimal critical number removed using algorithm graph covers method gave improved fer results tanner code margulis code mackay code using 
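The graph-cover idea above can be made concrete as a degree-2 cover of the Tanner graph: two copies of H, with a chosen set of edges rerouted across the copies (the swapped edges). A minimal sketch with my own naming; selecting which edges to swap and freeze is the part the algorithm above iterates over trapping sets:

import numpy as np

def two_cover(H, swapped):
    # parity-check matrix of a degree-2 graph cover of H; `swapped` is a set
    # of (check, variable) edges of H routed across the two copies, all other
    # edges stay within their copy
    m, n = H.shape
    H2 = np.zeros((2 * m, 2 * n), dtype=int)
    for c in range(m):
        for v in range(n):
            if H[c, v]:
                if (c, v) in swapped:      # cross the copies
                    H2[c, n + v] = 1
                    H2[m + c, v] = 1
                else:                      # stay within each copy
                    H2[c, v] = 1
                    H2[m + c, n + v] = 1
    return H2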
gallager decoding algorithm decoding method constant throughout results application graph covers optimize fer performance using may arbitrary decoding algorithm ldpc code created code code rate minimum distance increase minimum distance code gives higher error correcting capabilities however lower code rate could decrease overall efficiency lower row column weight gives higher fer performance though low decoding complexity associated removing trapping sets severe surveyed construction methods avoid code length decoding speed check nodes etc current research goal remains creation construction modifiable construction either avoid remove small elementary trapping sets without penalty code error correcting ability decoding efficiency viii onclusion throughout survey covered literature surrounding ldpc codes communication channels decoding techniques negative impact cycles ldpc code efficiency noted problem stopping sets trapping sets defined discussed including dominance small elementary trapping sets awgnc small variety partial solutions randomized progressive algorithm tanner graph covers discussed research goal remains find constructions ldpc codes without small trapping sets acknowledgment authors would like thank professor ian turner worked closely throughout research emeritus professor dawson harry bartlett help final stages submission dhammika jayalath help decoding simulations awgnc xuan suggestions present ber data computational resources services used work provided hpc research support group queensland university technology brisbane australia price supported apa scholarship eferences diouf declercq ouya vasic improved peg construction large girth codes ieee intern symp turbo codes iterative inf proc istc mackay neal near shannon limit performance low density parity check codes electron vol shannon mathematical theory communication bell system technical journal vol ieee standard information local metropolitan area specific part wireless lan medium access control mac physical layer phy specifications ieee std oct etsi digital video broadcasting dvb second generation framing structure channel coding modulation systems broadcasting interactive services news gathering broadband satellite applications gallager codes ire trans inf theory vol bonello chen hanzo codes rateless relatives ieee commun surv vol richardson error floors ldpc codes proc annual allerton conference commun control computing vol johnson weller codes iterative decoding partial geometries ieee trans vol ivkovic chilappagari vasic eliminating trapping sets codes using tanner graph covers ieee trans inf theory vol proietti telatar richardson urbanke analysis codes binary erasure channel ieee trans inf theory vol margulis explicit constructions graphs without short cycles low density codes combinatorica vol mackay postol weaknesses margulis codes electronic notes theoretical computer science vol baldi cryptography springer science business richter finding small stopping sets tanner graphs ldpc codes turbo codes related topics international source channel coding turbocoding mcgregor milenkovic hardness approximating stopping trapping sets ieee trans inf theory vol shedsale sarwade review construction methods regular ldpc codes indian journal comput sci vol hill first course coding theory oxford clarendon press peterson weldon codes mit press richardson urbanke modern coding theory cambridge university press elias coding two noisy channels third london symposium vol poddar low density parity check codes complexity vol 
shokrollahi ldpc codes introduction digital fountain tech rep traore kant jensen message passing algorithm linear programming decoding ldpc linear block codes aalborg university colavolpe germi application factor graphs algorithm isi channels ieee trans vol sharon litsyn goldberger efficient schedule ldpc decoding proc conv electric electron engineers hailes maunder hanzo survey ldpc decoders ieee commun surv vol sharon litsyn goldberger convergence analysis serial schedules ldpc decoding turbo codes related topics international source channel coding turbocoding kschischang frey loeliger factor graphs algorithm ieee tran inf theory vol zhang fossorier shuffled belief propagation decoding asilomar conf signals systems computers vol kfir kanter parallel versus sequential updating belief propagation decoding physica statistical mechanics applications vol hocevar reduced complexity decoder architecture via layered decoding ldpc codes signal processing systems casado griot wesel informed dynamic scheduling decoding ldpc codes ieee international conf haykin communication systems john wiley sons tian jones villasenor wesel construction irregular ldpc codes low error floors ieee intern conf commun icc vol may richardson shokrollahi urbanke design capacityapproaching irregular codes ieee trans inf theory vol diouf declercq ouya vasic ldpc code design avoiding short trapping sets ieee intern symp inf theory isit orlitsky viswanathan zhang stopping set distribution ldpc code ensembles ieee trans inf theory vol ripoll barraza new algorithm construct ldpc codes large stopping sets simulation vol ranganathan divsalar vakilinia wesel design irregular ldpc codes using algorithmic cancellation ieee intern symp inf theory laendner milenkovic algorithmic combinatorial analysis trapping sets structured ldpc codes intern conf wireless networks commun mobile computing vol richter hof construction method irregular ldpc codes without small stopping sets ieee intern conf vol mackay good codes based sparse matrices ieee trans inf theory vol krishnan shankar computing stopping distance tanner graph ieee trans inf theory vol sankaranarayanan chilappagari radhakrishnan vasic failures gallager decoder analysis applications proc inf theory applic works ucsd vol luby mitzenmacher shokrollahi spielman efficient erasure correcting codes ieee trans inf theory vol fekri decoding codes binary erasure channel ieee trans inf theory vol danjean declercq planjery vasic selection finite alphabet iterative decoders ldpc codes bsc ieee inf theory works itw laendner milenkovic ldpc codes based latin squares cycle structure stopping set trapping set analysis ieee trans vol eleftheriou arnold progressive tanner graphs ieee global telecommun vol eleftheriou arnold regular irregular progressive tanner graphs ieee trans inf theory vol milenkovic soljanin whiting asymptotic spectra trapping sets regular irregular ldpc code ensembles ieee trans inf theory vol venkiah declercq poulliat design cages randomized progressive algorithm ieee commun vol wang yedidia draper construction qcldpc codes ieee intern symp turbo codes related topics laendner hehn milenkovic huber one redundant equation matter ieee globecom tanner sridhara fuja class ldpc codes proc iscta rosenthal vontobel constructions ldpc codes using ramanujan graphs ideas margulis proc allerton conference control computing mackay encyclopedia sparse graph codes aiden price awarded qut dean scholarship received bsc class honours mathematics queensland university technology aiden 
currently studying phd qut school mathematics supervision doctor harry bartlett emeritus professor dawson funded apa scholarship research interests coding theory application digital communications cryptography joanne hall received bsc mphil mathematics australian national university graduated phd rmit university supervision asha rao information security informatics research group hall spent one year postdoctoral research scientist charles university prague four years lecturer queensland university technology brisbane returned rmit university lecturer school science research interests algebraic combinatorial structures applications digital communication
| 7 |
may empirical analysis approximation algorithms euclidean traveling salesman problem yihui jiaotong university china ming xiang jiaotong university china heyihui mxiang abstract space cost function euclidean distance euclidean distance two cities applications many disciplines traveling salesman problem tsp classical computer science optimization problem applications industrial engineering theoretical computer science bioinformatics several disciplines recent years plethora novel approaches approximate solutions ranging simplistic greedy cooperative distributed algorithms derived artificial intelligence paper perform evaluation analysis cornerstone algorithms euclidean tsp evaluate greedy genetic algorithms use several datasets input algorithms including small dataset medium-sized dataset representing cities united states synthetic dataset consisting cities test algorithm scalability discover greedy algorithms efficiently calculate solutions smaller datasets genetic algorithm best performance optimality medium large datasets generally longer runtime implementations publicly available simplification allows survey several cornerstone algorithms without introducing complex scenarios remainder paper organized follows section briefly review first solutions survey variants tsp describe algorithms used experiment section description benchmark datasets results experiment detailed section explains findings compares performance algorithms conclude describe future work section background example tsp illustrated figure input collection cities two dimensional space input represented distance matrix pair cities list points denoting coordinate city latter method distances calculated using euclidean geometry tour shown subfigure although shown figure edge edge weight denoting distance two nodes cities due computational complexity tsp may necessary approximate optimal solution optimal tour shown small graphs may possible perform exhaustive search obtain optimal solution however number cities increases solution space problem complexity running time $O(n!)$ number cities $n$ number possible edges $n(n-1)/2$ number possible tours $(n-1)!/2$ since tour start point appears twice start node start node introduction known traveling salesman problem tsp first formulated one studied optimization problems date problem follows given list cities distance pair cities find shortest possible path visits every city exactly returns starting city tsp broad applications including lasers sculpt microprocessors delivery logistics mail services name tsp area active research fact several variants derived original tsp paper focus euclidean tsp euclidean tsp vertices correspond points https algorithm optimization simple local search algorithm first proposed croes solving tsp main idea behind take route crosses reorder complete local search compare every possible valid combination swapping mechanism technique applied travelling salesman problem well many related problems include vehicle routing problem vrp well capacitated vrp require minor modification algorithm mechanism swap manipulates given route
route end add order new route algorithms return new route move discussion algorithms used evaluation first describe upper bound tsp section traditional greedy approaches discussed section section finally discuss genetic algorithm section genetic algorithm genetic algorithms search heuristics attempt mimic natural selection many problems optimization artificial intelligence genetic algorithm population candidate solutions evolved time towards better solutions evolutions generally occur mutations randomization recombination define fitness function differentiate better worse solutions solutions individuals higher fitness scores likely survive time final solution found population converges solution within threshold however great care must taken avoid trapped local optima apply genetic algorithm tsp define fitness function length tour supposed ordering cities number cities fitness score tsp becomes cost tour denote distance random path finding worst case tsp hard best one uniformly generate random path available edges use upper bound optimal path benchmark algorithms greedy algorithm greedy heuristic based kruskals algorithm give approximate solution tsp algorithm forms tour shortest route constructed edges tour must form cycle unless selected number edges equal number vertices graph selected edge appended tour increase degree node algorithm begins sorting edges least weight heavily weighted edges sorted least edge selected added tour violate conditions algorithm continues selecting next edge adding tour process repeated vertices reached tour result minimum spanning tree solution tsp runtime greedy algorithm log generally returns solution within heldkarp lower bound genetic algorithm begins initial random population candidate solutions set paths may may good solutions move forward one time step time step perform set probabilistic statistical methods select mutate produce offspring population traits similar best individuals highest fitness runtime comparison greedy figure dataset plane united states cities figure runtime comparison axis log scale repeat process population becomes homogeneous running time genetic algorithms variable dependent problem heuristics used however individual population require space storage path genetic crossover space requirement remains best genetic algorithms find solutions within optimal tour certain graphs tour length comparison tour length greedy optimal random benchmark algorithms using publicly available datasets additionally test scalability algorithms generated synthetic dataset consisting cities dataset names numeric digits represent number cities dataset datasets follows datasets except found online datasets represent consisting locations cities united states visual representation dataset plane shown figure datasets known optimal tour case use random path algorithm infer upper bound optimal tour experiment dataset figure tour length comparison axis log scale divided random tour length solution similar optimal small datasets become worse larger datasets terms running time figure best algorithm greedy algorithm however terms optimal tour length solution best algorithm line expectations alludes fact different heuristics better suited different situations shown figure genetic algorithm performs fairly consistently comparison greedy algorithms across datasets highlighted figure running time genetic almost linear suggests larger datasets running time concern genetic algorithm used figure demonstrates genetic algorithm maintains smaller percent optimal algorithms 
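A runnable sketch of the 2-opt mechanism described in the background section: keep the prefix of the route, reverse the middle segment, append the tail, and accept the move whenever the tour gets shorter. The helper names are mine; `dist` is assumed to be a symmetric city-distance matrix:

def two_opt_move(route, i, k):
    # the three-step move above: route[:i] + reversed route[i:k+1] + route[k+1:]
    return route[:i] + route[i:k + 1][::-1] + route[k + 1:]

def tour_length(route, dist):
    # cycle length, returning to the start city
    return sum(dist[route[j]][route[(j + 1) % len(route)]]
               for j in range(len(route)))

def two_opt(route, dist):
    # repeat improving 2-opt moves until a local optimum is reached
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 1):
            for k in range(i + 1, len(route)):
                cand = two_opt_move(route, i, k)
                if tour_length(cand, dist) < tour_length(route, dist):
                    route, improved = cand, True
    return route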
Datasets. We benchmark the algorithms using publicly available datasets, and we additionally test the scalability of the algorithms on a generated synthetic dataset. The numeric digits in a dataset's name represent the number of cities in that dataset. All datasets except the synthetic one can be found online; they represent sets of locations of cities in the United States. (Figure: the medium-sized dataset of United States cities plotted in the plane.) Not every dataset has a known optimal tour; in such cases we use the random-path algorithm to infer an upper bound on the optimal tour for the experiment. The synthetic dataset was generated by plotting uniformly distributed random points, so the resulting distances satisfy the triangle inequality and the dataset is correctly classified as an instance of the Euclidean TSP; creating the dataset outputs the list of cities represented as points.

Results and discussion. (Figures: runtime comparison of the algorithms, with the time axis on a log scale; tour-length comparison, each length divided by the random tour length, axis on a log scale; the solution generated by the genetic algorithm on the synthetic dataset.) The computed solutions are similar to the optimal tours on the small datasets and become worse on the larger ones. In terms of running time, the best algorithm is the greedy algorithm; in terms of tour length, however, the genetic algorithm is the best, which is in line with expectations and alludes to the fact that different heuristics are better suited to different situations. The genetic algorithm performs fairly consistently in comparison with the greedy algorithms across the datasets, and its running time is almost linear, which suggests that for larger datasets running time is not a major concern when the genetic algorithm is used; it also maintains a smaller percentage above the optimal tour than the other algorithms. Overall, we see that the genetic algorithm achieves high accuracy with better scaling behavior than the other heuristics, especially on the larger datasets; surprisingly, it even found the optimal solution on the synthetic random dataset.

Conclusion. Most of the algorithms surveyed attempt to solve the TSP in a linear fashion; originating in artificial intelligence, the genetic algorithm is different in character from the greedy approaches. The literature suggests that the best algorithms focus on iteration and convergence to find optimal tours, something genetic algorithms attempt to achieve. For example, the large-step Markov chain approach relies on Markov chains to drive the convergence of many paths towards a global optimum, and several papers cite Markov chain methods among the best known heuristics for the TSP; recent studies include adaptive Markov chain Monte Carlo algorithms. Many of these extend the Metropolis algorithm, notably simulated annealing, which attempts to mimic the randomness of particles as temperature varies. This supports the conclusion that algorithms inspired by artificial intelligence can perform well in finding solutions to the TSP, though they may not be suitable where a guarantee is required. In this paper we surveyed several key cornerstone approaches to the TSP, selected four algorithms, and tested their performance on a variety of public datasets. Our results suggest that genetic algorithms, and approaches from artificial intelligence more generally, are able to find good solutions.

References
Arora. Polynomial time approximation schemes for Euclidean TSP and other geometric problems. Foundations of Computer Science, Annual Symposium, IEEE.
Bryant, Benjamin. Genetic algorithms and the traveling salesman problem. Department of Mathematics, Harvey Mudd College.
Burkardt. Data for the traveling salesperson problem. Online dataset collection.
Croes. A method for solving traveling-salesman problems. Operations Research.
Grefenstette, Gopal, Rosmaita, Van Gucht. Genetic algorithms for the traveling salesman problem. Proceedings of the First International Conference on Genetic Algorithms and their Applications, Lawrence Erlbaum, New Jersey.
Haque, Shah, Ejaz. An empirical evaluation of approximation algorithms for the metric traveling salesman problem.
Hoffman, Wolfe, Garfinkel, Johnson, Papadimitriou, Gilmore, Lawler, Shmoys, Karp, Steele. The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization. Wiley and Sons.
Homaifar, Guan, Liepins. Schema analysis of the traveling salesman problem using genetic algorithms. Complex Systems.
Hong, Kahng, Moon. Improved large-step Markov chain variants for the symmetric TSP. Journal of Heuristics.
Kim, Shim, Zhang. Comparison of TSP algorithms. Project for Models in Facilities Planning and Materials Handling.
Mucha. A 13/9-approximation for graphic TSP. Theory of Computing Systems.
Peng, Baojun. Introduction to artificial intelligence.
Qiu, Zhang, Yan. An adaptive Markov chain Monte Carlo algorithm for the TSP. International Conference on Computer Science and Software Engineering, IEEE.
Reinelt. TSPLIB. Online library of TSP instances.
Rosenkrantz, Stearns, Lewis. An analysis of several heuristics for the traveling salesman problem. SIAM Journal on Computing.
| 2 |
Doppler Synthetic Aperture Radar Interferometry: a novel SAR interferometry for height mapping using ultra-narrowband waveforms
Birsen Yazici and Cagri Yanik, Department of Electrical, Computer and Systems Engineering, Rensselaer Polytechnic Institute, Troy, USA (corresponding author: Yazici). Submitted to Inverse Problems.

Abstract. This paper introduces a novel radar interferometry based on the Doppler synthetic aperture radar (Doppler-SAR) paradigm. Conventional SAR interferometry relies on wideband transmitted waveforms to obtain high range resolution, and the topography of a surface is directly related to the range difference between two antennas configured at different positions. The imaging modality introduced here uses ultra-narrowband continuous waves (UNCW) and takes advantage of the high-resolution Doppler information provided by UNCWs to form high-resolution SAR images. We introduce the theory of Doppler-SAR interferometry, derive the interferometric phase model and develop the equations of height mapping. Unlike in conventional SAR interferometry, we show that the topography of a scene is related to the difference in Doppler between two antennas configured at different velocities: while conventional SAR interferometry uses range, Doppler due to the aperture, and the Doppler due to the interferometric phase for height mapping, Doppler-SAR interferometry uses Doppler, Doppler rate, and the Doppler rate due to the interferometric phase. We demonstrate the theory in numerical simulations. Doppler-SAR interferometry offers the advantages of robust and environmentally friendly operation and of lightweight, inexpensive systems, suitable for small platforms and for passive applications using sources of opportunity transmitting UNCW.

1. Introduction. Synthetic aperture radar (SAR) interferometry is a powerful tool for mapping surface topography and monitoring dynamic processes. It is an integral part of a wide range of applications in many disciplines, including environmental remote sensing, geosciences and climate research, earthquake and volcanic research, mapping of the Earth's topography, ocean surface current monitoring, hazard and disaster monitoring, as well as defense- and security-related research. The basic principles of SAR interferometry were originally developed in radio astronomy; interferometric processing techniques and systems were later developed for, and applied to, Earth observation. SAR interferometry exploits the phase differences of two SAR images to extract information about the medium that is not present in a single SAR image. Conventional SAR interferometry relies on wideband transmitted waveforms to obtain high range resolution, and the phase difference of two wideband SAR images is related to the range difference. There are many different interferometric methods, depending on the configuration of the imaging parameters in space, time, frequency, etc.; when the two images are acquired from different positions, the phase difference is related to the topography of the surface.

In this paper we develop the basic principles of a new interferometric method based on the Doppler-SAR paradigm to determine the topography of a surface. Unlike conventional SAR, Doppler-SAR uses ultra-narrowband continuous waves to form high-resolution images: conventional SAR takes advantage of the high range resolution due to bandwidth and of the Doppler induced by the movement of the SAR antenna, whereas Doppler-SAR takes advantage of the high temporal Doppler resolution provided by UNCWs. We develop the phase relationship between two Doppler-SAR images and show that the phase difference is related to the Doppler difference; we then approximate this phase difference and derive the equations of height mapping. Whereas conventional wideband SAR interferometry requires two antennas at different positions for height mapping, Doppler-SAR interferometry provides a new degree of freedom in system design by allowing height mapping with antennas at different velocities. Additional advantages of Doppler-SAR interferometry include the following: (i) small, lightweight, inexpensive and easy-to-calibrate hardware, together with high SNR and a long effective range of operation, make Doppler-SAR interferometry a suitable modality for applications requiring high SNR and long-range operation and for low-payload platforms such as small uninhabited aerial vehicles; (ii) it makes effective use of the electromagnetic spectrum and provides environmentally friendly illumination; (iii) passive
applications may not require dedicated transmitters, since existing radio-frequency signals of opportunity often have the desired properties. To the best of our knowledge, this is the first interferometric method developed in the Doppler-SAR paradigm. We present the theory for two monostatic antennas; the method, however, can easily be extended to bi-static and multi-static configurations, as well as to synthetic aperture imaging applications in acoustics.

The rest of the paper is organized as follows. In Section 2 the SAR geometry and notation are defined. Section 3 describes wideband SAR image formation, the layover effect and the basic principles of wideband SAR interferometry, from the perspective relevant to the subsequent development. Section 4 summarizes the Doppler-SAR data model, image formation and layover. Section 5 introduces the basic principles of Doppler-SAR interferometry and compares the results with the wideband SAR case. Section 6 presents numerical simulations, and Section 7 concludes the paper.

2. Configurations and notation. Consider two SAR systems as shown in Fig. 1 (imaging geometry of an interferometric SAR system: two antennas following trajectories, and a scatterer located at height h). Let the trajectories of the first and second antennas be denoted by gamma_1(s) and gamma_2(s), respectively. A point x on the Earth's surface is located at an unknown height h(x) representing the ground topography. Let rho denote the target reflectivity; we assume that scattering takes place on the surface of the Earth. The major notation used throughout the paper is tabulated in Table 1 and includes: x, a location on the Earth's surface at unknown height; rho, the surface reflectivity; gamma_i(s), the trajectory of the i-th antenna; r_i, the range of the i-th antenna; the center frequency of the transmitted waveforms; the demodulated wideband received signal of the i-th antenna; the iso-range and iso-Doppler surfaces; the filtered-backprojection (FBP) operator K_i^W and the wideband SAR image I_i^W; the wideband interferometric phase and the interferometric phase cone; the baseline vector b; z, the vector from a known reference scatterer position to the unknown scatterer location, and its component perpendicular to the look direction; the flattened wideband interferometric phase; a smooth windowing function and its duration; the Doppler-SAR data d_i and the iso-Doppler and iso-Doppler-rate surfaces; the Doppler-SAR FBP operator K_i^U and image I_i^U; the Doppler-SAR interferometric phase; the baseline velocity; and the flattened Doppler-SAR interferometric phase.

3. Wideband SAR interferometry. The basic principles of SAR interferometry are described in many sources. In this section we summarize those principles and set the notation in the form relevant to our subsequent presentation of Doppler-SAR interferometry. We begin with the wideband SAR received-signal model, derive the interferometric phase model, provide a geometric interpretation of the interferometric phase, and develop the equations of height mapping.

3.1. Wideband SAR received-signal model. Assume the SAR antennas transmit wideband waveforms. Under the Born approximation, the received signal of the i-th antenna is modeled as an integral over the scene of the reflectivity delayed by twice the range divided by the speed of light; the amplitude term depends on the antenna beam patterns, the geometrical spreading factors and the transmitted waveforms. Let the transmitted waveforms have a given bandwidth and center frequency. We demodulate the received signals and approximate the range around the zero-Doppler time of the i-th antenna, the time at which x minus the antenna position is orthogonal to the antenna velocity, keeping the first-order term, which involves the unit look direction and the antenna velocity. Redefining the amplitude accordingly, we obtain the demodulated received-signal model; a plausible explicit form is given below.
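The journal-formatted equations did not survive this copy, so the following LaTeX block is a hedged reconstruction of the demodulated wideband model just described, written under standard assumptions; the amplitude A_i, the zero-Doppler time s_{0,i} and the speed of light c_0 are our labels, and the block should be read as a sketch rather than the paper's exact equation.

```latex
% Hedged reconstruction of the demodulated wideband received-signal model.
% r_i(s,x) = |x - \gamma_i(s)| is the range of the i-th antenna.
\begin{align*}
  d_i(s,\omega) &\approx \int e^{-\mathrm{i}\,\omega\, 2 r_i(s,x)/c_0}\,
      A_i(s,\omega,x)\,\rho(x)\,\mathrm{d}x,\\
  r_i(s,x) &\approx r_i(s_{0,i},x)
      - (s - s_{0,i})\,\widehat{(x-\gamma_i)}\cdot\dot\gamma_i(s_{0,i}),
\end{align*}
% where s_{0,i} is the zero-Doppler time of the i-th antenna, at which
% x - \gamma_i(s) is orthogonal to the antenna velocity \dot\gamma_i(s).
```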
3.2. Wideband SAR image formation and layover. Many different algorithms have been developed to form wideband SAR images, among them seismic migration, backprojection and chirp-scaling algorithms. These algorithms take advantage of the high range resolution provided by wideband transmitted waveforms and of the Doppler information provided by the movement of the antennas. The location of a scatterer is identified by intersecting two surfaces with the ground topography (Fig. 2: a scatterer is reconstructed at the intersection of the range sphere and the Doppler cone, whose vertex is the sensor position and whose axis of rotation is the velocity vector). More precisely, the image of a scatterer is formed at the points satisfying three equations: the range-sphere equation, the Doppler-cone equation and the ground-topography equation. The measured range and Doppler define the first two surfaces, and the iso-range contours are defined by the intersection of the range sphere with the ground topography.

Without loss of generality, we consider a filtered-backprojection (FBP) type method, in which the demodulated signals are backprojected onto iso-range contours defined on a reference surface. In the absence of height information, the demodulated signal is backprojected onto the intersection of the range sphere with a known reference surface; without loss of generality, we assume a flat reference surface at zero height. Let K_i^W denote the FBP operator; the reconstructed image I_i^W is obtained by integrating the filtered data over these contours, the filter being chosen with respect to a variety of criteria. The magnitude of the reconstructed image is a measure of the target reflectivity, whereas its phase depends on the true location of the scatterer. However, since the true height of the scatterer is unknown, and hence a different reference surface is used, the scatterer is reconstructed at a location different from the true one. This positioning error due to incorrect height information is known as layover (Fig. 3: without knowledge of the true height, the image of the scatterer is formed where the range sphere meets the flat surface). Without knowledge of the ground topography, additional information or measurements are needed to reconstruct scatterers at their correct locations; this additional information is provided by a second antenna at a different vantage point.

3.3. Wideband SAR interferometric height reconstruction. An interferogram is formed by multiplying one SAR image by the complex conjugate of the other; prior to the multiplication, the two intensity images are co-registered so that pixel locations correspond to the same scatterer position in the scene. Because of the different imaging geometries, the positioning errors due to layover are different in the two SAR images. We refer to the phase of the interferogram as the wideband interferometric phase; it provides the third measurement needed to determine the location of the scatterer. In general the range difference amounts to many multiples of the wavelength, so the unique phase proportional to the range difference must be determined by a phase-unwrapping process. The surface defined by the measured interferometric phase is a hyperboloid with foci at the two antennas. Assuming that the distance between the antennas is much smaller than the ranges from the antennas to the scene, we approximate the hyperboloid by a cone whose vertex is the first antenna and whose axis of rotation is the baseline vector b (the vector from the first antenna to the second); we call this surface the interferometric phase cone. The location of the scatterer is then given by the solution of three equations, range sphere, interferometric phase cone and Doppler cone, in which the measured quantities are defined in terms of the true location of the scatterer (Fig. 4: wideband SAR interferometry provides a third algebraic equation; the scatterer lies at the intersection of the range sphere, the interferometric phase cone with axis b, and the Doppler cone whose axis is the velocity of the first antenna). A plausible explicit form of this system is given below.
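Since the displayed equations were lost in this copy, the following LaTeX block is our hedged reconstruction of the three surfaces whose intersection locates the scatterer, with notation as in the surrounding text; treat the exact constants and sign conventions as assumptions.

```latex
% Hedged reconstruction of the wideband height-mapping system.
\begin{align*}
  |x - \gamma_1| &= r_1  &&\text{(range sphere)},\\
  \widehat{(x-\gamma_1)}\cdot \dot\gamma_1 &= f_1^{d}
      &&\text{(Doppler cone)},\\
  \frac{\omega_0}{c_0}\,\hat u\cdot b &= \phi_W
      &&\text{(interferometric phase cone)},
\end{align*}
% with b = \gamma_2 - \gamma_1 the baseline, \hat u the unit vector from the
% first antenna to the scatterer, and \phi_W \approx (\omega_0/c_0)(r_1 - r_2)
% the unwrapped wideband interferometric phase.
```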
In practice the interferogram is flattened by subtracting the expected phase of a surface of constant elevation. Let z denote the position of the scatterer relative to a known reference scatterer. Under the assumption that |z| is small compared with the range, the flattened phase becomes proportional to the projection of the baseline onto the component of z perpendicular to the look direction, scaled by the inverse of the range to the first antenna; since only the perpendicular component of z contributes, the flattened phase can alternatively be expressed directly in terms of that component. (Fig. 5: illustration of the vectors involved in the wideband interferometric phase; known vectors are drawn in red, unknown vectors in black. The wideband interferometric phase is related to the projection of the baseline vector onto the unit vector from the first antenna to the scatterer.)

4. Doppler-SAR: data model and image formation. Consider two antennas following the trajectories gamma_1 and gamma_2 and transmitting continuous waves, as in Fig. 1. Let the transmitted waveform be a single-frequency continuous wave of given center frequency. Under the Born approximation, the scattered field at the i-th antenna is given by a scene integral of the same type as before. Let phi be a smooth windowing function of finite support and given duration. We correlate the windowed received signal with a scaled and translated version of the transmitted signal, which yields the Doppler-SAR data d_i as an integral over slow time and the scene. We then approximate the range around the slow time, which brings in the Doppler frequency f_i^d of the i-th antenna, and expand the Doppler around the critical time s_i^d at which the Doppler rate vanishes; the Doppler rate involves the acceleration of the antenna and the component of its velocity perpendicular to the look direction. Choosing s_i^d in this way and redefining the slowly varying amplitude, we obtain the final Doppler-SAR data model, a weighted integral of the scene reflectivity. Plausible explicit forms of the Doppler and Doppler-rate quantities are given below.
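The following LaTeX block is our hedged reconstruction of the Doppler and Doppler-rate quantities driving the Doppler-SAR model, written to be consistent with the definitions in the surrounding text; the factor 2 f_0 / c_0 and the sign convention are assumptions on our part.

```latex
% Hedged reconstruction of the Doppler and Doppler-rate quantities.
\begin{align*}
  f_i^{d}(s,x) &= \frac{2 f_0}{c_0}\,
      \widehat{(x - \gamma_i(s))}\cdot\dot\gamma_i(s)
      &&\text{(Doppler)},\\
  \dot f_i^{d}(s,x) &= \frac{2 f_0}{c_0}\Big(
      \widehat{(x-\gamma_i)}\cdot\ddot\gamma_i
      - \frac{|\dot\gamma_{i,\perp}(s,x)|^{2}}{|x-\gamma_i(s)|}\Big)
      &&\text{(Doppler rate)},
\end{align*}
% where \dot\gamma_{i,\perp} is the component of the antenna velocity
% perpendicular to the line of sight, and s_i^d(x) denotes the time at
% which the Doppler rate vanishes.
```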
4.1. Doppler-SAR image formation and layover. As in the wideband case, we reconstruct images by backprojection. The forward model shows that the data d_i is a weighted integral of the scene reflectivity over iso-Doppler contours. A scatterer located in the scene is reconstructed at the intersection of three surfaces: the iso-Doppler surface, the iso-Doppler-rate surface and the ground topography. The iso-Doppler surface is a cone whose vertex is the antenna location and whose axis of rotation is the antenna velocity; the iso-Doppler-rate surface can be viewed as a continuum of intersections of cones with expanding spheres centered at the sensor location, its geometry depending on the antenna trajectory and acceleration (Fig. 6, drawn for a linear trajectory at constant height). In the absence of topography information, we backproject the data onto iso-Doppler contours on a reference surface, without loss of generality the flat surface at zero height. Let K_i^U denote the corresponding FBP operator, with a suitably chosen filter, and I_i^U the reconstructed image. A scatterer located above the reference surface is then reconstructed with a position error, the counterpart of the layover effect observed in conventional wideband SAR images (Fig. 7: when the height of the scatterer is not known, it is reconstructed at an incorrect position, at the intersection of the iso-Doppler cone with the flat topography). The phase of the reconstructed image, however, is a function of the scatterer's true location and hence includes the height information.

Note that the phases of the reconstructed images depend on the Doppler rate at s_i^d and on the duration of the windowing function. Since the two imaging geometries may yield different Doppler rates, the phases of the two images carry different multiplicative factors; to equalize this effect, we multiply one of the reconstructed images by a compensating phase factor before interfering them.

4.2. Doppler-SAR interferometric height reconstruction. As in the wideband case, we form the two images I_1^U and I_2^U, co-register the intensity images, and multiply one by the complex conjugate of the other to form the interferogram; its phase is the Doppler-SAR interferometric phase. The scatterer lies on the surface defined by the measured interferometric phase, a surface described by intersections of two cones, one of continuously changing solid angle. Assuming that the distance between the antennas is much smaller than the ranges from the antennas to the scene, we approximate the quantities of the second antenna in terms of those of the first, using the baseline vector b and its component perpendicular to the first antenna's look direction. The resulting approximation of the interferometric phase involves the difference of the antenna velocities, which we call the baseline velocity. This shows that Doppler-SAR interferometry involves configuring antennas not only in position space but also in velocity space: the larger the difference between the antenna velocities relative to the range of the first antenna, the larger the interferometric phase; when the velocities of the two antennas are equal, the corresponding term vanishes. The measured phase defines the interferometric Doppler-rate surface, and the location of the scatterer is given by the solution of three equations: the iso-Doppler cone, the iso-Doppler-rate surface and the interferometric Doppler-rate surface (Fig. 8: the scatterer lies at the intersection of the Doppler cone and the two Doppler-rate-type surfaces, the interferometric phase measurement providing the third surface).

As in wideband SAR interferometry, the interferometric phase is flattened by subtracting the phase due to a scatterer at a known height; identifying the location of the scatterer is then equivalent to determining the offset z from the reference position. The flattened Doppler-SAR interferometric phase is related to the projection of the unknown z, more precisely of its component perpendicular to the look direction, onto the baseline velocity vector, scaled by the inverse of the range to the first antenna (Fig. 9: illustration of the key vectors of Doppler-SAR interferometry; known vectors in red, unknown vectors in black).

5. Comparison of Doppler-SAR and wideband interferometry. Table 2 tabulates the raw and flattened interferometric phases of the two modalities. In both cases, the larger the center frequency, the larger the interferometric phase, and the larger the range, the smaller the interferometric phase. In the wideband case, the larger the baseline, that is, the difference in the positions of the two antennas, the larger the interferometric phase; in the Doppler-SAR case, the larger the baseline velocity, that is, the difference in the velocities of the two antennas, the larger the interferometric phase.

6. Numerical experiments. We conducted numerical experiments for both wideband and Doppler-SAR interferometry.
The experimental setup is as follows. A scene of fixed size and resolution is imaged, with a single point target placed at the origin, at the center of the scene; two antennas fly parallel linear trajectories whose midpoints are aligned with the scene center. In the wideband experiment, the first antenna is placed at one height and the second at another; the two trajectories have equal length and the antennas move at the same velocity; a waveform with a flat spectrum of given bandwidth and center frequency is transmitted from both antennas, and fixed numbers of frequency and slow-time samples are used for imaging. In the Doppler experiment, the antennas are again placed at two different heights, but the first antenna moves at one velocity and the second at a different one; a single-frequency continuous waveform of given center frequency is transmitted from both antennas, a windowing function is used in processing, and fixed numbers of slow-time and fast-time samples are used for imaging.

6.1. Wideband SAR interferometry. Figures 10 and 11 show the images of the point target reconstructed using the first and the second antenna, respectively, assuming a flat ground topography at zero height. In both figures we see the displacement, due to the layover effect, in the range direction: each antenna reconstructs the target away from its true position. Next, we align the peaks of the two images and multiply the first image by the complex conjugate of the second to generate the interferogram (Fig. 12). To reconstruct the height we use the set of equations of Section 3: the Doppler-cone equation at a point gives the iso-Doppler contours, and in our scenario, where the trajectories are parallel, these contours are lines of constant along-track coordinate through the target position. Using this fact, we only need to compute the intersection of the iso-range contour with the interferometric-phase contour after fixing the along-track position; as the figures show, the target is already reconstructed at the correct along-track position, so the true target position can be recovered by searching over the sampled height interval at the reconstruction resolution. Fig. 13 shows the magnitudes of the residuals between the measured and candidate range (left) and between the measured and candidate interferometric phase (right), the measured values being derived from the phases of the reconstructed images; in each panel the dark blue area indicates the contour along which the magnitude of the difference is minimized. Fig. 14 combines the two panels and shows the intersection of the iso-range contour and the interferometric-phase contour: the exact computed intersection, indicated in white, coincides with the true target position, so the target is reconstructed at the correct position and height. A toy version of this residual-minimizing search is sketched below.
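The following Python sketch is a toy version of the height-reconstruction search just described: candidate positions are sampled along the height axis, the residuals of the two measured surfaces are evaluated, and the candidate minimizing both is kept. The geometry, the numbers and all names here are ours, chosen only for illustration, not the paper's parameters.

```python
# Toy grid search over candidate heights: minimize the range residual and
# the interferometric-phase residual simultaneously.
import math

def residuals(z, ant1, ant2, r1_meas, dphi_meas, wavelength):
    """Range residual and interferometric-phase residual at candidate z."""
    r1 = math.dist(z, ant1)
    r2 = math.dist(z, ant2)
    res_range = abs(r1 - r1_meas)
    res_phase = abs((2 * math.pi / wavelength) * (r1 - r2) - dphi_meas)
    return res_range, res_phase

def locate(ant1, ant2, r1_meas, dphi_meas, wavelength, grid):
    """Candidate on the grid minimizing the sum of the two residuals."""
    return min(grid, key=lambda z: sum(residuals(z, ant1, ant2, r1_meas,
                                                 dphi_meas, wavelength)))

if __name__ == "__main__":
    ant1, ant2 = (0.0, 0.0, 3000.0), (0.0, 50.0, 3000.0)   # assumed geometry
    target = (500.0, 200.0, 12.0)
    lam = 0.03
    r1 = math.dist(target, ant1)
    dphi = (2 * math.pi / lam) * (r1 - math.dist(target, ant2))
    grid = [(500.0, 200.0, h / 10.0) for h in range(0, 500)]
    print(locate(ant1, ant2, r1, dphi, lam, grid))   # recovers height 12.0
```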
6.2. Doppler-SAR interferometry. We proceed in a manner similar to the wideband case. The corresponding figures show the images reconstructed using the first and second antennas, respectively, each assuming a flat ground topography; each antenna reconstructs the target away from its true position. As in the wideband case, we align the peaks of the two images and multiply the first image by the conjugate of the second to form the interferogram of the Doppler images. To reconstruct the height we use the set of equations of Section 4, in which the evaluation points are approximated by the ends of the antenna trajectories farthest from the target position: for a linear trajectory at constant velocity the true zero-Doppler-rate point would lie at infinity, namely in the direction parallel to the velocity vector, so the best available estimate is the point of the trajectory farthest away from the target location. The figures illustrate the iso-Doppler surface through the target position together with the reconstructed and true target positions; notice that both images reconstruct the scatterer at the correct along-track position, the contour being given by the dark blue area where the corresponding residual magnitude is minimized. The next figures similarly illustrate the iso-Doppler-rate contour and the interferometric Doppler-rate contour; the final figure combines the three, the exact intersection of the three contours being indicated in white together with the true target position. The target is clearly reconstructed at the correct position and height.

7. Conclusions. We have presented a novel radar interferometry method based on the Doppler-SAR imaging paradigm, which uses single-frequency transmitted waveforms. It has several advantages over conventional SAR, including simpler, inexpensive hardware, high SNR, a long effective range of operation, and suitability for passive radar applications. We derived the interferometric phase relationship and showed that the interferometric phase depends on the difference in the velocities of the antennas, as opposed to the range difference observed in wideband SAR; thus, with Doppler-SAR interferometry one can reconstruct the ground topography even with co-located antennas, as long as their velocities are different. Furthermore, we showed that the true target position is determined by the intersection of the iso-Doppler, iso-Doppler-rate and interferometric Doppler-rate surfaces, which differ from the conventional wideband SAR surfaces used to determine the true target position. We presented numerical simulations of a single point scatterer observed by two antennas moving along linear trajectories to verify the interferometric method; we also conducted a conventional wideband SAR interferometric reconstruction for comparison and showed that wideband SAR interferometry accurately reconstructs the target location. The numerical simulations thus show that Doppler-SAR interferometry retains the accuracy of conventional SAR interferometry together with the advantages it affords. In future work we will analyze the sensitivity of the height estimation with respect to the observables and the parameters.

Acknowledgement. This material is based upon work supported by the Air Force Office of Scientific Research (AFOSR) and by a National Science Foundation (NSF) grant.

Appendix: approximations. Appendix A: for vectors x and y with |y| much smaller than |x|, a Taylor-series expansion gives the approximation |x - y| ≈ |x| - (unit vector of x) · y. Appendix B: under the same assumption, with the unit vector of x denoting the look direction, the far-field expansion expresses the transverse projection onto the plane whose normal vector is along the look direction, and hence the difference between the look directions of the two antennas.

References
Bamler, Hartl. Inverse Problems.
Rogers, Ingalls. Science.
Rogers, Ingalls, Rainville. Astronomical Journal.
Graham. Proceedings of the IEEE.
Zebker, Goldstein. Journal of Geophysical Research: Solid Earth.
Goldstein, Zebker. Nature.
Gabriel, Goldstein. International Journal of Remote Sensing.
Gabriel, Goldstein, Zebker. Journal of Geophysical Research: Solid Earth.
Hanssen. Radar Interferometry: Data Interpretation and Error Analysis. Springer.
Rosen, Hensley, Joughin, Madsen, Rodriguez, Goldstein. Proceedings of the IEEE.
Cherniakov, Moccia. Bistatic Radar: Emerging Technology. John Wiley and Sons.
Fritz, Rossi, Lachaise, Breit. Interferometric processing of ... data. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), IEEE.
Duque, Mallorqui. IEEE Transactions on Geoscience and Remote Sensing.
Wang, Yazici. IEEE Transactions on Image Processing.
Wang, Yazici. IEEE Transactions on Geoscience and Remote Sensing.
Wang, Yazici. SIAM Journal on Imaging Sciences.
Wang, Yazici. Synthetic aperture radar imaging of moving targets using ultra-narrowband continuous waveforms. European Conference on Synthetic Aperture Radar, Nuremberg, Germany.
Wang, Yazici. Detection and imaging of multiple ground moving targets using ultra-narrowband SAR. SPIE Defense, Security and Sensing, Baltimore.
Wang, Yazici. Bistatic synthetic aperture radar imaging using ultra-narrowband continuous waveforms. IEEE Radar Conference, Kansas City.
Wang, Yazici. Synthetic aperture radar imaging for arbitrary flight trajectories. International Conference on Digital Signal Processing, Corfu, Greece.
Yarman, Wang, Yazici. Inverse Problems.
Wang, Yarman, Yazici. IEEE Transactions on Geoscience and Remote Sensing.
Borden, Cheney. Inverse Problems.
Wang, Yarman, Yazici. Theory of passive synthetic aperture imaging. In: Excursions in Harmonic Analysis. Springer.
Zebker, Rosen. On the derivation of coseismic displacement fields using differential radar interferometry: the Landers earthquake. Geoscience and Remote Sensing Symposium (IGARSS), Surface and Atmospheric Remote Sensing: Technologies, Data Analysis and Interpretation, IEEE.
Madsen, Zebker, Martin. IEEE Transactions on Geoscience and Remote Sensing.
Prati, Rocca, Guarnieri, Damonti. IEEE Transactions on Geoscience and Remote Sensing.
Rodriguez, Martin. Theory and design of interferometric synthetic aperture radars. IEE Proceedings F: Radar and Signal Processing, IET.
Nolan, Cheney. IEEE Transactions on Image Processing.
Yarman. IEEE Transactions on Image Processing.
Yarman, Cheney. IEEE Transactions on Image Processing.
Prati, Rocca. International Journal of Remote Sensing.
Raney, Runge, Bamler, Cumming, Wong. IEEE Transactions on Geoscience and Remote Sensing.
Yazici, Cheney, Evren. Synthetic-aperture inversion in the presence of noise and clutter. Inverse Problems, IOP Publishing.
Wang, Yazici. Doppler synthetic aperture radar imaging. Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series.
| 5 |
Séminaire Bourbaki (January). Graph isomorphisms in quasi-polynomial time (after Babai and Luks). By Harald Helfgott.

Let Gamma_1 and Gamma_2 be two graphs with n vertices. Are they isomorphic? If they are, the set of isomorphisms from Gamma_1 to Gamma_2 can be identified with a coset H pi inside the symmetric group on n elements. How can one find pi and a set of generators of H? Giving an algorithm that answers these questions efficiently in all cases was a problem that remained open for a long time. Babai has shown how to solve these questions, and others that reduce to them, in quasi-polynomial time, that is, in time exp(O(log n)^{O(1)}). His strategy is based in part on the algorithm of Luks, which solved the case of graphs of bounded degree.

Introduction. Let two strings be given, that is, two maps x, y : Omega -> Sigma, where the alphabet Sigma and the domain Omega are finite sets. Any permutation group G < Sym(Omega) acts on the set Sigma^Omega of strings with domain Omega and alphabet Sigma. For us, to give a group will always mean to give a set of generators; to give a coset will mean to give an element of it together with a set of generators of the underlying group. The string isomorphism problem consists in determining, given x, y and G, whether there is at least one element of G that sends x to y, and, if such isomorphisms exist, in finding all of them. It is clear that the set Iso_G(x, y) of such isomorphisms forms a coset Aut_G(x) pi of the group Aut_G(x) of automorphisms of x in G, that is, of the elements of G that send x to itself. The problem is to give an algorithm that solves this in time polynomial in the size of the input, or at least in reasonable time, for example quasi-polynomial time, which means exp(O(log n)^{O(1)}), where n = |Omega|. Here, as always, O(f) denotes a quantity bounded by a constant times f for n large enough.

A large part of the motivation for studying string isomorphism comes from the fact that graph isomorphism reduces to it. The graph isomorphism problem consists in determining whether two graphs are isomorphic, and in determining the class of their isomorphisms; for us a graph is undirected and without multiple edges, and an isomorphism is a bijection from the vertex set of one graph to that of the other which sends edges to edges. (The notion of reduction need not even be made precise here.) A solution would permit, for example, searching for a given molecule in a database. The reduction is as follows: suppose, without loss of generality, that Gamma_1 and Gamma_2 have the same vertex set V. Let Omega be the set of unordered pairs of elements of V, and define strings x, y : Omega -> {0, 1} according to whether the pairs are edges of the respective graphs: for a pair {v_1, v_2}, the value of x is 1 if there is an edge between v_1 and v_2 in Gamma_1, and 0 otherwise; similarly for y and Gamma_2. Let G < Sym(Omega) be the image of the homomorphism Sym(V) -> Sym(Omega) induced by the action on pairs. Then this correspondence induces a bijection between the class of isomorphisms from Gamma_1 to Gamma_2 and the coset Iso_G(x, y).

Main theorem (Babai). String isomorphism can be decided in quasi-polynomial time in the number n of elements of the domain. In November 2015 Babai announced a solution in quasi-polynomial time, with an explicit algorithm. The preparation of this exposé led to the discovery of a non-trivial error in the analysis of the running time, but Babai has repaired the algorithm; the algorithm and its proof are now correct. Corollary (Babai): graph isomorphism can be decided in quasi-polynomial time in the number of vertices. Our main reference will be Babai's article; we will also use his shorter version. We will try to examine the proof in as much detail as the format permits, in part to help dispel any doubt that might remain about its current form. The best previously known bound for the time required by graph isomorphism was due to Luks and his collaborators [BKL]: exp(O(sqrt(n log n))).

Canonicity. The usage of canonical choices plays a crucial role in Babai's work, as in earlier work. In current usage, a choice is canonical if it is functorial. The typical situation for us will be the following: a group G < Sym(Omega) acts on Omega, and hence on Sigma^Omega; it also acts on some other set S, and hence on the maps from S to a set C of colors. A map from S to C is called a coloring, and C is called the set of colors. A canonical choice of a coloring, relative to G, is an assignment, to each string x, of a coloring c_x, that is, a map x -> c_x that commutes with the action of G. In particular, canonical choices can serve as a tool for detecting non-isomorphism: if the colorings canonically induced by x and y are not isomorphic to each other, for example if they do not have the same number of vermilion elements, then x and y are not isomorphic either. And when we determine the isomorphisms in G that send x to y while aligning the corresponding canonical colorings, that class of isomorphisms serves to determine the full class Iso_G(x, y), since the latter reduces to it.

Strategy. The proof assimilates several earlier approaches. It consists in trying to follow what is in essence the algorithm of Luks; that algorithm halts when it runs into a quotient isomorphic to Alt(Gamma) with Gamma large, and the major novelty resides in what is done at that point. The main idea will be to look for a structure that can be colored canonically, since this limits the possible automorphisms and isomorphisms: for example, if part of the domain is colored red and the rest black, the group of possible automorphisms is no longer the full symmetric group but the product of the symmetric groups on the two color classes. A similar coloring, induced canonically by y, then limits
the isomorphisms to the maps that align the two colorings. We will always be able to find colorings that help us, except when certain structures have a large symmetry group, which, in turn, will permit a descent to a smaller group or to shorter strings. This double game, reduce the group or reduce the strings, structures the whole proof.

Foundations and earlier work. Following common usage for permutation groups, we write i^g for the image of i in Omega under g in Sym(Omega), and x^tau for the string obtained by composing x with tau; what matters is that the latter makes sense not only for a permutation tau but for any map from Omega to itself, injective or not. We call the elements of Sigma^Omega strings, or tuples.

Basic algorithms. Several essential algorithms rest on a lemma of Schreier [Sch]: for a group G given by a generating set A and a subgroup H of G, a set of representatives of the cosets of H in G, suitably multiplied with the generators, yields a generating set of H. The following idea is due to Sims: work with a permutation group G < Sym(Omega) in terms of a chain of stabilizers G = G^0 > G^1 > G^2 > ..., where G^i is the pointwise stabilizer of the points x_1, ..., x_i. Algorithm 1 (Schreier-Sims), described below in outline, constructs from A sets C_i of representatives of the cosets of G^i in G^{i-1}, together with generating sets of each G^i, in time polynomial in n and |A|. Its kernel is the function Filter, which takes polynomial time for each element to which it is applied.

    function SchreierSims(A)            [A generates G < Sym(Omega)]
        while there is an unprocessed element g (an element of A, or a
              product of an element of some C_i with a generator):
            choose such a g arbitrarily and remove it from the queue
            h := Filter(g)
            if h is not the identity, add h at the appropriate level i
                and enqueue the new products it creates
        return the sets C_i

    function Filter(g)                  [sifts g through the chain]
        for i = 1, 2, ...:
            if there is h in C_i with x_j^h = x_j^g for all j <= i:
                g := g h^{-1}
            else:
                return g                [g represents a new coset at level i]
        return identity

Since we may always suppose, after filtering, that our generating sets are of polynomial size, the total time taken by the use of the algorithm is polynomial in n and in the size of the initial generating set. Once the sets C_i are constructed, several essential tasks can be accomplished rapidly. Exercise: show how to accomplish the following in polynomial time: (a) determine the order of G; (b) decide whether a given element of Sym(Omega) lies in G; (c) given a homomorphism from G to another symmetric group, carry the data across it; (d) [FHL] given a subgroup H < G of index polynomial in n, described by a membership test computable in polynomial time, find a set of generators of H (hint: work with cosets of H in place of points).

Here, as always, to find a group means to find a set of generators, and a group that is given to us is given by such a set. Algorithm 1 lets us compute pointwise stabilizers for arbitrary points. By contrast, we cannot simply ask for the setwise stabilizer, the set of g in G fixing {x_1, ..., x_k} as a set, for arbitrary k: being able to do so would amount to solving the string isomorphism problem itself.

Orbits and blocks. Let G, as always, be a permutation group acting on a finite set Omega. The domain Omega is the disjoint union of the orbits of G, and the orbits can be determined in polynomial time; this is a simple exercise, akin to finding the connected components of a graph, and a concrete sketch is given below. Suppose the action of G is transitive, so that there is a single orbit. A block is a subset B of Omega such that, for any g and g' in G, the images of B under them are either equal or disjoint; the collection of images of a block B is a block system, and it partitions Omega. The action is primitive if there are no blocks of size strictly between 1 and |Omega|; otherwise it is called imprimitive. A block system is minimal if the induced action of G on it is primitive.

Let us see how to determine whether an action is primitive and, if it is not, how to find blocks of size greater than 1; iterating, we obtain a minimal block system in polynomial time. For beta distinct from a fixed alpha, let Gamma_beta be the graph whose vertex set is Omega and whose edge set is the orbit of {alpha, beta} under G. The connected component of Gamma_beta containing alpha is the smallest block containing both alpha and beta; in this sense the resulting system could be called maximal: the size of its blocks is maximal and their number minimal among those whose blocks contain the pair. If Gamma_beta is connected, the block is all of Omega; the action is imprimitive if and only if Gamma_beta is non-connected for at least one beta, in which case we obtain a block of size greater than 1 containing alpha, and hence a block system with blocks of size less than |Omega|. A last word: a subgroup of Sym(Omega) is said to be transitive, or primitive, if its action on Omega is.
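The following is a minimal Python sketch of the orbit computation mentioned above: the orbit of a point under a set of generators is the set of points reachable by repeatedly applying the generators, found by breadth-first search. The representation of permutations as index tuples is our own choice.

```python
# Orbits of a permutation group given by generators, via breadth-first search.
from collections import deque

def orbit(point, generators):
    """Orbit of `point` under the group generated by `generators`."""
    seen = {point}
    queue = deque([point])
    while queue:
        a = queue.popleft()
        for g in generators:
            b = g[a]                 # image of a under the permutation g
            if b not in seen:
                seen.add(b)
                queue.append(b)
    return seen

def orbits(n, generators):
    """Partition of {0, ..., n-1} into orbits."""
    remaining, parts = set(range(n)), []
    while remaining:
        o = orbit(next(iter(remaining)), generators)
        parts.append(o)
        remaining -= o
    return parts

if __name__ == "__main__":
    # generators of a subgroup of Sym(6): a 3-cycle and a transposition
    g1 = (1, 2, 0, 3, 4, 5)          # the cycle (0 1 2)
    g2 = (0, 1, 2, 4, 3, 5)          # the transposition (3 4)
    print(orbits(6, [g1, g2]))       # [{0, 1, 2}, {3, 4}, {5}]
```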
Luks: the case of groups with composition factors of bounded order. Luks showed how to solve the graph isomorphism problem in polynomial time for graphs of bounded valence, the valence of a vertex of an undirected graph being the number of edges that contain it. The reduction of this case to a string isomorphism problem for a group all of whose composition factors, that is, all of the quotients in a principal series, are of bounded order is a process far from trivial, but it does not concern us here; let us see instead how Luks solved that case of string isomorphism. We follow his notation; the ideas come from his article.

Partial isomorphisms. For a window Delta contained in Omega and a coset K = H sigma, let Iso_K^Delta(x, y) denote the set of partial isomorphisms over Delta, that is, the set of all tau in K that send x to y at least insofar as can be judged by looking through the window Delta; Aut is defined analogously. We will work with windows Delta left invariant by the relevant group. One checks the following rules: it is clear that, for sigma in Sym(Omega), Iso_{H sigma}^Delta(x, y) equals Iso_H^Delta(x, y') sigma for the appropriately translated string y'; if Delta is invariant under the group, then a non-empty Iso is a right coset of the corresponding Aut; and if Delta is the disjoint union of Delta_1 and Delta_2 with Delta_1 invariant, then Iso_K over Delta equals Iso over Delta_2 taken with respect to the coset K' = Iso_K^{Delta_1}(x, y). These rules let us cut a problem into pieces. Babai calls the following result, which does not use the classification of finite simple groups, the theorem of [BCP]: let G < Sym(Omega) be a primitive group all of whose composition factors are of order at most k; then |G| is at most n^{O_k(1)}, where, as usual, the implied constant depends only on k.

Theorem (Luks). Let x, y be two strings on Omega, and let G < Sym(Omega) be a group all of whose composition factors are of order at most k. Then it is possible to determine Iso_G(x, y) in polynomial time, n^{O_k(1)}.

Proof sketch. Non-transitive case: let Delta be a proper orbit, stable under the action of G. We first compute the coset K = Iso_G^Delta(x, y), an isomorphism problem over a shorter window, and then Iso_K over the complement of Delta, which by the rules above, together with Schreier-Sims (for passing between the needed stabilizers and cosets), reduces to a string isomorphism problem for strings whose lengths are the lengths of the orbits; since these sum to n, the total time stays under control (the verification is left to the reader). Transitive case: let N be the stabilizer of a minimal block system with m blocks, so that G/N is primitive; by [BCP], the index of N in G is at most m^{O_k(1)}. For coset representatives sigma_1, ..., sigma_l of N in G, Iso_G(x, y) is the union over j of Iso_{N sigma_j}(x, y), and each of these is computed by descending to N; since the orbits of N are contained in the blocks, which are of size n/m, we obtain, by the rules on partial isomorphisms, at most m^{O_k(1)} collections of problems for strings of length at most n/m. The last step consists in taking the union of cosets: we have a description of each Iso_{N sigma_j}, either as the empty set or as a right coset of a group Aut_N of which we have a generating set, so the union is again a coset that we can describe. We thus have a recursion solving the problem in time n^{O_k(1)}. (To be rigorous one works throughout with partial isomorphisms, but this is of little qualitative importance. In truth, the theorem of [BCP] is stronger than what is stated here; for example, abelian composition factors of arbitrary order are admitted.)

Relations, partitions, configurations. Let a set of colors C be given; we may suppose it ordered, say red before violet. A relational structure on a set Gamma is a pair X = (Gamma, (R_i)_{i in C}), where each R_i is a relation on Gamma, that is, a subset of Gamma^k; such a structure is k-ary. If the R_i are all non-empty and partition Gamma^k, we say that X is a partition structure; in that case we can describe X by the function c that assigns to each tuple in Gamma^k the index i of the relation to which it belongs, called its color. An isomorphism between two structures X and Y on Gamma and Gamma' is a bijection from Gamma to Gamma' that sends R_i to the corresponding relation of Y for each i. It is possible to construct a functor F_1 that sends each relational structure X to a partition structure F_1(X) on the same set, and which moreover satisfies Iso(X, Y) = Iso(F_1(X), F_1(Y)); the construction is not hard, and Algorithm 2 shows how to do it. The key point is to index: each tuple is recolored by the set of indices of the relations to which it belongs, and that set is replaced by its index in a dynamically maintained collection J; the function Indexer adds a value to J if it is not yet there, and returns its index in J. Indexing permits us to use at most min(|Gamma|^k, 2^{|C|}) colors while keeping track of their meaning in terms of the original colors. The time taken to compute F_1(X) is polynomial in the data, up to logarithmic factors; we do not concern ourselves with the details of data handling: a collection of tuples can be treated simply as a lexicographically ordered list (in practice one would use hashing, but the art of organizing a database well is not our concern).

Configurations. A k-ary partition structure X, with coloring c, is called a configuration if (a) the color c(x_1, ..., x_k) of a tuple determines which of its entries are equal, and (b) for every map tau from {1, ..., k} to itself there is a map eta(tau) of the colors such that the color of the rearranged tuple x^tau = (x_{tau(1)}, ..., x_{tau(k)}) equals eta(tau) applied to the color of x, for every tuple x. For example, for a configuration, knowing the color of (x_1, x_2, x_3) we know the color of (x_2, x_1, x_2), even though knowing only the color does not tell us the entries themselves; the color of a tuple also tells us which entries coincide. We can, moreover, construct a functor F_2 that sends each partition structure
to a configuration on the same set; as for F_1, the construction implies that Iso(X, Y) = Iso(F_2(X), F_2(Y)). The computation of F_2 is similar to that of F_1 (Algorithm 2): in place of assigning to a tuple its color c(x), we assign to it the vector of the colors of all its rearrangements, together with the equality pattern of its entries, and then index. It is easy to see that F_1(X) is the coarsest partition structure refining X, and that F_2(X) is the coarsest configuration refining X.

Skeletons and induced structures. For a partition structure X of arity k and l at most k, the l-th skeleton of X is the l-ary structure on Gamma whose coloring is given by the colors of the tuples padded by repetition (the empty padding color may be taken to be, say, viridian). Exercise: every skeleton of a configuration is a configuration; here the fact that axiom (b) holds for non-injective maps tau is crucial. For Delta contained in Gamma, the induced substructure X[Delta] is the structure on Delta obtained by restriction; it is clear that if X is a configuration then so is X[Delta].

One must not confuse partition structures with colored partitions. A colored partition of a set Gamma is a coloring of Gamma together with a partition of each color class; a color class here is a set of vertices of one color. A colored partition is admissible if each set in each partition has at least two elements, and, for a parameter alpha, it is an alpha-partition if, in addition, each part has at most alpha |Gamma| elements. A colored partition is a structure richer than a coloring but poorer than a partition structure of the kind we would obtain by giving each part of each partition its own color: an automorphism or isomorphism must respect the colors, but it may permute the sets of equal size belonging to the partition of a single color class. Since parts of size 1 can be absorbed into the coloring, we may and do assume from now on, without loss of generality, that every color class is partitioned into sets of size at least 2.

Coherent configurations. A coherent configuration is a configuration X with the following property: there are intersection numbers gamma(i, j, r) such that, for arbitrary colors i, j, r and every pair (x, y) of color r, the number of z in Gamma with c(x, z) = i and c(z, y) = j equals gamma(i, j, r), independently of the choice of (x, y) of color r. A coherent configuration of arity 2 is called classical. Remark: classical coherent configurations were introduced by Higman, and the first examples are of schurian type: a configuration is schurian if it is the partition of Gamma^2 into the orbitals, that is, the orbits of a group G < Sym(Gamma) acting on pairs. A classical coherent configuration with only two colors, one for the diagonal and one for its complement, is called a clique, or trivial. Exercise: every skeleton of a coherent configuration is a coherent configuration; once again the axiom for non-injective maps does the work. Exercise: let X be a classical coherent configuration and Delta a union of color classes of vertices; then the induced configuration X[Delta] is coherent (here one must use the vertex colors as well as the pair colors).

For a partition structure X we color each vertex x by the color of the diagonal tuple (x, x, ..., x); this coloring of the vertices, like everything above, is canonical, meaning that the assignments commute with the action of the stabilizer in Sym(Gamma) of the data defining the structures. A classical coherent configuration is homogeneous if every vertex has the same color, and primitive if, moreover, for every color i such that some pair of distinct points has color i, the graph G_i formed by the pairs of color i is connected; it is uniprimitive if it is primitive and non-trivial. We do not need these graphs to be connected in the strong sense, with directed paths from every vertex to every other; connectivity in the weak sense, ignoring orientation, suffices, since the fact that X is coherent implies that every weakly connected component is strongly connected (exercise). Exercise: let X be a classical uniprimitive coherent configuration; then there is no subset Delta of more than half of Gamma such that the restriction X[Delta] is a clique. (Solution sketch: say the large clique is white, and let black be another color; consider the black graph on suitable vertices and show that a non-empty configuration of this type cannot have an empty restriction of the required size; why?) Exercise: let X be a homogeneous classical coherent configuration and i a color; then the number of y with c(x, y) = i is the same for every x; moreover, all connected components of G_i have the same size. (Solution sketch: the degree statement follows from the intersection numbers; prove the component statement by induction, using the canonical refinement below.)

The canonical refinement: Weisfeiler-Leman. There is a functor F_3, computed by the Weisfeiler-Leman (WL) algorithm, that sends each k-ary configuration X to a coherent configuration F_3(X) of the same arity refining it, with Iso(X, Y) = Iso(F_3(X), F_3(Y)). One iteration of WL refines the coloring as follows: the new color of a tuple is its old color together with the indexed vector of statistics, over all z in Gamma, of the colors of the tuples obtained by substituting z into each position of the tuple. If an iteration produces no new color classes, that is, the classes of the new coloring are the same as those of the old one, then no future iteration will produce anything new, and the coloring reached is coherent. A concrete sketch for the classical case k = 2 follows.
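The following compact Python sketch illustrates the classical (k = 2) Weisfeiler-Leman refinement just described, on a colored digraph: the color of a pair (x, y) is refined by the multiset of color pairs ((x, z), (z, y)) over all z, and the refinement is iterated until the partition stabilizes. This is our own illustration, not Babai's or Helfgott's code; the initial coloring below encodes the diagonal and the edge relation.

```python
# Classical Weisfeiler-Leman refinement on pair colorings.
def weisfeiler_leman(n, color):
    """color: dict mapping each pair (x, y) to an initial (hashable) color."""
    while True:
        new = {}
        for x in range(n):
            for y in range(n):
                signature = (color[(x, y)],
                             tuple(sorted((color[(x, z)], color[(z, y)])
                                          for z in range(n))))
                new[(x, y)] = signature
        # re-index signatures as small integers (the "indexing" step)
        index = {s: i for i, s in enumerate(sorted(set(new.values())))}
        new = {p: index[s] for p, s in new.items()}
        if len(set(new.values())) == len(set(color.values())):
            return new               # partition stable: coherent refinement
        color = new

if __name__ == "__main__":
    n = 4
    edges = {(0, 1), (1, 2), (2, 3), (3, 0)}   # a directed 4-cycle
    color = {(x, y): (x == y, (x, y) in edges)
             for x in range(n) for y in range(n)}
    refined = weisfeiler_leman(n, color)
    print(len(set(refined.values())), "colors after refinement")
```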
(The algorithm is often attributed to "Weisfeiler-Lehman", but our source indicates that the second author signed "Leman".) Since the coloring can be properly refined at most as many times as there are tuples, and since each iteration, with the indexing done in logarithmic time and the statistics vector handled as a sparse vector (its number of non-zero entries per tuple being at most n), takes time polynomial in |Gamma|^k, the total time taken by the algorithm is polynomial in |Gamma|^k, up to a logarithmic factor; a stronger bound is possible. Algorithms of Weisfeiler-Leman type were once regarded as a plausible approach to the whole graph isomorphism problem; since the counterexamples of Cai-Fürer-Immerman [CFI] it has been clear that they do not suffice on their own, but they remain a key tool. The version given here is due to Immerman and Lander [IML]; Algorithm 3 records it: the function WeisfeilerLeman iterates the refinement over all tuples, using the Indexer as in Algorithm 2, until the set of color classes stops growing, and returns the stabilized coloring.

Graphs, hypergraphs and block designs. We know what a graph is: a pair (V, E), where V is the set of vertices and E a collection of pairs, ordered or not according as the graph is directed or not. An undirected graph is regular if every vertex has the same degree; a directed graph is biregular if the out-degrees and the in-degrees are both constant. A bipartite graph is a triple (V_1, V_2, E) with E contained in V_1 x V_2; it is semiregular if the degree is constant on V_1 and constant on V_2. Exercise: let X be a classical coherent configuration, C_1 and C_2 two vertex-color classes, and green a color of pairs; then the bipartite graph (C_1, C_2, R_green) is semiregular. (We omit the words "in" and "out", since a pair color behaves as outgoing on one side and incoming on the other.) Likewise, if lin, bis and terre are three colors, then, for the vertex classes L_lin and L_bis determined by the first two, the bipartite graph (L_lin, L_bis, R_terre) is semiregular. Exercise: let X be a classical coherent configuration, C_1 and C_2 two vertex-color classes, green a color on C_1 x C_2 and red a color on C_2 x C_2; let B_1, ..., B_m be the connected components of G_red, and define a bipartite graph (C_1, {B_1, ..., B_m}, E) by letting (x, B_j) be an edge if and only if (x, y) is green for at least one y in B_j. Then this graph is semiregular. (Solution sketch: note that (x, B_j) is an edge if and only if there exist y, z in B_j with (y, z) red and (x, y) green; conclude via the previous exercise that all vertices of C_1 have the same degree; analogously, show that for (y, z) red the number of x such that (x, y) is green does not depend on the choice, and conclude for the other side.)

A bipartite graph (V_1, V_2, E) is complete if E is all of V_1 x V_2, and trivial if it is empty or complete; otherwise it is non-trivial. A hypergraph consists of a vertex set V and a collection of edges, which are subsets of V; it is u-uniform if every edge is of size u, and regular if every vertex belongs to exactly the same number of edges. In the complete u-uniform hypergraph on V, each u-element subset is an edge exactly once; a coloring of the complete hypergraph is a map from the u-subsets of V to a set of colors. A balanced incomplete block design (BIBD) is a u-uniform hypergraph with n vertices, whose edges are called blocks, such that every pair of distinct vertices is contained in exactly lambda blocks; blocks may be repeated, and the design is incomplete in the sense that u < n. Let m denote the number of blocks. Proposition (Fisher's inequality): for every balanced incomplete block design, m >= n. It is easy to see why the statement matters for us; note that Fisher was a statistician, and that "design" here comes from "experimental design". More generally, a t-design is a u-uniform hypergraph with n vertices such that every set of t distinct vertices is contained in exactly lambda edges. Proposition [RChW]: for every t-design and every s at most min(t/2, ...), the number of blocks is at least the binomial coefficient "n choose s", in the range of parameters we need.

Johnson schemes. An association scheme is a classical coherent configuration whose colors are symmetric; the word "scheme" here has nothing to do with the schemes of algebraic geometry. The Johnson scheme J(m, t), for m > 2t say, is the scheme on the set Gamma of t-element subsets of a set Lambda of size m, the color of a pair (T_1, T_2) being determined by the size of the intersection of T_1 and T_2; each such color class is indeed a relation on Gamma. Note that we have an implicit functor from sets Lambda to Johnson schemes; this functor is full, in other words, the only automorphisms of J(m, t) are those induced by Sym(Lambda).

Identification of groups. It is one thing to know that two groups are isomorphic; it is another to construct an explicit isomorphism between them, which implies at least giving the images of a set of generators. Let us examine a particular case that will be crucial for us. We will have a permutation group G < Sym(Gamma), and we will know that it is isomorphic, as an abstract group, to Alt(m); how do we construct an isomorphism? Provided Gamma is not too small in relation to m, it is known that G must be isomorphic, as a permutation group, to a group Alt(m)^(t): this is none other than Alt(Lambda), |Lambda| = m, acting on the set of t-element subsets (say) of Lambda. In other words, there exist a bijection between Gamma and that set of subsets and an isomorphism from Alt(Lambda) to G compatible with it. The task consists in constructing both, explicitly and in polynomial time.
We follow [BLS]. Let B be the smallest orbital of G outside the diagonal, and let R be the largest orbital. We will suppose that m is greater than a constant multiple of log |Gamma|, which comes down to saying that t is not too large in relation to m. (Babai calls the groups Alt(m)^(t) Johnson groups, by analogy with the Johnson schemes, since Alt(m)^(t) is to Alt(m), with which it should not be confused, what J(m, t) is to the trivial scheme.) For (x, y) in B, one forms the set of all z such that the associated datum of z intersects that of x but not that of y; ranging over the elements of B, we can compute, compare and index these sets, all in polynomial time, and the collection of the distinct sets so obtained plays the role of Lambda. We also compute, in polynomial time, the action on Lambda induced by the action of G on Gamma; this identifies G with Alt(Lambda). There is a natural bijection from Gamma to the t-subsets of Lambda commuting with the action: it sends x to the set of those elements of Lambda that "contain" x, and it is clear that distinct x have distinct images. The two maps are compatible, and we have thus constructed an explicit isomorphism between G and Alt(m)^(t). Note that the same procedure permits the construction of an explicit isomorphism between an association scheme known to be isomorphic to a Johnson scheme and that Johnson scheme, provided m is large in relation to log n; in the contrary case we can proceed by brute force.

(Figure: flowchart of the main procedure. Input: strings x, y and a transitive G < Sym(Omega). The first steps follow Luks: non-transitive case, minimal blocks, small primitive quotient, with "trivial case" and "pull-back" branches. When a large alternating quotient Alt(Gamma) appears: align; build relations on Gamma; apply Weisfeiler-Leman; apply the Design Lemma; according as one color dominates or not, run Split-or-Johnson and the local certificates; every branch ends in recursive calls. Output: Iso_G(x, y).)

First steps, following Luks. If G < Sym(Omega) is not transitive, we proceed exactly as in the non-transitive case of the proof of Luks's theorem; one checks that this works here as well, since the recursion only shortens the strings, and if necessary we subdivide the problem according to the orbits. Suppose then that G is transitive. We know that we can quickly find a minimal block system B_1, ..., B_m, and, in polynomial time, the normal subgroup N of G stabilizing all the blocks, so that G/N acts primitively on the set of blocks. In place of [BCP] we now use a consequence of the classification of finite simple groups (CFSG), stated for the first time by Cameron and refined by Maróti [Cam]: let H < Sym(Gamma') be a primitive group, where |Gamma'| = m is larger than an absolute constant; then either |H| <= m^{O(log m)}, or H has a normal subgroup M of small index that subdivides Gamma' into blocks on which it acts as a Johnson group Alt(m')^(t); roughly speaking, for the bound on the index is made precise in [Cam], whose statement is in fact stronger. (For us, log denotes the logarithm in base 2, not the natural logarithm.) By [BLS] it is possible to find the normal subgroup and the block action in polynomial time; we have seen above how to identify its action explicitly with that of an alternating group; and Algorithm 1 permits us to compute the order of G/N in polynomial time, and thus tells us in which case of Cameron's theorem we are.

If |G/N| <= m^{O(log m)}, we proceed as in the transitive case of the proof of Luks's theorem, with cosets of N in place of cosets of a point stabilizer: we reduce the problem to at most m^{O(log m)} instances of the problem for strings of length n/m. This is entirely consistent with the objective of a quasi-polynomial-time solution: the recursion T(n) <= m^{O(log m)} T(n/m), up to polynomial factors, has a quasi-polynomial solution, of the order of n^{O((log n)^c)} for a constant c. It remains to know what to do in the other case: we then have, after passing to a subgroup if necessary, an epimorphism phi from G to Alt(Gamma), where |Gamma| > C log n for a constant C; this case will occupy us for the rest of the exposé. (In his article, Babai indicates how to remove CFSG from this step. The alternative is as follows. Let G be as before, transitive, with a minimal block system; then G/N is a primitive group acting on the set of blocks. A permutation group on a set is doubly transitive if its action on the set of pairs of distinct elements is transitive. A theorem of Pyber, proved without CFSG, describes the doubly transitive groups well enough for our purposes:
such a group is either the alternating or the symmetric group on its domain, or of order at most quasi-polynomial in the degree, m^{O((log m)^2)} say. If G/N is alternating or symmetric on the blocks, we are in the case that we will discuss below. If G/N is doubly transitive but neither alternating nor symmetric, its order is quasi-polynomially bounded, and we may proceed as in the transitive case of the proof of Luks's theorem; Babai also proposes an alternative, slightly more efficient treatment. Suppose, finally, that G/N is not doubly transitive; then the schurian configuration that it induces on the set of blocks is not a clique, and we can give this configuration to the Split-or-Johnson machinery below and resume the argument from there.)

The structure of the action: stabilizers, orbits and alternating quotients. We will need several results on epimorphisms onto Alt(k); they will play a crucial role in the local certificates (in the original version of the proof they also played a role elsewhere); they were proved in this form by [BLS]. Lemma: let G < Sym(Omega) be primitive, and let phi be an epimorphism from G onto Alt(k) with k > max(8, 2 + log_2 n); then phi is an isomorphism. Proving this lemma is a bit of an exercise in group theory: one must use [BaPS, Prop.] for the case of a non-abelian socle, and the Schreier conjecture, whose proof in turn depends on CFSG, for the other case. Pyber has, however, given a proof of the lemma that avoids CFSG, under a slightly stricter condition of the same logarithmic type, with constants; CFSG is therefore removable from the whole proof.

Affected points. Let G < Sym(Omega), and let phi be a homomorphism from G to Sym(k) whose image contains Alt(k). A point x of Omega is affected by phi if phi(G_x) does not contain Alt(k), where G_x is the stabilizer of x. Lemma: let phi be an epimorphism from G onto Alt(k) with k > max(8, 2 + log_2 n_0), where n_0 is the size of the largest orbit of G; if G is transitive and at least one point is affected, then every point is affected. Proof sketch: the case of G primitive reduces, via the kernel and the stabilizer of a minimal block system, to the preceding lemma; in the remaining case, where the image under phi of the block stabilizer still contains Alt(k), one uses an auxiliary lemma: since Alt(k) is simple, if it is a quotient of a group sitting in a product, the quotient map must factor through one of the factors; applying this to the restrictions to the orbits and proceeding orbit by orbit, by induction, one concludes.

Proposition. Let G < Sym(Omega) be transitive and phi an epimorphism from G onto Alt(k); let U be the set of points not affected by phi. Suppose k > max(8, 2 + log_2 n_0), where n_0 is the size of the largest orbit of G. Then: (a) phi(G_U) = Alt(k), where G_U is the pointwise stabilizer of U (recall that such stabilizers are computable); (b) if Delta is an orbit of G containing affected points, then every orbit of ker(phi) contained in Delta has length at most |Delta| / k. Proof sketch: it is easy to see that U is invariant as a set, so G_U and its image make sense. For (a): if phi(G_U) did not contain Alt(k), then phi would factor through a suitable quotient, and by the lemmas there would exist a point x of U at which phi(G_x) fails to contain Alt(k); that is, x would be affected, a contradiction, since the points of U are unaffected. For (b): since Delta is an orbit containing affected points, every point of Delta is affected, by the lemma above. Let x be in Delta; the length of the orbit of ker(phi) containing x can be computed by comparing indices, and since phi(G_x) is a proper subgroup of Alt(k), whose proper subgroups all have index at least k, that length is at most |Delta| / k.

The case of a large alternating quotient; the trivial case. We may now suppose that the quotient is isomorphic, as a permutation group, to Alt(Gamma) acting on the blocks, since we have dealt with all the other cases; we can construct a bijection between the set of blocks and the set of t-element subsets of Gamma, and this bijection induces an isomorphism from the quotient to Alt(Gamma). Consider first the trivial case, in which Omega itself is in bijection with Gamma and G is identified with Alt(Gamma): strings under the full alternating group. Here Aut_G(x) consists of the permutations in Alt(Gamma) that permute positions bearing equal letters, and Iso_G(x, y) is non-empty if and only if x and y have exactly the same number of letters of each color, with one caveat: if no letter is repeated, we must add the condition that the permutation inducing the alignment lies in Alt(Gamma), since its parity cannot then be corrected; when some letter occurs at least twice, composing with a transposition inside such a class adjusts the parity. (A sketch of this count-and-witness test is given at the end of this subsection.)

Twins and alignment. Let G now be primitive and identified as above. Two elements x_1, x_2 of Gamma are twins with respect to an object, for us a string or a structure, if the transposition exchanging them leaves the object invariant. It is clear that the twins form equivalence classes, and that, for every such class C and every permutation g preserving the object, the image of C under g is again a class; for a canonical object, the partition into twin classes is canonical. Our object here is derived from the string x, so we can easily determine its twin classes in polynomial time, by testing membership of transpositions in the automorphism coset. Let us examine the case of a twin class of more than half of Gamma, since we will need to exclude it: such a class is unique, hence canonical. If only one of x, y has such a class, or if both have one but of different sizes, then x and y are not isomorphic. If both have twin classes C and C' of the same size, greater than half, we choose an element of Alt(Gamma) carrying C' to C and replace y by its translate; we may thus suppose that the two classes coincide. This is the simplest example of what Babai calls aligning.
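The following Python sketch illustrates the count-and-witness test of the trivial case described above: under the full symmetric group, two strings are isomorphic exactly when their letter counts agree, and an explicit witness permutation is easy to build. The alternating-group caveat from the text is noted in a comment; the function names are ours.

```python
# Trivial case of string isomorphism: compare letter counts and build an
# explicit witness permutation pi with x[pi[i]] == y[i].
from collections import defaultdict

def string_iso_sym(x, y):
    """Return a permutation pi with x[pi[i]] == y[i], or None."""
    if sorted(x) != sorted(y):
        return None                      # different letter counts
    positions = defaultdict(list)
    for i, letter in enumerate(x):
        positions[letter].append(i)
    pi = [positions[letter].pop() for letter in y]
    # For Alt instead of Sym: if pi is odd and some letter repeats, compose
    # with a transposition inside that letter class to fix the parity; if
    # all letters are distinct, an odd pi means no isomorphism in Alt.
    return pi

if __name__ == "__main__":
    x, y = "aabcb", "bacab"
    pi = string_iso_sym(x, y)
    print(pi, all(x[pi[i]] == y[i] for i in range(len(y))))
```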
One can show that every isomorphism respects the partition {C, complement of C}; hence our problem reduces to computing Iso_H(x, y) for H the stabilizer of the set C in Alt(Gamma). Taking this setwise stabilizer poses no difficulty here: we generate it by taking generators of the two alternating groups, Alt(C) and Alt(complement of C), together with one element of the form "product of two transpositions, one inside C and one outside"; the number of cosets involved is small. Moreover, since C is a twin class, every element of Alt(C), extended by the identity, leaves x invariant, so Alt(C) lies inside Aut and the problem effectively lives on the complement of C, of size less than half of |Gamma|; recalling that the group acts there with short orbits, we proceed as in the non-transitive case of the proof of Luks's theorem.

From the strings to Johnson schemes. Let us now discuss the main case: G primitive, or rather the quotient isomorphic to Alt(Gamma), where from now on we may suppose that our strings have no twin class with more than half of the points. The principal tools will be the Weisfeiler-Leman refinement and the Design Lemma; they will be useful, indeed essential, in the imprimitive case as well. We have a bijection between the blocks and the t-element subsets of Gamma; the string x can therefore be read as a coloring of those subsets, that is, as a relational structure on Gamma. We apply the functor F_1, which makes it a partition structure, then the functor F_2, which gives us a configuration, and then the functor F_3 of Weisfeiler-Leman, obtaining a coherent configuration on Gamma; since these are functors, the assignment is canonical. It will therefore be useful to us: if the configurations assigned to x and y are not isomorphic under the relevant alternating action, then x and y are not isomorphic either. From the Design Lemma we will obtain either a canonical coloring without dominant color, which permits a certain number of recursive calls for shorter strings, as in Luks's algorithm, or a canonical non-trivial classical coherent configuration on a dominant color class. Supposing that we have the latter, the Split-or-Johnson routine will give us one of two further outcomes: either a canonical colored partition, or a canonical Johnson scheme. In the one case as in the other, having such a canonical structure strongly limits the set of possible isomorphisms and automorphisms, and we will be able to conclude by recursion.

The Design Lemma. Fix a parameter alpha with 1/2 <= alpha < 1. A color is called dominant if its class of vertices contains more than alpha of them; the class is then also called dominant. If, on the contrary, no color class is that large, the coloring is an alpha-coloring, without dominant color. As before, two points are twins with respect to a structure, here a k-ary structure X on Gamma, if the transposition exchanging them lies in Aut(X).

Proposition (Design Lemma). Let X be a k-ary coherent configuration on Gamma, with k at least 2 and not too large, and suppose that no twin class of X contains more than alpha |Gamma| points. Then at least one of the following options holds: (i) there exist points x_1, ..., x_l of Gamma, with l < k, such that the coloring obtained by individualizing x_1, ..., x_l and refining canonically has no dominant color; (ii) there exist such points for which, after individualization and refinement, there is a dominant color class on which the induced classical coherent configuration is not a clique. (Notation: in the sections that follow, the individualize-and-refine structure is denoted by X with the chosen points attached; its vertex coloring is simply the coloring just described.)

Lemma (large cliques). Let X be a classical coherent configuration and C a color class of vertices, sufficiently large, such that X[C] is a clique. Then C is a class of twins. Proof sketch: suppose C is not a twin class. Then there is a color, say azure, and a vertex u outside C joined in azure to at least one but not all of the vertices of C; since X[C] is a clique, call carmine its off-diagonal color and bronze the color of the class of u. Consider the hypergraph on C whose blocks are, for each u of color bronze, the set of vertices of C joined to u in azure. Using the intersection numbers of X, one shows that every block has the same size, strictly between 0 and |C|; that every pair of distinct vertices of C lies in the same number of blocks, so that the blocks form a balanced incomplete block design; and that the number of blocks is smaller than |C|. This contradicts Fisher's inequality. (Exercise: fill in the details via the intersection numbers.)

Proof of the Design Lemma (sketch). Suppose that, for every choice of points x_1, ..., x_l, there is a dominant color class and the classical configuration induced on it is a clique; we shall arrive at a contradiction. By the large-clique lemma, the dominant class is then a twin class of the refined structure. Since X itself has no large twin class, there exist u and v in the dominant class that are twins of the refined structure without being twins of X. Choose a sequence x_1, ..., x_l of minimal length witnessing this; by minimality its points are pairwise distinct, and by permuting them we may put the sequence in a convenient form. We will derive a contradiction with this minimality.
supposition une fois que nous aurons que fait que ensuivra cette son tour par fait que pour longueur pourquoi vrai nous sommes train supposer que est une clique que donc par lem grande clique tous les sont des jumeaux particulier pour pas puisque coloriage sommets est celui par ensuit que soit soit soit comme les deux sont exclues nous appliquons lem des designs avec est par section nous parcourons tous les tuples possibles jusqu trouver tuple pour lequel conclusion lem des designs est vraie conclusion est vraie nous sautons section conclusion est vraie nous passons ayant est couleur coupe johnson nous avons une classique non triviale nous rappelons que ceci est coloriage graphe complet sur tel que les sommets ont leur couleur propre couleur diagonale les sont pas toutes couleur couleur axiome nous voudrions trouver des structures qui canoniquement qui contraignent son groupe automorphismes est raisonnable attendre que telles structures existent par groupe automorphismes est transitif soit est imprimitif donc laisse une partition invariante soit est alt qui laisse invariant johnson soit est petit donc stabilisateur quelques points aura des orbites petites ainsi nous donnera coloriage sans couleur dominante est trouver telles structures faire canoniquement est pas primitif est facile soit couleur non diagonale plus rouge telle que graphe est connexe par exercice ceci donne une partition dans des ensembles taille coupe johnson soit une classique uniprimitive soit temps nous pouvons trouver soit soit johnson sur sym avec sym log tel que voire est canonique relation groupe sera comme stabilisateur points valeur dans est assez arbitraire toute valeur serait valable une valeur proche les constantes implicites preuve choisissons arbitraire donnons chaque couleur coloriage est canonique relation aucune classe couleur taille partition triviale chaque classe nous donne nous avons supposons par contre ait une classe couleur disons clin taille comme relation rlin cette couleur est non lin ssi lin rlin toute autre relation est exercice soient tels que lin soit tel que lin appelons bis terre graphe biparti avec sommets clin cbis rterre graphe est non vide par par exercice par nombre tels que est une couleur est donc est toujours pour lin appliquant ceci terre nous voyons que est donc comme graphe est pas complet nous appliquons donc proposition avec notons que nous travaillerons donc avec graphe biparti sera essayer soit rendre plus petit par moins facteur constant soit trouver des structures lui soit ces structures nous permettront quand soit elles nous aideront trouver johnson assez grand sur tout abord nous devrons borner voire les jumeaux deux raisons ceci nous une structure assez riche cela impliquerait peu rien sur beaucoup connectent est petit nous colorierons chaque sommet par son ensemble voisins ceci nous donnera coloriage canonique relation dans coloriage deux sommets auront couleur ssi ils sont des jumeaux donc aucune classe jumeaux nous aurons exercice soit graphe biparti non trivial alors aucune classe jumeaux plus solution nous assurons que prenant est soient des sommets une classe jumeaux montrez que donc exercice soit graphe biparti sans jumeaux soient montrez que pour moins aucune classe jumeaux dans graphe exercice soit une soient deux classes couleurs soit brun une couleur alors pour couleur sont des jumeaux dans graphe biparti gbrun proposition coupe johnson biparti una partita poker soit graphe biparti avec tel aucune classe jumeaux ait plus alors nous pouvons trouver temps soit 
soit johnson sur sym sym avec log tel que voire est canonique relation condition sur les classes jumeaux ici remplie avec place preuve thm exercice qui concerne temps nous expliciterons quelques qui pourraient pas qui sera plus est indice groupe sera comme stabilisateur points nous devons bien nombre points que nous stabilisons esquissons preuve que nous voulons est une proposition nous pouvons produire une classique sur partir graphe tout simplement utilisant qui demande ruse est garantir que restriction classe couleurs dominante une soit non triviale pour obtenir une sur nous noterons que graphe induit une relation sur est plus telle chose existe sinon les nous donnent une partition relation est triviale dans sens contenir toutes les distincts dans nous obtenons johnson elle est non triviale mais contient beaucoup jumeaux elle nous donne une descendre plus petit pas beaucoup jumeaux nous utilisons lem des designs par lem standard sur les designs pour obtenir une classique sur qui trouver preuve est une constante nous colorions chaque par coloriage est canonique relation autrement dit est pas canonique tout peu importe trivialement log nous pouvons donc supposer que log disons alors par discussion nous obtenons donc coloriage est canonique relation indice log log nous pouvons donc supposer que log notre est les jumeaux nous divisons dans ses classes jumeaux colorions chaque par son nombre jumeaux par son dans graphe nous obtenons sauf entier tel que ensemble des sommets sans jumeaux est taille supposons que cela est cas comme pas jumeaux nous voyons que nous pouvons supposer que par son soit hypergraphe dont les sont les voisinages des sommets dans elles sont toutes contenues hypergraphe est comme pas jumeaux dans pas identiques est hypergraphe complet alors peut avec johnson scoppia pianto angoscioso abbraccia testa johnson supposons alors que est pas complet nous voudrions avoir coloriage canonique sur pour log log tel que les soient pas tous jumeaux nous colorions tout reste gris supposons par contre que soit nous colorions gris les sont pas tous distincts dans cas contraire nous donnons couleur cette coloriage peut faite temps ordre log log les tuples avec distincts avaient pas tous couleur nous aurions design avec donc par proposition pour comme log peut plus grand une constante log log qui donne une contradiction donc pour arbitraire les tuples avec distincts ont pas tous couleur autres termes les sont pas tous jumeaux relation notre nouvelle structure une classe jumeaux taille alors par exercice moins des deux graphes aucune classe jumeaux dans comme pour nous appliquons proposition ces deux graphes disons celui sur les deux sont valables terminons que taille est descendue seulement mais tous nos choix ont canoniques des veut donc gratuits nous avons perdu que temps pour temps qui est acceptable alors nous avons coloriage relation auquel aucune classe jumeaux taille nous appliquons les foncteurs weisfeilerleman coloriage puis nous utilisons lem des designs prop avec nous trouvons les dans proposition par force brute temps proportionnel nous les nous imposons que qui dans sens nous sommes dans premier cas lem designs pas couleur dominante nous cueillons les classes couleur par plus rouge comme une longueur onde jusqu avoir une union des classes avec ceci marche aucune classe taille telles classes existent nous comme classe plus grande type nous appliquons exercice obtenons graphe remplissant les conditions notre proposition avec donc donc nous appliquons proposition graphe marche 
est important ici que puisque nous avons encouru dans indice restons donc dans cas lem des designs nous avons coloriage avec une classe couleurs telle que une classique non triviale sur nous graphe avec des sommets est couleur vient par lem des designs les seront non pas seulement celles noir mais aussi les entre les dans les couleurs par nous appliquons les graphe obtenons une est elle pas couleur nous notre celui pour comme avant nous pouvons appliquer proposition tel graphe sans changer parce que marche ici aussi parce que est important que soit plus petit que par facteur constant puisque jusqu maintenant dans index est supposons donc une classe couleurs dans elle doit car restriction est pas une clique elle restriction aurait aussi cela est impossible par exercice nous pouvons supposer existe une classe couleurs qui satisfait sinon nous avons pouvons fait que implique que nous pouvons supposer aussi que les sont pas toutes couleur elles aurait une classe jumeaux dans graphe dont est dans cas par exercice nous aurions une nous pourrions utilisant proposition ainsi nous avons tout proposition nous appliquons nous obtenons soit soit graphe biparti avec tel aucune classe jumeaux plus que nous pouvons supposer que parce que dans cas contraire nous avons obtenu alors nous pouvons alors faire nous appliquons proposition avec place pas plus que log pas puisque taille dessous pour valeur originale alors nous avons obtenu comme nous avons coupe johnson biparti utilise coupe johnson son tour coupe johnson coupe johnson biparti pour graphe biparti avec taille plus une taille original proposition coupe johnson soit une avec des classes couleurs sommets supposons que est une fonction constante alors nous pouvons trouver temps soit graphe biparti tel que toute classe jumeaux contient plus tel que voire graphe biparti est canonique relation sym sym soi que dire que est constant dire que est une clique preuve restriction une clique alors par pour toute couleur pourpre disons les voisinages dans gpourpre des sommets nous donneraient block design sur design est incomplet parce que est pas monochrome sur fisher nous donne que contradiction avec nos suppositions donc est pas une clique est pas primitive plus rouge ses relations non connexes nous donne canonique par exercice nous pouvons donc supposer que est primitif nous avons deux cas primitive imprimitive supposons abord que est imprimitive relation non connexe plus rouge dans nous donne une partition dans des ensembles tous taille nous avons donc une structure nous utiliserons soit pour soit pour par facteur constant premier pas consiste montrer pas jumeaux dans comme notre est couleur une sait ses sommets sont des jumeaux relation donc avait des jumeaux dans relation nous aurions soit une des couleurs donne une relation non connexe qui contredit fait que est uniprimitive soit que tous les sont des jumeaux relation dans dernier cas par exercice serait monochrome qui est pas cas conclusion pas jumeaux dans relation notre intention est appliquer exercice pour obtenir graphe biparti avec nous devons seulement faire attention que graphe soit pas trivial soit tout dans graphe biparti pour une couleur consiste les couleur par pas pour tout nous non canonique obtenons assignant couleur sommet supposons donc une couleur que nous appellerons violet telle que dviolet tel aucune classe plus que jumeaux dans relation nous tel non canoniquement ainsi cet cette nous obtenons une graphe biparti gviolet supposons que cela est pas cas donc pour chaque existe une classe 
jumeaux relation telle que pour chaque les tout sont couleur alors elles doivent violettes soit vert une couleur qui soit pas violet alors graphe dans exercice est pas vide comme est violet pour tout est pas complet non plus comme est aucune classe jumeaux relation avec nous avons donc tout graphe biparti type que nous maintenant cas primitive fixons arbitraire non canoniquement nous pouvons supposer une couleur disons violet telle que dviolet puisque sinon les couleurs des qui connectent les avec nous donneraient lviolet violet donc soit bleu une couleur telle que gbleu est positif une telle couleur existe parce que est pas une clique plusieurs couleurs comme cela nous choisissons plus bleue entre elles alors lbleu satisfait graphe biparti gviolet est par exercice est non vide parce que pour tout donc lviolet complet nous aurions lviolet pour tout comme ceci impliquerait que lviolet cela voudrait dire que sont des jumeaux dans graphe gviolet par argument avant sur exercice fait que soit pas monochrome impliquent pas jumeaux dans relation graphe gviolet donc gviolet est pas complet par exercice nous obtenons aucune classe jumeaux dans gviolet plus nous avons donc dans preuve originale babai point qui suit est argument alternatif par lui col rumore sordo galoppo lorsque cet article train est plus concis que argument origine plus correct avant preuve faisait deux fois plus recours proposition qui faisait indice catastrophique une couleur domine non aligner transitif etc cas sans couleurs dominantes nous sommes dans cas dans lequel coloriage pas couleur dominante ici est image une structure sous foncteur qui commute avec action alt fait que pas couleur dominante nous servira pour trouver ses isomorphismes possibles pour trouver des isomorphismes tout alt nous avons travailler avec ensemble des classes dans faire union cyi pour isoh cyi ceci est similaire est par coloriage est pas une permutation coloriage cyi alors cyi supposons par contre moins tel que nous disons que aligne cyi est trivial trouver comme pas couleur dominante ceci est assez contraignant que nous voulions appliquons cette cas primitif que nous sommes train discuter une bijection donc induit coloriage nous sommes dans une situation similaire celle mais mieux est facile montrer que comme aucune classe couleur plus aucune classe couleur plus nous alors comme dans cas intransitif preuve luks thm qui isomorphisme pour des longueur longueur totale dernier pas lifting consiste trouver des qui induisent une bijection ceci est trivial cas maintenant ensemble sommets sera canoniquement savoir tant que image une structure sous foncteur tout comme coloriage nous pouvons supposer que une classe couleurs dominante puisque dans cas contraire nous pouvons passer nous voulons savoir quels alt respectent ceci nous aidera contraindre les isomorphismes tout comme par est ensembles taille les seules permutations qui sont permises sont celles qui respectent partition groupe qui respecte partition est isomorphe nous avons donc notre avec avoir nous travaillons comme dans sur les autres classes couleurs deux nous les partitions des deux ont nombre ensembles taille pour chaque couleur puis nous alignons les deux exactement comme pour automorphisme cas johnson soit johnson sur ensemble sommets deux johnson sur des ensembles sommets taille nous avons comment explicitement avec les ensembles taille ensemble taille nos structures sont pas isomorphes nous une bijection entre alignons les deux structures nous avons notre avec place situation nous est donc 
plus favorable que dans cas nouveau nous laissons lecteur une petite confession cas primitif que nous venons traiter pourrait exactement comme cas imprimitif que nous examinerons maintenant motivation traitement pour primitif est aucune peine est perdue puisque toutes les techniques que nous avons nous seront essentielles dans cas imprimitif cas imprimitif nous avons une application surjective explicite alt sym est groupe permutation nous pouvons supposer que log arbitraire application factorise comme suit alt est stabilisateur blocs alt est isomorphisme nous devons isog sont des nous avons cas nous attaquerons locale pour nous arriverons obtenir soit fait que autgt contient alt soit contraire ici groupe nous calculerons tous ces pour une taille nombre est grand nous aurons que autg contient grand groupe alternant qui restera faire sera une version dans cas contraire les formeront une structure dont est nous pourrons donc appliquer lem des designs suivi comme avant aussi quelques autres cas particuliers mais ils nous des qui est aussi bien les certificats locaux automorphismes local pour est soit une paire pas plein sym alt donc pas plein autw soit une paire plein autgt alt local canonique est clair plein voire pas plein garantit que autgt est alt voire est pas est tant que tuple son ordre seulement dans sens pas groupe sym disons une apparence nous regardons point vue ordre ordre nous construisons par une chaque pas est groupe autw sera invariante sous tout nous pouvons calculer comme dans exercice temps chaque pas nous ajoutons tous les atteints par voir puis nous mettons jour selon nouveau nous nous alt plus qui veut dire aucun est atteint par est clair aura dans cas nous retournons plein dans cas nous retournons plein est clair que stabilisateur des points est contenu dans nomenclature babai global fait autgt sym non pas seulement dans autw mais aussi dans autgt puisqu tous les points nous savons que alt par proposition sous condition que max alt est facile nous avons utilisant deux arbitraires alt sont est simple quels sont atteints par nous calculons pour chaque par toujours par alt ceci prend temps polynomial reste voir comment mettre jour nous pour ancienne valeur tout est dans donc autw comme dans iso autw aut est noyau parcourt des des classes nous pouvons trouver rapidement pour tout sym par proposition nous donne que toute orbite contenue ensemble atteints par est longueur par mettre jour iso pour des longueur comme nombre est fait appel fois pour des longueur ceci comme routine qui prenait temps est acceptable pour log nous choisirons comparaisons une cidessus nous permet relation entre deux locaux pour deux soient pour soit glauque glauque nous voulons calculer isogt est classe des qui envoient ensemble est valeur quand est place pour nous suivons suivante nous mettrons jour dans chaque non pas seulement mais aussi classe isomorphismes comment faire analogue ison ison est noyau parcourt des des classes contenues comme est par donc par fait que envoie sur non seulement classe laquelle appartient classe iso dans expression est vide comme avant toute orbite contenue est longueur appels par pour des longueur par ailleurs nous sont comme tuples est facile isog est classe des qui envoient tuple nous avons puis utiliser pour les qui envoient dans bon ordre des certificats coupe coupe locaux coupe relations relations leman johnson lem des designs etc oui pas couleur dominante pullback suivant pour une nous trouvons des locaux pour chaque taille est une constante log log soit autg 
groupe par les pleins soit support ensemble des qui sont pas par tout notre objectif est les isomorphismes isog une autre puisque les sont canoniques assignation une est aussi donc nous arrivons deux cas suivant pour pour les deux sont pas isomorphes cas mais aucune orbite est longueur alors nous colorions chaque par longueur orbite qui contient ceci est coloriage canonique soit aucune classe couleurs est taille soit une classe couleurs taille est ensembles taille dans cas comme dans autre nous passons une cas une orbite est longueur cas alt nous sommes dans cas grande nous comme jusqu point nous devons isoh est alt alt soient des arbitraires sous deux alt alt par nous savons que les classes sont non vides puisque alt comme pas orbites longueur nous pouvons ces deux classes par des appels pour des longueur longueur totale elles engendrent auth encore par fait que alt classe isoh sera non vide ssi isok est non vide nous pouvons cette classe par des appels comme puisque pas orbites longueur elle est non vide nous obtenons isoh auth isok cas alt soit entier maximal avec que est agit transitivement sur ensemble des dtuples distincts par cgfs nous voulons pas utiliser cgfs nous avons borne classique log choisissons arbitrairement reste notre traitement cas sera donc seulement canonique relation qui comme nous savons est pas voir restriction groupe est transitive sur mais elle est pas doublement transitive donc schurienne qui lui correspond est pas une clique nous livrons cette tout comme pour comparer les qui correspondent deux nous alignons leurs classes abord elles sont pas taille une nous donne cas autre pas les sont pas isomorphes les isomorphismes seront donc contenus dans stabilisateur ensemble facile comme vers puisque est surjective nous pouvons remplacer par application alt puis nous construisons les comme comparons que nous donne tout nous nous occupons agit comme habitude appels pour des longueur longueur totale cas nous alignant les supports pour les par tout comme dans cas nous allons une relation avec peu jumeaux pour donner lem des designs regardons toutes les sont une action sur est est aussi nous regardons depuis longtemps puisque nous devons comparer les couleurs sur des induites par des pour ces sont isomorphes cette nous des couleurs par des classes deux paires sont ensemble des isomorphismes est non vide nous colorions dans coloriage correspondant par classe ici est sans des elle est gris pour aucune classe jumeaux peut avoir existait tel ensemble avec contiendrait ensemble avec tous les ordres auraient couleur ceci voudrait dire que ensemble des isomorphismes serait non vide pour importe quels ordres autgt contiendrait des donnant toutes les permutations possibles ceci nous donnerait une contradiction puisque contenu est pas plein alors pourvu que nous avons coloriage sans aucune classe jumeaux avec nous pourrons donc appliquer lem des designs une application habituels mais calculer ces coloriages les classes sont par contre aucun besoin les calculer tout dont nous aurons besoin pour comparer des structures qui viennent sera capables comparer deux tuples sur par sur par dire elles sont couleur autres termes nous devrons calculer tout pour toute paire pour les paires ensemble isomorphismes que nous savons faire les couleurs sont donc dans pratique des dans index que nous enrichissons auquel nous faisons durant nos nous invoquons donc lem des designs suivi par reste fine dell opera lecteur peut que les informations jusqu ici temps pris par des type sont assez pour donner une 
borne type exp log pour temps algorithme qui isomorphisme ceci donne une borne exp log pour isomorphisme graphes avec sommets avec peu plus travail devient clair que dans cas comme dans autre nous donnons les dans appendice exposant est plus petit que celui origine est devenu possible quelques que capable apporter remerciements remercie vivement babai bajpai bartholdi dona kowalski kantor puccini pyber rimbaud pour des corrections suggestions particulier babai beaucoup mes questions aussi fourni des versions plusieurs sections particulier les sont sur ces nouvelles versions voudrais aussi remercier ladret dret pour grand nombre corrections ordre typographique linguistique appendice analyse temps quelques sur principale tout moment nous travaillons avec groupe transitif sym qui agit sur blocs disjoints nous notons noyau action sur vrai dire nous aurons toute une tour blocs est moins dont les blocs sont tous taille dont noyau est trivial nous voudrions que action sur soit primitive donc elle est pas nous ajoutons tour minimal tel que soit nous sera noyau nouveau est petit log cas cameron nous notre plusieurs instances avec place chacune ces instances plusieurs instances une pour chaque orbite chaque orbite est contenue dans bloc les intersections avec les blocs nous donnent une tour blocs pour nous sommes dans cas nous passons instances avec place nous passons nouveau blocs ajoutons tour comme son nouveau dernier niveau nous notons noyau action sur alors alt nous par donc nous avons isomorphisme altm nous sommes dans cas principal que babai attaque ses une altm soit groupe intransitif sans grandes orbites soit produit soit groupe nous quelque peu nous pourrions avoir disons produit agissant sur une orbite grande taille peut seulement voir note pied page dans dans passage est bien gratuit autres groupes sur des petites orbites plusieurs produits agissant sur des petites orbites dans cas intransitif sans grandes orbites nous comme dans preuve luks aura plus que dans luks mais manque grandes orbites gain dans est aussi plus grand dans cas nous dans cas qui correspond dans des ensembles taille couleur nous avons une action primitive alts sur blocs taille nous passons alors cette action ces blocs sans oublier les blocs auxquels nous retournons plus tard avoir travailler sur alts est clair que type altk nombre qui est pas temps examinons temps total algorithme qui trouve les isomorphismes entre deux les pas individuels sont peu aucun plus log temps notre attention doit porter avant tout sur dans une est toujours une descente soit vers des plus courtes soit vers groupe plus petit moins dans des tranches plus par une tour blocs ayant plus niveaux dans premier type descente groupe reste est par une restriction dans cas longueur des reste nous pouvons aussi avoir des deux cas tant mieux groupe devient plus petit les raccourcissent aussi descente moins moins avantageuse est celle cas intransitif luks pourrait arriver que ait deux orbites sur une longueur une longueur ceci serait compatible avec une borne polynomiale sur temps pourvu que temps pris avant descente soit polynomial pour autres types descente sont plus mais aussi plus avantageux nous descendons des longueur altm par exemple est clair est impossible descendre plus nombre logarithmique fois cette est crucial pas oublier peut dans une perte nos choix sont canoniques relation notre groupe leur application sera par voir alors chaque cas intransitif luks est comme nous avons compatible avec une borne polynomiale alors sur cas agit primitive sur 
blocs soit noyau nous sommes dans cas dans cas mais avec log nous faisons appel log instances principale pour des longueur ceci est consistant avec une borne totale type exp log nous pouvons donc nous concentrer sur cas existe isomorphisme alt log rend cet isomorphisme explicite premier pas est locaux avec comme objectif une relation sur est primitif une telle relation est trivial voir locaux log disons nous devons les calculer aussi comparer toute paire premier pas calcul savoir calcul prend temps plus autres calculs prennent moins temps usage par contre est relativement lourd nous faisons appel principale fois pour des longueur ceci passe pour chaque ensemble taille fois pour comparer des paires est analogue nous faisons donc appel principale fois pour des longueur dans chacun ces appels notre tour stabilisateurs est notre groupe est groupe transitif restriction une ses orbites est pour deux blocs notons nombre blocs dans chaque bloc est clair que nombre augmente pas quand nous passons restriction par exemple une ses orbites examinons maintenant des locaux trois cas dans premier temps calcul additionnel est peu trivial nous obtenons une soit groupe intransitif sans grandes orbites soit produit sur une grande orbite autres groupes sur des orbites plus petites ici analyse devient nous devons prendre non seulement taille domaine mais aussi groupe qui agit sur lui plus nous devons borner nombre fois que notre tour pourrait raccourci encore ceci sera par nous supposons que nous avons des tour donc notons temps principale pour des longueur pour une tour blocs pour telle que est une fait par moins coloriage sans aucune grande classe couleurs assure une descente vers des longueur nous devrons aussi inclure facteur log prenant temps requis pour nos comparaisons paires locaux donc dans cas que nous examinons est par log pour log pour puisque log ceci est consistant avec exp log pour avec exp log log pour par exemple cas similaire facteur constant cas sont dans les deux cas nous arrivons construire une relation avec dans cas log dans cas puis nous appelons suivi lem des designs pour des prend temps lem des designs garantit existence tuple avec certaines nous cherchons tel tuple par force brute qui prend temps qui est plus important est que choix est pas canonique donc temps tout qui reste est par log prend temps ici nouveau nous faisons des choix qui sont pas canoniques ils imposent facteur log sur tout qui suit est soit qui implique une produit type des plus courtes soit johnson qui implique une donc soit log pour log pour ici nous pourrions travailler avec une borne moins mais cela nous servirait peu donc les sont consistantes avec exp log pour faire type comparaisons avance nous aide mais pas les faire avance changerait pas ordre temps asymptotiquement comme nous concluons que temps total pour les isomorphismes entre deux longueur est log babai graph isomorphism quasipolynomial time disponible ligne sur babai graph isomorphism quasipolynomial time extended abstract dans proc acm stoc babai lectures graph isomorphism university toronto dept computer science notes bcp babai cameron orders primitive groups restricted nonabelian composition factors journal algebra bkl babai kantor luks computational complexity simple groups dans proc ieee focs bls babai luks seress permutation groups dans proc acm stoc baps babai saxl number elements simple groups lms comput math cfi cai furer immerman optimal lower bound number variables graph combinatorica cam cameron finite permutation groups simple groups bull 
london math soc evp evdokimov ponomarenko highly closed celular algebras highly closed isomorphisms electr comb fisher examination possible solutions problem incomplete blocks ann eugenics fhl furst hopcroft luks algorithms permutation groups dans proc ieee focs higman finite permutation groups rank math iml immerman lander describing graphs approach graph canonization dans complexity theory retrospective honor juris hartmanis occasion birthday springer luks isomorphism graphs bounded valence tested polynomial time comput sys sci orders primitive groups algebra pyber analysis babai quasipolynomial algorithm disponible ligne sur pyber orders doubly transitive permutation groups elementary estimates combin rchw wilson osaka math sch schreier die untergruppen der freien gruppen abh math semin univ hambg sims graphs permutation groups math sims computational methods permutation groups dans computational problems abstract algebra pergamon oxford sun wilmes faster canonical forms primitive coherent dans proc acm stoc harald helfgott mathematisches institut bunsenstrasse allemagne helfgott
| 4 |
modified recursive cholesky rchol algorithm mar explicit estimation correlation matrices vanita pawar krishna naik karamtot vanita krishnanaik cholesky decomposition plays important role finding inverse correlation matrices fast numerically stable linear system solving inversion factorization compared singular valued decomposition svd factorization decomposition different methods exist find cholesky decomposition given matrix paper presents comparative study proposed rchol algorithm conventional methods rchol algorithm explicit way estimate modified cholesky factors dynamic correlation matrix cholesky decomposition fast numerically stable linear system solving inversion factorization compared singular valued decomposition svd factorization decomposition wireless communication system highly dependent matrix inversion correlation matrix system consists huge matrix inversion outdoor wireless communication channel changes dynamically mobile user case narrowband channel channel considered constant symbol duration whereas broadband changing within symbol period channel forms special structure channel matrix correlation matrix exploit special structure novel modified recessive cholesky rchol algorithm introduced proposed rchol algorithm computational efficient algorithm compute modified cholesky factors known well unknown covariance matrix paper present comparative study conventional cholesky algorithm rchol algorithm manifest importance proposed algorithm highly dynamic wireless communication ystem odel wireless communication system number transmit received antennas used improve diversity system channel transmitter receiver different form depends number antennas used transmitter receiver side channel siso simo output mimo let received signal number transmit antennas multipath channel noise represented let received vector stacking successive received vectors transmitted symbol vector represented matrix form correlation matrix written let rnij correlation matrix time instant represented equation equation respectively holesky ecomposition correlation matrix complex matrix pseudoinverse computed cholesky factors lower triangular matrix cholesky factors correlation matrix represented llh computed section details conventional cholesky algorithms rchol algorithm cholesky decomposition gaxpy version cholesky decomposition factorizes complex hermitian symmetric matrix product lower triangular matrix hermitian transpose llh lower triangular matrix hermitian matrix must positive definite method needs square root operation algorithm steps compute time instant find square root diagonal element modify column equate lower triangular part repeat steps time instant algorithm cholesky decomposition llh initialization order updates end levinson recursion may used derive lattice recursion computing factors data matrices lattice recursion used derive schur recursion computing cholesky factors toeplitz correlation matrix detail algorithm given algorithm schur algorithm like previously mentioned algorithm computes inner product compute matrix initialization algorithm steps compute time instant initialize first column first column cholesky factor compute rest column recursively columns repeat step time instant algorithm schur algorithm llh initialization tril modified cholesky algorithm ldl avoid square root operation modified cholesky algorithm used avoids square root operation introducing diagonal matrix cholesky factors modified cholesky algorithm require positive definite matrix determinant must nonzero may rank 
deficient certain degree may contain negative main diagonal entries positive semidefinite algorithm steps compute time instant modify column equate strictly lower part matrix ones main diagonal equate main diagonal main diagonal repeat step time instant algorithm modified cholesky decomposition ldlh initialization order updates end end diag daig tril recursive cholesky algorithm shcur algorithm rschur hhh schur algorithm recursively compute columns lower triangular matrix form matrix shown order updates kref kref scaling factors kref note notation followed represents vector rchol algorithm clear equation equation represented submatrix utilize special structure correlation matrices propose modified recursive cholesky algorithm compute cholesky factors recursively algorithm modification schur algorithm mentioned general approach consists using schur algorithm induce recursion columns dynamic algorithm need inner products compute correlation matrix cholesky factors computed explicitly let computed algorithm steps initialize first first column cholesky factor compute second column recursively substitute repeat step time instant schur algorithm columns cholesky factors time instant computed recursively correlation matrix instant whereas rchol algorithm first two columns cholesky factors time instant computed recursively previous cholesky factor submatrix cholesky factors updated recursively previous cholesky factor time instant conventional cholesky algorithm mentioned introduced normal matrices whereas proposed matrix well suited block matrices simulations shown algorithm recursive cholesky update rchol ldlh initialization order updates onclusion convention methods cholesky factorization requires correlation matrix needs inner product recursive modied cholesky algorithm rchol algorithm explicit way recursively calculating matrices without estimating correlation matrix requires less number iteration avoids error propagation column updates rchol algorithm use calculating matrix applicable cdma ofdm etc wireless communication systems kref scaling factors kref kref iii imulation results compare proposed rchol algorithm schur algorithm compared result algorithm theoretical results fig show ratio difference matrices correlation matrix unknown application blind channel data estimation fig shows maximum error rchol algorithm schur algorithm nearly times rchol algorithm case ratio fig shows maximum ratio rchol algorithm schur algorithm fig comparisons rchol algorithm schur algorithm unknown known correlation matrix proposed algorithm difference schur algorithm difference proposed algorithm ratio schur algorithm ratio fig show ratio difference matrices correlation matrix known fig shows maximum error rchol algorithm schur algorithm nearly times rchol algorithm case ratio fig shows maximum ratio rchol algorithm schur algorithm fig concluded schur algorithm best suited correlation matrix known leads huge error propagation column unknown applied blind channel estimation converse rchol algorithm best suited blind channel estimation reduces error propagation column pawar naik diat pune india vanietaapawar eferences golub van loan matrix computations pawar krishna naik blind multipath time varying channel estimation using recursive cholesky update aeu int electron hunger report floating point operations calculus matrix rialan scharf fast algorithms computing cholesky factors oftoeplitz operators ieee trans
| 0 |
may euclidean criterion irreducibles pete clark abstract recast euclid proof infinitude prime numbers euclidean criterion domain infinitely many atoms make connections furstenberg topological proof infinitude prime numbers show criterion applies even certain domains nonzero nonunits factor products irreducibles introduction article genesis graduate vigre research group taught paul pollack fall introduction process mathematical research rather concentrating fixed topic preselected goal guide students process selecting performing research one technique tried inculcate exploitation relation theorems proofs good theorem several proofs know two proofs different used prove theorems first meeting pollack presented seven proofs euclid proposition infinitely many prime numbers first proof suppose given domain field nonzero nonunit factors irreducibles whenever nonzero nonunit unit least one irreducible element given irreducibles factoring get new irreducible element pointed argument though correct imply euclid result problem salvages suggested enough replace necessary present general fix euclidean criterion domain infinitely many nonassociate irreducibles explore consequences soon find scenic tour century mathematics engage work jacobson furstenberg among others acknowledgments thanks members introduction mathematical research uga vigre group conversations saurabh gosavi noah robert samalis lee troupe lori watson helpful group coleader paul pollack made key contributions first emphasized euclidean criterion automatically yields pairwise comaximality second theorem inspired thm though came statement could prove various special cases proof included grateful two anonymous referees careful reports particular example suggested first referee date may pete clark euclidean criterion primer factorization domains ring mean commutative ring multiplicative identity denote set nonzero elements element unit denote group units subset ring denote ideal generated standard write ideals comaximal elements comaximal comaximal indexed family ideals pairwise comaximal similarly pairwise comaximal elements domain nonzero ring say divides write elements associates element domain irreducible nonzero nonunit implies prime element element prime ideal thus nonzero nonunit prime atom domain principal ideal generated irreducible element thus two irreducibles domain determine atom associate common literature terms atom irreducible fully synonymous minor distinction convenient purposes usually count count irreducibles domain associates sometimes want count irreducibles furstenberg domain domain every nonzero nonunit irreducible atomic domain domain every nonzero nonunit irreducible elements unique factorization domain ufd atomic domain irreducibles bijection prime elements irreducible general converse false atomic domain ufd iff every irreducible prime thm terminology confusing light definition prime number positive integer divisible means irreducible euclid showed one easily show fundamental theorem arithmetic ufd principal ideal domain pid domain ideal generated single element every pid ufd follows euclidean algorithm pid domain domain every finitely generated ideal principal ring noetherian ideals finitely generated noetherian domains atomic prop thus pid precisely noetherian domain dedekind domain domain nonzero proper ideal factors uniquely prime ideals domain dedekind iff noetherian dimension one every nonzero prime ideal maximal integrally closed every element fraction field satisfies monic polynomial coefficients lies thm 
working domain rather general ring confers certain advantages explanation terminology comes euclidean criterion irreducibles fact every nonzero ideal ring contains nonzero principal ideal domain gives bijection thus every nonzero ideal domain nonzero ideals contains thus nonzero euclidean criterion ring satisfies condition words fact equivalent nonzero ideal though defer consideration restatement later example ring satisfies condition indeed take positive negative domain polynomial ring satisfies condition indeed take satisfies condition indeed geometrically clear multiply large enough much unit away point unit circle proposition domain satisfies condition proof map given injection thus theorem euclidean criterion let domain field satisfying condition infinite sequence pairwise comaximal nonunits also furstenberg admits infinite sequence pairwise comaximal irreducibles thus sequence distinct atoms proof induction let nonzero nonunit chosen pairwise comaximal condition clearly induction since furstenberg field irreducible chosen pairwise comaximal irreducibles condition nonzero since nonunit irreducible factor comaximal finally pairwise comaximal irreducibles must two applications euclidean criterion first two immediate theorem domain infinitely many atoms particular let ufd let ufd satisfying condition infinitely many nonassociate prime elements gaussian integers infinitely many atoms since pid infinitely many nonassociate prime elements pete clark theorem let furstenberg domain field infinitely many atoms theorem let furstenberg domain let set irreducible elements either empty field infinite otherwise proof assume fix finite theorem yields infinitely many atoms infinite infinite subset supplement irreducibles residue classes switch ancient theorem matters contemporary interest ask infinitely many primes satisfying certain additional conditions result along lines relatively modest general algebraic nature lemma let elements ring proof let lemma let domain field satisfying condition nonzero nonunit proof put take nonzero nonunit condition nonzero nonunit nonzero nonunit take proof following result suggested paul pollack theorem let atomic domain satisfying condition let nonzero ideal let proper subgroup infinitely many pairwise comaximal irreducibles class modulo lies proof let quotient map let let inductively assume pairwise irreducibles let need include base case case lemma nonzero nonunit get irreducible factorization least one say associate hence also would divide contradicts irreducibility contradicts euclidean criterion irreducibles moreover mod hence also finally since mod lemma thus may take completing induction get proper subgroup infinitely many prime numbers mod moreover classical case one run argument positive integers get rid annoying special case dirichlet theorem primes arithmetic progressions observation granville unpublished reproduced thm case proved elementary euclidean way special case trivial infinitely many primes mod older better known also simpler consider case use ufd granville argument auspicious replacement coprimality arguments comaximality done topological interlude furstenberg lemma section give several proofs following result theorem let furstenberg domain least one finitely many irreducibles precisely nonzero ideal theorem contrapositive part euclidean criterion without information comaximality proofs give inspired famous paper furstenberg essential core argument observation set elements divisible prime number notice nothing natural ordering underlies 
classical proofs euclid theorem fact property used furstenberg domain lemma furstenberg lemma domain furstenberg domain iff irreducible furstenbergtdomain least one finitely many irreducibles proof virtually immediate left reader following furstenberg let domain fact family nonzero ideal pete clark closed finite intersections system neighborhood bases topology let call adic topology open iff nonzero ideal fact every nonempty open cardinality proof theorem let furstenberg domain least one finitely many irreducibles open hence complement union cosets also open furstenberg lemma open since precisely nonzero ideal following let domain let field two elements ideal function lemma let domain let nonzero ideals also pointwise product proof immediate definition certainly apply part choose nonzero proof theorem step let characteristic function put hence thus moreover characteristic function step since part step precisely part following mercer let domain call subset lovely form nonzero ideal coset nonzero ideal call subset pleasant union lovely subsets nonzero ideal union cosets hence pleasant pleasant sets nonzero ideals fact lovely subset containing pleasant fact every nonempty pleasant subset cardinality proof theorem let furstenberg domain least one finitely many irreducibles furstenberg lemma finite intersection complements nonzero ideals pleasant since precisely nonzero ideal euclidean criterion irreducibles debriefing three proofs given generalizations proofs euclid theorem given furstenberg mercer latter two works take detopologization furstenberg proof goal presentation argument differs superficially mercer chose words lovely pleasant precisely commonly understood technical mathematical meaning said basic open reader attention would drawn fact since basic sets closed finite intersections form base topology mercer exposition takes pains point underlying fact finite intersections unions unions finite intersections course basic logical principle conjunctions distribute disjunctions conversely like many basic logical principles completely innocuous used context version argument pleasant sets form topology less crisp enunciation facts need check first part proof find quite striking pleasant facts enunciated way must agree claimed essential topological content furstenberg use periodic functions involves slightly packaging standard kind well known boolean ring subsets represented ring maps pointwise addition multiplication recommend wikipedia glaymann references glaymann develops correspondence applies prove identities manner intended used high school classroom interesting snapshot new math near zenith ubiquitous theorem result complements theorem deep play recurring role common intersection various constructions themes first proof give follows topological conceit section give simpler proofs later theorem let domain field finitely many maximal ideals precisely nonzero ideal proof endow topology maximal ideal neighborhood subbase open iff subset claim topological proof infinitude primes rather topological proof infinitude primes pete clark fact gives every nonempty open cardinality union cosets also open therefore open since precisely subset thus also supplement topologies domain common generalization theorems let family nonzero ideals domain suppose particular look theorem instead taking family nonzero ideals could take endow unique translationinvariant topology neighborhood subbase coarsens adic open yields sharper conclusion particular back version euclid argument adic topology interesting 
topological space countably infinite metrizable totally disconnected without isolated points hence homeomorphic euclidean topology golomb proved euclid theorem using topology base arithmetic progressions coprime golomb topology makes countably infinite connected hausdorff space already interesting domain field may consider golomb topology neighborhood base given nonzero ideal topology every maximal ideal closed domain field finitely many maximal ideals open thus contains nonzero ideal get another proof theorem golomb topology never hausdorff fact however induced topology leave exploration reader connections ideal theory ring denote maxspec set maximal ideals comaximal ideals lemma let sequence pairwise comaximal proper ideals ring maxspec infinite proof let maximal ideal containing contradiction particular part euclidean criterion implies domain field satisfies condition infinitely many maximal ideals thus get another proof theorem means last adic topology domain always hausdorff furstenberg domain finitely many irreducibles new topology euclidean criterion irreducibles euclid meets jacobson time examine explicitly statement condition nonzero ideals readers see already seen connection jacobson radical assume prior familiarity fact use euclidean criterion motivate discussion concepts proposition prop ring let jacobson radical following equivalent proof contraposition suppose lies maximal ideal also thus also contradiction lie thus contraposition suppose maximal ideal follows thus unit get immediately corollary ring satisfies condition iff gives third proof theorem finitely many maximal ideals apply corollary ring zero jacobson radical called questions answers raise natural questions answer question part euclidean criterion must assume furstenberg domain question semiprimitive domain field infinitely many maximal ideals must domain infinitely many maximal ideals semiprimitive question let furstenberg domain semiprimitive still infinitely many atoms finitely many maximal ideals infinitely many atoms jacobson semisimple pete clark example ring algebraic integers furstenberg domain fact antimatter domain irreducibles whatsoever algebraic integer always factor moreover field integers contradiction nonzero ideal constant coefficient minimal polynomial nonzero element nonzero integer follows contained every maxspec choose prime number unit otherwise least one maximal ideal containing fact set maximal ideals containing continuum cardinality contradiction answer question yes semiprimitive domain field irreducibles whatsoever following result answers questions dedekind domains shows euclidean criterion principle completely efficacious determining whether dedekind domain infinitely many atoms theorem dedekind domain field following equivalent semiprimitive infinitely many maximal ideals iii infinitely many atoms proof know domain dedekind domain nonzero element contained finitely many maximal ideals fact infinite subset maxspec iii dedekind domains noetherian hence furstenberg domains euclidean criterion applies iii contraposition dedekind domain finitely many maximal ideals pid thm pid distinction maximal ideals principal ideals generated prime elements atoms question let number field ring integers set prime numbers infinite sequence pairwise comaximal nonunits well known infinitely many prime ideals thus semiprimitive imaginary quadratic finiteness leads direct verification condition similarly direct verification question leave reader address proposition let noetherian domain dimension one nonzero 
prime ideals maximal maxspec infinite semiprimitive thus infinitely many pairwise comaximal irreducibles proof semiprimitive every maximal ideal minimal prime ideal since noetherian noetherian ring finitely many minimal prime ideals thm jacobson ring ring every prime ideal intersection maximal ideals containing since domain prime jacobson domain euclidean criterion irreducibles must semiprimitive quotient jacobson ring jacobson ring jacobson ring commutative finitely generated jacobson ring thm theorem jacobson furstenberg domain field infinitely many pairwise comaximal irreducibles let field let prime maximal ideal ring coordinate ring integral affine variety positive dimension infinitely many pairwise comaximal irreducibles domain finitely generated field infinitely many pairwise comaximal irreducibles sum want see domain infinitely many maximal ideals semiprimitive finitely generated field noetherian must nonzero prime ideal maximal cues following example gives negative answer question example consider ring formal power series integral coefficients hard show atomic domain fact noetherian ufd thm since jacobson radical contains thus nonzero since hypotheses euclidean criterion apply nevertheless infinitely many pairwise comaximal prime elements namely prime numbers hence infinitely many maximal ideals could replaced pid infinitely many maximal ideals thus answer question yes moreover nonsemiprimitive domain infinitely many comaximal irreducibles example let field recall ufd let fraction field let subring consisting rational functions written lowest terms ufd factorization proceeds except prime elements become units element unit iff thus unique maximal ideal far semiprimitive nevertheless infinitely many prime elements geometric language irreducibles irreducible curves affine plane passing thus answer question yes however say preceding example vastly generalized using following striking result theorem let atomic domain finitely many atoms finitely many prime ideals noetherian every nonzero prime ideal maximal proof atomic domain whenever prime ideal contains nonzero element may factor irreducibles thus see contains irreducible element dividing thus given set generators prime ideal replace set irreducible generators set generators pete clark ideal replacing element one associates change ideal generated thus finitely many nonassociate irreducibles generate finitely many prime ideals follows proof part every prime ideal finitely generated famous result cohen thm ideals finitely generated instance prime ideal principle prime ideals since noetherian implies infinitely many prime ideals cor domain atomic domain finitely many atoms work give complete classification left case noetherian domain finitely many nonzero prime ideals maximal dedekind domain theorem finitely many atoms remaining case integrally closed fraction field case integral closure dedekind domain finitely many prime ideals cor one might expect forces need case example let field consider subring formal power series ring define least discrete valuation nonzero prime ideal particular pid isomorphic subring generating set standard pid structure theory every ideal canp generated two elements thus noetherian hence atomic thus unique maximal ideal give complete description atoms first claim irreducible iff indeed nontrivial factorization involves hence conversely nontrivial factorization since every irreducible associate one form case one form case associate elements valuation certainly irreducible first type associate irreducible 
second type claim associate iff associate iff done direct computation euclidean criterion irreducibles unique choice leading case similar thus precisely atoms iff finite example cor prime power ring fqd domain exactly irreducibles none one nonzero prime ideal exactly prime unless paper mostly forgotten many years breakthrough work anderson mott gave complete characterization cohenkaplansky domains fact give characterizations one theorem atomic domain following equivalent domain noetherian dimension one nonzero prime ideals maximal finitely many prime ideals integral closure finitely generated maxspec maxspec nonprincipal ideals maxspec finite example let field characteristic different consider localization localization localization always dedekind domain one maximal ideal never maxspec maxspec iff finite euclid beyond atomicity case atomic domain part euclidean criterion yields infinitely many maximal ideals much weaker theorem however life beyond atomic domains example let hol ring entire functions hol put hol countable sets hence thus domain map gives bijection atoms hol element hol unit iff nonzero nonunit finite product atoms iff finite nonempty hol atomic consider sin furstenberg nonzero nonunit vanishes thus divisible irreducible element moreover hol satisfies condition hol let hol thus euclidean criterion applies hol theorem let cardinal numbers domain satisfying following properties domain every finitely generated ideal principal exactly atoms maximal ideal pete clark iii exactly maximal ideals exactly nonzero prime ideals atomic domain iff furstenberg domain iff vii semiprimitive iff postpone proof theorem order discuss significance taking get furstenberg domains number irreducibles number max nonzero prime ideals particular furstenberg domain finite positive number irreducibles infinite number prime ideals theorem extend atomic domains furstenberg domains get semiprimitive furstenberg domain atomic domain come proof theorem requires somewhat specialized results completely presentation would require space want devote make use material iii treatment level detailed sketch let domain fraction field attach principal fractional ideal coincides usual notion principal ideal iff principal fractional ideals form commutative group pointwise multiplication call group divisibility denote partially ordered reverse inclusion put iff order reversal actually rather familiar write contain divide let indexed family nonzero totally ordered commutative groups let direct sum endowed pointwise partial ordering iff let projection onto ith coordinate theorem thm domain isomorphism partially ordered commutative groups see example let ite maximal ideals precisely thus element lies infinitely many maximal ideals semiprimitive iff infinite atom partially ordered commutative group minimal positive element direct generalization previous use term domain minimal positive elements group divisibility precisely principal fractional ideals irreducible element every atom atom conversely elements give atoms since totally ordered one atom least positive element element exists follows furstenberg iff least positive element similarly nonzero nonunit factors irreducibles iff sum atoms iff least positive element nai thus atomic domain iff domain nonzero prime ideal contained unique maximal ideal loc nonzero prime ideals contained correspond euclidean criterion irreducibles bijectively proper convex subgroups subset totally ordered set convex also take lexicographic product copies subgroups indexed ordinal convex subgroups 
precisely set elements zero nonzero prime ideals take family nonzero totally ordered commutative groups parameterized gives maximal ideals semiprimitive iff left choose groups terms attain assertions define ordinal finite positive integer infinite successor ordinal matters case set cardinality largest element cases take pid nonzero prime ideals min take take cartesian product copies indexed endowed lexicographic ordering least positive element element factors last last factor least elements furstenberg domain moreover atomic domain nonzero prime ideals take cartesian product copies indexed take take supplement rings infinitely many maximal ideals let briefly consider case arbitrary commutative ring though others done see beyond ambitions pursue factorization theory presence zero divisors still ask criteria infinitely many maximal ideals general context longer sufficient two maximal ideals nevertheless euclid jacobson role play proposition prop let ideal contained jacobson radical image unit unit particular natural map surjective proof image unit mod thus every maximal ideal lies maximal ideal thus theorem dubuque let infinite ring maxspec infinite proof show induction maximal ideals base case since infinite nonzero thus maximal ideal induction step let maximal ideals put case suppose moreover proposition surjective follows chinese remainder theorem hence pete clark injection putting last two sentences together conclude thus since field infinite finally gives contradiction case let maximal ideal containing maximal ideal completing induction step special case theorem appears exc ring consider quotient maximal ideals correspond maximal ideals containing maximal ideals thus semiprimitive thus replace ring semiprimitive ring without changing maxspec however jacobson semisimplification need carry domains domains domain maximal ideals generalization theorem ring following equivalent finitely many maximal ideals finite product fields iii finitely many ideals artinian infinite descending chains ideals semiprimitive ring finitely many maximal ideals finitely many ideals proof maximal ideals chinese remainder theorem thm iii immediately maximal ideals correspond bijectively maximal ideals artinian ring finitely many maximal ideals thm follows part primes take euclid argument criterion existence irreducibles distinction evaporates ufd pid finitely many prime ideals ufd finitely many principal prime ideals turns converse also theorem let ufd field finitely many atoms pid finitely many prime ideals known experts see euclidean criterion irreducibles proof ufd finitely many nonassociate prime elements domain maxspec finite theorem theorem every nonzero prime ideal maximal proof theorem shows every nonzero prime ideal contains prime element since maximal thus every prime ideal principal pid thm another case prime ideal principle let move away ufds example deduce theorem let cardinal noetherian domain exactly one nonzero prime ideal exactly irreducibles prime elements proof let field cardinality example noetherian domain one nonzero prime ideal irreducibles since principal prime elements showed atomic domain neither field ufd must least atoms argument nice one must least one nonprime irreducible since prime properly contained prime ideal must therefore contain nonassociate irreducible since unit therefore divisible irreducible associate either finally consider dedekind domains question let dedekind domain infinitely many prime ideals must infinitely many atoms important classical case answer yes number 
theorists know.

Theorem. Let K be a number field with ring of integers Z_K. Then Z_K has infinitely many nonassociate prime elements.

Proof. Step 1: For any number field K, the number of rational primes which split completely in K is infinite. This is a special case of the Chebotarev density theorem; however, it can be proved in an elementary way. As shown in [P], using basic algebraic number theory (which we omit here), it comes down to showing that for every nonconstant polynomial f with integer coefficients, the set of prime numbers dividing some value f(n) is infinite. This is trivial if f(0) = 0, so let c = f(0) be nonzero, let p_1, ..., p_k be prime divisors of values of f (we allow k = 0), and for positive exponents a_1, ..., a_k consider f(c p_1^{a_1} ... p_k^{a_k}) = c(1 + p_1^{a_1} ... p_k^{a_k} m) for an integer m: the second factor is prime to each p_i, so the set of values of f divisible by no prime outside {p_1, ..., p_k} is finite, a contradiction.

Step 2: A prime ideal of the number field K is principal iff it splits completely in the Hilbert class field of K. Every prime ideal of Z_K lying over one of the infinitely many prime numbers which split completely in the Hilbert class field is principal, and distinct such prime ideals yield nonassociate prime elements.

Looking at this argument, one wonders whether we are working too hard; perhaps a simple argument gives a general affirmative answer to the question. In fact the question was answered negatively by Claborn.

Example ([Cl]). The construction is impressively direct: start with a Dedekind domain R which is not a PID, let W be the set of prime elements of R, and pass to the localization R_W. The prime ideals of R_W are precisely the nonprincipal prime ideals of R, and they remain nonprincipal in R_W. The construction also appears in the work of Samuel [S, Thm.], and is therein attributed to Nagata.

Lemma (Nagata; cf. [S]). Let R be a Dedekind domain, and write Cl(R) for the ideal class group of R: the quotient of the monoid of nonzero ideals of R by the equivalence relation I ~ J iff aI = bJ for some nonzero a, b in R. In this setting, the construction above, localization at the multiplicative subset generated by the prime elements, does not change the ideal class group.

Theorem. Let K be an infinite cardinal. There is a Dedekind domain R with exactly K atoms, none of which is a prime element.

Proof. We use the properties of elliptic Dedekind domains; for the details see [Cl-E]. Let k be an algebraically closed field of characteristic 0 and cardinality K, and put R = the coordinate ring of an elliptic curve over k with the origin removed. Then R is a Dedekind domain, and by the Nullstellensatz its nonzero prime ideals correspond to the pairs (x, y) satisfying a Weierstrass equation, in other words to the points of the projective elliptic curve excluding the point at infinity. Moreover, by [Cl-E, Thm.], the class group of R is the group of points of the curve, and under this identification no nonzero prime ideal of R is principal. Thus R is a Dedekind domain with MaxSpec of cardinality K and without prime elements. In a Dedekind domain every ideal is generated by two elements [K, Thm.]; together with the fact that Dedekind domains are atomic domains, this implies that every maximal ideal contains an irreducible, while each irreducible lies in only finitely many maximal ideals, so the number of irreducibles of R equals #MaxSpec R. Since K is infinite, this number is K.

References

[AM] D. D. Anderson and J. L. Mott, Cohen-Kaplansky domains: integral domains with a finite number of irreducible elements, J. Algebra 148 (1992).
[AV] D. D. Anderson and S. Valdes-Leon, Factorization in commutative rings with zero divisors, Rocky Mountain J. Math. 26 (1996).
[Cl-CA] P. L. Clark, Commutative algebra, notes, http
[Cl-E] P. L. Clark, Elliptic Dedekind domains revisited, Enseign. Math. 55 (2009).
[CK] I. S. Cohen and I. Kaplansky, Rings with a finite number of primes I, Trans. Amer. Math. Soc. 60 (1946).
[Cl] L. Claborn, Dedekind domains and rings of quotients, Pacific J. Math. 15 (1965).
[Co] P. M. Cohn, Unique factorization domains, Amer. Math. Monthly 80 (1973).
[CS] J. Coykendall and C. Spicer, Cohen-Kaplansky domains and the Goldbach conjecture, Proc. Amer. Math. Soc. 140 (2012).
[CW] D. Cass and G. Wildenberg, Math bite: a novel proof of the infinitude of primes, revisited, Mathematics Magazine 76 (2003).
[D] W. G. Dubuque, http
[FS] L. Fuchs and L. Salce, Modules over non-Noetherian domains, Mathematical Surveys and Monographs, American Mathematical Society, Providence, 2001.
[F] H. Furstenberg, On the infinitude of primes, Amer. Math. Monthly 62 (1955).
[G] M. Glaymann, Characteristic functions and sets, Mathematics Teacher.
[Go] S. W. Golomb, A connected topology for the integers, Amer. Math. Monthly 66 (1959).
[K] I. Kaplansky, Commutative rings, Allyn and Bacon, Boston, Mass., 1970.
[LR] T. Y. Lam and M. L. Reyes, A prime ideal principle in commutative algebra, J. Algebra 319 (2008).
[M] I. D. Mercer, On Furstenberg's proof of the infinitude of primes, Amer. Math. Monthly 116 (2009).
[N] M. Nagata, A remark on the unique factorization theorem, J. Math. Soc. Japan 9 (1957).
[P] P. Pollack, Not always buried deep: a second course in elementary number theory, American Mathematical Society, Providence, 2009.
[Po] B. Poonen, http
[S] P. Samuel, Lectures on unique factorization domains, notes by M. Pavman Murthy, Tata Institute of Fundamental Research Lectures on Mathematics, Tata Institute of Fundamental Research, Bombay, 1964.
[Z] M. Zafrullah, http
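As a compact summary of the Euclidean Criterion argument used repeatedly above (our paraphrase; the labeling of Condition (E) follows the paper's usage as recoverable here):

$$\textbf{(E)}\quad \forall a\in R\setminus\{0\}\ \exists b\in R:\ 1+ab\notin R^{\times}.$$
$$\text{If } R \text{ is a Furstenberg domain satisfying (E), let } f_1,\dots,f_n \text{ be irreducibles and } a=f_1\cdots f_n.$$
$$\text{Pick } b \text{ with } u=1+ab\notin R^{\times};\ u\neq 0 \text{ since } a\notin R^{\times}. \text{ Any irreducible } f\mid u \text{ satisfies } f\nmid ab,$$
$$\text{hence } f\not\sim f_i \text{ for all } i; \text{ iterating yields infinitely many nonassociate irreducibles.}$$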
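A toy numeric illustration in the integers, which form an atomic domain satisfying Condition (E) with b = 1: each step harvests a prime outside the current list by factoring 1 plus the product, exactly as in the criterion. The helper name and the use of sympy are ours, not the paper's.

```python
from sympy import factorint

def euclid_step(known_primes):
    """Return a prime not in known_primes, via Euclid's argument.

    Every known prime divides the product, hence none divides the
    successor 1 + product, so any prime factor of it is new.
    """
    a = 1
    for p in known_primes:
        a *= p
    fresh = set(factorint(a + 1)) - set(known_primes)
    assert fresh, "impossible by the criterion"
    return min(fresh)

primes = [2]
for _ in range(6):
    primes.append(euclid_step(primes))
print(primes)  # [2, 3, 7, 43, 13, 53, 5]
```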
| 0 |
Sufficient Conditions for the Tightness of Shannon's Capacity Bounds for Two-Way Channels

Lin Song, Fady Alajaji, and Tamás Linder

Abstract: Sufficient conditions for determining in closed form the capacity region of memoryless two-way channels (TWCs) are derived. The proposed conditions not only relax Shannon's condition, which can identify only TWCs with a certain symmetry property, but also generalize other existing results. Examples are given to demonstrate the advantages of the proposed conditions.

Index Terms: Information theory, two-way channels, capacity region, inner and outer bounds, channel symmetry.

I. INTRODUCTION

Finding the capacity region of discrete memoryless two-way channels (TWCs) in closed form is an open problem. The difficulty lies in the causality of two-way transmission, since the senders are allowed to generate channel inputs by adapting to previously received channel outputs. Shannon gave an uncomputable expression for the capacity region; another such expression, using directed information, was given later. To date, the capacity region of TWCs is known only for special channels such as TWCs with additive white Gaussian noise, deterministic TWCs, TWCs with discrete additive noise, and injective semi-deterministic TWCs. Thus, Shannon's inner and outer bounds still play an important role in characterizing the capacity region. In the literature, Shannon's symmetry condition and the condition established by Chaaban, Varshney, and Alouini (CVA) are the two known sufficient conditions under which Shannon's inner and outer bounds coincide, thus directly characterizing the capacity region. Shannon's condition focuses on a certain symmetry structure of the channel transition probabilities, while the CVA condition focuses on the existence of independent inputs which achieve Shannon's outer bound. Although the two conditions can be used to determine the capacity region of a large class of TWCs, it is of interest to establish new conditions for wider families of channels.

In this paper, four sufficient conditions guaranteeing that Shannon's inner and outer bounds coincide are derived. Similar to the CVA condition, our conditions identify independent inputs which achieve Shannon's outer bound, based on an approach in which the TWC is viewed as two one-way channels with state. Two of the derived results are shown to be substantial generalizations of the Shannon and CVA conditions. Moreover, our simplest condition can be easily verified by observing the channel marginal distributions.

The rest of the paper is organized as follows. In Section II, the system model and prior results are reviewed. New conditions for finding the capacity region are provided in Section III. A discussion of the connections between the new conditions and prior results is given in Section IV, along with illustrative examples. Concluding remarks are given in Section V.

[Fig. 1. Block diagram of two-way transmission: User 1, TWC, User 2.]

[The authors F. Alajaji and T. Linder are with the Department of Mathematics and Statistics, Queen's University, Kingston, Canada. The author L. Song was with the Department of Mathematics and Statistics, Queen's University, Kingston, Canada, and is now with Contextere, Ottawa, Canada. This work was supported in part by NSERC of Canada.]

II. PRELIMINARIES

In the communication system shown in Fig. 1, two users want to exchange their messages M1 and M2 via n uses of a TWC. The messages are assumed to be mutually independent and uniformly distributed on {1, 2, ..., 2^{nR1}} and {1, 2, ..., 2^{nR2}}, respectively, where nR1 and nR2 are integers. Let Xj and Yj, respectively, denote the finite channel input and output alphabets of user j = 1, 2. The joint distribution of the inputs and outputs of a memoryless TWC is governed by the channel transition probability P(y1, y2 | x1, x2). A channel code for the TWC is defined as follows.

Definition 1: An (n, R1, R2) code for a TWC consists of two message sets {1, ..., 2^{nR1}} and {1, ..., 2^{nR2}}, two sequences of encoding functions f1 = (f_{1,1}, ..., f_{1,n}) and f2 = (f_{2,1}, ..., f_{2,n}), and two decoding functions g1 and g2. The messages are encoded as channel inputs: at time 1, X_{j,1} = f_{j,1}(Mj), and at time i = 2, ..., n the channel inputs are generated by also adapting to the previous channel outputs via X_{j,i} = f_{j,i}(Mj, Y_j^{i-1}). After receiving the n channel outputs, user j reconstructs the other user's message as a function of (Mj, Y_j^n) via gj.

The probability of decoding error is defined as P_e^{(n)} = Pr{(M̂1, M̂2) ≠ (M1, M2)}. Based on this performance index, we define achievable rate pairs and the capacity region.

Definition 2: A rate pair (R1, R2) is said to be achievable if there exists a sequence of (n, R1, R2) codes with lim_{n→∞} P_e^{(n)} = 0. The capacity region C of a TWC is the closure of the convex hull of all achievable rate pairs.

To date, no computable expression for the capacity region of general memoryless TWCs has been found. Shannon established inner and outer bounds for the capacity region. Let R(P) denote the set of rate pairs (R1, R2) with R1 ≤ I(X1; Y2 | X2) and R2 ≤ I(X2; Y1 | X1), where the joint distribution of the random variables is given by P(x1, x2) P(y1, y2 | x1, x2). The capacity region of a discrete memoryless TWC with transition probability P(y1, y2 | x1, x2) is inner bounded
outer bounded denotes taking closure convex hull general different coincide exact capacity region obtained independent inputs used achieve point capacity region note exist improved bounds twcs however bounds either restricted particular case binary multiplier twc expressed auxiliary random variables match approach next review shannon cva conditions imply coincidence finite set let permutation bijection two symbols let denote transposition swaps leaves symbols unaffected moreover let denote probability distribution defined finite sets define two functionals conditional entropies log log particular let define log note given pyj pyj marginal channel probability furthermore factorized finally let denote set probability distributions proposition shannon symmetry condition memoryless twc transition probability pair distinct input symbols exists pair permutations depend proposition cva condition memoryless twc transition probability pyj depend given exists remark proposition describes channel symmetry property respect channel input user analogous condition obtained exchanging roles users also invariance pyj proposition fact imposes certain symmetry constraint channel marginal distribution pyj literature twc independent additive noise example satisfies shannon cva conditions iii onditions ightness hannon nner uter ounds section present four results regarding tightness shannon inner outer bounds adopt viewpoint channel consists two channels state example channel user user governed marginal distribution derived channel probability distribution respectively input output channel state let probability distributions finite sets simplify presentation define log mutual information input governed corresponding output channel transition probability useful fact concave first argument second argument fixed moreover conditional mutual information expressed respectively viewing twc two channels state following four theorems comprises two conditions one direction transmission symmetry theorems also valid roles users swapped simplicity use denote conditional mutual information conditional entropy evaluated input distribution pxi pxi conditional entropy proof given let evaluated marginal distribution pyi pxj pyi theorem given memoryless twc following conditions satisfied exists arg maxpx depend fixed proof let argument obtain via moreover given light max moreover holds invariance assumption holds since functional concave first argument obtained invariance assumption combining yields implies hence theorem given memoryless twc following conditions satisfied exists arg maxpx depend given common maximizer also satisfies follow definitions section due condition consequently hence theorem given memoryless twc following conditions satisfied depend fixed depend fixed proof conditions know common maximizer common maximizer let argument conclude thus yields similar cva condition complex computations often inevitable checking conditions next present useful condition needs little computational effort let resp denote marginal transition probability matrix obtained resp whose columns rows indexed according fixed order symbols resp theorem given memoryless twc following conditions satisfied matrices column permutations matrices column permutations since proof similar second part proof theorem next section details omitted iscussion xamples comparison conditions already noted relationship propositions unclear examples satisfy shannon condition cva condition seem hard construct section show theorems fact generalize shannon cva 
results respectively see suffices show shannon cva conditions imply conditions theorems respectively theorem twc satisfying shannon symmetry condition proposition must satisfy conditions theorem proof twc satisfying condition proposition optimal input probability distribution achieves capacity form result implies condition theorem satisfied common maximizer exists given prove condition also satisfied consider two marginal matrices fixed show matrices column permutations hence former claim true obtained marginalizing sides follows definition transposition second claim verified direct computation result straightforwardly hence details omitted remark example next subsection demonstrates twc satisfies conditions theorem may satisfy shannon symmetry condition proposition since common maximizer necessarily uniform input distribution hence theorem general result proposition theorem twc satisfying cva condition proposition must satisfy conditions theorem proof suppose condition proposition satisfied prove theorem first claim given arbitrary pairs consider two probability distributions otherwise otherwise noting pyj pyj define since depend fact maximizer note may unique maximizer choice works purposes cva condition exists fixed depend next show condition theorem holds constructing common maximizer cva condition let arg maxpx arg maxpx due definitions respectively follows cva condition claim proved since holds since depend maximizer set since max thus since obtain achieves value consequently common maximizer thus condition theorem satisfied moreover since common maximizer provided cva condition condition theorem automatically holds remark example shows twc satisfies conditions theorem necessarily satisfy condition proposition conditions allow depend given hence theorem general proposition examples next illustrate effectiveness conditions via two examples twc example satisfies conditions theorems capacity region rectangular twc example satisfies conditions theorem capacity region however neither constructed twcs satisfy shannon cva conditions example consider twc corresponding channel marginal distributions given twc shannon symmetry condition proposition hold since permutations result furthermore since denotes binary entropy function depends given thus cva condition proposition hold either however theorem shannon inner outer bounds coincide since resp obtained permuting columns resp since conditions theorem imply conditions theorem conditions theorem imply conditions theorem conditions theorems also satisfied moreover optimal input distribution twc obtained searching common maximizer two channels via algorithm yielding thus capacity region achieved input distribution finally note twc also satisfies conditions theorem first condition already implied conditions theorem verify second condition consider fig capacity region twc example using arguments example one easily see twc satisfies neither shannon cva conditions however satisfies conditions theorem since common maximizer exists channel users condition trivially holds verify channel also satisfies conditions theorem argument previous example used finally considering input distributions form capacity region channel determined shown fig onclusions paper four conditions coincidence shannon capacity inner outer bounds derived invariance conditions shown generalize existing results thus enlarging class twcs whose capacity region exactly determined numerical examples illustrate applications new conditions situations prior results apply eferences shannon 
"Two-way communication channels," in Proc. 4th Berkeley Symp. Math. Statist. and Probability, 1961.
[2] J. L. Massey, "Causality, feedback and directed information," in Proc. Int. Symp. Information Theory and its Applications, Waikiki, USA, 1990.
[3] G. Kramer, Directed Information for Channels with Feedback, Ph.D. dissertation, Swiss Federal Institute of Technology, Zurich, 1998.
[4] T. S. Han, "A general coding scheme for the two-way channel," IEEE Trans. Inf. Theory, vol. IT-30, 1984.
[5] Z. Cheng and N. Devroye, "Two-way networks: when adaptation is useless," IEEE Trans. Inf. Theory, vol. 60, Mar. 2014.
[6] L. Song, F. Alajaji, and T. Linder, "Adaptation is useless for two discrete additive-noise two-way channels," in Proc. IEEE Int. Symp. Inf. Theory, Barcelona, Spain, Jul. 2016.
[7] A. Chaaban, L. R. Varshney, and M.-S. Alouini, "The capacity of injective semi-deterministic two-way channels," in Proc. IEEE Int. Symp. Inf. Theory, Aachen, Germany, Jun. 2017.
[8] J. P. M. Schalkwijk, "The binary multiplying channel: a coding scheme that operates beyond Shannon's inner bound region," IEEE Trans. Inf. Theory, vol. IT-28, 1982.
[9] J. P. M. Schalkwijk, "On an extension of an achievable rate region for the binary multiplying channel," IEEE Trans. Inf. Theory, vol. IT-29, May 1983.
[10] Z. Zhang, T. Berger, and J. P. M. Schalkwijk, "New outer bounds to capacity regions of two-way channels," IEEE Trans. Inf. Theory, vol. IT-32, May 1986.
[11] A. P. Hekstra and F. M. J. Willems, "Dependence balance bounds for single-output two-way channels," IEEE Trans. Inf. Theory, vol. 35, 1989.

Given the marginals, together with the substitutions, we obtain that they thus do not depend on the conditioning input; therefore the second condition of the theorem holds. For the example, consider the TWC with the two channel marginal distributions given above.
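The example above mentions finding the optimal input distribution by searching for a common maximizer of the two one-way channels via an algorithm; the following is a minimal Blahut-Arimoto sketch of that search. The channel matrices are hypothetical stand-ins, chosen to be column permutations of each other, not the paper's tables.

```python
import numpy as np

def blahut_arimoto(W, iters=500):
    """Capacity (bits) and maximizing input of a one-way channel W[x, y] = p(y|x)."""
    m = W.shape[0]
    p = np.full(m, 1.0 / m)
    for _ in range(iters):
        q = p @ W                         # current output distribution
        # exp of D(W(.|x) || q), the standard Blahut-Arimoto update weight
        d = np.exp(np.sum(W * np.log(np.where(W > 0, W / q, 1.0)), axis=1))
        p = p * d / np.sum(p * d)
    q = p @ W
    cap = np.sum(p * np.sum(W * np.log2(np.where(W > 0, W / q, 1.0)), axis=1))
    return cap, p

# Marginal channels p(y2 | x1, x2): one row-stochastic matrix per fixed x2.
W_x2 = [np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.7, 0.2]]),
        np.array([[0.2, 0.7, 0.1],
                  [0.7, 0.1, 0.2]])]

caps, maximizers = zip(*(blahut_arimoto(W) for W in W_x2))
print(np.allclose(caps[0], caps[1]))                          # invariant capacity
print(np.allclose(maximizers[0], maximizers[1], atol=1e-3))   # common maximizer
```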
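The easily verified column-permutation condition on the marginal transition matrices (the last theorem of Section III) can also be checked mechanically. A sketch with made-up matrices; relabeling the outputs of a channel is exactly a column permutation:

```python
import numpy as np

def is_column_permutation(A, B):
    """True iff the columns of B are a permutation of the columns of A."""
    cols = lambda M: sorted(map(tuple, np.round(M, 9).T))
    return A.shape == B.shape and cols(A) == cols(B)

A = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.7, 0.2]])
B = A[:, [1, 0, 2]]                  # same channel with outputs relabeled
print(is_column_permutation(A, B))   # True
```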
| 7 |
On the κ-reducibility of pseudovarieties of the form V ∗ D

J. C. Costa, C. Nogueira, M. L. Teixeira

February

Abstract. This paper deals with the reducibility property, relative to graph equation systems, of semidirect products of the form V ∗ D, where D denotes the pseudovariety of definite semigroups. We show that if the pseudovariety V is reducible with respect to the canonical signature κ, consisting of the multiplication and the (ω − 1)-power, then V ∗ D is also reducible with respect to κ.

Keywords: pseudovariety, definite semigroup, semidirect product, implicit signature, graph equations, reducibility.

1. Introduction

A semigroup (resp. monoid) pseudovariety is a class of finite semigroups (resp. monoids) closed under taking subsemigroups (resp. submonoids), homomorphic images and finite direct products. It is said to be decidable if there is an algorithm to test membership of a finite semigroup (resp. monoid) in the pseudovariety. The semidirect product of pseudovarieties has been getting much attention, mainly due to the Krohn-Rhodes decomposition theorem. In turn, pseudovarieties of the form V ∗ D, where D is the pseudovariety of finite semigroups whose idempotents are right zeros, are among the most studied semidirect products, namely with V a pseudovariety of monoids. It is known that V ∗ D is contained in LV, the pseudovariety of semigroups which locally belong to V in the sense of Tilson; in particular, the equalities Sl ∗ D = LSl and G ∗ D = LG hold, where Sl and G denote the pseudovarieties of semilattices and of groups.

[J. C. Costa and M. L. Teixeira: CMAT, Dep. de Matemática, Universidade do Minho, Campus de Gualtar, Braga, Portugal (jcosta, mlurdes). C. Nogueira: CMAT, Escola Superior de Tecnologia e Gestão, Instituto Politécnico de Leiria, Campus do Morro do Lena, Alto do Vieiro, Leiria, Portugal.]
ultimately periodic case word named periodic periodic word primitive word called root length said period pseudovariety semigroups denote relatively free semigroup generated set semigroup function unique continuous homomorphism extending elements called pseudowords implicit operations pseudovariety called subsemigroup generated finite case effectively computable recall pseudovariety finite semigroups identified free semigroup elements called infinite pseudowords pseudoidentity formal equality pseudowords say satisfies pseudoidentity write every continuous homomorphism semigroup equivalent saying natural projection pseudoidentities positive integer let pseudovariety finite semigroups satisfying identity denote set words length set words length notice may identified semigroup whose support set whose multiplication given denotes longest suffix length given finite word pseudovarieties moreover isomorphic semigroup pseudoword denote unique smallest word simetrically denote smallest word costa nogueira teixeira dual pseudovariety defined identity let function sends word sequence factors length order occur still denote see lemma unique continuous extension function homomorphism meaning verifies conditions every every throughout paper denotes trivial pseudovariety semigroups pseudowords known theorem implicit signatures implicit signature mean set pseudowords containing multiplication particular represent implicit signature usually called canonical signature every profinite semigroup natural structure via natural interpretation pseudowords profinite semigroups generated denoted freely generated variety generated pseudovariety elements called directed multi graph vertex set edge set edges associate system equations form let finite semigroup continuous homomorphism respecting choice generators evaluation mapping say mapping respect furthermore implicit signature called pseudovariety said relatively system existence respect pair entails existence respect pair say relatively finite graphs let given trivial pseudovariety purpose paper prove pseudovariety fix finite graph finite semigroup consider system respect pair evaluation mapping continuous homomorphism respecting choice generators construct respect pair pseudovarieties form initial considerations suppose since supposed system respect must particular homomorphism arbitrarily fixed may happen equality holds case would obliged define since want describe algorithm define work given graph solution construct solution verifying following condition suppose next vertex suppose arbitrary graph could include instance edge labeling could since subpseudovariety respect hence condition want preserve finite labels would follow case thus observation suggests preserve projection labelings vertices generally construct way following condition holds let max maximum length finite labels elements able make reductions graph solution described section want verify extra condition integer specified later section simplifications solution begin section reducing case vertices labeled infinite pseudowords suppose first edge drop edge consider restrictions respectively graph system respect pair assume respect verifying condition let extension obtained letting respect induction number edges labeled finite words beginning vertices also labeled finite words may therefore assume edges remove vertices labeled finite words beginning edge thus obtaining graph costa nogueira teixeira build letting coincide letting vertex may assume vertices labeled finite words beginning edge suppose 
next edge notice since infinite pseudoword written infinite pseudowords drop edge vertex case edge beginning let new vertex new edge thus obtaining new graph let labelings defined follows coincide respectively system respect pair assume respect verifying conditions particular since chosen greater let extension obtained letting case one easily verify respect induction number edges beginning vertices labeled finite words may therefore assume vertices labeled infinite pseudowords suppose last edge labeled finite word denote case drop edge graph let add new vertex new edge graph thus obtained let labelings defined follows coincide respectively hence system respect pair suppose exists respect verifying condition let extension obtained letting solution respect induction number edges labeled finite words may assume edge labeled finite word fact labeled letter alphabet borders solution main objective section define certain class finite words called borders solution since equations deal form borders serve signalize transition vertex edge vertex denote projection let say two words confinal pseudovarieties form common prefix words one easily verifies relation defined confinal equivalence fix word words vertex moreover ultimately periodic choose form lyndon word fix prefix word length said respectively root period solution without loss generality assume least one root otherwise could easily modify graph solution order include one fix integers used construction depend mapping semigroup definition constants let exponent one recalls least integer sns idempotent every element finite semigroup lcm root max integer word factor idempotent notice root uns idempotent positive integer denote set periodic word element said periodic root period words define gap positive integer min notice proposition consider constant introduced definition exists integers following conditions hold costa nogueira teixeira distinct elements element proof suppose every integer elements hence exist strictly increasing sequence positive integers integer ymi ymi constant equal moreover since graph finite may assume ymi tmi ymi tmi every follows word hence confinal words whence therefore every length suffixes word word proves already notice meaning periodic word shows completes proof proposition fix two integers definition constants let integer multiple greater equal integer proposition notice elements set called borders solution remark borders finite words length proposition two distinct occurrences borders finite word either occurrences gap size least periodic border case power root since multiple period getting subpseudovariety respect given pseudovariety assumed corollary therefore respect pair moreover observed remark one constrain values respect properties tested finite semigroup since prefixes suffixes length tested finite semigroup may assume prefixes suffixes length denote notice simplifications introduced section finite word edge letter otherwise pseudovarieties form length words particular condition holds every edge finite word hand lemma stated edges extended easily vertices assumed infinite pseudoword every infinite thus particular infinite pseudoword vertices notice vertex exists border finite word suffix hand definitions infinite word basic transformations objective section introduce basic steps allow transform process construction close one used handle systems pointlike equations procedures supported basic transformations form replace words length procedures differ way indices determined pointlike case condition basic 
transformation comply minimum value word preserved present case basic transformations preserve value well equations impose extra restriction required pointlike equations indeed need verify particular somewhat informally word occurrence overlapping factors pseudoword introduction factor basic transformation done either simultaneously borders solution introduced help deal extra restriction informally speaking borders used detect passage labeling vertex labeling edge avoid introduction affect labelings consider arbitrary word integer called bound factor border bound said periodic according border periodic admits bounds maximum one name last bound case last bound border called last border notice proposition choice two bounds either periodic border costa nogueira teixeira let word length notice since last bound unique bound split word two parts setting splitting point defined follows last bound otherwise case periodic last bound splitting point said periodic periodic two situations either last border last border factorization called splitting factorization definition exist integers factor verifies begin fixing maximum fix next integer word called essential factor follows notice splitting point periodic root last border uns idempotent form uns hence case let uns thus defining suppose splitting point periodic case let maximum integer idempotent word factorized denote following notice moreover also convenient introduce two derived defines two mappings extended done although formally mappings used paper different choice integers keep notation since selection process integers absolutely irrelevant purpose mappings adjustment mappings maintain properties stated next lemma presents property fundamental purposes lemma word length let two factors length word particular pseudovarieties form proof write let splitting points respectively whence prove exists word show hypothesis deduce occurrence essential factor proves assume first last bound case definition last border occurs one position left relatively hence bound last bound follows case suppose since definition condition holds trivially case suppose last bound moreover either last bound last bound circumstances whence concludes proof lemma conditions lemma define continuous monoid homomorphism extends mapping let function continuous homomorphism since composition continuous homomorphism continuous homomorphism remark word length precisely factors length essential factor ewp aip ajp word ajp replaced expression since indeed expressions represent generally one certainly replace expression form using reduction rule long possible written called reduced form fnp fnq empty word otherwise costa nogueira teixeira definition conditions describe procedure transform mapping defined function defined follows first let indeed every follows fact transforms see next vertex consider length words let mappings defined note moreover occurrence shown factorization last occurrence border hence rtv precisely therefore one eiv consider arbitrary edge suppose finite word letter also case homomorphism since want define instance suppose last also infinite pseudoword let notice indeed follows definition elaborate let vertex consider word word factors length suppose consider reduced form notice words hence unique index enm enq let note word ajr whence next lemma key result justifies definition pseudovarieties form lemma let edge infinite notation moreover proof begin recalling essential factor ewp aip ajp word ajp note also suffix idempotent prove equality suffices show know two 
cases verify case border consider factor choice prefix occurrence border hence last bound splitting point follows splitting factorization therefore one verify arbitrary one occurrence border precisely splitting factorization whence prefix reduces consider factor hence either last bound last bound situations splitting point splitting factorization therefore one deduces lemma every occurrence aip ajp essential factor ewp fact occurrence suffix since follows whence suffix means particular introduced suffix hence ajh reduced form proves moreover one deduces word suffix ajp proves case periodic border let root since fixed multiple umu prefix occurrence border one deduces lemma case assume another occurrence border hence proposition choice precisely furthermore since lyndon word positive integer word prefix notice since prefix definition word proper prefix hand occurrence shown costa nogueira teixeira factorization last occurrence thus splitting factorization therefore uns uns generally factor border occurs hence splitting point periodic uns moreover one verify prefix analogously case reduces since proper prefix allows already deduce reduced form uns thus concluding proof first part lemma two possible events case trivially verified either eliminated reduction process means splitting point word determined one occurrences border prefix case one deduces cases hence proof lemma complete notice shown proof lemma vertex periodic border root uns definition mapping vertices assures condition proof section dedicated showing respect pair verifying conditions begin noticing every indeed observed easily seen definitions let show following properties proposition conditions hold proof respect equality holds deduce holds suffices establish equality consider first vertex eiv case equality direct application proposition authors proved every pseudoword moreover definition therefore form condition holds pseudovarieties form consider next edge finite word whence holds trivially moreover since case every vertex labeled infinite pseudoword follows condition holds suppose last infinite let hand lemma hence since homomorphism ends proof proposition consider arbitrary edge achieve objectives section remains prove satisfies since satisfies hence thus eiv eiw shown proof proposition follows satisfies hand fact homomorphism one deduces suppose infinite pseudoword case whence moreover lemma therefore conditions satisfies assume finite word whence since thus hence words confinal hence follows case hand word length form splitting factorizations respectively since follows etv etw suppose case clear since ends follows therefore hand one satisfies suppose case one deduces equality periodic word let root etv uns since definition primitive word prefix prefix costa nogueira teixeira conclude case whence therefore moreover uns uns therefore using one deduces satisfies proved main theorem paper theorem result applies instance pseudovarieties since problem pseudovariety local groups already solved obtain following corollary corollary pseudovariety final remarks paper fixed attention canonical signature dealt generic class signatures verifying certain undemanding conditions theorem still valid generic signatures preferred treat instance signature keep proofs clearer little less technical references almeida finite semigroups universal algebra world scientific singapore english translation almeida finite semigroups introduction unified theory pseudovarieties semigroups algorithms automata languages coimbra world scientific almeida 
azevedo regular implicit operations mathematica almeida azevedo teixeira finitely based pseudovarieties forms pure appl algebra almeida costa teixeira semidirect product pseudovariety tameness semigroup forum almeida costa zeitoun tameness pseudovariety joins involving monatsh math almeida steinberg syntactic global semigroup theory synthesis approach algorithmic problems groups semigroups lincoln trends math boston boston pseudovarieties form almeida steinberg decidability iterated semidirect products applications complexity proc london math soc almeida zeitoun tameness locally trivial pseudovarieties comm algebra ash inevitable graphs proof type conjecture related decision procedures int algebra comput auinger steinberg extension problem partial permutations proc amer math soc costa reducibility joins involving locally trivial pseudovarieties comm algebra costa nogueira complete reducibility pseudovariety lsl int algebra comput costa nogueira teixeira word problem pseudovariety local groups submitted preprint available http costa nogueira teixeira pointlike reducibility pseudovarieties form int algebra doi appear preprint available http costa teixeira tameness pseudovariety lsl int algebra comput eilenberg automata languages machines vol academic press new york krohn rhodes algebraic theory machines prime decomposition theorem finite semigroups machines trans amer math soc lothaire algebraic combinatorics words cambridge university press rhodes undecidability automata pseudovarieties finite semigroups int algebra comput rhodes steinberg finite semigroups new approach springer monographs mathematics steinberg delay theorem pointlikes semigroup forum straubing finite semigroup varieties form pure appl algebra costa nogueira teixeira weiss graph congruences wreath products pure appl algebra tilson categories algebra essential ingredient theory monoids pure appl algebra
| 4 |
Linearly related polyominoes

Viviana Ene, Jürgen Herzog, Takayuki Hibi

Abstract. We classify the convex polyomino ideals which are linearly related or have a linear resolution. Convex stack polyominoes whose ideals are extremal Gorenstein are also classified. In addition, we characterize, in combinatorial terms, the distributive lattices whose join-meet ideals are extremal Gorenstein or have a linear resolution.

Introduction

The ideal of inner minors of a polyomino, called a polyomino ideal, is generated by certain subsets of the 2-minors of a matrix of indeterminates. These ideals were first studied by Qureshi. They include the two-sided ladder determinantal ideals of 2-minors, and a polyomino ideal may also be viewed as the join-meet ideal of a planar distributive lattice. It is a challenging problem to understand the graded free resolution of such ideals. Ene, Rauf and Qureshi succeeded in computing the regularity of such ideals. Sharpe showed that the ideal of 2-minors of a matrix of indeterminates is linearly related, which means that it has only linear relations; moreover, he described the relations explicitly, and conjectured that also the ideals of t-minors are generated by a certain type of linear relations. The conjecture was proved by Kurano in the case that the base field over which the ideal is defined contains the rational numbers. Lascoux gives an explicit free resolution of these determinantal ideals in characteristic zero. Unfortunately, the resolution in general may depend on the characteristic of the base field: indeed, Hashimoto showed that, when t and min(m, n) are suitably large for an m x n matrix, the second Betti number depends on the characteristic. On the other hand, using squarefree divisor complexes, introduced by Bruns and the second author of this paper, it follows that the theorem of this paper is independent of the characteristic.

In this paper we use squarefree divisor complexes as a main tool to study the first syzygy module of a polyomino ideal. In particular, we classify the convex polyominoes whose ideals are linearly related; see the theorem which is the main result of this paper. In the first section we recall the concept of polyomino ideals and show that the polyomino ideal of a convex polyomino has a quadratic Gröbner basis. The second section of the paper is devoted to the statement and proof of the theorem mentioned above. The proof heavily depends on the theory of squarefree divisor complexes, which allow one to compute the Betti numbers of a toric ideal. To apply this theory, one observes that the polyomino ideal of a convex polyomino
ideal inner minors ideal generated xil xkj xkl xij furthermore denote happens polyomino also called polyomino ideal example polyomino displayed figure may embedded interval coordinates generated following result shown qureshi theorem theorem let convex polyomino normal macaulay domain proof theorem based fact may viewed follows toric ideal assumptions notation introduced may assume consider homomorphism xij polynomial ring variables observed qureshi ker follows may identified edge ring bipartite graph vertex set edges interpretation mind using obtain proposition let convex polyomino quadratic basis proof use crucial fact proved toric ideal defines edge ring bipartite graph quadratic basis chord explained identifying vertices edges bipartite graph nothing sequence vertices typical sequence pairs integers following first row sequence first component second row sequence second component vertices pair sequences represents follows lemma exist integers either suppose since vertices since convex follows vertex corresponds chord cycle similarly one argues lemma let integer function exist one either proof let say since let since one let since follows let one case discussed similarly denote graded betti numbers corollary let convex polyomino proof proposition exists monomial order generated degree therefore follows corollary since see example corollary desired conclusion follows first syzygy module polyomino ideal let convex polyomino let minors generating section study relation module kernel homomorphism sei graded module generators degree generators degree seen corollary say simply linearly related generated degree let two distinct generators koszul relation belongs call koszul relation pair minimal generator main result section following theorem let convex polyomino following conditions equivalent linearly related admits koszul relation pairs let may assume smallest interval property refer elements corners shape displayed figure one following conditions hold one corners belong two corners belong opposite words missing corners corners corners iii three corners belong missing corners one may assume without loss generality referring figure following conditions must satisfied either essential tool proof theorem recall squarefree divisor complex introduced let field affine semigroup semigroup ring attached suppose unique minimal set generators consider polynomial ring variables denotes jth component integer vector choose presentation kernel homomorphism called toric ideal assign setting deg well become graded thus admits minimal case monomials degree one assign structure standard graded setting deg degree respect standard grading denoted given define squarefree divisor complex follows simplicial complex whose faces subsets uik divides denote ith reduced simplicial homology simplicial complex proposition notation assumptions introduced one tori particular dimk let subsemigroup generated subset set generators let polynomial ring variables generator furthermore let free since flat free inclusion induces complex homomorphism tensoring complex homomorphism graded maximal ideal obtain following sequence isomorphisms natural maps tors tors later applications need corollary notation assumptions introduced let subsemigroup generated subset set generators let element property whenever natural space homomorphism torsi torsi isomorphism proof let squarefree divisor complex viewed element obtain following commutative diagram tori tori vertical maps isomorphisms also lower horizontal map isomorphism simply due 
assumptions yields desired conclusion let affine semigroup generated affine subsemigroup generated subset called homological pure subsemigroup follows immediate consequence corollary obtain corollary let homologically pure subsemigroup torsi torsi injective words minimal free minimal free complex homomorphism induces injective map particular minimal set generators syzi part minimal set generators syzi moreover fix field let convex polyomino let polynomial ring variables xij polynomial ring generated monomials uij viewing semigroup ring convenient identify semigroup elements monomial represent given sets integers let subsemigroup generated elements sik tjl homologically pure subsemigroup note also combinatorially pure subsemigroup sense collection cells called collection cells induced columns rows following holds observe always domain since map identifies ideal contained generated involve variables xik following always identify subideal induced collection cells polyomino call induced polyomino induced polyomino convex consider example polyomino left side figure left lower corner induced polyomino shown right side figure induced columns rows figure obviously corollary implies corollary let induced collection cells minimal relation also minimal relation use corollary isolate step step linearly related polyominoes lemma suppose admits induced collection cells isomorphic one displayed figure koszul relation pair proof may assume using cocoa singular compute see minors form koszul relation pair thus assertion follows corollary figure corollary let convex polyomino let smallest interval property assume one vertices belong koszul relation pair hence linearly related proof may assume vertices interval belong since smallest interval containing exist therefore integers cells belong collection cells induced rows columns isomorphic one collections figure thus assertion follows lemma corollary corollary shows convex polyomino contain vertices order linearly related thus polyomino linearly related must shape indicated figure number also allowed case also case polyomino contains corner similar convention applies corners figure corners missing convex polyomino displayed figure however linearly related though shape shown figure thus must still obstructions polyomino linearly related proceed eliminating polyominoes linearly related lemma let convex polyomino let smallest interval property misses two opposite corners say misses four corners admits koszul pair hence linearly related figure possible shape figure linearly related proof let first assume belong belong collection cells induced rows columns shown figure light colored cells none present according whether none equations hold example light colored cells belong two light colored cells belong easily checked ideal displayed figure koszul relation pairs possible cases corollary next assume none four corners belong following arguments refer figure first case suppose collection cells induced columns rows polyomino displayed figure koszul relation pair verified computer thus koszul relation pair similar argument applies next assume symmetry may discuss may assume figure choose columns rows induced polyomino rows columns see figure three cases corresponding induced polyomino ideal koszul relation pair hence figure lemma let convex polyomino let smallest interval property suppose misses three corners say suppose koszul relation pair hence linearly related proof proceed proofs previous lemmata case consider collection cells induced columns rows collection cells 
depicted figure easily seen generated regular sequence length koszul relation pair case choose columns rows polyomino induced choice rows columns two opposite missing corners hence lemma koszul pair case symmetric cases induced polyomino ideal koszul relation pair hence three cases koszul relation pair figure proof theorem implication obvious implication follows corollary lemma lemma remains prove let convex polyomino satisfies one conditions iii show linearly related corollary need prove viewing semigroup ring follows one check main idea proof use corollary let siq tiq minq maxq minq maxq therefore points lie possible degenerate rectangle vertices degenerate vertices contained vertical horizontal line segment since case simplicial complex simplex let consider vertices belong rectangle induced subpolyomino therefore corollary latter equality true since linearly related next let assume vertices belong one forms iii follows three verices belong consequently analyze following cases case exactly one vertex belong without loss generality may assume implies case relation degree relation degree one polyominoes displayed figure one may check computer algebra system polyominoes displayed figure linearly related hence relation degree actually one check shapes since polyomino displayed isomorphic one hence case two vertices belong may assume missing vertices hence case relation degree relation degree one polyominoes displayed figure note polyominoes isomorphic one easily checks computer polyominoes linearly related thus case finally assume three vertices belong may assume vertices case relation degree relation degree polyomino displayed figure linearly related one may easily check computer therefore get figure figure polyomino ideals linear resolution final section classify convex polyominoes linear resolution convex stack polyominoes extremal gorenstein theorem let convex polyomino following conditions equivalent linear resolution exists positive integer isomorphic polyomino cells proof polyomino shape described ideal ideal matrix linear resolution indeed complex whose chain maps described matrices linear entries provides free resolution ideal maximal minors matrix indeterminates see example page may assume smallest interval containing may assume remaining cases easily checked computer let assume show suppose first assume corners belong polyomino induced columns rows polyomino displayed right figure ideal gorenstein ideal hence linear resolution therefore corollary ideal linear resolution well contradiction next assume one corners say missing since linear linear resolution linearly related hence shape indicated figure let numbers shown figure let polyomino induced columns rows let polyomino induced columns rows case isomorphic one displayed left figure since gorenstein ideal conclude first case linear resolution contradiction mentioned introduction polyomino ideals overlap ideals planar lattices next result show ideal lattice linear resolution polyomino described theorem methods different used paper classification ideals linear resolution first given corollary let finite distributive lattice element element unique minimal element possesses property let set elements regard poset partially ordered set inherits ordering subset called order ideal together imply particular empty set order ideal let denote set order ideals ordered inclusion follows distributive lattice moreover birkhoff fundamental structure theorem finite distributive lattices proposition guarantees coincides let finite distributive 
lattice polynomial ring variables ideal ideal generated binomials incomparable known prime ideal quotient ring normal moreover gorenstein pure finite poset pure every maximal chain totally ordered subset cardinality let finite poset linear extension permutation descent index let denote set descents sequence number permutations thus particular follows hilbert series form say finite distributive lattice simple elements element satisfies either words simple possesses element every satisfies either theorem let simple finite distributive lattice ideal linear resolution form shown figure figure proof since generated degree follows linear resolution regularity equal may assume infinite since may divide regular sequence linear forms obtain reg reg whose coincides reg since reg max see example exercise follows linear resolution form integer clearly finite poset figure linear extension thus linear resolution conversely suppose linear resolution words one linear extension clutter clutter subset property two elements belonging comparable since simple follows contains clutter hence dilworth theorem says chains let let minimal elements let maximal elements since simple follows thus linear extension thus linear resolution hence either desired gorenstein ideal never linear resolution unless principal ideal however resolution much linear possible called extremal gorenstein since polyomino ideals generated degree restrict following definition extremal gorenstein ideals graded ideals generated degree let polynomial ring field graded ideal principal generated degree following say extremal gorenstein ideal gorenstein shifts graded minimal free resolution projective dimension similar arguments proof theorem see extremal gorenstein ideal gorenstein ideal reg case form integer following theorem classify convex stack polyominoes extremal gorenstein convex stack polyominoes considered paper qureshi characterizes convex stack polyominoes gorenstein let polyomino may assume smallest interval containing called stack polyomino column convex cells belong figure displays stack polyominoes right polyomino convex left number cells bottom row called width number cells maximal column called height figure stack polyominoes let convex stack polyomino removing first bottom rows cells obtain convex stack polyomino denote also set let height polymino let numbers property width pki width furthermore set example convex stack polyomino figure terminology notation introduced characterization gorenstein convex stack polyominoes given following theorem theorem qureshi let convex stack polyomino height following conditions equivalent gorenstein ideal width pki height pki according theorem convex stack polyomino displayed figure gorenstein width height example gorenstein stack polyomino shown figure figure gorenstein stack polyomino combining theorem results section obtain theorem let convex stack polyomino extremal gorenstein isomorphic one polyominoes figure figure extremal convex stack polyominoes proof easily checked extremal gorenstein isomorphic one two polyominoes shown figure conversely assume extremal gorenstein without loss generality may assume smallest interval containing theorem implies suppose first theorem ene rauf qureshi follows regularity equal since extremal gorenstein regularity equal thus next assume properly contained since linearly related corollary together theorem imply top row consists one cell let polyomino induced rows columns polyomino applying theorem follows reg corollary implies reg reg since reg deduce 
ideal betti numbers since induced polyomino since extremal gorenstein corollary yields contradiction isomorphism exist precisely gorenstein polyominoes displayed figure extremal gorenstein easily checked cocoa singular gorenstein polyomino isomorphic one two polyominoes shown figure yields desired conclusion figure gorenstein polyominoes width following theorem shows besides two polyominoes listed theorem whose polyomino ideal extremal gorenstein exist precisely two ideals property theorem let simple finite distributive lattice joinmeet ideal extremal gorenstein ideal one following displayed figure figure proof suppose simple gorenstein follows pure element every satisfies either since clutter contained suppose clutter contained none elements belonging minimal element since simple exist least two minimal elements hence exists linear extension contradiction thus least one elements belonging minimal element similarly least one elements belonging maximal element let element minimal maximal since pure one let minimal element maximal element let maximal element minimal element neither belongs either minimal maximal exists linear extension contradiction hence neither minimal maximal since pure exist clutter hence exists linear extension contradiction consequently contains clutter must coincide moreover clutter extremal gorenstein ideal suppose contains clutter let chain contained let minimal elements maximal elements since simple since pure follows exist maximal chains one linear extension contradiction hence cardinality maximal chains however cardinality maximal chains equal thus extremal gorenstein ideal cardinality maximal chains equal posets displayed figure ideal extremal gorenstein ideal figure references garsia stanley introduction partially ordered sets ordered sets rival springer netherlands bruns herzog semigroup rings simplicial complexes pure appl algebra cocoateam cocoa system computations commutative algebra available http decker greuel pfister singular computer algebra system polynomial computations http dilworth decomposition theorem partially ordered sets annals math eisenbud commutative algebra view toward algebraic geometry graduate texts mathematics springer ene qureshi rauf regularity ideals distributive lattices electron combin hashimoto determinantal ideals without minimal free resolutions nagoya math herzog hibi monomial ideals graduate texts mathematics springer herzog srinivasan note subadditivity problem maximal shifts free resolutions appear msri arxiv hibi algebraic combinatorics convex polytopes carslaw publications glebe australia hibi distributive lattices affine semigroup rings algebras straightening laws commutative algebra combinatorics nagata matsumura eds adv stud pure math amsterdam kurano first syzygies determinantal ideals algebra lascoux syzygies des determinantales adv math ohsugi herzog hibi combinatorial pure subrings osaka math ohsugi hibi koszul bipartite graphs adv appl math qureshi ideals generated collections cells stack polyominoes algebra schenzel uber die freien extremaler ringe algebra sharpe certain polynomial ideals defined matrices quart math oxford sharpe syzygies certain ideals defined matrices proc london math soc viviana ene faculty mathematics computer science ovidius university mamaia constanta romania simion stoilow institute mathematics romanian academy research group project bucharest romania address vivian herzog fachbereich mathematik campus essen essen germany address takayuki hibi department pure applied mathematics graduate 
school information science technology osaka university toyonaka osaka japan address hibi
| 0 |
class msr codes clustered distributed storage sohn beongjun choi jaekyun moon jan kaist school electrical engineering email jmoon distributed storage models real data centers repair bandwidths different paper msr codes achieving capacity clustered distributed storage designed focus given two cases ratio available repair bandwidths total number distributed nodes number contact nodes data retrieval former represents scenario communication allowed latter corresponds case minimum bandwidth possible minimum storage overhead constraint case two types locally repairable codes proven achieve msr point explicit msr coding scheme suggested situation specific condition ntroduction distributed storage systems dsss deployed various enterprises reliably store massive amounts data frequent storage node failure events failed node regenerated repaired collecting information survived nodes regeneration process guided predefined network coding scheme setting dimakis obtained expression maximum reliably storable file size denoted capacity function given system parameters node capacity bandwidth required repairing failed node capacity analysis underscores following key messages first exists network coding scheme utilizes resources enables reliable storage file size second feasible find network coding scheme reliably store file larger given available resources subsequent research efforts authors proposed explicit network coding schemes achieve capacity dsss coding schemes optimal sense efficiently utilizing resources maintaining reliable storage systems focus clustered nature distributed storage recent research direction taken several researchers according recent papers storage nodes dispersed multiple racks real data centers seen forming clusters particular authors present paper proposed system model clustered dsss reflects difference bandwidths system model file stored coded distributed storage nodes evenly dispersed clusters node storage capacity data collector contacts arbitrary existing nodes retrieve file since nodes dispersed multiple clusters regeneration process involves utilization repair bandwidths denoted respectively proposed system model authors obtained expression maximum reliably storable file size capacity clustered dss furthermore shown network coding exists achieve capacity clustered dsss however explicit constructions network coding schemes clustered dsss yet found paper proposes network coding scheme achieves capacity clustered dss minimum required node storage overhead words suggested code shown msr code clustered dss paper focuses two important cases represents ratio repair bandwidths former represents system crosscluster communication possible latter corresponds minimum value achieve minimum storage overhead file size shown appropriate application locally repairable codes suggested achieves msr point general settings application rule depending parameter setting case explicit coding scheme suggested proven msr code conditions previous works code construction dss clustered storage nodes limited extent works suggested coding scheme reduce repair bandwidth schemes proven msr code achieves capacity clustered dsss minimum storage overhead authors provided explicit coding scheme reduces repair bandwidth clustered dss condition failed node exactly regenerated contacting one clusters however approach different present paper sense consider scenario unequal repair bandwidths moreover coding scheme proposed shown regenerating mbr code limited parameter setting present paper deals msr code msr code 
clustered dsss suggested paper data retrieval condition different present paper authors considered scenario data collected contacting arbitrary clusters data retrieved contacting arbitrary nodes present paper thus two models identical condition cluster one node difference data retrieval conditions results different capacity values different msr points short code code paper achieves different msr points data collector retrieves original file contacting arbitrary nodes property called mds property clustered distributed storage system parameters called dss dss given parameters capacity defined maximum data reliably stored expression obtained theorem aiming reliably storing file set pair values said feasible holds according corollaries set feasible points shows optimal relationship illustrated fig optimal curve point minimum node capacity called msr point explicit regenerating codes achieve msr point called msr codes according theorem node capacity msr point satisfies mbr point fig optimal relationship clustered distributed storage modeled given file symbols encoded distributed nodes node capacity storage nodes evenly distributed clusters cluster contains nodes failed node regenerated obtaining information survived nodes nodes cluster help sending nodes clusters help sending thus repairing node requires overall repair bandwidth node capacity repair bandwidth msr point backgrounds otations cluster cluster cluster fig representation clustered distributed storage divides similarly write divide given define mod qni vectors use lower case letters given vector transpose denoted natural numbers set represented matrix entry ith row column denoted also express nodes clustered dss using representation structure illustrated fig represents node lth row column finally recall definitions locally repairable codes lrcs defined represents code length encoded information symbols every coded symbol regenerated accessing symbols defined takes file size encodes coded symbols symbol composed bits moreover coded symbol regenerated contacting symbols code minimum distance note minimum storage overhead satisfy mds property stated thus scenario minimum communication minimum storage overhead constraint imposed introduce useful notations used paper positive integer represents set natural numbers use notation iii msr ode esign section msr codes designed setting communication allowed node repair process first system parameters msr point examined second two types locally repairable codes lrcs suggested proven achieve msr point settings respectively parameter setting msr point consider msr point reliably store file following property specifies system parameters case proposition consider clustered dss reliably store file msr point defined point satisfies mds mds proof see appendix mds precoding code construction examine construct msr code case following theorem shows locally repairable code constructed locality valid msr code theorem msr code construction let explicitly constructed locality consider allocating coded symbols dss nodes within repair group located cluster code msr code clustered dss conditions proof see appendix fig illustrates example msr code case constructed using lrc clustered dss scenario parameters set cluster cluster allocation coded symbols nodes fig msr code construction rule follows instruction concept repair group interpreted cluster present paper authors present paper proves code also achieves msr point clustered dss case code construction thus storage node contains symbols clustered dss aims 
reliably store file size code two properties exact regeneration data reconstruction failed node exactly regenerated contacting nodes cluster contacting nodes recover original file size first property obtained fact form mds code second property obtained follows contacting arbitrary nodes three distinct coded symbols superscript one three distinct coded symbols superscript two obtained fig information suffice recover similarly information suffice recover completes proof second property note coding scheme already suggested construct msr code given system parameters satisfy theorem shows optimal designed valid msr code holds theorem msr code construction let constructed consider allocating coded symbols dss nodes within repair group located cluster msr code dss conditions proof see appendix fig illustrates example code construction case without loss generality consider case parallel application code multiple times achieves msr point general set code mds mds cluster cluster cluster mds cluster mds encoding structure proposition msr point point satisfies proof see appendix allocation coded symbols nodes fig msr code case encoding structure follows instruction constructed lrc paper utilizes lrc construct msr code clustered dss case positivie integers clustered dss code system parameters proposition code fig satisfies exact regeneration data reconstruction properties failed node exactly regenerated contacting nodes cluster contacting nodes recover original file size note fig set coded symbols generated code statement also holds proves first property second property directly result states minimum distance lrc note lrc already suggested authors present paper proves applying code achieves msr point dss case msr ode esign propose msr code clustered dsss recall minimum value allows minimum storage first obtain system parameters msr point second design coding scheme shown msr code conditions parameter setting msr point following property specifies system parameters case without loss generality set repair bandwidth code construction construct msr code constraints since consider case system parameters proposition set construction suppose given source symbols moreover let encoding matrix matrix encoding matrix node stores node stores mtj remark code generated construction satisfies followings every node cluster contains message symbols every node cluster contains parity symbols note remark consistent states construction following theorem specifies msr construction rule theorem msr code construction square invertible code designed construction msr code proof see appendix following result suggests explicit construction msr code using finite field corollary applying construction encoding matrix set cauchy matrix achieves msr point finite field size suffices design proof proof directly theorem fact cauchy matrix full rank stated moreover cauchy matrix size cluster cluster cluster cluster parities inaccessible messages accessible fig repairing failed node proposed msr code example fig msr example constructed using finite field size according example msr code designed construction illustrated fig case coding scheme utilizes cauchy matrix using finite field primitive polynomial element denoted decimal number abc primitive element example denoted generator matrix system parameters proposition holds example fig show proposed coding scheme satisfies two properties exact regeneration failed node recovery message symbols contacting nodes exact regeneration fig illustrates regeneration process suppose node containing 
message fails node transmits symbols nodes transmit symbol example respectively received symbols matrix obtain thus contents failed node regenerated matrix inversion note exact regeneration property holds irrespective contents transmitted since encoding matrix cauchy matrix submatrices invertible data recovery first contacts two systematic nodes proof trivial second contacting two parity nodes recover original message since invertible third suppose contacts one systematic node one parity node example retrieve message symbols parity symbols using retrieved symbols information encoding matrix additionally obtains thus obtains completes data recovery property suggested code onclusion class msr codes clustered distributed storage modeled constructed proposed coding schemes applied practical data centers multiple racks available bandwidth limited compared bandwidth two important cases considered represents ratio available repair bandwidth constraint zero repair bandwidth appropriate application two locally repairable codes suggested shown achieve msr point clustered distributed storage moreover explicit msr coding scheme suggested system parameters satisfy proposed coding scheme implemented finite field using cauchy generator matrix ppendix roof heorem focus code explicit constructed section code parameters repair locality minimum distance parameters physical meanings identical present paper setting code node capacity last equality holds condition definition first prove node failure exactly regenerated using system parameters according description section node contained unique corresponding repair group size failed node exactly repaired contacting nodes repair group implies failed node need contact repair groups exact regeneration process setting repair group cluster note cluster contains nodes achieve moreover section illustrates exact regeneration failed node possible contacting entire symbols contained nodes repair group applying xor operation implies result combined conclude code satisfies exact regeneration failed node using parameters prove contacting nodes suffices recover original data clustered dss code applied note minimum distance thus information nodes suffices pick correct codeword completes proof theorem ppendix roof heorem first prove code minimum distance implies original file size recovered contacting arbitrary nodes second prove failed node exactly regenerated setting recall constructed following property stated theorem lemma theorem code constructed locality optimal minimum distance note consider code optimal since divides lemma applied result lemma implies minimum distance since consider case qni inserting cluster cluster fig code construction case second last equality holds since thus proves contacting arbitrary nodes suffices recover original source file need prove failed node exactly regenerated setting system parameters specified proposition according rule illustrated construction code shown fig first source symbols store reliably applying code source symbols obtain partition symbols groups group contains symbols next group symbols encoded code result group symbols finally store symbol yni node allocation rule symbols group located cluster assume node lth cluster containing yni symbol fails fig know symbols yni stored lth cluster decode code group thus contents yni recovered retrieving symbols nodes lth cluster cluster failed node proves ability exactly regenerating arbitrary failed node regeneration process satisfies moreover note code fig source symbols since 
parameters obtained consistent proposition confirm code valid msr point conditions ppendix roof heorem recall code designed construction allocates systematic nodes cluster parity nodes cluster illustrated fig moreover recall system parameters cluster cluster fig code construction dss proposition definition first show exact regeneration systematic nodes first cluster possible using dss construction use concept projection vector illustrate repair process let lth projection vector assigned repairing similarly let projection vector assigned repairing assume node containing fails node transmits symbols mtj node transmits symbol simplicity set kdimensional standard basis vector ith coordinate elsewhere means node transmits symbols contains transmits last symbol contains symbol thus newcomer node regenerating systematic node obtains following information show newcomer node regenerates using information recall parity symbols message symbols related following equations obtained among parity symbols parity symbols received newcomer node expressed subtracting constant known values results matrix generated removing rows since aware message symbols entries matrix glk note matrix obtained removing columns matrix since every square invertible obtain completes proof exactly regenerating failed systematic node second prove exact regeneration parity nodes second cluster possible let lth projection vector assigned repairing similarly let projection vector assigned repairing assume parity node fails contains node transmits symbols ptj node transmits symbol mtj simplicity set means node transmits symbols contains transmits last symbol contains symbol thus newcomer node regenerating parity node obtains following information show newcomer node regenerates using information among parity symbols parity symbols received newcomer node expressed defined construction note matrix generated removing lth rows since know values message symbols entries matrix subtracting constant known values results generated removing lth columns similarly generated removing lth rows thus invertible matrix obtain contains since contains every message symbol regenerate using completes proof exactly regenerating failed parity node finally prove message symbols obtained contacting arbitrary nodes proof use slightly modified notation representing message parity symbols message symbol parity symbol denoted respectively expressed suppose data collector contacts nodes cluster nodes cluster obtains parity symbols message symbols since exists total message symbols number message symbols obtain let parity symbols obtained pik message symbols obtained mjk known parities expressed pik matrix generated taking lth columns since invertible obtain unknown message symbols mji completes proof defined expressed last equality thus holds case similarly using confirm holds case inserting obtain since obtain completes proof proof proposition consider case without losing generality implies according definition observe expressions corollary msr point illustrated corollary msr point given else proof proposition ppendix roof ropositions matrix obtained taking lth rows since know message symbols elements subtracting known constant values results mjk qni qni defined paper review explicit form definitions shows looks like proof lemma definition setting combining result note expressed last equality holds due combining obtain result using completes proof eferences dimakis godfrey wainwright ramchandran network coding distributed storage systems ieee transactions 
information theory vol rashmi shah kumar ramchandran explicit construction optimal exact regenerating codes distributed storage communication control computing allerton annual allerton conference ieee cadambe jafar maleki ramchandran suh asymptotic interference alignment optimal repair mds codes distributed storage ieee transactions information theory vol ernvall codes mbr msr points exact repair property ieee transactions information theory vol sohn choi yoon moon capacity clustered distributed storage ieee international conference communications icc may sohn choi yoon moon capacity clustered distributed storage corr vol online available http prakash abdrashitov storage repairbandwidth clustered storage systems arxiv preprint zhang lee zhang zhou feng optimal repair layering data centers theory practice arxiv preprint papailiopoulos dimakis locally repairable codes ieee transactions information theory vol tamo papailiopoulos dimakis optimal locally repairable codes connections matroid theory ieee transactions information theory vol tebbi chan sung code design framework distributed storage information theory workshop itw ieee ieee sahraei gastpar increasing availability distributed storage systems via clustering arxiv preprint bernstein matrix mathematics theory facts formulas second edition princeton university press online available http shah rashmi kumar ramchandran explicit codes minimizing repair bandwidth distributed storage information theory itw cairo ieee information theory workshop ieee suh ramchandran mds code construction using interference alignment ieee transactions information theory vol
| 7 |
nov designing pattern matching algorithms gilles didier laurent tichit cnrs centrale marseille marseille france march abstract given pattern text speed pattern matching algorithm regard ratio length number text accesses performed search first propose general method computing limit expected speed pattern matching algorithms regard iid texts next show determine greatest speed achieved among large class algorithms altogether algorithm running speed since complexity determination makes impossible deal patterns length greater propose polynomial heuristic finally approaches compared pattern matching algorithms theoretical practical point view terms limit expected speed iid texts terms observed average speed real data cases algorithms outperformed introduction focus algorithms solving online string matching problem consists reporting occurrence positions pattern text online meaning text allowed one oldest problems addressed computer science extensively studied refer comprehensive list evaluation pattern matching algorithms developed far authors count algorithms already proposed among half published last ten years fact sounds quite paradoxical since algorithm optimal terms worst case analysis dates back possible explanation wide gap worst case complexity algorithms computation times real data instance pattern matching algorithms worst case complexities perform much better english texts basically average case analysis way suited assess relevance pattern matching algorithm practical point view average case analysis pattern matching algorithms notably already carried various points view provide general method studying limit average behavior pattern algorithm iid texts precisely following consider limit expectation ratio text length number text accesses performed algorithm searching pattern iid texts limit expectation called asymptotic speed algorithm regard iid model computation asymptotic speed based machines structures able simulate behavior pattern matching algorithm searching pattern underlying idea seen generalization string matching automaton companion paper didier provided theoretical analysis asymptotic speed pattern matching algorithms iid texts particular showed given pattern greatest asymptotic speed among large class pattern matching algorithms achieved machine states essentially subsets positions machines called strategies provide brute force algorithm computing fastest strategy given pattern frequencies iid model algorithm based original structure associated pattern called position lattice gives full representation overlap relations subsets positions since brute force algorithm applied patterns length greater high propose polynomial polynomial order may chosen user fastest approaches finally compared several pattern matching algorithms theoretical point view computing limit expected speeds regard various patterns iid models practical point view computing average speeds two sources english text dna sequence cases fastest large enough approaches outperform algorithms software data used perform tests available https rest paper organized follows section presents notations recalls concepts results followed two sections introduce central objects work strategies position lattice pattern particular provide algorithm computing position lattice given pattern section shows use position lattice pattern obtain fastest strategy regard pattern iid model section provide polynomial heuristic allowing compute fast strategies section presents results various comparisons pattern matching algorithms time 
possible fastest strategy results discussed last section notations definitions notations general definition finite sets power set cardinal alphabet finite set elements called letters symbols word text pattern finite sequence symbols put length word words indexed write subword starting position ending position concatenate two words word length note set words length set finite words unless otherwise specified texts patterns considered fixed alphabet pattern matching algorithm takes pattern text inputs reports occurrence positions patterns say two pattern matching algorithms texts access exactly positions input matching machines generic algorithm patterns machine finite set states initial state subset states function transition state function shift function convention set states matching machine always contains sink state symbols order matching machine defined machines carry information deterministic arithmetic automatons defined generic algorithm takes machine text inputs outputs positions algorithm input machine text output occurrence positions print occurrence position algorithm generic algorithm component machine makes sense regard way used generic algorithm states lead report occurrence pattern current position pattern matches corresponding position text line algorithm condition definition machines technical used machine valid texts execution generic algorithm input outputs occurrence positions since one check positions pattern concluding occurs somewhere text order valid machine least claim pattern matching algorithms developed far patterns exists machine texts generic algorithm pattern matching algorithm access exactly positions inputs respectively instance figure displays machine accesses positions naive algorithm searching abb expansion standard matching machines present transformation matching machines split states according text positions read current position execution generic algorithm main point transformation average complexity matching machines obtained may computed algebraic methods sections set subsets verifying exists one pair first entry put set comprising first entries figure machine naive algorithm check displayed states edges states labelled symbols transition associated match pairs namely subset obtained subtracting first entries pairs keeping pairs first entries full memory expansion machine machine obtained removing unreachable states defined figure full memory expansion machine figure construction iterations generic algorithm input current state position respectively positions exactly positions greater accessed far second entries corresponding elements give symbols read texts generic algorithm access positions inputs let remark full memory expansion full memory expansion matching machine equal full memory expansion state isomorphism machine standard state appears unique full memory expansion equivalently equal full memory expansion instance machine figure standard since matching machine figure full memory expansion standard states standard matching machine put second entry unique appears implemented basic algorithm computing expansion machine time size may vary lot regard matching considered machine compact contains state always leads state formally compact one following assertions holds exists symbol symbols symbols basically machine performs useless text accesses shown machine turned compact faster machine iid markov models independent identically distributed iid model aka bernoulli model fully specified probability distribution alphabet probability symbol model 
model simply referred probability text markov model given set states probability distribution initial distribution associates pair states probability followed transition probability markov model probability sequence states theorem let machine text follows iid model standard sequence states parsed generic algorithm input follows markov model proof whatever text model machine sequence states always starts probability probability state follows state execution generic algorithm equal exists symbol relative position already checked occurring otherwise independently previous states asymptotic speed let text model algorithm speed limit expectation ratio text length number text accesses performed namely putting number text accesses performed parse probability regard asymptotic speed asm lim asymptotic speed asm machines generic algorithm first input theorem asymptotic speed standard machine iid model exists given limit frequencies states markov model associated theorem otherwise computing asymptotic speed pattern matching algorithm regard pattern iid model performed following stages get machine simulates behavior algorithm looking figure transformation algorithms presented section others see github repository machines given implemented obtain expansion figure section compute limit frequencies markov model associated theorem mainly needs solve system linear equations dimension finally obtain asymptotic speed algorithm limit frequencies using equation stage computation limit frequencies time complexity number states full memory expansion smaller strategies sets define machine figure two conventions figure states symbols min min symbols figure shows two differ notably state proposition standard compact valid machine proof construction standard compact validity follows theorem proposition achieves greatest asymptotic speed among machines order otherwise proof corollary implies exists machine achieves greatest asymptotic speed among order standard compact valid states relevant may lead match without positive shift pair states let verify machine order satisfying properties isomorphic since verifies particular properties set states bijection subset let identify states corresponding element since standard compact order moreover since standard last construction min min otherwise valid min min otherwise relevant position lattices position lattice pattern putting set made subsets positions map map figure position lattice pattern abb vertices represent states abb states outgoing edge pairs outgoing edge labeled abb abb colored according goes min min particular let remark since max thus consistent definition edges pairs see figure otherwise remark position lattice contains states edges remark let state two positions two symbols considering particular case get let precw table indexed positions symbols entry precw defined max precw null otherwise instance table precabb null lemma let state position symbol length longest proper suffix prefix otherwise precw precw precw null otherwise precw null otherwise proof case immediately follow definition given remark relation defined follows sets one following properties holds min difference symmetric relation defines total order write lemma let state position symbol max proof assumption min min max construction fact implies max min max max max thus max otherwise max since necessarily min max max get max theorem algorithm computes position lattice pattern time using amount memory proof let first show algorithm determines shifts transitions state state loop lines computes 
shifts transitions next loop lines computes shifts transitions singletons last loop lines determines shifts transitions states corresponding subsets increasing cardinals length longest proper suffix also prefix last null last last null last last else else else repeat else else algorithm computation position lattice value last entry partial match table kmp algorithm computation takes time linear inside last loop way next subset computed current subset cardinal ensures lines iterations loop lines symbols last precw beginning inner loop line lemma cases transitions shifts positions symbols correctly computed end loop loop lines computes shifts transitions singleton states pairs positions symbols determining performed distinguishing two cases shifts transitions already computed formula remark gives lines distinguish two subcases according symbol considered shift transition state given lemma case otherwise remark since positive min implies thus shifts transitions state computed lines last loop lines computes shifts transitions states corresponding subsets cardinals states positions symbols corresponding shift transition computed shifts transitions state max following lemma cases algo rithm put max lemma ensures max thus shifts transitions max computed states positions shift transition given lemma case time complexity loop lines use memory needed store lattice remark fastest determining fastest proposition greatest asymptotic speed among machines order may performed computing asymptotic speed returning fastest one order enumerate let remark contained position lattice sense set states included position lattice verify symbols reciprocally map states corresponds unique function coincides finally brute force algorithm takes input pattern iid model computes position lattice enumerates maps states gets corresponding keeping states reachable function computes asymptotic speed returns greatest speed time complexity brute force algorithm first factor stands number functions second one computation asymptotic speed needs solve linear system size equal number states memory space complexity needed store position lattice current implementation brute force determination fastest unfeasible patterns length greater polynomial heuristic two points make complexity brute force algorithm given section high size position lattice exponential length pattern determining fastest strategy position lattice needs time exponential size heuristic based two independent stages one aiming overcome one two points start general idea since current position text probability mismatch occurs nth text access decreases geometrically first relative positions accessed strategy generally pattern algorithm greatest influence asymptotic speed sublattices sufficient condition sublattice contain exists least position sublattice verifying condition said complete figure displays four complete sublattices extracted position lattice abb figure let introduce additional notations sets positions prefix defined max rest positive integers sublattice sublattice contains subsets rest containing less positions subsets form construction sublattice complete contains states transitions adapted algorithm compute sublattice time amount memory space expectation interested fast way finding efficient given complete sublattice integers states sublattice expectation defined greatest shift expectation one could possibly get steps starting conditioned starting parsing text following iid model namely expectation computed following recursive formula max expectation 
complete sublattice well defined computed time number transitions sublattice using memory space figure four complete sublattices extracted position lattice abb sublattice resp leads strategy always smallest resp greatest relative position unchecked sublattice resp leads strategy top resp bottom figure finally extract setting states arg max combines two approaches order compute time polynomial length pattern given order start computing sublattice thus time order select sublattice next compute expectation states extract described computation performed time since number transition sublattice using memory space let remark order expectation priori strongly related order sublattice computed experimenting various situations observed considering order greater generally improve much performances whereas strategies obtained smaller may significantly slower returns time using memory space insist fact generally return fastest strategy even however see next section performs quite well practice evaluation shall compare approaches introduced sections selected pattern matching algorithms comparison performed first theoretical point view computing asymptotic speeds iid models second practical situations measuring average speed real data average speed regard pattern algorithm matching machine text ratio number text accesses performed algorithm search also interested extent taking account frequencies letters iid model text determining fastest kheuristic strategies actually improves asymptotic average speeds purpose compute fastest strategies uniform iid model next test efficiency terms asymptotic speed iid model terms average speeds data frequencies letters pattern matching algorithms forty years research already led development dozens algorithms selected ones evaluation naive quicksearch tvsbs algorithm shifts given badcharacter rule taking account two letters distances current position text ebom version backward oracle matching algorithm also uses bad rule hashq implements algorithm blocks length using efficient hashing techniques tests performed fjs combines ideas sunday algorithms algorithms classics last four ones chosen known efficient short patterns small alphabets situation determination fastest strategy feasible let remark order machine associated tvsbs equal thus greater fastest strategy compute transformation matching machines implemented pattern matching approaches instance algorithm algorithm based bitwise operations automaton since asymptotic average speeds two algorithms exactly whatever pattern model text point displaying results shall evaluate pattern matching algorithms presented section fastest strategy time possible ris tic ris tic ris tic ris ris aaaa aaab aaba aabb abaa abab abba abbb baaa baab baba babb bbaa bbab bbba bbbb table asymptotic speeds patterns length uniform model asymptotic speed asymptotic speeds computed texts patterns binary alphabet table displays asymptotic speeds patterns length iid texts drawn uniform distribution expected strategy computed brute force algorithm last column actually fastest speeds close algorithms outperformed approaches even patterns observe naive algorithms asymptotic speeds always smaller one expect faster since construction access positions text least following display speeds quicksearch always smaller least one preexisting algorithms full tables easily using software table displays asymptotic speeds regard patterns table iid model table shows asymptotic speeds fastest strategies computed regard uniform iid model columns starting strategies 
obtained optimized according letter probabilities model may used general purpose approaches strategies obtained model probabilities called adapted overall ris tic ris tic ist ris ist ris aaaa aaab aaba aabb abaa abab abba abbb baaa baab baba babb bbaa bbab bbba bbbb table asymptotic speeds patterns length iid model methods faster algorithms exceptions horspool faster two patterns ending rare letter aaaa baaa ebom faster searching baba fastest strategies computed regard uniform iid model asymptotic speeds smaller counterparts obtained actual probabilities text model highly unbalanced nevertheless uniform approaches still perform quite well notably better algorithms except uniform patterns considering longer patterns leads similar observations table shows asymptotic speeds obtained random patterns length outperforms others approaches fastest strategy computed length uniform slower algorithms ebom hashq uniform overall perform better algorithms though slightly slower patterns average speed data benchmark consists wigglesworthia glossinidia genome known bias nucleotide composition bible english table displays average speeds patterns randomly picked data let remark dealing real texts iid particular fastest strategy could possibly outperformed ris tic tic ris tic ris tic ris tic babbbaabab ababbbbbab aaabaaaaba bbbabbabab bbabaabbab baabbaaaaa abbbababbb baabbbabba baabbaabab bbbbababbb ris tic ris tic ris tic ist ist atat tatg aaat tccc caat aacc acta tatc gtga gatt usal hem fede ist table asymptotic speeds patterns length drawn uniform distribution iid model table average speeds patterns length picked benchmark data wigglesworthia glossinidia complete genome bible english ris tic ris tic tic ris tic ris tic ris tic tccttatgtaaaatataaatgtagcaattt aaaagaaccccggcgaggggagtgaaatag aattttcaactaatattaaaccacgttctg aaaggtccattaagtattactatcacagca agatttgcgtgatttaaaataatcatctaa ataggaaaagattggattaaactagatatg mount called mount ith israel wit esus going jerusalem able hea osee call things come upon thee syria dwelt damascus full darkness therefor syria confederate tggataaaaatttgttattaccatatctat cttctttaattatgttttctatttcttttt gttctatttgttggagatttaaaataatta tcctactttaacctctaaatgtcccttatt table average speeds patterns length picked benchmark data wigglesworthia glossinidia complete genome bible english observed benchmark data uniform adapted faster algorithms patterns whereas sometimes slightly outperformed horspool horspool almost fast approaches bible sometimes significantly outperformed wigglesworthia glossinidia genome average speeds overall greater bible dna sequence cases observe wide performance gap uniform adapted approaches though benchmark data far following uniform iid model let remark almost performances uniform adapted cases table shows averages speeds regard patterns length average speeds bible twice wigglesworthia glossinidia genome one actually expects speed greater average texts large alphabets since less likely match two symbols greater shift expectation per iteration uniform adapted outperforms algorithms speeds differ greater amount patterns length wigglesworthia glossinidia genome smaller extent bible discussion practical situations though take account letter frequencies uniform uniform fastest strategy perform generally almost well adapted counterparts greatest difference observed patterns length wigglesworthia glossinidia genome table relatively small observe notable amount difference quite extreme case asymptotic speed iid model even frequencies uniform approaches show greater 
asymptotic speeds selected algorithms good results whatever pattern text situation performances far best contrary performance ranking algorithms depends heavily patterns texts model instance horspool may perform well even almost optimally patterns texts models speed may completely plummet situations question selecting efficient order still deserves investigations basic answer could greater better take consideration higher order heuristic comes increased computational cost experiments observed asymptotic speed tends stop improving beyond certain rank instance difference average speed patterns length genome bible probably justify computational cost worth use rather searching patterns length bible much wigglesworthia glossinidia genome best order depends pattern notably length text features particular alphabet size letter frequencies certainly possible obtain efficient heuristic lower computational cost since standard situation length text much greater pattern real reason considering pattern matching algorithms linear pattern extreme case texts arbitrarily long regard patterns whatever computation time would beneficial soon improves overall speed authors contributions gilles didier provided initial idea led software development wrote manuscript section evaluation laurent tichit collaborated software development ran tests wrote section evaluation authors read edited approved final manuscript references allauzen crochemore raffinot efficient experimental string matching weak factor recognition combinatorial pattern matching pages springer gonnet new approach text searching communications acm average running time algorithm theoretical computer science barth analytical comparison two string searching algorithms information processing letters boyer moore fast string searching algorithm communications acm charras lecroq handbook exact string matching algorithms king college publications cormen leiserson rivest introduction algorithms mit press didier optimal pattern matching algorithms http faro lecroq efficient variants algorithm international journal foundations computer science faro lecroq exact online string matching problem review recent results acm comput mar franek jennings smyth simple fast hybrid patternmatching algorithm combinatorial pattern matching pages springer guibas odlyzko string overlaps pattern matching nontransitive games journal combinatorial theory series horspool practical fast searching strings software practice experience karp rabin efficient randomized algorithms ibm journal research development knuth morris pratt fast pattern matching strings siam journal computing mahmoud smythe analysis heuristic random struct algorithms marschall herms kaltenbach rahmann probabilistic arithmetic automata applications trans comput biol bioinformatics marschall rahmann probabilistic arithmetic automata application pattern matching statistics ferragina landau editors combinatorial pattern matching volume lecture notes computer science pages springer berlin heidelberg marschall rahmann exact analysis horspools sundays pattern matching algorithms probabilistic arithmetic automata dediu fernau editors language automata theory applications volume lecture notes computer science pages springer berlin heidelberg marschall rahmann algorithm compute character access count distribution pattern matching algorithms algorithms szpankowski complexity sequential pattern matching algorithms luby rolim serna editors randomization approximation techniques computer science volume lecture notes computer 
science pages springer berlin heidelberg smythe heuristic markovian input random struct algorithms sunday fast substring search algorithm communications acm thathoo virmani sai lakshmi balakrishnan sekar tvsbs fast exact pattern matching algorithm biological sequences current science tsai average case analysis algorithm random struct algorithms july manber fast algorithm searching tech report university arizona yao complexity pattern matching random string siam journal computing
| 8 |
intrinsic point interest discovery trajectory data matthew piekenbrock derek doran dept computer science engineering research center wright state university dayton usa dept computer science engineering research center wright state university dayton usa dec abstract paper presents framework intrinsic point interest discovery trajectory databases intrinsic points interest regions geospatial area innately defined spatial temporal aspects trajectory data varying size shape resolution trajectory database exhibits points interest hence intrinsic compared point interest definitions said extrinsic require trajectory metadata external knowledge region trajectories observed information spatial temporal aspects qualities trajectory database making framework applicable data domain resolution framework developed recent developments consistency nonparametric hierarchical density estimators enables possibility formal statistical inference evaluation intrinsic points interest comparisons pois uncovered framework synthetic truth data thousands parameter settings common poi discovery methods show marked improvement fidelity without need tune parameters hand acm reference format matthew piekenbrock derek doran intrinsic point interest discovery trajectory data proceedings acm conference washington usa july conference pages doi introduction development deployment location acquisition systems enabled large scale capturing movement trajectory data people cars objects technologies like global positioning systems gps global system mobile communications gsm wide area motion imagery wami identification rfid allow organizations governments collect exploit trajectory patterns many scenarios recent initiatives uber ibm smarter programs even made data available either public city planning experts large rise importance https https permission make digital hard copies part work personal classroom use granted without fee provided copies made distributed profit commercial advantage copies bear notice full citation first page copyrights components work owned others acm must honored abstracting credit permitted copy otherwise republish post servers redistribute lists requires prior specific permission fee request permissions permissions conference washington usa acm doi data comes prevalent use geographic information systems gis related platforms related use cases gis information also emerged surveillance service lbs applications many applications trajectory data exploited knowledge acquisition tasks integration movement patterns uncover patterns life region expand situational awareness crises support value added lbs application many knowledge acquisition tasks notion location point interest poi foundational understanding entirety common space data observed example mapping systems must know position geometry locations navigation automated guidance control purposes lbs applications pois metadata popularity necessary provide useful location recommendations pois available raw trajectory data captured location acquisition systems often extrinsically defined gazetteers google places foursquare geonames openstreetmap yet external sources location data present many difficulties faced problem understanding given trajectory dataset relates underlying geographical area observed example many gazetteers store varying types either poi metadata poi relational data allowing information present source bias furthermore relying gazetteers explicitly defines set pois exist given geographical region disagreement definition analysis becomes 
difficult furthermore pois defined priori one faced problem fitting observed trajectory data models defined pois many may may relevant given data hand example may desirable gathering movement trajectory data following public event discover bottleneck congestion areas like parking lots roads sidewalk segments purpose traffic analysis situation would useful discover pois directly data event geographical pois may available gazetteer come challenges paper investigates poi discovery problem generic context possible ask given trajectory data without access gazetteers infer subregions within geospace interesting enough call poi seek intrinsic pois pois recoverable without use gazetteer completely defined observed movement patterns used application scale movements within building movements across entire city region make definition meaningful https https build recent theoretical work clustering introduce statistically rigorous definition poi applicable trajectory data even mixed resolution definition follows recent minimax analysis consistency hierarchical density superlevel set estimators use definition present framework extracting intrinsic pois yields optimal unsupervised solution without hoc comparative analysis performed realistic simulations involving vehicle pedestrian traffic validation results show marked improvements fidelity several state art sota algorithms interest authors simulation settings resulting traffic data validation code framework completely reproducible open source available point interest discovery section provides preliminary information poi discovery problem provides context definitions work formally defines poi subsequently intrinsic poi framework discovery preliminaries consider trajectory database discrete spatial data least attributes object spatial component temporal component minimal amount information implies trajectory object form chronologically ordered spatial coordinates geographical sense spatial components often defined latitude longitude pair practice could coordinate system representations require trajectory pattern mining techniques techniques seek mine common spatiotemporal patterns across trajectories assert significance areas trajectory patterns emerge mined patterns trajectories often referred mobility patterns characterizing specific trajectory quality interest heading stopping rate velocity rotation curvature shape mobility patterns exhibit properties make formal retrieval significant areas challenging example timespan observed trajectory long processes driving mobility pattern may road traffic changes due construction congestion effects due time day shifts work schedule may also paths objects transient areas never traveled furthermore spatial components trajectory data often high degree autocorrelation breaking assumptions independence variety models proposed handle thee situations largely focusing estimating individual trajectory statistics assumptions includes examples adaptive kalman see necessary form usability important feature modern clustering era several sensitive parameters results combinatorial explosion parameter space algorithm resulting need user use one methods arrive solution befits application review filters vehicle navigation models trajectory path uncertainty models knowledge mined individual trajectories says little macroscopic patterns driving trajectory observations rather focusing statistics individual trajectories collective models preprocess trajectory data extract characteristics across swath trajectories preprocessing 
desirable discards highly autocorrelated data representing redundant information favor aggregating trajectory positions observations significance examples preprocessing scheme include extracting semantically enriched points intersect known geographical regions aggregating trajectory positions stay points using supplied spatial temporal thresholds processing trajectory data groups using convex combination spatial temporal semantic similarity kernels collective model refer important semantically meaningful data samples aggregated trajectory points exemplar positions simply exemplars definition exemplar consider sample discrete points constitute trajectory define aggregation function maps subset points trajectory segment set exemplar positions aggregation function choice depends intent analysis example consider urban environmental study defines mapping isolated trajectory segment mean coordinate segment speed object traveling exceeds certain threshold groups exemplar positions may determine highemission zones city alternatively traffic made pedestrians groups may represent tourist attraction areas popularity useful lbs applications difficult find type trajectory preprocessing geospatial applications grouping foundational countless tasks trajectory mining generalize preprocessing step referring exemplar important aspect exemplar extraction choose aggregation function befits intent analyst thus satisfies study interpretation inevitably proposed framework agnostic specific form aggregation used thus irrelevant bestow particular interpretation interesting means consider concrete practical definition using popular type aggregation section defining point interest premise exemplars represent meaningful aggregations observations trajectory data source natural define poi region exemplars seek definition regions statistical rather heuristic foundation means reflecting naturally occurring structure within data towards end define poi contiguous high density region exemplars formalize definition follow notation chaudhuri let subset define path function also denote equivalence relation figure illustrating cluster tree hierarchy interpretation pois consider estimated density right panel exemplar positions extracted trajectories geospace middle bottom panel poi geospatial region inhabited exemplar positions density threshold number pois extracted depend scale parameter setting left panel higher limits poi specific small could cause pois manifested random noise overfitted particular set observations low defines pois broad areas low exemplar position density cluster tree hierarchy left panel summarizes set exemplar positions representing poi every density threshold thus capturing entire collection pois common area middle panel upper layers connected xcy iff partitions connected components clusters component represents area high density called high density cluster definition high density clusters density function consider partitioning called level high density threshold parameter maximally connected components set high density clusters density level relate formal definition trajectory mining domain following definition point interest defined extracted set exemplars definition point interest given set exemplars fixed scale resolution high density cluster exemplars forms point interest density level sets high density clusters across values forms hierarchy often referred cluster tree density hierarchical definition locations common matches intuitive interpretation poi example may particular restaurant mall food 
court poi food court may also considered poi well entire mall may yet another poi cluster tree conceptualization formalizes poi maximally connected set exemplars falling along higher density area implying areas significant connected exemplars may related visualization hierarchical pois dendrogram corresponding cluster tree provided figure middle figure demonstrates view set trajectories might look like colored dots left middle figures representing exemplars right figure demonstrates density estimate positions exemplars exemplars close said high density thought related constituting scale density depends analysis hand sufficiently low density threshold designate every exemplar one poi definition may seem arbitrary density estimator may used find clusters simply estimate density every point kernel density estimation kde iterate possible values create distinct clusters yet every estimation produce kernels kernel bandwidths may result completely different hierarchy clusters extension different set pois cluster tree perspective ideal kernel one uniformly consistent supx given sample case model could fitted appropriate kernel bandwidth parameter would kde furnish continuous surface cluster tree clusters derived main issue set clusters easy compute typical density estimates generally require significant amount memory store computational inefficiency limits usability large trajectory datasets often observed wide geographical areas long periods time applied perspective many approaches find pois variants hierarchical clustering find groups exemplars proved useful problems largely heuristic common clustering algorithms unstated unknown statistical properties precluding possibility formal inference framework introduce therefore examines clustering methods designed infer cluster tree without facing computational hurdles kdes desirable property density estimator notion hartigan establsihed reasonable definition often referred hartigan consistency definition hartigan consistency let cfn set clusters cluster tree sets let respectively denote smallest set cfn containing respectively cfn consistent whenever different connected components disjoint consistency definition essentially requires two disjoint clusters unknown population density also disjoint components given empirical cluster tree given enough samples proposed framework poi discovery developed implemented first computationally tractable provably consistent algorithm satisfies hartigan consistency analyzed chaudhuri discussed next section nonparametric model satisfying notion consistency important transforms unsupervised problem poi discovery formal statistical estimation problem enabling analysis driven data requiring minimal assumptions regarding nature data relation enables methods formal statistical inference allowing one quantify uncertainty create hypothesis tests discern true pois opposed false pois resulting random noise artifacts low sample sizes create notions confidence estimation consistent cluster tree estimation next motivate recent cluster tree estimator discuss relationship applicability poi discovery propsoed framework recall empirical estimate cluster tree applied exemplars represents hierarchy pois viewed perspective propose seen extension chaudhuri work cluster tree trajectory mining context consider using clustering agglomerative scheme creates hierarchical representation clusters using minimum pairwise distance points tool clustering exemplars beginning every exemplar singleton iteratively merges exemplars clusters according 
Single-linkage clustering is often criticized due to its tendency to create excessive chaining, wherein two clusters that may be seen as generally unrelated are amalgamated by chance because the distance threshold does not reflect the true dissimilarity of the resulting clusters. Hartigan proved that single linkage is only a fractionally consistent estimator of the cluster tree for densities in more than one dimension: if a cluster contains a fraction of the sample points, its empirical counterpart will also contain nearly that fraction of the sample points with high probability, which is reflected in the geospatial sense as well. (Recall that an estimator, whose value is a point estimate, is consistent if, as samples are collected, it converges in probability to the true value of the parameter: plim θ̂_n = θ.) The condition is related to how thin a bridge between two population modes may be: fractional consistency is shown for a pair of clusters when the ratio of the infimum of the density over the clusters to the supremum of the density over the paths connecting them is sufficiently large.

[Figure: an excessive-chaining example; the bottom panel denotes a possible clustering using single linkage.]

Pedestrians, for instance, are found to stop in buildings. Consider the case where exemplars represent aggregated stops within a set of trajectories, a case we also consider in a later section. If an area is observed long enough, the exemplars naturally form areas of high density in the areas where people stop frequently, i.e. within buildings. In such cases it may be useful to categorize the exemplars within their respective POIs; this can be done in a supervised way for applications that extract semantic information (see the related work for examples). However, it is also possible that there exist stops outside the buildings with a tendency to chain together, an example of which is shown in the figure. This motivated efforts to modify single linkage to reduce chaining and make it robust while also achieving at least Hartigan consistency. Beyond being the first provably consistent estimator, the effort we consider is a generalization of single linkage referred to as robust single linkage (RSL).

Robust single linkage. Let X be the set of sample points, ||·|| denote the Euclidean norm, and B(x, r) the closed ball of radius r around the point x. The RSL algorithm is given in the listing:

Robust single linkage algorithm.
1. For each x_i, set r_k(x_i) = inf{r : B(x_i, r) contains k data points}.
2. As r grows, construct a graph G_r whose nodes are {x_i : r_k(x_i) ≤ r}; include the edge (x_i, x_j) if ||x_i − x_j|| ≤ α r.
3. Let C(f_n) be the connected components of G_r.

The RSL algorithm has two free parameters that need to be set, k and α; single linkage is equivalent to RSL for one particular setting of this pair. RSL can be efficiently computed from a minimum spanning tree (MST) computed over the pairwise distances: RSL scales the distances by a constant factor and reduces the MST to the components restricted to nodes satisfying r_k(x) ≤ r, connecting those within α r, within the MST computation. Chaudhuri et al. found RSL to be Hartigan consistent and established rates of convergence, with α = √2 achieving the optimal rate of convergence for this setting.
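A direct transcription of one level of RSL is given below, quadratic-time for clarity rather than using the MST formulation; the names `k`, `r`, and `alpha` follow the notation above, and the brute-force distance matrix is our simplification.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def rsl_level(X, k, r, alpha=np.sqrt(2)):
    """Connected components of the RSL graph G_r.

    A point is a vertex of G_r when its k-NN radius r_k(x) <= r, i.e.
    B(x, r) contains at least k sample points (counting x itself); two
    vertices are joined when they lie within alpha * r of each other.
    """
    D = squareform(pdist(X))
    r_k = np.sort(D, axis=1)[:, k - 1]        # k-th smallest incl. self
    active = np.where(r_k <= r)[0]            # vertices of G_r
    A = (D[np.ix_(active, active)] <= alpha * r).astype(int)
    n_comp, labels = connected_components(csr_matrix(A), directed=False)
    return active, labels, n_comp             # inactive points are noise
```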
Finding intrinsic points of interest. Using a consistent cluster tree estimator such as RSL on a set of exemplars creates a hierarchical representation of POIs. However, a nested set of multiple solutions is not always desirable: a flat solution, where each point is assigned a single label, may be preferred. The traditional approach in hierarchical clustering of cutting the empirical cluster tree at a given density threshold value yields a set of clusters from C(f_n) that form the POIs, and this is one possible flat solution. However, this choice forces all POIs to one scale and requires the user to know which granularity to choose a priori, affecting the size and kinds of POIs discovered: a small λ may define the shops of a mall as the POIs, while a larger λ may define the mall itself as the POI, and it may not be known ahead of time which granularity level is relevant. Furthermore, it is reasonable to expect relevant POIs to exist at multiple levels of granularity: both a sprawling city park and a small restaurant could constitute a POI. Thus it would be useful to have a sensible notion of cluster quality that can be used as an optimized objective function to discover POIs that are not dependent on the analyst's choice but strongly intrinsic to the geospace; intrinsic POIs capture POIs at any scale and hence satisfy this notion.

First recall that clusters are contiguous, relatively dense areas of the data space separated by contiguous, relatively empty areas, as in the working definition of density over a set of exemplars above. From a statistical point of view we may think of a cluster as a set of points with high density in its neighborhood relative to the volume of its support, which we quantify using a functional called the excess mass.

Definition (excess mass). For C in C(f_n), the value of the excess mass is given by E(C) = ∫_{x∈C} (f(x) − λ_min(C)) dx, where λ_min(C) represents the lowest density level at which C appears.

Initially this measure seems like a reasonable definition of the quality of a clustering within a cluster tree estimate. However, considering the definition of a cluster in the equation above, a cluster that exists along a mode (a local maximum of the underlying density) is far more likely in estimation: we may empirically find a mode at a given point where the data is sparse, allowing an arbitrarily low probability to be associated with a region classified as interesting. A better result would associate a cluster with a region that exhibits relatively high probability in its neighborhood (see the cited visualization along with an in-depth description of this functional). Moreover, Campello et al. remark that the measure exhibits monotonic behavior in the direction of varying λ through the hierarchy, and instead propose an alternative, local measure of cluster quality.

Definition (relative excess mass). For C in C(f_n), the value of the relative excess mass is given by σ(C) = ∫_{x∈C} (λ_max(x, C) − λ_min(C)) dx, where λ_max(x, C) = min{f(x), λ_max(C)} is the density level beyond which x is no longer part of C, and λ_max(C) is the highest density beyond which C either becomes disconnected, creating separate components, or disappears, creating singleton clusters.

It is important to note that the relative excess mass is defined in terms of values associated with a specific cluster, as opposed to a specific clustering. This implies that the optimal clustering with respect to the relative excess mass estimate may not occur at a fixed global density threshold, but rather may result from several local density thresholds applied across the hierarchy. Intuitively, if a given cluster contains many points of high density relative to where the cluster comes into existence, it will exist across several thresholds and is thus robust to fluctuations in the scale of the analysis. For this reason the relative excess mass can be thought of as a measure of cluster stability across different density levels, and we posit that it reflects an intrinsic POI, innately defined by the dataset and independent of a density level. Intrinsic POIs are thus defined as follows: let δ_C be an indicator equal to 1 if the cluster C in C(f_n) represents an intrinsic POI and 0 otherwise. We assign values to the indicators such that the following is maximized:

maximize Σ_{C ∈ C(f_n)} δ_C σ(C), subject to exactly one δ_C = 1 per disjoint branch.

The per-disjoint-branch constraint means the indicator function equals 1 for exactly one of the clusters along any path from a leaf node to the root of the cluster tree. The optimization of this objective function is beyond the scope of this paper; we refer to Campello et al., whose cluster extraction method for general cluster hierarchies solves this optimization and was developed alongside an estimator similar to RSL. It is capable of producing the optimal result across several density levels, and it accounts for the density thresholds at which points become noise, falling along densities below the given threshold.
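The selection can be computed bottom-up over the tree; the sketch below is a minimal recursion in the spirit of Campello et al.'s extraction method, assuming each node already carries its relative excess mass σ(C) in a `stability` field (the node structure is our invention for illustration).

```python
def select_intrinsic_pois(node):
    """Maximize total relative excess mass with exactly one selected
    cluster on every leaf-to-root path of the cluster tree.

    node: object with fields `stability` (sigma(C)) and `children` (list).
    Returns (best_total, selected_clusters) for node's subtree.
    """
    if not node.children:                      # a leaf has no competitor
        return node.stability, [node]
    child_total, child_sel = 0.0, []
    for c in node.children:
        v, s = select_intrinsic_pois(c)
        child_total += v
        child_sel.extend(s)
    # Keep this cluster only when it is more stable than the best
    # combination of clusters found among its descendants.
    if node.stability >= child_total:
        return node.stability, [node]
    return child_total, child_sel
```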
Experiments and discussion. We next evaluate the proposed framework for intrinsic POI discovery. Since intrinsic POIs do not rely on gazetteers and may manifest at unknown locations, an evaluation on real data cannot be validated with ground truth or external knowledge imported from location data sources such as OpenStreetMap or Google Places. When that is not feasible, a common approach to evaluate clusterings with ground truth absent is to use an internal cluster validity index (CVI). CVIs include common indices like the silhouette score and the Dunn index (see Arbelaitz et al. for an overview of techniques and the references therein), and recent work recommends validation using multiple CVIs that score different aspects of a clustering, e.g. the ratio of within- to between-cluster distances, sums of squares of distances to a centroid, or scores based on similarity. We do not believe such scores are informative for intrinsic POI evaluation: CVIs operate on unrealistic assumptions of symmetry or convexity of cluster shape, notions of minimal variance, or the existence of a centroid or medoid. Contrary to these widespread concepts, we do not assume the cluster of exemplars representing an intrinsic POI maintains any such shape. Indeed, any number of features within a geographical area may be considered POIs, and yet they inevitably exhibit arbitrary shapes (buildings, parks, gathering areas, etc.) and manifest varying densities (a busy intersection has a small, concentrated exemplar density; a parking lot a large, uniform one). Following the advice of Guyon and von Luxburg, we evaluate the efficacy of the framework in the context of its use: external validation against a truth defined a priori on simulated data, enabling direct evaluation of whether the intrinsic POIs are truly interesting regions, while ensuring the latent patterns generating the data mimic the real geospatial dynamics of cars and pedestrians in a region.

Generating synthetic data. To generate synthetic data for evaluation we turn to the Simulation of Urban Mobility (SUMO) software. SUMO is an open-source traffic simulation system capable of generating the trajectories of many objects of multiple modalities (car, truck, person, plane). Given a shapefile that defines avenues of travel (the road network of a map, footpaths within a university campus, the floor plan of a mall or large building), SUMO is able to generate trajectories following the avenues provided. With default parameter settings, SUMO generates traffic trajectories in ways that satisfy measured physical properties and have been shown to be incredibly accurate. We use SUMO to generate two simulations of pedestrian and vehicular traffic in different geographical areas: an urban region with a mixture of vehicle and pedestrian traffic in the area surrounding Ohio State University (OSU), and a suburban area where pedestrian traffic is prominent in the area surrounding Wright State University (WSU). The details of the simulation, the simulation data used in this paper, the code produced, and the resulting evaluation are publicly available and reproducible; the RSL cluster tree framework is part of a larger open-source effort.

Simulation configuration. SUMO requires every object's trip to be defined by departure and destination nodes, which SUMO refers to as junctions; junctions are connected by edges representing possible travel paths. Given a file containing the trip definitions of every object, SUMO dynamically generates routes, the sequences of edges an object travels along to get from its departure junction to its destination junction. We leave nearly all simulation parameters at their default settings, modifying only the simulation length and the arrival parameters (binomially distributed arrivals) to generate pedestrian and vehicle demand.

Pedestrian traffic within unrestricted indoor areas may constitute intrinsic POIs in a realistic setting, but SUMO only generates outdoor pedestrian traffic; we therefore extended SUMO to simulate indoor pedestrian traffic as well. The figure illustrates this extension and how it interplays with the generated vehicular traffic. Shapefiles denoting the locations of buildings are first loaded into SUMO (the peach-colored regions in the figure inset). Within each building's shapefile a random number of junctions are generated and registered with nearby edges (sidewalks). When a generated track labeled as a pedestrian trip includes a junction contained within a building region (lower-right inset), the pedestrian undergoes a random walk within the junctions generated for that building; the random walk is emulated by choosing a random ordered subset of the generated junctions and a random amount of time.

[Figure: a view of extending SUMO to support indoor pedestrian traffic. Shapefiles defining buildings are loaded into SUMO and registered with junctions. When a pedestrian track visits an attached junction on its trip, the simulator chooses an ordered random set of junctions to follow within the building; after a random period of time in which the pedestrian visits the interior junctions, it travels to an exit junction attached to the building polygon, continuing along its original outdoor route generated by SUMO.]

Defining the truth. Recall that intrinsic POIs are inferred from exemplars representing a specific mobility pattern of interest. With vehicular and realistic pedestrian demand generated, the next step of data generation is to define the aggregation function that extracts meaningful exemplars. To give something concrete to the proposed framework, and to align our experiment with much of the applied literature on this topic, we extract exemplars representing stay points: a stay point is a position where objects stopped or significantly slowed. Extracting such points from the simulated SUMO data is trivial, as the true speed of a traveling object is known at any given time. For pedestrian traffic we extract the trajectory points where pedestrians stopped moving; for vehicular traffic we extract either the points where vehicles stopped moving or the slowest point of a vehicle's braking sequence (using SUMO's exported braking signals), whichever is available. These stay points are our exemplars. We next establish a mapping between an exemplar and its presence within a true intrinsic POI, allowing external validation.
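As a minimal sketch of such an aggregation function, the following extracts stay-point exemplars from one trajectory by thresholding the instantaneous speed; the threshold value and the function name are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def stay_point_exemplars(positions, times, speed_eps=0.1):
    """Aggregation function mapping a trajectory to stay-point exemplars.

    positions: (n, 2) array of coordinates; times: (n,) timestamps.
    A point is kept when the instantaneous speed falls below speed_eps,
    i.e. the object has stopped or significantly slowed.
    """
    steps = np.diff(positions, axis=0)
    dt = np.maximum(np.diff(times), 1e-9)          # guard zero intervals
    speed = np.linalg.norm(steps, axis=1) / dt
    stopped = np.where(speed < speed_eps)[0]
    return positions[stopped]
```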
Since exemplars represent where an object stopped moving, a natural definition of intrinsic POI assignment is one defined by the mechanism causing objects to stop. Specifically, we define each building that pedestrians stop within as a true intrinsic POI, a natural and useful grouping. We follow a similar pattern for vehicular traffic, assigning exemplars a common label when they stopped at identical intersections (stop signs or stop-light junctions). This mechanistic assignment for creating true intrinsic POIs has the benefit of being tractable, in the sense that SUMO provides the information directly, and also semantically meaningful, in the sense that the mechanisms encouraging objects to stop moving are intrinsic to the geospace.

Experimental design. To evaluate the fidelity of the POIs extracted by the proposed framework in multiple settings, we run SUMO simulations in the OSU and WSU geospaces with parameter settings reflecting the differences between the two regions; the settings are shown in the table.

[Table 1: SUMO simulation parameters. For each region (OSU, WSU): number of buildings, vehicles, and pedestrians, region size, and simulation length in hours.]

The OSU geospace covers a smaller area with an equal mix of vehicles and pedestrians and nearly three times as many buildings; the OSU geospace also has a larger number of roadways and traffic intersections where intrinsic POIs involving vehicles may materialize within the main campus. The WSU geospace has a larger proportion of pedestrian traffic, fewer roadways for vehicles to traverse, and a smaller number of buildings pedestrians may visit. The figures show the POIs that the SUMO-generated labeling creates for the OSU and WSU campus areas respectively. Qualitatively, on examination the clusters appear to be reasonable labels for intrinsic POIs. For example, the clusters representing true POIs across OSU include the buildings surrounding the OSU oval (quad), particular locations on the ring road around the quad that tend to be busy OSU intersections for vehicles and pedestrians, and the parking lots around the OSU recreation buildings west of the oval. Across WSU, the truth POIs represent the major buildings around campus, particularly complex separate areas of movement such as WSU's large student union (the yellow points on the large building in the lower-left part of the figure).

Evaluation measures. As discussed at the beginning of this section, the unsupervised nature of intrinsic POI discovery makes it difficult to carry out a meaningful evaluation of POI discovery methods using internal validation measures not requiring truth labels. Instead we consider a multifaceted approach: an external, quantitative evaluation of whether the intrinsic POIs discovered align with the SUMO-generated POIs, using the adjusted Rand index (ARI), and a qualitative evaluation of the quality of the intrinsic POIs the approach unearths compared to the true intrinsic POIs we defined. These indices were chosen due to their transparency: whereas traditional measures report the proportion of pairwise agreements between two partitions, the ARI also adjusts the score based on the expected value of agreements under the null hypothesis that the agreements are completely random, and we thus report it.

Algorithms compared. To compare the fidelity of the intrinsic POIs extracted by the proposed framework against clustering algorithms commonly used for POI discovery in trajectories, we either downloaded an implementation or implemented a number of algorithms for comparison. Aside from RSL, the selected methods include the density-based algorithms DBSCAN and OPTICS, the widespread hierarchical algorithms single linkage, average linkage, and Ward's criterion, along with medoid-based algorithms such as CLARA. The algorithms were chosen due to their relevance to the problem, their availability, and their known success in clustering real-world data.

Parameter settings. Clustering algorithms generally require parameters in order to fit a given data set; the number and semantics of the parameters often changes with the algorithm used, leaving comparisons across parameter settings difficult. Although the hierarchical algorithms carry no free parameters to create their hierarchical set of solutions, they require either a threshold value or the exact number of clusters to extract to be specified in order to extract a flat clustering. Similarly, CLARA and the related medoid algorithms also require the number of clusters to be specified a priori. The one parameter whose interpretation multiple algorithms share, which we refer to as k below, is the number of clusters extracted; the density-based algorithms have multiple parameters whose interpretations cannot be compared with those of the aforementioned algorithms.
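Independent of the individual parameter grids, the external score used in every quantitative comparison is the ARI; a minimal sketch with scikit-learn follows, where the tiny label vectors are placeholders (in our setting they would be per-exemplar intrinsic-POI labels).

```python
from sklearn.metrics import adjusted_rand_score

# Per-exemplar labels: truth from the SUMO stop mechanisms, predictions
# from a clustering algorithm (noise conventionally marked as -1).
true_labels = [0, 0, 1, 1, 2, 2]
pred_labels = [0, 0, 1, 1, -1, -1]

# ARI rescales raw pairwise agreement by its expected value under a null
# of random labelings: 0 is chance-level agreement, 1 a perfect match.
ari = adjusted_rand_score(true_labels, pred_labels)
```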
Turning to the density-based algorithms' parameters: DBSCAN, for example, requires a minimum cluster size parameter (minPts) and a distance scale threshold to be set. OPTICS, often cited as an extension of DBSCAN, is an ordering algorithm whose parameter settings are used to extract either a flat cluster extraction or a simplified hierarchy, using either a distance threshold or a reachability threshold respectively; we report flat cluster extraction. RSL requires setting α and k, the former relating to the scaling of the connection radii used to connect components, the latter to the saliency of the cluster estimates; note that RSL's k is similar to the minPts parameter, a minimum neighborhood parameter, and that the number of clusters is automatically determined, optimized by the relative excess mass functional defined in the earlier section. Each algorithm reflects a large set of possible solutions per parameter setting, and choosing a single parameter setting for the evaluation would represent a source of possible bias; rather, we employ a comprehensive approach comparing a wide range of parameter settings for each algorithm. To define the ranges, let seq denote a sequential range operator, skipping values in a sequence of integers by a fixed step.

For the hierarchical clustering algorithms, the number of flat clusters k extracted is varied over a seq range around the number of true POIs assigned by SUMO; we see this as a reasonable strategy that gives a better view of whether multiple levels of the extracted hierarchy match the data set well, and of how well the merge criterion (linkage function) collectively captures the true POIs of the geospace. We use the same range to vary k for CLARA and the medoid algorithms. The density-based methods DBSCAN and OPTICS are evaluated by first varying the minPts value and then, for each minPts, varying their scale parameters respectively. Recall that minPts relates to the minimum neighborhood size that constitutes a cluster; thus, to allow testing over a tractable set, the minPts values reflect the possible sizes of POIs along the quantiles qnt = seq(...) of the number of exemplars per POI in the SUMO truth data. The distance thresholds for DBSCAN and OPTICS are also varied along seq quantiles of the pairwise distances computed over the data set. Since these methods mark points that fall in areas of the data set that are not sufficiently dense as noise, and since at some scales severe overfitting is not guided by any measure, density-based solutions are deemed valid only if at least a minimum fraction of the data is classified with a label.

[Figure: intrinsic POI comparison, OSU; true POIs versus inferred intrinsic POIs.]
[Figure: intrinsic POI comparison, WSU; true POIs versus inferred intrinsic POIs.]

Finally, RSL also contains two parameters, α and k. For the value of α we use Chaudhuri et al.'s analysis to determine the setting: RSL was shown to have optimal rates of convergence for α = √2, so we leave it constant at that value. Similarly, the rate holds for k at least as large as d log n, where d is the dimensionality of the data set; in this case we vary k over a small set of values in a similar fashion as performed for DBSCAN and OPTICS, qnt together with d log n. In total, this yields the reported numbers of cluster configurations performed for the OSU and WSU simulations respectively.
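As a sketch of how such grids can be generated, the snippet below builds minPts and distance-threshold candidates from quantiles; the quantile levels, step sizes, and stand-in arrays are illustrative placeholders, not the paper's settings.

```python
import numpy as np
from scipy.spatial.distance import pdist

def seq(start, stop, step):
    """The sequential range operator used in the text."""
    return np.arange(start, stop + step, step)

# Stand-ins for quantities defined earlier in the experiment:
poi_sizes = np.random.poisson(40, size=60)   # exemplars per true POI
exemplars = np.random.rand(500, 2)           # exemplar positions

# minPts candidates: quantiles of the exemplars-per-POI distribution.
minpts_grid = np.unique(
    np.quantile(poi_sizes, seq(0.1, 0.9, 0.1)).astype(int))

# Distance-threshold candidates for DBSCAN/OPTICS: quantiles of all
# pairwise distances over the exemplar set.
eps_grid = np.quantile(pdist(exemplars), seq(0.05, 0.5, 0.05))
```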
Validation, testing, and discussion.

Qualitative comparison with the truth. The figures compare the intrinsic POIs discovered by our framework against the simulation's true POIs (recall that points of low density are discarded as noise and are not shown). A direct comparison against the true POIs defined by the simulation shows clear similarities. In the OSU simulation the framework recovers the intrinsic POIs within buildings, no matter the shape, density, or closeness of the buildings; it also recovers intrinsic POIs at parking lots and street intersections around the OSU oval. Some buildings are decomposed into a collection of individual intrinsic POIs: for example, the easternmost large campus building on the northeast corner of the oval contains three separate intrinsic POIs, one at the entrance road, another at the center of the building, and a third at the back entrance. Although such labels may not match those SUMO assigned, they are in a sense natural, as it is quite possible for large buildings to have dense, isolated areas of people movement. Looking at the intrinsic POIs for the WSU dataset, we find buildings in general are recovered as intrinsic POIs that align in shape when compared to the shape of the true POIs. We also note the framework determines that movement within some buildings covering small areas was not significant enough to form an intrinsic POI. Large buildings also showed decomposition as in the OSU simulation: for the WSU student union, the large building in the lower-left corner of the figure, the framework defines an intrinsic POI at the center and at the two back exits of the building.

Quantitative comparison with other approaches. The proposed intrinsic POI framework measured its reported ARI scores on the OSU and WSU data sets (note the maximum ARI of any RSL solution versus the ARI of the solution found using the highest predefined notion of stability, determined completely without knowledge of the surrounding geographical area). RSL performed consistently, in terms of low variability, compared with every other algorithm, with overall high similarity to the semantically driven, SUMO-assigned locations. The figure shows the distribution of ARI for the algorithms compared, for each method over the parameter settings discussed in the previous section; the orange line corresponds to the ARI of the proposed framework, which compares favorably even with the best possible settings of the other algorithms. Note that although DBSCAN, like some others, performed well for specific configurations of its minPts and scale settings, such parameters are often not intuitive in unsupervised scenarios where the truth is unknown and external measures like the ARI cannot be computed. For the hierarchical algorithms we see the impact of the linkage criterion used, as influenced by the shape of the true clusters: one linkage criterion performed fairly well on the well-separated WSU data set but substantially lower on the OSU data set, reflecting how well it was able to capture much of the true clustering structure under the right parameter settings.

[Figure: distribution of adjusted Rand index (ARI) scores for the various clustering algorithms over their varying free parameters; the orange line corresponds to the ARI of the proposed framework.]

On the OSU simulation, where POIs are less separated, its ARI scores exhibited degraded performance. OPTICS at specific parameter settings performed well on the WSU data set but likewise suffered on the OSU data set.

Related research. The trajectory field has largely progressed through extensive and intensive individual efforts; nonetheless, conceptual models have been proposed to deal with patterns within trajectories and relate those patterns to geographical areas of interest for various purposes. One model postulates that trajectories have spatiotemporal patterns essentially driven by the semantics the application associates with a trajectory, and it contributed significantly to the stops-and-moves-of-trajectories (SMoT) family of classification algorithms, whose premise is analysis by partitioning trajectory data into a labeled set of stop and move segments. One can take such segments, annotate them with semantic information, derive specific mobility patterns, and as a result discover interesting locations. Alvares et al. developed a method to find interesting positions based on semantic annotations describing the places a trajectory visited. Palma et al. reduced the reliance on prior knowledge of which positions are likely interesting by incorporating the speed at which tracks travel; a variation finds clusters of common trajectories based on similar direction changes and stopping points. Zhou et al. tackle the problem of finding positions of interest for an individual track based on that track's data and location preferences, with position and time tags for locations provided by web services such as Google Maps. Many related efforts encode or rely on varying notions of an interesting place, using for example techniques from natural language processing (NLP), data clustering, sequential pattern mining, and social network analysis methods.

Zheng et al. pioneered the use of stay points, corresponding to an aggregation of consecutive GPS points that fall collectively within time and distance thresholds, thereby characterizing a virtual location. It is interesting to note that Zheng et al. used OPTICS to create a hierarchical clustering of stay points in an application by Microsoft called GeoLife; indeed, Zheng anticipated a number of developments in the trajectory mining field, and the theoretical cluster tree may be viewed as a statistically based conception of the tree-based hierarchical graph used there to represent POIs in the application. It is also well worth noting that our definition of an intrinsic POI has a theoretical foundation whose clustering is conceptually similar to OPTICS and, computationally, to the more recent hierarchical DBSCAN (HDBSCAN): there exist a number of commonalities between OPTICS, DBSCAN, and the theory of the cluster tree.
A comprehensive exposition of this relationship is beyond the scope of this paper; see Campello et al. for a thorough review of the subject. Although not mentioned in those efforts, our usage of RSL with the relative excess mass functional for cluster extraction is equivalent to the flat clusters HDBSCAN extracts for a particular setting of minPts; however, for the asymptotic consistency of that setting of the pair (k, α) to be established, α must be much larger, exponential in the dimensionality of the data set, and thus we use RSL.

Concluding remarks. In this paper we proposed a general framework for intrinsic POI discovery that does not need to rely on external gazetteers. Based on recent theoretical advances in hierarchical nearest-neighbor density estimation, we discussed a conceptually sound basis for automated POI discovery, specifically in the context of geospatial data. The introduced framework provides a rigorous but usable solution in an applied domain primarily dominated by intuitively reasonable methods. Novel extensions of SUMO to support pedestrian movement in buildings, and an evaluation on simulated trajectory data over diverse geographical areas, support the conclusion that the proposed framework is a useful tool for extracting intrinsic POIs: the framework has theoretical guarantees and practical benefits, requires no ad hoc parameter tuning, and exhibits improved fidelity over common approaches across thousands of parameter settings. For future work, with the help of the asymptotic analysis done by Chaudhuri et al., we plan to develop inference techniques for POI extraction. This is imperative in exploratory settings such as large urban environments, where the number of POIs is not known ahead of time and there is little useful knowledge to gain from ad hoc cluster analysis, especially when the solution space is large. Relating the concept of a POI to the theory of the cluster tree, RSL, and the associated estimators enables future theoretical work and may augment models reliant on POI data, such as location recommendation systems using collaborative filtering techniques and social networking models built on POI data from social networks, as reviewed in the literature.

References
L. O. Alvares, V. Bogorny, B. Kuijpers, J. A. Fernandes de Macedo, B. Moelans, A. Vaisman. A model for enriching trajectories with semantic geographical information. Proc. Annual ACM Intl. Symposium on Advances in Geographic Information Systems. ACM.
M. Ankerst, M. Breunig, H.-P. Kriegel, J. Sander. OPTICS: ordering points to identify the clustering structure. ACM SIGMOD Record.
O. Arbelaitz, I. Gurrutxaga, J. Muguerza, I. Perona. An extensive comparative study of cluster validity indices. Pattern Recognition.
S. Baluja. Reducing vehicle emissions via machine-learning traffic signal program selection.
M. Buchin, A. Driemel, M. van Kreveld, V. Adinolfi. Segmenting trajectories: a framework and algorithms using spatiotemporal criteria. Journal of Spatial Information Science.
R. J. G. B. Campello, D. Moulavi, J. Sander. Density-based clustering based on hierarchical density estimates. Conference on Knowledge Discovery and Data Mining.
R. J. G. B. Campello, D. Moulavi, A. Zimek, J. Sander. A framework for unsupervised optimal extraction of clusters from hierarchies. Data Mining and Knowledge Discovery.
R. J. G. B. Campello, D. Moulavi, A. Zimek, J. Sander. Hierarchical density estimates for data clustering, visualization, and outlier detection. ACM Trans. on Knowledge Discovery from Data.
A. Chang, M. Parrales, J. Jimenez, M. Sobieszczyk, S. Hammer, D. Copenhaver, R. Kulkarni. Combining Google Earth and GIS mapping technologies in a dengue surveillance system for developing countries. Intl. Journal of Health Geographics.
K. Chaudhuri, S. Dasgupta. Rates of convergence for the cluster tree. Advances in Neural Information Processing Systems.
Y.-C. Chen, J. Kim, S. Balakrishnan, A. Rinaldo, L. Wasserman. Statistical inference for cluster trees. arXiv preprint.
M. Ester, H.-P. Kriegel, J. Sander, X. Xu. A density-based algorithm for discovering clusters in large spatial databases with noise. Proc. Intl. Conf. on Knowledge Discovery and Data Mining.
F. Figueiredo, B. Ribeiro, J. Almeida,
C. Faloutsos. TribeFlow: mining and predicting user trajectories. Proc. Intl. Conference on World Wide Web.
C. Fraley, A. Raftery. Clustering, discriminant analysis, and density estimation. J. Amer. Statist. Assoc.
L. Gabrielli, S. Rinzivillo, F. Ronzano, D. Villatoro. From tweets to semantic trajectories: mining anomalous urban mobility patterns. In Citizen in Sensor Networks. Springer.
T. Gindele, S. Brechtel, R. Dillmann. A probabilistic model for estimating driver behaviors and vehicle trajectories in traffic environments. Intl. IEEE Conference on Intelligent Transportation Systems. IEEE.
M. Gonzalez, C. Hidalgo, A.-L. Barabasi. Understanding individual human mobility patterns. Nature.
I. Guyon, U. von Luxburg, R. Williamson. Clustering: science or art. NIPS Workshop on Clustering Theory.
J. Hartigan. Consistency of single linkage for high-density clusters. J. Amer. Statist. Assoc.
C. Hu, W. Chen, Y. Chen, D. Liu. Adaptive Kalman filtering for vehicle navigation. Journal of Global Positioning Systems.
L. Hubert, P. Arabie. Comparing partitions. Journal of Classification.
L. Kaufman, P. Rousseeuw. Clustering by means of medoids.
D. Krajzewicz, J. Erdmann, M. Behrisch, L. Bieker. Recent development and applications of SUMO (Simulation of Urban Mobility). Intl. Journal on Advances in Systems and Measurements, December.
L. Liu, J. Song, et al. Tra-DBSCAN: an algorithm for clustering trajectories. Applied Mechanics and Materials. Trans Tech Publications.
S. Liu, S. Wang, K. Jayarajah, A. Misra, R. Krishnan. TODMIS: mining communities from trajectories. Proc. ACM Intl. Conference on Information and Knowledge Management (CIKM). ACM, New York, USA.
D. W. Müller, G. Sawitzki. Excess mass estimates and tests for multimodality. J. Amer. Statist. Assoc.
F. Murtagh, P. Legendre. Ward's hierarchical agglomerative clustering method: which algorithms implement Ward's criterion? Journal of Classification.
A. Tietbohl Palma, V. Bogorny, B. Kuijpers, L. O. Alvares. A clustering-based approach for discovering interesting places in trajectories. Proc. ACM Symposium on Applied Computing. ACM.
C. Parent, S. Spaccapietra, E. Zimányi. Conceptual Modeling for Traditional and Spatio-Temporal Applications: The MADS Approach. Springer Science & Business Media.
M.-H. Park, J.-H. Hong, S.-B. Cho. Location-based recommendation system using Bayesian user's preference model in mobile devices. Intl. Conference on Ubiquitous Intelligence and Computing. Springer.
M. Pavan, S. Mizzaro, I. Scagnetto, A. Beggiato. Finding important locations. IEEE Intl. Conference on Mobile Data Management.
J. A. Rocha, V. Times, G. Oliveira, L. Alvares, V. Bogorny. A clustering method for trajectories. Intelligent Systems, IEEE Intl. Conference. IEEE.
S. Spaccapietra, C. Parent, M. L. Damiani, J. A. Macedo, F. Porto, C. Vangenot. A conceptual view on trajectories. Data & Knowledge Engineering.
G. Trajcevski, R. Tamassia, H. Ding, P. Scheuermann, I. Cruz. Continuous probabilistic queries for uncertain trajectories. Proc. Intl. Conference on Extending Database Technology: Advances in Database Technology. ACM.
R. Uddin, C. Ravishankar, V. Tsotras. Finding regions of interest from trajectory data. IEEE Intl. Conference on Mobile Data Management. IEEE.
K. Virrantaus, J. Markkula, A. Garmash, V. Terziyan, J. Veijalainen, A. Katanosov, H. Tirri. Developing GIS-supported location-based services. Proc. Second Intl. Conference on Web Information Systems Engineering. IEEE.
U. von Luxburg, R. Williamson, I. Guyon. Clustering: science or art. ICML Workshop on Unsupervised and Transfer Learning.
X. Xiao, Y. Zheng, Q. Luo, X. Xie. Inferring social ties between users with human location history. Journal of Ambient Intelligence and Humanized Computing.
J. Ying,
W.-C. Lee, T.-C. Weng, V. Tseng. Semantic trajectory mining for location prediction. Proc. ACM SIGSPATIAL Intl. Conference on Advances in Geographic Information Systems. ACM.
P. Zhang, Q. Deng, X. Liu, R. Yang, H. Zhang. Spatiotemporal trajectory pattern recognition with intelligent sensor devices. IEEE Access.
Y. Zheng. Location-based social networks: users. In Computing with Spatial Trajectories. Springer.
Y. Zheng. Trajectory data mining: an overview. ACM Trans. on Intelligent Systems and Technology.
Y. Zheng, X. Xie. GeoLife: a collaborative social networking service among user, location and trajectory. IEEE Data Eng. Bull.
Y. Zheng, L. Zhang, X. Xie. Mining interesting locations and travel sequences from GPS trajectories. Proc. Intl. Conference on World Wide Web. ACM.
C. Zhou, D. Frankowski, P. Ludford, S. Shekhar, L. Terveen. Discovering personally meaningful places: an interactive clustering approach. ACM Trans. on Information Systems.
| 2 |
Critical parameters in particle swarm optimisation

Michael, Adam Erskine, Thomas Joyce
Institute of Perception, Action and Behaviour, School of Informatics, University of Edinburgh, Crichton Street, Edinburgh, Scotland

Abstract. Particle swarm optimisation is a metaheuristic algorithm which finds reasonable solutions in a wide range of applied problems if suitable parameters are used. We study the properties of the algorithm in the framework of random dynamical systems which, due to the swarm dynamics, yields analytical results for the stability properties of the particles. Such considerations predict a relationship between the parameters of the algorithm that marks the edge between convergent and divergent behaviours. Comparison with simulations indicates that the algorithm performs best near this margin of instability.

Introduction. Particle swarm optimisation (PSO) is a metaheuristic algorithm widely used to solve search and optimisation tasks. It employs a number of particles as a swarm of potential solutions. The particles share knowledge of the current overall best solution, and each also retains a memory of the best solution it has encountered itself previously. Otherwise, from a random initialisation, the particles obey a linear dynamics of the following form:

x_{t+1} = x_t + v_{t+1},
v_{t+1} = ω v_t + α1 R1 (p − x_t) + α2 R2 (g − x_t),

where x and v represent respectively the position in the search space and the velocity vector of a particle of the swarm at time t. The velocity update contains an inertial term parameterised by ω and includes attractive forces towards the personal best location p and towards the globally best location g, which are parameterised by α1 and α2 respectively; the symbols R1 and R2 denote diagonal random matrices whose entries are uniformly distributed in the unit interval. The number of particles N is quite low in most applications, usually amounting to a few dozen.

In order to function as an optimiser, the algorithm uses a nonnegative cost function F, where without loss of generality F(x*) = 0 is assumed for the optimal solution x*; in many problems to which PSO is applied there are also states with costs near zero that are considered good solutions. The cost function is evaluated at the state of each particle at each time step: if the cost at a particle's state is better than at its personal best, p is replaced; similarly, if one of the particles arrives at a state with cost less than F(g), then g is replaced by that particle's position. As the particle that discovered the new solution retains its velocity, it will depart from the current best location, but may still have a chance to return, guided by the force terms of the dynamics. Numerous modifications and variants have been proposed since the algorithm's inception, and it continues to enjoy widespread usage: one survey groups PSO papers into discernible application areas, and a Google Scholar search for "particle swarm optimisation" reveals a large and growing total per year.

In the next section we report observations on the simulation of a particle swarm and move to the standard matrix formulation of the swarm dynamics in order to describe existing analytical work on PSO. We then argue for a formulation of PSO as a random dynamical system, which enables us to derive a novel exact characterisation of the dynamics; this is generalised towards the realistic case of a swarm, and we compare the theoretical predictions with simulations. Finally, we discuss an assumption made in the theoretical solution and address the applicability of the results to metaheuristic algorithms in practical optimisation problems.

Swarm dynamics: empirical properties. The success of the algorithm in locating good solutions depends on the dynamics of the particles in the state space of the problem. In contrast to many evolution strategies, it is not straightforward to interpret the particle swarm as following a landscape defined by the cost function: unless the current best positions change, the particles do not interact, and each follows its intrinsic dynamics, not even indirectly obtaining gradient information. The particle dynamics depend on the parameterisation, and to obtain the best result one needs to select parameter settings that achieve a balance between the particles exploiting knowledge of good known locations and exploring regions of the problem space that have not been visited. Parameter values often need to be experimentally determined, and a poor selection may result in premature convergence of the swarm to a poor local minimum or in divergence of the particles towards regions irrelevant to the problem.
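A minimal Python sketch of the update rule above is given here for reference; the default parameter values, swarm size, and initialisation range are illustrative choices of ours, not the paper's settings.

```python
import numpy as np

def pso(cost, dim, n_particles=25, omega=0.7, a1=1.4, a2=1.4, iters=1000):
    """Plain PSO following the stated update rule, minimising `cost`."""
    x = np.random.uniform(-1, 1, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))                   # velocities
    p = x.copy()                                       # personal bests
    p_cost = np.apply_along_axis(cost, 1, x)
    g = p[np.argmin(p_cost)].copy()                    # global best
    for _ in range(iters):
        r1 = np.random.rand(n_particles, dim)          # fresh randomness
        r2 = np.random.rand(n_particles, dim)          # at every step
        v = omega * v + a1 * r1 * (p - x) + a2 * r2 * (g - x)
        x = x + v
        c = np.apply_along_axis(cost, 1, x)
        better = c < p_cost
        p[better], p_cost[better] = x[better], c[better]
        g = p[np.argmin(p_cost)].copy()
    return g, p_cost.min()
```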
When we execute PSO empirically for a variety of problem functions over a range of parameter values, the algorithm typically shows a performance of the form depicted in Fig. 1: the best solutions found show a curved relationship between ω and α = α1 + α2. Large values of both parameters are found to cause the particles to diverge, leading to results far from optimality, while for small values of both parameters the particles converge to a nearby solution, which is sometimes acceptable. Similar relationships are observed in numerical tests for most cost functions (see the benchmark section), unless no good solutions are found at all due to problem complexity or run-time limits. For simple cost functions, such as a single-well potential, parameter combinations with small ω and small α can also lead to good results. The choice of how α is divided between α1 and α2 may have an effect for some cost functions, but does not seem to have a big effect in most cases.

[Figure 1: Typical PSO performance as a function of the parameters. A particle swarm was run for pairs of (ω, α) values on the rotated Rastrigin cost function; for each parameter pair the runs were repeated several times and the minimal costs after a fixed number of iterations averaged.]

Matrix formulation. In order to analyse the behaviour of the algorithm it is convenient to use a matrix formulation. Inserting the velocity update explicitly into the position equation gives

(v_{t+1}, x_{t+1})ᵀ = M (v_t, x_t)ᵀ + (α1 R1 p + α2 R2 g, α1 R1 p + α2 R2 g)ᵀ, with M = [[ωI, −(α1 R1 + α2 R2)], [ωI, I − (α1 R1 + α2 R2)]],

where I is the unit matrix of appropriate dimensions. Note that the two occurrences of R1 (and likewise of R2) refer to the same realisation of the random variable, while realisations at different time steps are different. Since the second and third terms on the right are constant in time, the analysis of the algorithm can focus on the properties of the matrix M.

In spite of its wide applicability, PSO has not often been the subject of deeper theoretical study, which may be due to the multiplicative noise in its otherwise simple dynamics; in previous studies the effect of the noise has largely been ignored in analytical results. An early exploration of the PSO dynamics considered a single particle in a space where the personal and global best locations were taken to coincide, with the random components replaced by their averages, so that apart from the random initialisation the algorithm is deterministic. Varying the parameters was shown to result in a range of periodic motions and in divergent behaviour; for this case the addition of the random vectors was seen as beneficial, as it adds noise to the deterministic search, but it does not control the velocity, requiring the enforcement of an arbitrary maximum value. Later work derived, in an analytical manner, the eigenvalues of the dynamic matrix of a simplified version of the PSO algorithm and used them to imply various search behaviours: the case of eigenvalues outside the unit circle is expected to diverge, and various cyclic motions were shown to exist in this version of the algorithm. In another study a single particle was considered in a one-dimensional problem space using a deterministic version of PSO obtained by a fixed setting of the random factors; the eigenvalues of the system, determined as functions of the combined parameters, lead to three conditions under which the particle can be shown to converge, harmonic oscillations occur, or a zigzag motion is expected.

In the preceding papers, the discussion of the random numbers in the algorithm views them purely as enhancing the search capabilities by adding a drunken walk to the particle motions; their replacement by expectation values was thus believed to simplify the analysis without loss of generality. We will show as a contribution here that the iterated use of the random factors in fact adds a level of complexity to the dynamics of the swarm and affects the behaviour of the algorithm. In one reference the random factors were given more consideration, and regions of convergence and divergence separated by a curved line were predicted; the line separating the regions, for which an equation is given there, however fails to include parameter settings that are known to lead to convergent swarms. An analytical solution of the stability problem of the swarm dynamics should explain both the parameter settings derived from the deterministic approaches and the line that one experiences in practical tests. For this purpose we formulate the PSO algorithm as a random dynamical system and present an analytical solution of the swarm dynamics in a simplified but representative case.

Critical swarm conditions for a single particle: PSO as a random dynamical system. In earlier references the dynamics of the particle swarm was studied in the single-particle case as well, which is justified because the particles interact only via the global best position: while g is unchanged, single particles exhibit qualitatively the same dynamics as the swarm. In this case we necessarily have p = g, and shift invariance allows us to set p = g = 0, which leads to the following formulation of the PSO dynamics. Extending the earlier approaches, we explicitly consider the randomness of the dynamics instead of averages: we consider the random dynamical system z_{t+1} = A_t z_t with dynamical matrices chosen from the set of matrices of the form [[ω, −R], [ω, 1 − R]] (rows indexed per component), where R is the realisation of a random diagonal matrix that combines the effects of the parameters α1 and α2. The sum of the two contributions has diagonal elements R_ii distributed as the sum of variables uniformly distributed on [0, α1] and [0, α2], so the distribution of the random variable R_ii is given by the convolution of two uniform densities.
Namely, the density of R_ii is

f(r) = r/(α1 α2) for 0 ≤ r < min(α1, α2), f(r) = 1/max(α1, α2) for min(α1, α2) ≤ r < max(α1, α2), f(r) = (α1 + α2 − r)/(α1 α2) for max(α1, α2) ≤ r ≤ α1 + α2, and f(r) = 0 otherwise;

it has a tent shape if α1 = α2 and approaches a box shape in the limit where either parameter vanishes. As the swarm does not obtain any information about the fitness function in the case considered here, we cannot expect PSO to be well represented by this simplified version in all situations; in practice, deviations from the theory may occur when p and g are different. We discuss this case, as well as the effects of the switching dynamics at the discovery of better solutions, in the final section.

Marginal stability. While the swarm does not discover new solutions, its dynamical properties are determined by the infinite product of matrices from the set above. Such products have been studied for several decades and have found applications in physics, biology, and economics; they provide a convenient way to explicitly model the stochasticity of the swarm dynamics. We claim that the performance of PSO is determined by the stability properties of this random dynamical system. Since the equation is linear, the analysis can be restricted to vectors on the unit sphere of the (v, x) state space, u = z/||z||, where ||·|| denotes the Euclidean norm. Unless α1 = α2 = 0, the set of matrices does not share eigenvectors, in which case standard stability analysis in terms of eigenvalues is not applicable; instead we use the means of the theory of random matrix products in order to decide whether the set of matrices is stochastically contractive. The properties of the asymptotic dynamics can be described based on a double Lebesgue integral over the unit sphere and the set of matrices: the effect of the dynamics is measured in logarithmic units, in order to account for the multiplicative action, by the Lyapunov exponent

λ = ∫∫ log ||A u|| dP(A) dμ(u).

If λ is negative, the algorithm converges to p = g with probability 1, although positive, arbitrarily large fluctuations are possible. The measure μ in the inner integral is the stationary distribution on the unit sphere determined by the dynamics, and it is given by the solution of an integral equation. The existence of the invariant measure requires the dynamics to be ergodic, which is ensured if at least some elements of the matrix set have complex eigenvalues; this condition excludes a small region of the parameter space at small values of ω and α, where the ergodic components have to be taken into account (there are two components due to symmetry). The stationary distribution depends on the parameters and differs strongly from a homogeneous distribution (see the figure for examples). For the case λ = 0, critical parameters are obtained as a relation between ω, α1, and α2.

[Figure: stationary distribution on the unit circle of the (v, x) plane of the system for several parameter settings; the distribution is peaked, with the main peaks highest for the largest parameter values.]

Solving the integral equation is difficult in higher dimensions, so we rely on the linearity of the system and consider a representative curve: the figure represents the solution of λ(ω, α) = 0 for the one-dimensional single-particle system. For a distribution of the random factors with smaller variance, which renders the dynamics more stable, the contour moves towards larger parameter values (see the figure). Inside the contour λ is negative, meaning the state approaches the origin with probability 1; along the contour and outside this region, large state fluctuations are possible. Interesting parameter values are expected near the curve, due to the coexistence of stable and unstable dynamics induced by different sequences of the random matrices, where a theoretically optimal combination of exploration and exploitation is possible; for specific problems, however, deviations from the critical curve can be expected to be beneficial.

The case of differing personal best and global best. Due to the linearity of the particle swarm update rule, the dynamics is subject to a scaling invariance, which we have already used above. To consider the consequences of linearity for the case where the personal best p and the global best g differ: while p and g remain unchanged, a particle with p ≠ g behaves like a particle in a swarm with coinciding best positions that is scaled as a whole by a factor related to the distance between p and g, which enters an approximation of the Lyapunov exponent only as a logarithmic correction.
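The Lyapunov exponent can also be estimated numerically by iterating the random products directly; the sketch below is a Monte Carlo estimate, assuming the single-particle, one-dimensional state (v_t, x_t) and the matrix form implied by the update rule with p = g = 0 (the sample size is an arbitrary choice).

```python
import numpy as np

def lyapunov_exponent(omega, a1, a2, n_steps=200_000):
    """Monte Carlo estimate of the top Lyapunov exponent of
    z_{t+1} = A(R_t) z_t with R_t = U(0, a1) + U(0, a2)."""
    z = np.random.randn(2)
    z /= np.linalg.norm(z)
    acc = 0.0
    for _ in range(n_steps):
        r = np.random.uniform(0, a1) + np.random.uniform(0, a2)
        A = np.array([[omega, -r],
                      [omega, 1.0 - r]])
        z = A @ z
        nz = np.linalg.norm(z)
        acc += np.log(nz)
        z /= nz            # project back onto the unit circle
    return acc / n_steps   # negative: contractive; positive: divergent
```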
iterations lead erroneous trials parameter pairs outside outer contour omitted also consider behaviour near complex irrelevant pso contour seen limit increase kpk relevant comparison theoretical stability result comparing stability results numerical simulations real optimisation problems need take account effects caused differences swarm finite runtimes optimisation benchmark functions metaheuristic algorithms often tested competition benchmark functions designed present different problem space characteristics functions contain mix unimodal basic multimodal composite functions domain functions test set defined dimensionality problem particles initialised within domain use problems throughout implementation pso performed spatial velocity clamping trials swarm particles used repeated algorithm times occasion allowing iterations pass recording best solution found swarm competition fitness evaluation allowed corresponds iterations particles iteration numbers included comparison protocol carried pairs repeated functions averaged solution costs function two parameters showed curved valleys similar fig problems function obtain different best values along near theoretical curve appears preferable location within valley individual functions yield best performance near case near although global average performance test functions better valley near near see fig figure best parameter regions blue green magenta iterations iterations region shifts towards critical line cost averaged runs cec benchmark functions red outer curve represents zero lyapunov exponent medium values difference analytical solutions cases strongest see fig simulations shows lesser extent thus revealing shortcoming approximation case often different resulting vector smaller norm case case violates assumption theory dynamics described based unit vectors particle far away behave predicted case length scales smaller retractive forces tend reduced inertia becomes effective particle locally less stable shows numerically optimal parameters smaller predicted discussion relevance criticality analytical approach predicts locus pairings maintain critical behaviour pso swarm outside line swarm diverge unless steps taken constrain inside swarm eventually converge single solution order locate solution within search space swarm needs converge point line represents upper bound mix swarm manifests parameters critical line fluctuations still arbitrary large therefore subcritical parameter values preferable settling time order scheduled runtime algorithm addition typical length scale problem known finite standard deviation particles stable parameter region used decide distance parameter values critical curve dynamical quantities approximately set based theory presented precise control behaviour algorithm principle possible observation distribution empirically optimal parameter values along critical curve confirms expectation critical behaviour main reason success algorithm critical fluctuations plausible tool search problem apart certain smoothness assumption nothing known cost landscape majority excursions exploit smoothness cost function local search whereas fat tails distribution allow particles escape local minima figure define neutral stability equilibrium divergence convergence convergence means particle approaches line connecting curves problem scaled see sect outer curve inner curve results iterations averaged repetitions switching dynamics discovery better solutions shows discovery better solution affects constant terms linear dynamics 
Switching dynamics at the discovery of better solutions. The update equation shows that the discovery of a better solution affects only the constant terms of the linear dynamics of a particle, whereas the dynamical properties are governed by the coefficient matrices. However, at the time step where a particle has found a new solution, the corresponding force term of the dynamics is zero, so the particle dynamics slows down compared to the theoretical solution, which assumes a finite distance to the best position. This usually affects one particle at a time, and new discoveries tend to become rarer with time, so the effect is small for the asymptotic dynamics, although it could justify the empirical optimality of parameters in the unstable region for some of the test cases in question. Nevertheless, as such changes occur more often in a weakly converging swarm, such a swarm can still produce good results: it more often discovers better solutions by means of fluctuations than it would after settling at the current best position. For cost functions that are deceptive, where local optima tend to be near better optima, parameter values far inside the critical contour may give good results, as less exploration is needed.

Role of personal best and global best. A numerical scan of the (α1, α2) plane shows a valley of good fitness values which, for small fixed positive ω, is roughly linear and described by a relation of the form α1 + α2 ≈ const, i.e. only the joint parameter matters. For large ω, and accordingly small predicted optimal α values, the valley is less straight, which may be an effect of the known solutions exerting a relatively weak interaction, so that the difference between the two components becomes important. In other words, when the movement of the particles is mainly due to inertia, the relation between global and local best matters; for low inertia the particles adjust their velocity vectors quickly towards the combined attraction vector, and the two terms become interchangeable. Finally we mention that more particles and a longer runtime, as well as a lower search-space dimension, increase the potential for exploration and lead to empirically determined optimal parameters closer to the critical curve.

Conclusion. PSO is a widely used optimisation scheme which is theoretically not well understood; existing theory concentrates on a deterministic version of the algorithm which does not possess the useful exploration capabilities of PSO. We have studied the algorithm by means of products of random matrices, which allows us to predict useful parameter ranges and may allow precise settings when a typical length scale of the problem is known. A weakness of the current approach is its focus on the standard PSO, which is known to include biases that are not necessarily justifiable and is outperformed on the benchmark set and in practical applications by many existing PSO variants; similar analyses for these variants are certainly possible and expected to be carried out. Even though the field of metaheuristic search is often portrayed as largely inert to theoretical advances, the dynamics of particle swarms is now better understood, and such algorithms may become useful as efficient particle filters in many applications beyond heuristic optimisation.

Acknowledgments. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC).

References
J. Kennedy, R. Eberhart. Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks. IEEE.
R. Poli. Analysis of the publications on the applications of particle swarm optimisation. Journal of Artificial Evolution and Applications.
J. Kennedy. The behavior of particles. In Porto, Saravanan, Eiben (eds.), Evolutionary Programming VII. Springer.
M. Clerc, J. Kennedy. The particle swarm: explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation.
I. C. Trelea. The particle swarm optimization algorithm: convergence analysis and parameter selection. Information Processing Letters.
M. Jiang, Y. Luo, S. Yang. Stagnation analysis in particle swarm optimization. Swarm Intelligence Symposium (SIS). IEEE.
C. Cleghorn, A. Engelbrecht. A generalized theoretical deterministic particle swarm model. Swarm Intelligence.
H. Furstenberg, H. Kesten. Products of random matrices. Annals of Mathematical Statistics.
V. Tutubalin. On limit theorems for the product of random matrices. Theory of Probability and Its Applications.
R. Khas'minskii. Necessary and sufficient conditions for the asymptotic stability of linear stochastic systems. Theory of Probability and Its Applications.
M. Clerc. Confinements and biases in particle swarm optimisation. Technical report, Open Archive HAL.
W. Spears, D. Green, D. Spears. Biases in particle swarm optimization. International Journal of Swarm Intelligence Research.
| 9 |
Detection of low rank matrices in the high-dimensional regime

Antoine Chevreuil, Philippe Loubaton
Gaspard Monge Computer Science Laboratory (LIGM), UMR CNRS, Descartes, France

Abstract. We address the detection of a low rank n × N deterministic matrix from a noisy observation, the noise being a complex Gaussian random matrix with independent identically distributed entries. Thanks to large random matrix theory results on what the largest singular value verifies, it is possible to exhibit a consistent test when the strength of the low rank matrix is above a threshold; our contribution is to prove that, a contrario, below this condition no consistent test exists. The proof is inspired by previous works devoted to the case of rank one matrices. Index terms: statistical detection tests, large random matrices, large deviation principle.

Introduction. The problem of testing whether an observed matrix is either an independent identically distributed Gaussian random matrix or the sum of such a matrix and a low rank deterministic matrix with known structure (called also a spike) is a fundamental problem arising in numerous applications, such as the detection of multivariate signals or the Gaussian hidden clique problem. The two dimensions converge towards infinity in such a way that their ratio converges, the rank remaining fixed. Known results on additive spiked large random matrix models enable the following picture (see the references therein), established a long time ago: in this asymptotic regime the largest singular value of the noise matrix converges almost surely towards the bulk edge, and, as was proved more recently under mild technical extra assumptions, the largest singular value of the observation still converges towards this limit if the largest singular value of the spike is strictly less than a threshold, while it converges towards a limit strictly greater than the bulk edge if the spike is strictly greater than the threshold. This result implies that the generalized likelihood ratio test (GLRT) is consistent, i.e. the probability of false alarm and the probability of missed detection both converge towards zero in the asymptotic regime, above the threshold. In order to simplify the exposition, we assume that the ratio of the dimensions converges to one, for which the threshold reduces to a simple value.

The detection problem has been extensively addressed in the zone above the threshold; the case below is much less studied. Montanari et al. consider this zone for a rank one matrix and, thanks to simple information geometry tools, prove that in a certain region it is impossible to find a consistent test for the detection of the spike, irrespective of standard random matrix tools. The approach was extended to the general case of tensors of order d: namely, if the Frobenius norm of the tensor is strictly less than a threshold depending on d, the probability distributions of the observation under the two hypotheses are asymptotically indistinguishable, and any detection test cannot behave better than a random guess, a property stronger than the non-existence of a consistent test. This stronger property does not hold in the matrix case, where (see for instance the cited work) a test was exhibited with better performance than a random guess. In this paper we extend this methodology to the general case of rank k: our contribution is to prove that below the threshold consistent detection is impossible. This theoretical result is not unexpected, but we believe it provides a better understanding of the fundamental detection problem in large dimensions without resorting to the machinery of large random matrices. We also mention works on the spiked symmetric case, which are clearly related to our problem; however, two major differences arise. Firstly, it is not detection that is addressed there but rather estimation; second, a statistical model of the spike is needed, and the results, in general not explicit, can however, for a certain prior in the rank one model, be used to deduce the zone where it is impossible to find estimates of the spike with better performance than a dummy estimate that does not rely on the observation. These authors rely on the computation of mutual information, a computation that involves results extending the approach of Talagrand for studying such models.

Model. Notation and assumptions. Let E be the set of n × N matrices with complex entries, endowed with the standard scalar product and the Frobenius norm ||X||_F; the spectral norm of a matrix X is denoted ||X||. The spike (the signal) is assumed to be a matrix of fixed rank k; hence it admits an SVD with singular values sorted in descending order, gathered in a diagonal matrix Λ. We impose a behavior on the singular values, namely that they do not depend on N for N large enough; this hypothesis could be replaced by the condition that they converge towards finite limits at an ad hoc rate, which would however introduce purely technical difficulties. The noise matrix is assumed to have i.i.d. complex Gaussian entries. We consider the alternative hypothesis (spike plus noise) versus the null hypothesis (pure noise); we denote by P0 and P1 the corresponding probabilities, by p0 and p1 their densities, by L = p1/p0 the likelihood ratio, and by E0 and E1 the expectations under the two hypotheses.
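The spectral phase transition underlying the consistent test above the threshold is easy to visualize by simulation; the sketch below is a minimal illustration under our assumed normalization (noise entries of variance 1/n, all spike singular values set to `snr`), where for square matrices the top singular value separates from the bulk edge 2 only when the spike strength exceeds 1.

```python
import numpy as np

def largest_sv(m, n, snr, rank=1):
    """Largest singular value of Y = P + W / sqrt(n), with W an i.i.d.
    standard complex Gaussian matrix and P a rank-`rank` spike whose
    singular values all equal snr."""
    W = (np.random.randn(m, n) + 1j * np.random.randn(m, n)) / np.sqrt(2)
    U = np.linalg.qr(np.random.randn(m, rank))[0]   # random left factors
    V = np.linalg.qr(np.random.randn(n, rank))[0]   # random right factors
    Y = snr * (U @ V.T) + W / np.sqrt(n)
    return np.linalg.svd(Y, compute_uv=False)[0]

# Below the threshold the top singular value stays at the bulk edge;
# above it, it detaches, which is what the GLRT exploits.
print(largest_sv(2000, 2000, snr=0.5), largest_sv(2000, 2000, snr=1.5))
```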
We now recall the fundamental information-geometry results used in order to address the detection problem. The following properties are well known (see also the cited section): (i) if E0[L²] is bounded, no consistent detection test exists; (ii) moreover, if E0[L²] converges to one, the total variation distance between the two distributions converges towards zero and any test performs no better than a random decision; (iii) we mention, however, that it is only under sufficient conditions, in particular ones on higher moments, that an unbounded E0[L²] can imply the existence of consistent tests.

Prior on the spike and expression of the second-order moment. The likelihood ratio L can be seen, as a function of the observation, as an exponential of the correlation with the spike minus half its squared Frobenius norm. On the one hand, notice that the study of E0[L²] is not suited to the deterministic model of the spike presented previously: indeed, in this case one has the simple expression E0[L²] = exp(||X||²_F scaled by the dimension), which always diverges. On the other hand, the noise matrix exhibits an invariance property with respect to unitary matrices: the density of the noise conjugated by unitary matrices equals the original density. Hence we modify the data according to the following procedure: pick two independent unitary matrices according to the Haar measure (which corresponds to the uniform distribution on the set of unitary matrices) and change the data by these unitary factors. As said, this does not affect the distribution of the noise, and it amounts to assuming a certain prior on the spike: indeed, it amounts to replacing the data and noise matrices in the two hypotheses by their unitarily transformed versions, the procedures still being denoted P0 and P1 respectively. We are then in a position to give the expression of the moment E0[L²] as a mathematical expectation over the prior distribution of the spike, equivalently over the Haar matrices. It holds that E0[L²] is the expectation of the exponential of N times the real part of the scalar product of two independent copies of the spike, where Re stands for the real part. With the two copies associated with independent Haar-distributed unitary matrices, it holds that E0[L²] = E exp(N Re tr(Λ G* Λ H)), up to the factorization below. The ultimate simplification comes from a decomposition which implies that in this expression it is clear that G and H are independent k × k matrices, each distributed as the upper-left diagonal block of a Haar unitary matrix.

Result. Our main result and contribution is the following theorem.

Theorem 1. If lim sup of the largest singular value of the spike is below the threshold, it is not possible to find a consistent test.

We remind that we are looking for a condition under which E0[L²] is bounded. Due to the exponential, divergence may evidently occur; hence we consider a splitting of E0[L²] into two expectations and prove that, for a certain ε small enough to be specified later, each term is bounded.

Computation of the GRF. It is clear that the boundedness of the integral is achieved if Re tr(Λ G* Λ H) rarely deviates from zero. As remarked, the natural machinery to consider is a large deviation principle (LDP). In essence, the variable follows an LDP with rate N if there can be found a certain function, called the good rate function (GRF), such that for every Borel set B, (1/N) log of the probability of B converges towards minus the infimum of the GRF over B. The existence of a GRF allows one to analyze the asymptotic behaviour of the integral; the next section thus justifies that the variable follows a large deviation principle with rate N and computes the associated GRF. The norm inequalities imply that the random variables involved are bounded. First recall that the block G follows an LDP with rate N and a GRF parameterised by log det(I − GG*) (see the cited theorem); besides, the map from the pair of matrices to Re tr(Λ G* Λ H) is continuous, therefore the contraction principle applies (see the cited theorem), and it ensures that Re tr(Λ G* Λ H) follows an LDP with rate N and a GRF given, for real t, by the solution of the following optimization problem.

Problem 1. Maximize log det(I − GG*) + log det(I − HH*) under the constraints Re tr(Λ G* Λ H) = t, ||G|| ≤ 1, ||H|| ≤ 1.

We provide the solution of Problem 1 with respect to t. Define a family of intervals indexed by the singular values; it is easy to check that they are disjoint. The following result holds.

Theorem 2. The maximum of Problem 1 is given, on each of the intervals, by an explicit logarithmic expression in t and the singular values of Λ. It is easy to check that the resulting function is continuous.
top block obviously standard result random variable distributed concentrated around mean easily extended matrix lemma exists constant exp take independent distributed consider upper blocks follows distribution take may split integral two parts exp exp defined events thanks concentration result exp exp exp always possible choose follows let inspect term since exist hence exp expand sum four terms take instance thanks von neumann lemma ppr yields invoking von neumann lemma three times holds similar manipulations done terms expansion less exp expectation understood expectation independent consider first expectation gives factor exp exp always possible choose integral exp finally obtain multiplying exp taking expectation exp always possible adjust integral converges condition must true arbitrarily small hence result ppendix prove theorem function maximized converges towards argument maximization problem satisfies therefore kkt conditions imply existence scalar lagrange multiplier stationary point lagrangian defined log det rtr real valued function stationary point computedwhen setting differential entries zero checked stationary point first step equations shown satisfied diagonal permutations columns deduced exists diagonal matrix matrix permutation log det log det log det rtr invites consider following problem maximize log det jointly permutations diagonal matrices verifying constraint first step set problem consider problem maximize log constraints maximum denoted variant celebrated problem see chap solved evaluate capacity frequency selective gaussian channel difference latter problem log replaced log order solve problem assume non zero singular values distinct case standard perturbation argument used order address general case function maximized strictly concave set defined constraints maximum reached verifying unique point consider lagrangian corresponding problem given log partial derivatives parameters zero leads first remark necessarily equations imply numbers sorted decreasing order verify claim assume holds implies therefore contradiction denote number entries hence first entries non zero morever equations imply analytically characterize one hand computed imply hand constraint imposes verifies therefore holds coincides integer see definition intervals maximum log direcly computed log order show grf remains show solution problem reached permutation matrix identity respect introduce nested problem motivated following observation denote vectors whose components respectively diagonal entries arranged decreasing order evidently majorizes sense thus consider relaxed problem problem maximize log det diagonal matrices vectors satisfying majorization constraint equality constraint maximum problem maximum problem maximum problem actually show maximum problem less reached vector coincides imply optimal permutation problem give elements solving problem consider stationary point associated lagrangian compute kkt conditions suppose stationary point attains maximum denotes number components prove necessarily let index exists otherwise problem solved implies first index indices notice fact suppose condition true whatever possible add small update way majorization constraints still hold constraint holds updated increases function maximize ispin contradiction definition means exists index choose smallest shown necessary equal algebraic gymnastics shown case inequalities saturated hence implying value log equals eferences bai silverstein spectral analysis large dimensional random matrices series 
statistics banks moore vershynin verzelen bounds phase transitions clustering sparse pca submatrix localization ieee international symposium information theory isit pages june florent raj rao nadakuditi singular values vectors low rank perturbations large rectangular random matrices journal multivariate analysis bianchi debbah najim performance statistical tests single source detection using random matrix theory ieee transactions information theory chevreuil loubaton spiked large random tensors arxiv cover thomas elements information theory edition wiley interscience dembo zeitouni large deviations techniques applications berlin heidelberg fabrice gamboa alain rouault spectral measures large deviations stat planning inference marc lelarge miolane fundamental limits symmetric matrix estimation miolane fundamental limits matrix estimation case mirsky trace inequality john von neumann monatshefte mathematik dec andrea montanari daniel reichman ofer zeitouni limitation spectral methods gaussian hidden clique problem rank one perturbations gaussian tensors ieee trans inf march nadakuditi edelman sample eigenvalue based detection signals white noise using relatively samples ieee transactions signal processing onatski moreira hallin asymptotic power sphericity tests data ann statistics michel talagrand mean field models spin glasses book subtitle volume basic examples berlin heidelberg witsenhausen determinant maximization problem occuring theory data communications siam appl math
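The concentration step in the record above leans on a standard fact: for fixed k and large n, the top-left k x k block of an n x n Haar-distributed unitary matrix, rescaled by sqrt(n), behaves like a matrix of i.i.d. standard complex Gaussians. Below is a minimal Monte Carlo sketch of that fact — not the paper's code; the sizes n, k and the sample count are illustrative assumptions.

```python
# Minimal Monte Carlo check (illustrative sizes, not the paper's code):
# for fixed k and large n, sqrt(n) times the top-left k x k block of an
# n x n Haar unitary is approximately i.i.d. standard complex Gaussian.
import numpy as np
from scipy.stats import unitary_group, kstest

np.random.seed(0)
n, k, n_samples = 200, 3, 500

blocks = np.stack([np.sqrt(n) * unitary_group.rvs(n)[:k, :k]
                   for _ in range(n_samples)])
z = blocks.ravel()

print("mean   :", z.mean())                 # ~ 0
print("E|z|^2 :", np.mean(np.abs(z) ** 2))  # ~ 1
print("E[z^2] :", np.mean(z ** 2))          # ~ 0 (circular symmetry)
# the real part of a standard complex Gaussian is N(0, 1/2):
print(kstest(z.real, "norm", args=(0.0, np.sqrt(0.5))))
```

Rare deviations of the block from this Gaussian behaviour are exactly what the exponential concentration lemma invoked above controls, which is why the integral can be split over the "good" and "bad" events.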
| 10 |
recurrent neural network language models open vocabulary cyber anomaly detection aaron ryan nicolas brian nicole robert pacific northwest national laboratory richland washington western washington university bellingham washington abstract automated analysis methods crucial aids monitoring defending network protect sensitive confidential data hosts work introduces flexible powerful unsupervised approach detecting anomalous behavior computer network logs one largely eliminates feature engineering employed existing methods treating system logs threads interleaved sentences event log lines train online unsupervised neural network language models approach provides adaptive model normal network behavior compare effectiveness standard bidirectional recurrent neural network language models detecting malicious activity within network log data extending models introduce tiered recurrent architecture provides context modeling sequences users actions time compared isolation forest principal components analysis two popular anomaly detection algorithms observe superior performance los alamos national laboratory cyber security dataset red team detection best performing model provides test set area receiver operator characteristic curve demonstrating strong anomaly detection performance approach open vocabulary logging sources introduction minimize cyber security risks essential organizations able rapidly detect mitigate malicious activity computer networks threats originate variety sources including malware phishing port scanning etc attacks lead unauthorized network access perpetrate damage theft credentials intellectual property business sensitive information typical scenario cyber defenders network administrators tasked sifting vast amounts data various logging sources assess potential security risks unfortunately amount data even network quickly grow beyond ability single person team assess leading delayed response desire automated assistance continues encourage research cyber security machine learning approaches automated detection highly effective characterizing individual threats spite high precision suffer low recall may fail detect subtle mutations novel attacks alternatively given unlabeled training set typically benign activity logs one build model normal behavior online joint training evaluation model patterns normal usage reinforced atypical malicious activity stand anomalous features used identify unusual behavior typically statistical feature vectors associated time slices vectors counts types activities taking place window systems developed research criticized brittle differences properties operational networks security constraints variable usage patterns sommer paxson approach introduce aims minimize assumptions implicit feature engineering effectively model variability network usage direct online learning language models log lines language models assign probabilities sequences tokens core component speech recognition machine translation language processing systems specifically explore effectiveness several recurrent neural network rnn language models use network anomaly detection system system dynamically updates network language model day based previous day events language model assigns low probability flagged anomalous several advantages approach reduced feature engineering model acts directly raw string tokens rather domainspecific statistics dramatically reduces time deployment makes agnostic specific network logging source configuration also removes blind spots introduced tens 
thousands distilled single aggregated feature vector allowing model capture patterns would otherwise lost fine grained assessment response time analysts improved providing specific relevant events interest baseline systems alert user day aggregate require sifting tens thousands actions approach provide even scores analyst helping quickly cate suspicious activity real time processing ability process events real time fixed bounds memory usage grow time approach suitable common scenario events appearing log stream assess models using publicly available los alamos national laboratory lanl cyber security dataset contains real data ground truth red team attacks demonstrate language models definitively outperforming standard unsupervised anomaly detection approaches prior work machine learning widely explored network anomaly detection techniques isolation forest gavai liu ting zhou principal component analysis novakov ringberg attracting significant interest machine learning classifiers ranging decision trees bayes used cyber security tasks malware detection network intrusion insider threat detection extensive discussion machine learning applications cyber security presented bhattacharyya kalita buczak guven dua kumar kumar sachdeva zuech khoshgoftaar wald lawson heard deep learning approaches also gaining adoption specialized cyber defense tasks early use recurrent neural networks debar becker siboni model sequences unix shell commands network intrusion detection anomaly detection demonstrated using deep belief networks kdd cup dataset alrawashdeh purdy bivens use perceptrons darpa dataset approaches use aggregated features synthetic network data tuor veeramachaneni employ deep neural network autoencoders unsupervised network anomaly detection using time aggregated statistics features works note previously published lanl data turcotte heard kent develop online statistical model anomaly detection network activity using models similarly turcotte use poission factorization gopalan hofman blei lanl authentication logs authentication count matrix constructed assuming count comes poisson distribution parameterized latent factors users computers learned distributions used predict unlikely authentication behavior several variants tiered recurrent networks explored machine learning natural language processing communities koutnik ling ling chung often realized lower tier network whose output fed upper tier network separate tiers jointly trained ling use convolutional neural network feed word level long memory lstm rnn machine translation predictions made hwang sung ling use lstm feed second word lstm language modeling pascanu create activity models real world data command basis sequences system calls modeled using rnn echo state networks learned features used independently train neural network logistic regression classifiers max pooling applied hidden layers unsupervised rnn time step session result concatenated final hidden state produce feature vectors classifier similar tiered approach use average hidden states concatenated final hidden state input rnn contrast model completely unsupervised components jointly trained approach approach learns normal behavior users processing stream computer network follows initialize model weights randomly day chronological order given model produce anomaly scores events day optionally produce aggregated anomaly score user day scores send anomaly scores rank order analysts inspection update model weights minimize loss day yielding model methodology interleaves detection 
training online fashion section detail components approach tokenization work directly arbitrary log formats treat loglines sequences tokens work consider two tokenization granularities word tokenization assume tokens logline delimited known character space comma splitting delimiter define shared vocabulary words log fields consisting tokens appearing training set allow model handle previously unseen tokens add vocbulary token vocabulary oov instance every address represented training set likewise new pcs users continually added large networks ensure oov probability replace sufficiently infrequent tokens training data oov evaluation tokens seen labeled oov order accommodate shifting word distributions online environment fixed size vocabulary could periodically updated using sliding window word frequency statistics simplicity assume fixed training set produce fixed vocabulary avoid challenges managing vocabulary also develop language models using characterlevel tokenization case primitive vocabulary alphabet printable ascii characters circumvents open vocabulary issue ability represent log entry irrespective network logging source log field tokenization keep delimiter token sequence provide models cues transitions fields recurrent neural network language models produce anomaly scores use recurrent neural networks two ways language model individual model state user time first present two recurrent models focus tiered model accomplishes experiments using tensorflow abadi event model first consider simple rnn model operates token word sequences individual events specifically consider long memory lstm hochreiter schmidhuber network whose inputs token embeddings whose output predict distributions next token tokens drawn shared vocabulary size let denote sequence representations tokens model hidden representation token make predictions function according usual lstm equations tanh tanh initial hidden cell states set zero vectors denote multiplication logistic sigmoid respectively vector hidden representation based current input previous hidden state vectors standard lstm gates matrices bias vectors model parameters use produce probability distribution token time follows softmax code soon available https softmax softmax softmax softmax lstm lstm lstm lstm sos lstm lstm lstm lstm eos figure event models set black bordered nodes connections illustrate model set nodes connections illustrate bem model use loss two important purposes first anomaly score second training objective update model weights train model using stochastic time bidirectional event model bem following language model formulation suggested schuster paliwal alternatively model structure log lines bidirectional lstm define new set hidden vectors running lstm equations backwards time starting initial zero cell hidden states time set zero weights biases backward lstm denoted superscript probability distribution token time softmax tiered event models incorporate context propose recurrent neural network either event model bem additional input context vector generated concatenated token embedding time step input model hidden states model upper tier models dynamics user behavior time producing context vectors provided rnn model illustrated fig model denotes user jth log line consists sequence tokens described previous subsections models sequence user log lines using lstm user log line user log line sequence lstm applied tokens input model concatenation final hidden state average hidden states case context lstm mean lstm lstm final lstm sos 
context lstm eos mean lstm sos lstm final lstm eos figure tiered event model refers hidden state time bem concatenation forward hidden state time backward hidden state time average hidden states primarily provide many connections lstm aids trainability output lstm hidden state hidden vector serves provide context model next time step specifically concatenated inputs model operating jth note model serves propagate context information across individual loss computed directly values produced model models trained jointly minimize loss model unroll model fixed number fully unrolling models within window model loss also used detect anomalous behavior described section minibatching becomes challenging tiered model number per day vary dramatically users poses two problems first introduces possibility active users may disproportionate impact model weights second means toward end day may enough users fill minibatch counteract first problem fix number per user per day model train remaining used gradient updates leave compensating inefficiency results second future work baselines anomaly detection streaming network logs often relies upon computing statistics windows time applying anomaly detection techniques vectors describe aggregate features two anomaly detection techniques typical prior work aggregate features first define set features summarize users activities day aggregate features small number distinct values logon orientation count number occurrences distinct value fields larger number distinct values pcs users domains count number common uncommon events occurred rather number occurrences distinct value approach avoids high dimensional sparse features furthermore define two categories individual relative users value defined uncommon user accounts fewer values observed field point time common otherwise value defined uncommon users occurs fewer times average value field common otherwise lanl dataset prior featurization strategy yields aggregate feature vector per userday feature vectors serve input baseline models described next models consider two baseline models first uses principal components analysis pca learn low dimensional representation aggregate features anomaly score proportional reconstruction error mapping compressed representation back original dimension shyu second isolation forest iso based approach liu ting zhou implemented outlier detection tools pedregosa noted best performing anomaly detection algorithm recent darpa insider threat detection program gavai experiments section describe experiments evaluate effectiveness proposed event modeling algorithms data los alamos national laboratory lanl cyber security dataset kent consists event logs lanl internal computer network collected period consecutive days data set contains one billion loglines authentication process network flow dns logging sources identifying fields users computers processes anonymized recorded network activities included normal operational network activity well series red team activities compromised account credentials first days data information known red team attack events used evaluation approach strictly unsupervised experiments presented paper rely authentication event logs whose fields statistics summarized figure filter events linked actual user removing computercomputer interaction events events weekends holidays contain drastically different frequencies distributions activities real deployment separate model would trained use days malicious events data also withheld table statistics data split first 
days serve development set remaining days independent test set assessment granularity model learns normal behavior assigns relatively high loss events unexpected principal advantage approach ability score anomaly field time source user dest user source dest auth type logon type auth orient success example negotiate batch logon success unique labels model pca iso bem bem days events attacks dev test figure dataset statistics authentication log fields statistics dataset splits metrics consider two performance metrics first assess results using standard area receiver operator characteristic curve auc characterizes model detection performance true positives false positives effectively sweeping possible analyst budgets false positives detections truly red team events true positives detections quantify proportion data analyst must sift diagnose malicious behavior network use average percentile metric specifically red team event note percentile anomaly amongst anomaly scores day average percentiles malicious events auc table granularity test set auc language model anomaly scores calculated average userday normalization diff individual events allowing flag aggregate anomalies larger timescale work consider two timescales first assess based individual events list events would presented analyst sorted descending anomaly score second facilitate comparison traditional aggregation methods aggregate anomaly scores user events day specifically taking max producing single anomaly score scenario list would provided analyst sorted descending anomaly score refer approach max anomaly scores provided analyst produced taking maximum score event scores window user scoring taking max singleton set one event order counter systematic offsets users anomaly scores day also consider simple normalization strategy refer diff every raw score first normalized subtracting user average anomaly score day tokenization word word word word char char char char bem bem log diff max day diff max table comparison auc analysis without normalization figures provide visualization results note true malicious events flagged anomalous respective days malicious events ranked least anomalous respective days auc higher score better model hyperparameters manually tuned maximize diff scores development set separate training set needed approach unsupervised trained online results analysis begin exploring granularity performance table summarizes model detection performance granularity test set auc metrics using diff method produce day level scores language models trends evident results first aggregate feature baselines nearequivalent performance metrics isolation forest approach slight edge hypothesize feature representation common methods could bottleneck performance highlights blind spot issue feature engineering introduces second despite context single time opposed features aggregated entire day event model performs comparably baseline models forward pass lstm network used auc auc diff max log day log day bem log day log day diff max log day log day bem log day log day figure word model comparison auc granularities figure character model comparison auc granularities character tokenization outperforms baselines word tokenization pronounced performance gain results using bidirectional models finally tokenization performs better however even bidirectional character models perform appreciably better baselines clear results tiered models perform comparably better models suggests event level model able characterize normal user behavior information 
stored model weights network trained day model user activity given context past day activity stored model weights categorical variables represented fields individual log line may eliminate need explicit event context modeling leave tracking state individual computers rather users future work hypothesize may make tiered approach effective next broaden analysis language modeling approaches comparing performance across language models tokenization strategies anomaly granularity normalization techniques figure plots auc language model types using word tokenization contrasting max diff normalization modes figure compares variations character tokenization table presents results tabular form exceptions granularity vastly outperforms true tokenization strategies average gain auc interesting outcome comparisons word tokenization performance gains heavily reliant diff normalization whereas character tokenization diff normalization minor detrimental effect models suggests model could used provide immediate response time wait day done obtain day statistics used diff mode two tokenization strategies may fact complementary versatility response time gains character tokenization come expense easy interpretibility word tokenization word tokenization allows anomaly scores decomposed individual fields enabling analysts pinpoint features event contributed flagged since tuned hyperparameters using diff mode model potential better additional tuning next figures visualize average percentiles red team detections subset test set activity anomaly scores word character tokenizations computed without average userday offset normalization red team scores plotted purple coordinate second time event occurred coordinate anomaly score event percentile ranges colored provide context anomaly scores backdrop network activity spread anomaly scores much greater tokenizations fig fig could explain different sensitivity word level tokenization normalization also notice expected bump percentiles windows frequent redteam activity curiously end day massive bumps percentile suggest unplanned anomalous events lanl network hours notice character tokenization almost red team anomaly scores percentile large proportion percentile finally figure plots roc curves best aggregate baseline iso best granularity language model word bem best granularity model character bem illustrates qualitatively different curves obtained baselines granularity granularity since proportion normal events vanishingly low rate effectively proportion data flagged achieve particular recall observation figure shows character event model achieve recall data whereas models considered achieve recall nearly data percentile percentile anomaly score figure anomaly scores relation percentiles time true positive iso agg auc bem day auc bem auc false positive figure roc curves best performing baseline word language model evaluated character language model evaluated handed analyst character event model achieve recall flagging data whereas word day language model needs data aggregate isolation forest model needs data achieve result conclusion work builds upon advances language modeling address computer security log analysis proposing unsupervised online anomaly detection approach eliminate usual feature engineering stage making approach fast deploy agnostic system configuration monitoring tools confers key advantage detection allows near immediate alert response following anomalous activity experiments using los alamos national laboratory cyber security dataset bidirectional 
language models significantly outperformed standard methods figure anomaly scores relation percentiles time detection best detection performance achieved bidirectional language model obtaining area roc curve showing constrained language domain network logs character based language modeling achieve comparable accuracy word based modeling event level detection therefore demonstrated simple effective approach modeling dynamic networks open vocabulary logs new users pcs addresses propose extend work several ways first potential modeling advantages tiered architectures merit investigation use tiered architectures track pcs instead network users richer set logging sources simply authentication logs may take better advantage modeling power next anticipate interpretability become lost detailed granularity provided detection characterbased model therefore future work explore alternate methods providing context analyst finally interested exploring robustness approach adversarial tampering similarly performing models could different levels resilience would lead selection one another acknowledgments research described paper part analysis motion initiative pacific northwest national laboratory conducted laboratory directed research development program pnnl national laboratory operated battelle department energy authors would also like thank nvidia corporation donations titan titan gpus used research references abadi abadi agarwal barham brevdo chen citro corrado davis dean devin ghemawat goodfellow harp irving isard jia jozefowicz kaiser kudlur levenberg monga moore murray olah schuster shlens steiner sutskever talwar tucker vanhoucke vasudevan vinyals warden wattenberg wicke zheng tensorflow largescale machine learning heterogeneous systems software available alrawashdeh purdy alrawashdeh purdy toward online anomaly intrusion detection system based deep learning machine learning applications icmla ieee international conference ieee bhattacharyya kalita bhattacharyya kalita network anomaly detection machine learning perspective crc press bivens bivens palagiri smith szymanski embrechts networkbased intrusion detection using neural networks intelligent engineering systems artificial neural networks buczak guven buczak guven survey data mining machine learning methods cyber security intrusion detection ieee communications surveys tutorials chung chung gulcehre cho bengio gated feedback recurrent neural networks international conference machine learning debar becker siboni debar becker siboni neural network component intrusion detection system proc ieee symposium research security privacy dua dua data mining machine learning cybersecurity crc press gavai gavai sricharan gunning hanley singhal rolleston supervised unsupervised methods detect insider threat enterprise social online activity data journal wireless mobile networks ubiquitous computing dependable applications gopalan hofman blei gopalan hofman blei scalable recommendation poisson factorization arxiv preprint hochreiter schmidhuber hochreiter schmidhuber long memory neural computation hwang sung hwang sung language modeling hierarchical recurrent neural networks arxiv preprint kent kent cyber security data sources dynamic network research dynamic networks koutnik koutnik greff gomez schmidhuber clockwork rnn arxiv preprint kumar kumar sachdeva kumar kumar sachdeva use artificial intelligence based techniques intrusion detection review artificial intelligence review ling ling marujo astudillo amir dyer black trancoso finding function form 
compositional character models open vocabulary word representation arxiv preprint ling ling trancoso dyer black neural machine translation arxiv preprint liu ting zhou liu ting zhou isolation forest proc icdm novakov novakov lung lambadaris seddigh studies applying pca wavelet algorithms network traffic anomaly detection high performance switching routing hpsr ieee international conference ieee pascanu pascanu stokes sanossian marinescu thomas malware classification recurrent networks acoustics speech signal processing icassp ieee international conference ieee pedregosa pedregosa varoquaux gramfort michel thirion grisel blondel prettenhofer weiss dubourg vanderplas passos cournapeau brucher perrot duchesnay machine learning python journal machine learning research ringberg ringberg soule rexford diot sensitivity pca traffic anomaly detection sigmetrics lawson heard rubindelanchy lawson heard anomaly detection cyber security applications dynamic networks schuster paliwal schuster paliwal bidirectional recurrent neural networks ieee transactions signal processing shyu shyu chen sarinnapakorn chang novel anomaly detection scheme based principal component classifier proc icdm sommer paxson sommer paxson outside closed world using machine learning network intrusion detection proc symposium security privacy tuor tuor kaplan hutchinson nichols robinson deep learning unsupervised insider threat detection structured cybersecurity data streams artificial intelligence cybersecurity workshop aaai turcotte turcotte moore heard mcphall poisson factorization anomaly detection intelligence security informatics isi ieee conference ieee turcotte heard kent turcotte heard kent modelling user behavior network using computer event logs dynamic networks veeramachaneni veeramachaneni arnaldo korrapati bassias training big data machine defend proc hpsc ids zuech khoshgoftaar wald zuech khoshgoftaar wald intrusion detection big heterogeneous data survey journal big data
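Two of the ingredients described in the record above are easy to make concrete. First, the bidirectional event model (BEM): each log line is a token sequence bracketed by start/end tokens, token t is predicted from the forward hidden state at t-1 together with the backward hidden state at t+1, and the per-event anomaly score is the resulting average cross-entropy. The sketch below uses PyTorch rather than the TensorFlow pipeline the authors mention, and the layer sizes plus the exact bidirectional indexing are assumptions, not the published architecture.

```python
# Sketch of a bidirectional event model (PyTorch; hypothetical sizes and
# one common bidirectional-LM indexing -- not the published architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BidirEventModel(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, vocab_size)

    def forward(self, tokens):                # tokens: (batch, seq),
        h, _ = self.lstm(self.emb(tokens))    # seq includes <sos>/<eos>
        d = h.size(-1) // 2
        fwd = h[:, :-2, :d]                   # forward state at t - 1
        bwd = h[:, 2:, d:]                    # backward state at t + 1
        logits = self.out(torch.cat([fwd, bwd], dim=-1))
        targets = tokens[:, 1:-1]             # predict interior tokens
        loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               targets.reshape(-1),
                               reduction="none").view(targets.shape)
        return loss.mean(dim=1)               # per-event anomaly score

toks = torch.randint(0, 50, (4, 12))          # 4 toy events, 12 tokens each
print(BidirEventModel(vocab_size=50)(toks))   # one score per event
```

Second, the score post-processing: the "diff" mode removes systematic per-user offsets (read here as subtracting each user's average anomaly score for the day), and the "max" mode collapses a user-day's event scores to a single alert score for triage. A minimal NumPy sketch under that reading:

```python
# Sketch of the two score-aggregation modes (NumPy; 'diff' is read as
# subtracting the user's average score for the same day -- an assumption).
import numpy as np

def diff_normalize(scores, users, days):
    scores = np.asarray(scores, dtype=float)
    out = np.empty_like(scores)
    keys = list(zip(users, days))
    for key in set(keys):
        mask = np.array([k == key for k in keys])
        out[mask] = scores[mask] - scores[mask].mean()
    return out

def max_aggregate(scores, users, days):
    agg = {}                                  # (user, day) -> alert score
    for s, u, d in zip(scores, users, days):
        agg[(u, d)] = max(s, agg.get((u, d), float("-inf")))
    return agg

losses = [2.1, 2.3, 9.7, 1.0, 1.1, 1.2]       # toy per-event LM losses
users  = ["u1", "u1", "u1", "u2", "u2", "u2"]
days   = [1, 1, 1, 1, 1, 1]
print(diff_normalize(losses, users, days))    # event-level, offsets removed
print(max_aggregate(losses, users, days))     # one alert per user-day
```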
| 9 |
ieee signal processing letters look wider match image patches convolutional neural networks sep haesol park kyoung lee human matches two images viewer natural tendency view wide area around target pixel obtain clues right correspondence however designing matching cost function works large window way difficult cost function typically intelligent enough discard information irrelevant target pixel resulting undesirable artifacts paper propose novel convolutional neural network cnn module learn stereo matching cost window unlike conventional pooling layers strides proposed layer cover large area without loss resolution detail therefore learned matching cost function successfully utilize information large area without introducing fattening effect proposed method robust despite presence weak textures depth discontinuity illumination exposure difference proposed method achieves performance middlebury benchmark index matching pooling cnn ntroduction stereo matching methods first compute matching cost pixel certain disparity optimizing whole cost volume either globally locally using specific prior knowledge decades many researchers focused second step designing good prior function optimizing studies conducted designing selecting better matching cost function one widely used matching cost functions matching cost function one used along sophisticated prior models sometimes produces good results especially preserving detailed structures near disparity discontinuities however function fails image contains areas repetitive textures cases matching cost census sad produces reliable distinctive measurement critical shortcoming matching cost functions unreliability around disparity discontinuities figure visually illustrates characteristics different matching cost measures one method handle make windowbased versatile input patterns key idea making shape matching template adaptive discard information pixels irrelevant target pixel however knowing background pixels actual matching difficult making park lee automation systems research institute seoul national university seoul korea matching cost pixelwise matching cost sad sad census census proposed fig top image shows reference image two interested points pixel positions marked blue dots whereas red green boxes represent windows centered respectively bottom matching costs pixel visualized normalized function disparity different matching cost functions positions true disparities marked red vertical lines pixelwise cost shows lowest values true disparity also gives zero costs disparities sad census matching cost functions become less ambiguous matching window becomes larger however functions affected pixels irrelevant target pixel even matching cost learned using baseline convolutional neural network cnn architecture fails surface nearly flat texture hand proposed cnn architecture works well weakly textured regions disparity discontinuities problem therefore use cnn appropriate automatically learns proper shape templates input pattern existing methods however based conventional cnn architectures resembling alexnet vgg network optimized image classification task image matching architectures comprise several convolution layers followed rectified linear unit relu pooling layers strides one limitations using architectures matching difficulty enlarging size patches compared effective size patch directly related ieee signal processing letters spatial extent receptive field cnn increased including strided layers using larger convolution kernels layer increasing number 
layers however use strided layers makes results downsampled losing fine details although resolution recovered applying convolution reconstructing small thin structures still difficult lost downsampling increasing size kernels also problematic number feature maps required represent larger patterns increases significantly furthermore previous study reported repetitive usage small convolutions always result large receptive field paper contributes literature proposing novel cnn module learn better matching cost function module innovative pooling scheme enables cnn view larger area without losing fine details without increasing computational complexity test times experiments show use proposed module improves performance baseline network showing competitive results middlebury benchmark elated orks given introduction stereo datasets disparity maps many attempts made learn matching cost function using machine learning algorithms impressive results obtained using cnn architecture proposed takes small window processes without use pooling computed cost volume noisy due limited size window thus using crossbased cost aggregation cbca matching sgm additional refinement procedures hand method uses multiple pooling layers spp process larger patches however results show fattening effect owing loss information introduced pooling main contribution paper proposing novel pooling scheme handle information large receptive field without losing fine details recently several attempts made accomplish goal context semantic segmentation methods combine feature maps highlevel layers lower layers aim correctly aligning information along details approach successfully align boundaries big objects inherent limitation inability recover small objects final output lost abstraction due multiple uses pooling context flownet architecture upsample flow original scale using feature maps however fails recover extreme flow elements hidden due low resolution feature maps architecture closely related current work proposed unlike approaches fig module pooling size vector visualized figure shows action one channel feature maps brevity job channels spp network excludes pooling layers convolutional layers instead first computes feature maps cascading convolutional layers several times generates information pooling different scales keeping original feature maps along feature maps pooled multiple scales spp network combine features multiple levels without losing fine details although previously mentioned stereo method uses spp also employs conventional pooling layers convolutional layers thus losing detailed information iii rchitecture eural etwork proposed architecture takes two input patches produces corresponding matching cost following subsections newly proposed module first introduced detailed architecture entire network presented pyramid pooling use pooling layers cnn considered desirable accuracy efficiency image classification tasks use layers reported provide additional invariance spatial transformation important gain comes downsampling feature maps performing pooling stride larger one output feature maps pooling scaled final scale cnn output decreased exponentially terms number pooling layers given parameters related pooling operation exist method effective way widen receptive field area cnn without increasing number parameters drawback strided pooling network loses fine details original feature maps pooling applied thus exists seeing larger area preserving small details inspired idea discussed propose novel pooling scheme 
overcome instead using small pooling window stride large pooling window used achieve desired size receptive field use one large pooling window lead loss finer details thus multiple poolings varying window sizes performed outputs concatenated ieee signal processing letters matching score matching score conv sigmoid conv sigmoid table quantitative results training dense set iddlebury benchmark shown error represents percentage bad pixels disparity threshold weighting scheme applied computing average conv relu conv conv relu conv relu conv relu conv relu conv relu conv relu methods wta conv relu conv relu conv relu conv relu conv relu conv relu conv relu conv relu conv relu conv relu baseline proposed fig network structures visualized baseline network proposed network parenthesized numbers layer represent number feature maps corresponding operations note figure drawn terms fully convolutional network create new feature maps resulting feature maps contain information scales pooling operation performed every pixel without strides call whole procedure pyramid pooling formally defined follows vector number elements pooling operation size stride one structure module illustrated figure proposed model validate effect proposed module trained tested cnns without module baseline architecture selected module proposed architecture constructed using size vector structures two cnns visualized figure mplementation etails fair comparison followed details train proposed architecture exceptions mentioned first size training patch became furthermore parameters last three convolution layers proposed architecture figure parameters earlier layers borrowed network experiments resulted better performance training network random initializations moreover training convolution layers features easier making converge faster run avg error proposed proposed parameters proposed parameter tuning total four epochs training last two epochs run decreased learning rate also used pipeline test phase pipeline includes use cbca sgm disparity maps refined continuous values undergo median filtering bilateral filtering xperiments verify effect proposed module compared results baseline proposed network performance measured using training dense set middlebury benchmark quantitative results briefly summarized table using average errors experiments performed using intel core cpu single nvidia geforce gtx titan gpu proposed method outperforms baseline architecture regardless use benefit using module clear disparity maps obtained using wta rule without given images dataset contain many areas window distinguish true matches false ones without aid hand proposed architecture effectively sees larger window inserting module final decision layer less straightforward understand proposed architecture still outperforms baseline even postprocessing sense worth mention best parameter setting proposed method largely differ notable changes original parameter setting use much less number cbca means multiple uses cbca become redundant proposed architecture fact interpret role module adaptive local feature aggregation compared algorithm cbca influence neighboring pixels certain pixel automatically learned following conventions best parameter setting follows ieee signal processing letters true disparity left image proposed fig results playtablep vintage visualized datum upper row shows disparity map bottom row shows corresponding error maps shows errors around areas surfaces chair table playtablep white wall vintage proposed method shows reliable results 
jointly trained cost function furthermore information exchange among pixels done feature space contains richer contextual information final cost volume space note improvement baseline clearly results neither use extra layers use parameters authors already shown additional use layers less significant using two additional layers leads improvement approximately whereas using module results improvement terms average error main contribution proposed method lies learning less ambiguous matching cost function inspecting larger area figure shows proposed network actually works better around area quantitative qualitative results dataset including ones test dense set available middlebury benchmark website onclusions viewing large area estimate dense pixel correspondence necessary fully utilize texture information achieve less ambiguous accurate matching conventional matching cost function fails neighboring pixels surface target pixel unknown paper novel cnn module proposed make cnn structure handle large image patch without losing small details enables learn intelligent matching cost function windows learned cost function discriminate false matches areas repeating textures also conserve disparity discontinuities learned cost function achieves competitive performance middlebury benchmark ieee signal processing letters eferences scharstein szeliski taxonomy evaluation dense stereo correspondence algorithms ijcv vol kolmogorov zabih computing visual correspondence occlusions using graph cuts iccv vol ieee stereo processing semiglobal matching mutual information pami vol woodford torr reid fitzgibbon global stereo reconstruction smoothness priors pami vol rhemann hosni bleyer rother gelautz fast filtering visual correspondence beyond cvpr ieee yang cost aggregation method stereo matching cvpr ieee birchfield tomasi depth discontinuities stereo international journal computer vision vol hirschmuller scharstein evaluation stereo matching costs images radiometric differences ieee transactions pattern analysis machine intelligence vol scharstein evaluation cost functions stereo matching cvpr ieee wang adaptive stereo matching algorithm based edge detection icip vol ieee yoon kweon adaptive approach correspondence search pami vol tombari mattoccia stefano addimanda classification evaluation cost aggregation methods stereo correspondence cvpr ieee lecun stereo matching training convolutional neural network compare image patches journal machine learning research vol zagoruyko komodakis learning compare image patches via convolutional neural networks cvpr june krizhevsky sutskever hinton imagenet classification deep convolutional neural networks advances neural information processing systems simonyan zisserman deep convolutional networks image recognition arxiv preprint radford metz chintala unsupervised representation learning deep convolutional generative adversarial networks arxiv preprint zhou khosla lapedriza oliva torralba object detectors emerge deep scene cnns arxiv preprint scharstein kitajima krathwohl wang westling stereo datasets ground truth pattern recognition springer geiger lenz urtasun ready autonomous driving kitti vision benchmark suite cvpr menze geiger object scene flow autonomous vehicles cvpr pollefeys learning matching function arxiv preprint zhang lafruit local stereo matching using orthogonal integral images circuits systems video technology vol zhang ren sun spatial pyramid pooling deep convolutional networks visual recognition eccv springer long shelhamer darrell fully convolutional 
networks semantic segmentation cvpr hariharan arbelaez girshick malik hypercolumns object segmentation localization cvpr june noh hong han learning deconvolution network semantic segmentation arxiv preprint fischer dosovitskiy ilg golkov van der smagt cremers brox flownet learning optical flow convolutional networks arxiv preprint
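The pooling module described in the record above is simple to sketch: pool at every pixel (stride 1) with several window sizes d = (d_1, ..., d_m), keep the original feature maps to preserve fine detail, and concatenate everything along the channel axis, so the receptive field grows with no loss of resolution. The snippet below is a NumPy illustration, not the authors' implementation; the (C, H, W) layout, the choice of max-pooling, and the window sizes are assumptions.

```python
# NumPy sketch of per-pixel multi-window pooling (assumed (C, H, W) layout
# and max-pooling; window sizes illustrative, not the authors' settings).
import numpy as np
from scipy.ndimage import maximum_filter

def multi_window_pool(feat, sizes=(4, 8, 16)):
    """feat: (C, H, W) -> (C * (1 + len(sizes)), H, W), same resolution."""
    pooled = [feat]                    # keep the un-pooled maps as well
    for d in sizes:
        # max over a d x d spatial window at every pixel (stride 1);
        # size=(1, d, d) never pools across the channel axis
        pooled.append(maximum_filter(feat, size=(1, d, d)))
    return np.concatenate(pooled, axis=0)

feat = np.random.default_rng(0).standard_normal((8, 32, 32))
out = multi_window_pool(feat)
print(out.shape)   # (32, 32, 32): channels grow, spatial size unchanged
```

Because the spatial dimensions are untouched, the concatenated maps can be fed to the final decision layers without the fattening effect that strided pooling introduces.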
| 1 |
asymptotic structure brownian motions small effect apr yuta april abstract paper considers two brownian motions situation one correlated slight delay study problem estimating time lag parameter brownian motions observations possibly subject measurement errors measurement errors assumed centered gaussian independent latent processes investigate asymptotic structure likelihood ratio process model lag parameter asymptotically infinitesimal show structure limit experiment depends level measurement errors measurement errors locally dominate latent brownian motions model enjoys lan property otherwise limit experiment result typical ones appearing literature also discuss efficient estimation lag parameter highlight statistical implications keywords phrases asymptotic efficiency endogenous noise effect local asymptotic normality microstructure noise introduction let bivariate brownian motion also let sequence bivariate standard normal variables independent denote law vector generated following model number denotes unknown parameter interested especially sign unknown aim paper study asymptotic structure sequence experiments time lag parameter asymptotically infinitesimal tends denotes borel precisely study limit experiment rnu proper convergence rate department business administration graduate school social sciences tokyo metropolitan university marunouchi eiraku bldg marunouchi tokyo japan institute statistical mathematics tachikawa tokyo japan crest japan science technology agency model special case hry model introduced hoffmann describe effects financial data similar model also studied robert rosenbaum asymptotic regime different current setting effect refers situation one time series correlated another time series later time especially drawn attention analysis economic time series data long time associated econometric methods developed many authors see section hoffmann section robert rosenbaum references therein practicality hry model empirical work recently established several authors alsayed mcgroarty huth abergel bollen financial data iacus social media data empirical studies show time lag parameters typically comparable observation frequencies scales motivates study hry model small situation one would especially interested small lag parameters identified principle author knowledge however theoretical study hry model particular nothing known optimality statistical inferences hry model purpose paper trying fill gap paper well special case hry model also consider situation model contains measurement errors motivated recent studies volatility estimation ultra high frequency financial data typically modeled discretely observed semimartingale market microstructure noise refer chapter jacod brief description subject particular asymptotic structure asymptotic efficiency bound established work gloter jacod see also cai statistical model estimating scale parameter discrete observations standard wiener process sequence centered standard normal variables independent proved lan property model constructed asymptotically efficient estimators indeed considered general setting extensions lan result multivariate setting also studied several authors correlation estimation bivariate setting studied bibinger general setting containing sampling case studied ogihara hand studied asymptotic structure model function time rather constant established asymptotic equivalence model gaussian white noise model result extended bivariate case bibinger multivariate setting containing case bibinger another type extension 
replacing wiener process different process also studied example sabel consider efficient estimation situation general gaussian process especially fractional brownian motion main contribution paper determine proper convergence rate derive stochastic expansion likelihood ratio process rnu analogously gloter jacod proper convergence rate depends behavior sequence nvn intuitively natural var thus behavior nvn determines strongly measurement errors locally dominate nature observed returns particular find generally nvn rate much faster usual parametric rate even faster rate since time resolution model result suggests could estimate lag parameters smaller time resolution observation data implication least true restrictive situation shown section since convergence rate estimator lag parameter proposed hoffmann faster see proposition hoffmann discussion proposition result shows estimator suboptimal setting considered paper although estimator works general setting given proper convergence rate following stochastic expansion likelihood ratio process random variables defined numbers dpn rnun log bounded sequence real numbers therefore contiguity argument deduce experiments converge weakly experiment cam sense see lary numbers determined asymptotic behavior nvn precisely defined particular always positive positive nvn bounded otherwise case corresponds situation measurement errors locally dominate signal case model enjoys lan property commonly appears regular experiments result interest model exhibits irregularity sense likelihood function smooth limit experiment model typically deviates lan structure illustrated chapters ibragimov minskii result means measurement errors kind regularizing effect asymptotic structure model hand corresponds cases signal dominates balanced measurement errors addition observation usual gaussian shift experiment limit experiment contains extra observation experiment although experiment looks simple author knowledge result cases ibragimov minskii definition asymptotically efficient estimators case obvious obtain asymptotic efficiency bound estimating lag parameter case section apply ibragimov minskii theory problem common approach establish asymptotic efficiency bounds experiments generated diffusion type processes see kutoyants details result find bayesian estimators asymptotically efficient maximum likelihood estimator always asymptotically efficient common phenomenon irregular models see chapters ibragimov minskii kutoyants chapter kutoyants rubin song chapter van der vaart example paper organized follows section presents main result paper section discuss efficient estimation lag parameter setting section devoted proof auxiliary technical result indeed intuition fact already appeared hoffmann see remark general notation denotes matrix matrix denote kaksp kakf spectral norm frobenius norm respectively kaksp sup kaxk kxk also denote aij entry main result start completing definitions quantities appearing introduction first following gloter jacod assume sequence nvn converges set lim nvn also assume lim supn gloter jacod set otherwise considered effective sample size sense proper convergence rate estimating model given seen regular parametric rate regard sample size using effective sample size define proper convergence rate constants appearing defined remark always positive evident proven follows first suppose hence hand applying inequality replacing obtain hence following statement main result theorem two sequences random variables satisfying bounded sequence real numbers 
explicitly give variables theorem theorem immediate consequences first one direct consequence definition lan property corollary lan property rate asymptotic fisher information second one follows cam first lemma see lemma van der vaart corollary mutually contiguous sequence real numbers satisfies third one derived corollary theorem strasser refer drost cam chapter strasser chapters van der vaart definition applications weak convergence experiments corollary sequence experiments converges weakly experiment turn proof theorem although consists gaussian distributions problem simple covariance matrix complicated function lag parameter particular simultaneously diagonalizable general even asymptotically could troublesome analysis gaussian experiments asymptotically neous diagonalizability covariance matrices statistical model different parameters typically plays important role section davies lemma gloter jacod lemma sabel reason first transfer model tractable model defined follows set law vector denote yen defined yei covariance matrix denote precise hellinger distance following show tends provided tends sufficiently fast hellinger distance two probability measures measurable space defined measure dominating example easily checked depend choice see appendix section strasser section tsybakov information hellinger distance expectation respect resp throughout paper denote resp proposition sequence positive numbers satisfies proof claim immediately follows focus symmetry may assume let canonical variables moreover simple computation shows otherwise otherwise otherwise yields claim centered therefore gaussian hand identities also kcn positive semidefinite satisfies kcn ksp monotonicity theorem eigenvalues corollary horn johnson positive semidefinite therefore eqs obtain hence claim holds true following frequently use fact hellinger distance dominates total variation distance see lemma tsybakov proof following properties total variation distance immediate consequences definition important purpose let two sequences probability measures measurable space let random variable taking value metric space probability measure following statements hold true tractable form purpose introduce next express covariance matrix notation matrix denotes backward difference operator yen denote set explicitly expressed covariance matrix sign sign convenient rewrite expression follows let symmetric skewn symmetric parts respectively set obtain simple function easily check tractable although simultaneously diagonalizable different sufficient consider relationship matrices fact turns following result sufficient purpose proposition ksp lim proof proposition consists elementary complicated calculations postpone appendix section remark proof requires calculation essentially different fisher information scale parameter estimation observations form gloter jacod sabel see remark also note proposition yields invertibility sufficiently large proof theorem define function zbn setting zbn set virtue proposition suffices prove following statements dpn rnun log follows proposition proposition dalalyan yoshida hand setting kan proposition therefore proposition proposition chapter cam obtain show rnun kan log strategy proof theorem davies first davies sufficiently large log det log det kan log det note holds sufficiently large kan ksp proposition combining fact inequality appendix davies obtain kan ksp kan kan ksp kan ksp sufficiently large hence proposition yields next noting rewritten zbn zbn obtain davies therefore using identity obtain 
kan kan kan ksp sufficiently large hence proposition yields obtain finish section remarks remark worth mentioning infer hoffmann rate proper convergence rate model case follows let set principle used hoffmann close true correlation close true parameter since accuracy estimating correlation parameter order naturally consider quantity measure distance would take large value sufficiently close true parameter fact proposition hoffmann implies diverge distance true parameter order information allows estimate true parameter accuracy order remark econometric point view proposition independent interest model given economic interpretation different model model contains measurement errors correlated latent returns integrated volatility estimation presence type measurement error studied kalnina linton example market ture theory correlation often explained effect asymmetric information glosten interestingly economic arguments suggest information asymmetry would cause effect see chan chordia instance would also worth emphasizing jong schotman connect type model investigation price discovery price discovery processes closely related effects seen jong hasbrouck remark proof main result heavily depends gaussianity model especially require gaussianity measurement errors obvious need restriction distribution measurement errors derive specific limit experiment fact take values integers completely recover signal sufficiently large apart trivial example recent study bibinger shown another specification distribution measurement errors improve convergence rate estimating scale parameter light connection convergence rates models naturally conjecture similar specification measurement errors would affect convergence rate model issue beyond scope paper left future research efficient estimation lag parameter application results previous section construct efficient estimators lag parameter models consider slightly extended setup follows letting sequence positive numbers tending bounded open interval construct efficient estimators parameter models every make use results previous section impose following condition positive integer invertible due proposition throughout section always assume larger remark practical point view dependence parameter sampling frequency theoretical device control relative size compared corresponds asymptotic theory important whether asymptotic order condition corresponding case acceptable approximation namely asymptotic theory concerns whether parameter comparable given fixed sampling frequency possible values change accordance noise level require parameter varies proportion sampling frequency type asymptotic theory standard econometrics example one considers volatility estimation financial asset taking account rounding one usually lets rounding level shrink sampling frequency increases see rosenbaum mykland sato kunitomo example start generalizing proposition matrix perturbation argument lemma sup kvn ksp sup kvn proof setting therefore ostrowski theorem theorem horn johnson implies kvn ksp khn ksp ksp ksp khn ksp khn ksp hence proposition implies proof completed show khn ksp khn ksp since share eigenvalues theorem horn johnson desired results follow proposition neumann series representation using result prove uniform version theorem proposition let defined dpn log dpn uniformly bounded sequence real numbers moreover uniformly proof prove first claim similar manner proof theorem using lemma instead proposition prove second claim suffices show sequence numbers follows lemma inequality dalalyan 
yoshida proposition implies experiments enjoy lan property erwise lan property holds true theory define asymptotic efficiency estimators section ibragimov minskii sequence estimators experiments said asymptotically efficient variables converge law see definition ibragimov minskii lan property definition asymptotic efficiency supported several theorems convolution theorem theorem ibragimov minskii local asymptotic minimax theorem theorem ibragimov minskii moreover maximum likelihood bayesian estimators asymptotically efficient general settings chapter iii ibragimov minskii hand lan property fails generally obvious define asymptotic efficiency estimators adopt approach kutoyants define asymptotic efficiency based theorem ibragimov minskii derives asymptotically minimax lower bound asymptotic properties bayesian estimators consequence bayesian estimators turned asymptotically efficient explain strategy obtain asymptotically efficient estimators setting previous rather original model reason section would like work tractable model consider function based former follows exp zbn zbn det consider quasi maximum likelihood bayesian estimators based estimators give using general scheme ibragimov minskii asymptotic behavior experiments see proposition next consider case lan property holds true thus transferred proposition finally convergence law sufficiently large due consider case proposition hence apply minskii method define obtain asymptotically efficient estimators quasi maximum likelihood estimator qmle defined solution equation sup note equation always least one solution belonging closure continuous moreover choose measurable measurable selection theorem see theorem pfanzagl also quasi bayesian estimator qbe prior density respect quadratic loss defined cln prior density assumed continuous satisfy inf corresponding qmle qbe experiments given respectively remark since quantity seems exact order true parameter one may consider practical setting difficult know beforehand thus difficult use estimators however construct estimator considered maximum order true estimator parameter follows let set considered solution equation sup therefore practical situation resp inf interpreted upper bound resp lower bound possible parameters often difficult find bounds practical setting typically small pointed introduction example find computing via hoffmann method huth abergel remark applied estimator rewritten prior density describe limit distribution estimators introduce likelihood ratio process limit experiment exp two mutually independent variables set otherwise using first give asymptotic behavior estimators experiments general scheme ibragimov minskii note situation true maximum likelihood bayesian estimators respectively proposition compact subset uniformly holds converges law also uniformly holds converges law proof every set define according theorems ibragimov minskii suffices prove following statements lim supu constant lim sup sup sup marginal distributions converge law marginal distributions uniformly immediate consequence proposition hand obtain kvn kvn hence lemma yields claim consider corollary mathai provost log log det log det consider following decomposition log log det kan log det kbn kan kan kbn iin iiin ivn let set hence holds sup sup kan ksp sup sup ksp ksp use fact particular kan ksp sufficiently large lemma thus kbn ksp kan ksp kan ksp kbn kan kan ksp obtain latter estimate use inequality kan kan kan ksp therefore sufficiently large kan ksp kan kan kbn ksp kbn kbn appendix davies kan kan ksp 
kan ksp well inequality kan kan ksp kan ivn kan ksp consequently constant sufficiently large holds kan log consider giving upper bound therefore noting lemma sufficiently large consequently obtain setting equivalent condition therefore proposition yields following result replaced corollary statement proposition still holds true return efficient estimation parameter model first consider case case know enjoy lan property every proposition definition asymptotic efficiency estimator sequence explained theorem asymptotically efficient every experiments converge law particular asymptotically efficient experiments next turn case case experiments longer enjoy lan property definition asymptotic efficiency obvious explained follow approach kutoyants define asymptotic efficiency experiments obtain following result virtue corollary theorem ibragimov minskii theorem lim lim inf sup estimator sequence experiments particular also lim lim inf sup estimator sequence experiments thanks theorem estimator sequence said asymptotically efficient ments holds lim lim inf sup similarly estimator sequence said asymptotically efficient experiments holds lim lim inf sup sequence positive numbers satisfying following result immediate consequence definition theorem sequence asymptotically efficient every experiments particular sequence asymptotically efficient experiments contrast guarantee asymptotic efficiency mle fact may perform much better shown following proposition proposition holds arctan dxdy denotes bivariate normal density standard normal marginals correlation particular proof let denote normal density mean variance simple calculation yields formulae gradshteyn ryzhik arctan hence obtain next change variable obtain moreover formulae gradshteyn ryzhik imply since distribution vector density obtain finally prove latter statement define functions arctan dxdy since suffices prove dxdy dominated convergence theorem yields completes proof appendix proof proposition starting proof introduce notation set define matrix uij uij cos cos often referred discrete cosine transform dct see sabel references therein note real orthogonal known diagonalizes follows cos diag see lemma kunitomo sato lemma sabel proof define functions cos sin also set diag remark turns components play dominant role calculate limit essentially different case calculating fisher information scale rameter estimation observations form similarity transformations toeplitz matrices sufficiently approximated diagonal matrices manifested lemma sabel reason need rather specific calculations seen lemmas square matrix spr denotes spectral radius frequently use identity kaksp spr holding normal matrix start main body proof frequently use following inequality sine function sin lemma sup sup proof claim immediately follows identity cos lemma let continuous also let sequence positive integers lim provided proof fundamental theorem calculus hence desired result follows standard riemann sum approximation lemma let sequence positive integers lim nnn arctan tan tan arctan lim arctan tan proof first using lower upper darboux sums integral obtain formula gradshteyn ryzhik yields arctan tan hence obtain next simple calculation yields therefore lemma implies lim sin xdx sin hand sufficiently large cos hence nvn therefore obtain nvn nvn hence desired result follows lemma let sequence positive integers lim proof since cos sin cos even odd therefore using formula sin sin cos decompose target quantity cot tan even odd first prove limn using monotonicity tangent function 
assumption obtain since formula gradshteyn ryzhik yields limn limn conclude next prove limn proof relies following inequality tangent function tan lower estimate upper estimate known inequality becker stark using obtain therefore using formula conclude limn lemma holds kgn ksp kgn proof first definition kgn ksp spr hence theorem horn johnson yield kgn ksp max uik therefore lemma implies kgn ksp moreover since holds kgn lemma yields desired result lemma kgn kgn set proof since lemma suffices prove case replaced imply min kgn ksp max kgn first equation immediately follows order prove second equation prove right side converges first desired result follows lemma next noting hence lemma obtain desired equation prove lim monotonicity cosine function yields since formula gradshteyn ryzhik implies obtain desired result finally using inequality obtain nvn hence deduce desired result lemma positive numbers lim kgn proof since ukj using trigonometric identities cos cos sin sin sin cos sin sin obtain sin sin sin using summation formula gradshteyn ryzhik sin sin sin sin since unitary invariance frobenius norm obtain kgn kgn sin first consider using inequalities max max max max thus lemma yields log next consider first prove lemma yield property max hence lemma yields lemma max log log also holds true case due consequently log since therefore lemma implies lim inf lim inf lim sup lim sup letting lemma obtain symmetry hence complete proof due lemma proof proposition set hence obtain kgn kgn kgn therefore lemmas yield hand since ksp kgn ksp ksp lemmas also yield ksp hence proof completed prove ksp note positive semidefinite note also symmetric therefore eigenvalue monotonicity theorem eigenvalues corollary horn johnson ksp since take inequality implies ksp ksp ksp yields desired result acknowledgements author grateful two anonymous referees careful reading insightful comments significantly improved former version paper author also thanks participants asymptotic statistics computations statistics stochastic processes analysis high frequency data statistique asymptotique des processus stochastiques statistics stochastic processes analysis high frequency data valuable comments work supported crest jst references jacod financial econometrics princeton university press alsayed mcgroarty algorithmic arbitrage across international index futures forecast becker stark hierarchy quolynomial inequalities tan univerzitet beogradu publikacije fakulteta serija matematika fizika bibinger efficient covariance estimation asynchronous noisy data scand stat bibinger hautsch malec estimating quadratic covariation matrix noisy observations local method moments efficiency ann statist bibinger jirak volatility estimation errors applications limit order books ann appl probab bibinger spectral estimation covolatility noisy observations using local weights scand stat bollen neill whaley tail wags dog intraday price discovery vix markets journal futures markets cai munk sharp minimax estimation variance brownian motion corrupted gaussian noise statist sinica chan imperfect information among stock prices journal finance chordia sarkar subrahmanyam liquidity dynamics journal financial quantitative analysis dalalyan yoshida asymptotic expansion covariation estimator ann inst henri probab stat davies asymptotic inference stationary gaussian adv appl probab jong mahieu schotman price discovery foreign exchange market empirical analysis rate journal international money finance jong schotman price discovery fragmented markets journal 
financial econometrics drost van den akker werker asymptotic structure nearly unstable integervalued models bernoulli glosten components spread statistical properties transaction prices journal finance gloter jacod diffusions measurement errors local asymptotic normality esaim probab stat gloter jacod diffusions measurement errors optimal estimators esaim probab stat gradshteyn ryzhik table integrals series products elsevier seventh edn hasbrouck one security many markets determining contributions price discovery journal finance hoffmann rosenbaum yoshida estimation parameter data bernoulli horn johnson matrix analysis cambridge university press huth abergel high frequency relationships empirical facts journal empirical finance iacus porro salini siletti social networks happiness health sentiment analysis multidimensional indicator subjective working paper available arxiv http ibragimov minskii statistical estimation asymptotic theory springer kalnina linton estimating quadratic variation consistently presence endogenous diurnal measurement error econometrics kutoyants delay estimation stationary processes scand stat kunitomo sato separating information maximum likelihood estimation integrated volatility covariance noise north american journal economics finance kutoyants statistical inference ergodie diffusion processes springer cam asymptotic methods statistical decision theory springer mykland rounding errors volatility estimation journal financial econometrics zhang unified approach volatility estimation presence rounding random market microstructure noise working paper available ssrn http mathai provost quadratic forms random variables theory applications marcel dekker ogihara parametric inference nonsynchronously observed diffusion processes presence market microstructure noise working paper available arxiv http pfanzagl parametric statistical theory walter gruyter asymptotic equivalence inference volatility noisy observations ann statist robert rosenbaum limiting spectral distribution covariance matrices processes multivariate anal rosenbaum integrated volatility error bernoulli rubin song exact computation asymptotic efficiency maximum likelihood estimators discontinuous signal gaussian white noise ann statist sabel asymptotically efficient estimation scale parameter gaussian time series expressions fisher information bernoulli sato kunitomo robust estimation integrated volatility errors price adjustments noises cirje discussion papers university tokyo strasser mathematical theory statistics walter gruyter tsybakov introduction nonparametric estimation springer van der vaart asymptotic statistics cambridge university press
Robust Estimation via Robust Gradient Estimation

Adarsh Prasad, Arun Sai Suggala, Sivaraman Balakrishnan, Pradeep Ravikumar
Machine Learning Department, Carnegie Mellon University, Pittsburgh

Abstract. We provide a new class of estimators for risk minimization and show that these estimators are robust, for general statistical models, both in the classical Huber epsilon-contamination model and in heavy-tailed settings. Our workhorse is a novel robust variant of gradient descent, and we provide conditions under which this gradient descent variant yields accurate estimators for a general convex risk minimization problem. We then provide specific consequences of our theory for linear regression, logistic regression, and the estimation of the canonical parameters of an exponential family. These results provide some of the first computationally tractable and provably robust estimators for these canonical statistical models. Finally, we study the empirical performance of the proposed methods on synthetic and real datasets, and find that they convincingly outperform a variety of baselines.

Introduction. Robust estimation has a rich history in statistics, with seminal contributions due to Box, Tukey, Huber, Hampel and several others. The classical analysis of statistical estimators derives guarantees under strong model assumptions, and in most cases these guarantees do not hold in the presence of arbitrary outliers or other deviations from the model. Since strong model assumptions are rarely met in practice, this led to the development of robust inferential procedures, together with associated statistical concepts such as the influence function, the breakdown point and Huber's epsilon-contamination model, for assessing the robustness of estimators. Despite this progress, however, the statistical methods with the strongest robustness guarantees, for instance those based on tournaments or on notions of depth, are computationally intractable.

In this paper we present a class of estimators that are computationally tractable and at the same time have strong robustness guarantees. The estimators we propose are obtained by robustifying a classical algorithm for risk minimization, and they apply to any parametric statistical model in which parameter estimation can be cast within the risk minimization framework. In contrast to classical work, which typically attempts to replace the risk minimization objective by a robust counterpart, we focus on making the canonical optimization algorithm for the usual risk minimization objective robust. We find that this shift in perspective enables a unified treatment of different statistical models, leads to computationally tractable estimators, and leads to estimators with strong robustness guarantees.

In the risk minimization framework the target parameter is defined as the solution of an optimization problem,

theta* = argmin_{theta in Theta} R(theta) = argmin_{theta in Theta} E_{z ~ P}[ L(theta; z) ],

where L is an appropriate loss, R is the population risk and Theta is the set of feasible parameters. The goal of empirical risk minimization procedures is to compute an approximate minimizer of this program given access to n samples z_1, ..., z_n. In the classical setting the standard assumption is that the data contain no outliers and no arbitrary deviations from the model: the samples are typically assumed to be drawn independently and identically distributed according to P, and many analyses of risk minimization further assume that P is sub-Gaussian, or otherwise has light tails, in order to appropriately control the deviation of the population risk from its empirical counterpart. Although our general results can be specialized to obtain results for a variety of models and notions of robustness, we focus on developing estimators that are robust to two canonical classes of deviations from the model assumptions.

Robustness to arbitrary outliers. In this setting we focus on Huber's epsilon-contamination model: rather than observing samples directly from the true distribution P, we observe samples drawn from P_epsilon, which for an arbitrary distribution Q is defined as the mixture P_epsilon = (1 - epsilon) P + epsilon Q. The distribution Q allows for arbitrary outliers, which may correspond to gross corruptions or to more subtle deviations from the assumed model; equivalently, this model can be viewed as a neighborhood of P in the total variation metric.

Robustness to heavy tails. In this setting we are interested in developing estimators under weak moment assumptions: we assume only that the distribution P from which we obtain samples has finite low-order moments (a precise characterization is given in the sequel). Heavy-tailed distributions of this kind arise frequently in the analysis of financial and biological datasets. In contrast to the classical setting, the empirical risk is no longer uniformly close to the population risk, and methods that directly minimize the empirical risk can perform poorly.

Our goal in this work is to develop estimators that are computationally tractable and robust in both of these models. We now provide an outline of our results and contributions.

Our first contribution is to introduce a new class of robust estimators for risk minimization. These estimators are based on robustly estimating gradients of the population risk and are computationally tractable by design. Building on prior work on robust mean estimation in the Huber model and in the heavy-tailed model, we design robust gradient estimators for the population risk. The main insight is that in the general risk minimization setting the gradient of the population risk is simply a multivariate mean vector, so we can leverage prior work on mean estimation to design robust gradient estimators. In doing so we significantly generalize the applicability of mean estimation methods to general parametric models, and our estimators are practical.

Our second contribution is to conduct extensive numerical experiments on real and simulated data. We provide guidelines for tuning parameter selection and compare the proposed estimators with several competitive baselines across different settings and according to various metrics; we find that our estimators consistently perform well.

Finally, we provide rigorous robustness guarantees for the proposed estimators in a variety of canonical statistical models, including linear regression, logistic regression and the estimation of canonical parameters in an exponential family. Our contributions in this direction, building on prior work, are a general result on the stability of gradient descent for risk minimization, showing that in favorable cases gradient descent is quite tolerant of inaccurate gradient estimates, and, in each concrete setting, a careful analysis of the quality of gradient estimation afforded by the proposed gradient estimators. Combining these results, we obtain guarantees on the final estimates. Broadly, as we discuss in the sequel, this work suggests that estimators based on robust gradient estimation offer a variety of practical, conceptual, statistical and computational advantages for robust estimation.

Related work. There is extensive work in the broad area of robust statistics, and we focus in this section on the lines of work most closely related to this paper. Classical work developed several estimators that are known to be optimally robust for a variety of inferential tasks, including hypothesis testing, mean estimation and general parametric estimation. A major drawback of this classical line of work is that the estimators with strong robustness guarantees are computationally intractable, while the remaining ones are heuristics without optimality guarantees. Recently there has been a flurry of research in theoretical computer science on designing provably robust estimators that are computationally tractable while achieving good contamination dependence for special classes of problems; most of the proposed algorithms are not practical, however, as they rely on the ellipsoid algorithm or require solving semidefinite programs and are slow at modern problem sizes. We build on the work of Lai et al., who study practical robust mean and covariance estimators for distributions with appropriately controlled moments. A complementary line of recent research provides minimax upper and lower bounds on the performance of estimators in the epsilon-contamination model, without the constraint of computational tractability and with the contaminating distribution arbitrary. There is also a lot of work in settings where the contamination distribution is restricted in various ways; for example, recent work in statistics has studied problems like principal component analysis and linear regression under the assumption that the corruptions are evenly spread throughout the dataset.

Another line of research focuses on designing robust estimators in the heavy-tailed setting. These approaches relax the distributional assumptions typically imposed on the target distribution and allow it to be heavy tailed. Most approaches in this category use robust mean estimators that exhibit sub-Gaussian-type concentration around the true mean for distributions satisfying mild moment assumptions; the median-of-means estimator and Catoni's mean estimator are two popular examples of such robust mean estimators. Hsu and Sabato use the median-of-means estimator to develop an alternative to ERM under heavy tails; although this estimator has strong theoretical guarantees and is computationally tractable, as the authors note it performs poorly in practice. In recent work, Brownlees et al. replace the empirical mean in the empirical risk minimization framework by Catoni's mean estimator and perform risk minimization with the resulting objective; they provide risk bounds similar to the bounds one can achieve under sub-Gaussian distributional assumptions, but the estimator is not easily computable and no practical algorithm for computing it is provided. The recent works of Lerasle and Oliveira and of Lugosi and Mendelson use similar ideas to derive estimators that perform well theoretically in these situations; however, these approaches involve the optimization of complex objectives for which no computationally tractable algorithms exist. We emphasize that, in contrast to our work, these works focus on robustly estimating the population risk directly, which does not lead to a computable estimator; we instead consider robustly estimating the gradient of the population risk which, complemented by a gradient descent algorithm, leads naturally to a computable estimator.

Outline. We conclude this section with a brief outline of the remainder of the paper. We first provide background on risk minimization and on the Huber and heavy-tailed noise models. We then introduce our class of estimators and provide concrete algorithms for each setting, and we study the empirical performance of the estimators on a variety of tasks and datasets. We complement the empirical results with theoretical guarantees in the subsequent sections, deferring technical details to the appendix, and we conclude with a discussion of open problems.

Background and problem setup. This section provides the necessary background on risk minimization and gradient descent and introduces the two notions of robustness we consider in this work.

Risk minimization and parametric estimation. In the risk minimization setting we assume access to a differentiable loss function L(theta; z), where theta ranges over a convex subset Theta of R^p. Let R(theta) = E_{z ~ P}[ L(theta; z) ] denote the population loss, or risk, and let theta* be a minimizer of the population risk over the feasible set,

theta* in argmin_{theta in Theta} R(theta).

The goal of risk minimization is to minimize the population risk given n samples, whereas in parameter estimation we are interested in estimating the unknown parameter theta* from the samples. Throughout this work we assume that the population risk is convex, to ensure tractable minimization. Moreover, in order to ensure identifiability of the parameter, we impose two standard regularity conditions on the population risk, strong convexity and smoothness, both defined in terms of the error in the first-order Taylor approximation of the population risk: defining

delta R(theta_1, theta_2) = R(theta_1) - R(theta_2) - <grad R(theta_2), theta_1 - theta_2>,

we assume that for all theta_1, theta_2 in Theta,

(tau_l / 2) ||theta_1 - theta_2||^2 <= delta R(theta_1, theta_2) <= (tau_u / 2) ||theta_1 - theta_2||^2,

where tau_l > 0 and tau_u denote the strong convexity and smoothness parameters, respectively.

Gradient descent and empirical risk minimization. The starting point for the techniques we develop in this paper is the classical projected gradient descent method for empirical risk minimization. Given data z_1, ..., z_n, empirical risk minimization (ERM) estimates the unknown parameter by a minimizer of the empirical risk,

theta_hat in argmin_{theta in Theta} (1/n) sum_{i=1}^n L(theta; z_i).

A popular method for solving this optimization problem is projected gradient descent, which generates a sequence of iterates refining an initial parameter theta^0 via the update

theta^{t+1} = P_Theta( theta^t - eta * grad R_n(theta^t) ),

where eta is a step size and P_Theta is the projection operator onto Theta. Despite its simplicity, this method is effective for general convex losses. The empirical risk minimizer itself, however, is a poor estimator in the presence of outliers in the data: since the ERM depends on sample means, outliers can have an unbounded effect on these sample means and can lead to arbitrarily poor ERM estimates. This observation has led to a large body of research focused on developing robust M-estimators with favorable statistical properties that are, however, often computationally intractable. In this work we take a different approach, which relies on an important observation: the gradient of the population risk is simply a mean vector, and it can therefore be estimated robustly by leveraging recent advances in robust mean estimation. This leads to a general method for risk minimization based on robust gradient estimation (see Algorithm 1 below).

Robust estimation. One of the goals of this work is to develop general statistical estimation methods that are robust in one of the following two models; we briefly review the two notions of robustness.

Huber's epsilon-contamination model. Huber proposed a model in which we observe samples obtained from a mixture of the form P_epsilon = (1 - epsilon) P + epsilon Q, where P is the true distribution, epsilon is the expected fraction of outliers and Q is an arbitrary outlier distribution. Given observations drawn from P_epsilon, the objective is to estimate theta*, the minimizer of the population risk under P, robustly to the contamination.

Heavy-tailed model. In this model the data are assumed to follow a heavy-tailed distribution. Such distributions admit various
possible characterizations paper consider characterization via gradients fixed let denote multivariate distribution gradient population loss refer distribution one finite second moments illustrate section various concrete examples translates relatively weak moment assumptions data distribution given observations objective estimate minimizer population risk conceptual standpoint classical analysis relies uniform concentration empirical risk around true risk fails setting necessitating new estimators analyses gradient estimation gradient descent variants heart modern optimization literature suppose access true distribution minimize population risk use projected gradient descent starting initial appropriately chosen update estimate according however access samples key technical challenges estimate gradient samples ensure appropriate modification gradient descent stable resulting estimation error address first challenge observe gradient population risk point mean multivariate distribution accordingly problem gradient estimation reduced multivariate mean estimation problem goal robustly estimate true mean samples given confidence parameter define gradient estimator definition function gradient estimator functions probability least fixed estimator satisfies following inequality subsequent sections develop conditions obtain gradient estimators strong control functions huber models furthermore investigating stability gradient descent develop sufficient conditions functions gradient descent inaccurate gradient estimator still returns accurate estimate minimize replace equation gradient estimator perform projected gradient descent order avoid complex statistical dependency issues arise analysis gradient descent theoretical results consider variant algorithm iteration performed fresh batch samples see algorithm assume number gradient iterations specified accordingly define jnk discuss methods selecting impact later sections confirmed experiments see section viewed device introduced theoretical convenience likely eliminated via complex uniform arguments see instance work algorithm projected gradient descent function pgd step size number iterations split samples subsets size end end function next consider two notions robustness described section derive specific gradient estimators models using framework described although major focus work huber contamination models class estimators general restricted two notions robustness gradient estimation huber model flurry recent interest designing mean estimators huber contamination model robustly estimate mean random vector results focused case uncorrupted distribution gaussian isotropic interested robust mean oracles general distributions lai proposed robust mean estimator general distributions satisfying weak moment assumptions leverage existence estimator design huber gradient estimator works huber contamination model see algorithm briefly describe main idea behind algorithm mean estimator lai algorithm builds upon fact relatively easy estimate gradient robustly crucial insight lai effect contamination mean uncontaminated distribution effectively provided accurately estimate direction along mean shifted context compute gradient shift direction direction difference sample corrupted mean gradient true population gradient true gradient estimated using robust algorithm along direction orthogonal direction since contamination effect gradient orthogonal direction order identify gradient shift direction use recursive singular value decomposition svd based algorithm 
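Before detailing that recursion, a concrete illustration of the overall procedure of Algorithm 1 introduced above may be helpful. The following is a minimal sketch, not the authors' reference implementation: `gradient_estimator` stands for any callable satisfying the gradient estimator definition, the data are split into fresh batches across iterations as described, and the Euclidean projection onto a ball is used as a stand-in for a general projection P_Theta (an assumption made here for concreteness).

```python
import numpy as np

def project_l2_ball(theta, radius):
    """Euclidean projection onto an l2 ball (stand-in for a general P_Theta)."""
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

def robust_pgd(sample_grads, Z, theta0, eta, T, gradient_estimator, radius=np.inf):
    """Algorithm 1 sketch: projected gradient descent with a robust gradient estimator.

    sample_grads(theta, Z_batch) -> (m, p) array of per-sample gradients;
    gradient_estimator((m, p) array) -> robust estimate of the mean gradient;
    Z is an (n, ...) array, split into T disjoint batches, one per iteration.
    """
    n = len(Z)
    batches = np.array_split(np.random.permutation(n), T)
    theta = np.asarray(theta0, dtype=float)
    for t in range(T):
        G = sample_grads(theta, Z[batches[t]])   # per-sample gradients on a fresh batch
        g_hat = gradient_estimator(G)            # robust multivariate mean of the gradients
        theta = project_l2_ball(theta - eta * g_hat, radius)
    return theta
```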
stage recursion first remove via truncation algorithm described detail appendix subsequently identify two subspaces using svd clean subspace contamination small effect mean another subspace contamination potentially larger effect use simple estimator clean subspace recurse computation subspace building work lai lemma appendix provide careful analysis gradient estimator algorithm huber gradient estimator function hubergradientestimator sample gradients corruption level dimension huberoutliergradienttruncation return mean else let covariance matrix let span top principal components complement projection operation set let hubergradientestimator let mean let return end end function gradient estimation model design gradient estimators model leverage recent work designing robust mean estimators setting robust mean estimators build classical work alon nemirovski yudin jerrum estimator problem mean estimation catoni lerasle oliveira propose robust mean estimators achieve exponential concentration around true mean distribution bounded second moment work require mean estimators multivariate distributions several recent works extend estimator general metric spaces paper use geometric estimator gmom originally proposed analyzed minsker design gradient estimator basic idea behind gmom estimator first split samples subsamples estimate sample mean subsamples gmom estimator given subsamples formally let random variables sampled distribution gmom estimator estimating mean described follows partition samples blocks size let sample means block gmom estimator given median high dimensions different notions median considered minsker uses geometric median argmin algorithm presents gradient estimator obtained using gmom mean estimator algorithm heavy tailed gradient estimator function heavytailedgradientestimator sample gradients define number buckets log partition blocks size end let argmin return end function experiments section demonstrate proposed methods huber contamination heavytailed models variety simulated real data examples huber contamination first consider huber contamination model demonstrate practical utility based robust estimator described algorithms synthetic experiments linear regression linear regression observe paired samples assume pairs sampled true distribution linked via linear model drawn normal distribution variance use squared loss loss function note true parameter minimizer resulting population risk describe experiment setup data model present results setup fix contamination level next generate clean covariates corresponding clean responses using simulate outlier distribution drawing covariates setting responses total number samples set sample size increases dimension scaling used ensure statistical minimax error absence contamination roughly optimally robust method error close roughly equal corruption level see figure ols robustgd torrent huber plugin ransac parameter error parameter error parameter error log parameter error iterations log different figure robust linear regression metric measure parameter error also study convergence properties proposed method different contamination levels use code provided lai implement gradient estimator baselines use ols torrent ransac plugin estimator baselines torrent iterative based alternating minimization algorithm one step calculates active set examples keeping samples smallest absolute values residual step updates current estimates solving ols active set bhatia shown superiority torrent based outlier techniques hence compare plugin 
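For concreteness, a sketch of the median-of-means gradient estimator of Algorithm 3 follows, with the geometric median computed by Weiszfeld iterations (a standard choice); the bucket-count constant 3.5 below is an illustrative assumption, since the exact constant in the paper is not recoverable from the source text.

```python
import numpy as np

def geometric_median(points, n_iter=100, tol=1e-7):
    """Weiszfeld iterations for argmin_mu sum_i ||x_i - mu||_2."""
    mu = points.mean(axis=0)
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(points - mu, axis=1), tol)  # avoid division by zero
        w = 1.0 / d
        mu_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

def gmom_gradient_estimator(G, delta=0.05):
    """Algorithm 3 sketch: geometric median-of-means of per-sample gradients G (m, p)."""
    m = G.shape[0]
    b = max(1, min(m, 1 + int(3.5 * np.log(1.0 / delta))))  # bucket count (assumption)
    blocks = np.array_split(np.random.permutation(m), b)
    block_means = np.stack([G[idx].mean(axis=0) for idx in blocks])
    return geometric_median(block_means)
```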
estimator implemented using algorithm estimate mean vector covariance matrix xti results summarize main findings estimators except proposed algorithm perform poorly figure note torrent algorithm strong guarantees response corrupted performs poorly huber contamination model may contaminated error robust plugin estimator increases dimension investigate theoretically section find error plugin estimator grows norm experiment choose thus figure corroborates corollary section figure find parameter error increases linearly contamination rate study section finally figure shows convergence rate decreases increasing contamination high enough algorithm remains stuck corroborating lemma appendix next study performance proposed method context classification synthetic experiments logistic regression logistic regression observe paired samples assume pairs sampled true distribution linked via linear model probability otherwise case use negative conditional loss function log exp setup simulate linearly separable classification problem clean covariates sampled corresponding clean responses computed sign simulate outlier distribution adding asymmetric noise flip labels one class increase variance corresponding covariates multiplying total number samples set metric measure classification error separate clean test set study error changes convergence properties parameter error proposed method different contamination levels baselines use logistic regression mle linear support vector machine svm baselines results figures show qualitatively similar results linear regression setting error proposed estimator degrades gracefully grows linearly contamination level gradient descent iterates converge linearly figure observe svm logistic regression mle perform poorly logistic regression mle completely flips labels error close whereas linear svm outputs random hyperplane classifier flips label roughly half dataset robustgd logisticregression svm error error robustgd epsilon error error log parameter error iterations log different figure robust logistic regression robust face reconstruction setup experiment show efficacy algorithm attempting reconstruct face images corrupted heavy occlusion occluding pixels play role outliers use data cropped yale dataset dataset contains subjects image pixels following methodology wang choose face images per subject taken mild illumination conditions computed eigenface set eigenfaces given new corrupted face image subject goal get best true face remove scaling effects normalized images range one image per person used test reconstruction occlusions simulated randomly placing blocks size repeated times test image note example use linear regression model uncontaminated statistical model almost certainly table fitting original image error mean rmse best possible proposed torrent ols scrrr exact match unknown ground truth distribution despite model misspecification results show robust mean based gradient algorithms well metric use root mean square error rmse original reconstructed image evaluate performance algorithms also compute best possible reconstruction original face image using eigenfaces methods use torrent ols baselines wang implemented popular robust estimators ransac huber loss etc showed poor performance wang proposed alternate robust regression algorithm called self scaled regularized robust regression scrrr showed equivalence method also compare best possible rmse obtained reconstructing image using eigenfaces results table shows mean rmse best proposed gradient descent based 
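For reference, the contamination scheme used in the logistic regression experiments above (flipping the labels of one class and inflating the variance of the corresponding covariates) can be sketched as follows; the class chosen and the multiplier are illustrative assumptions, as the exact constants are not recoverable from the source.

```python
import numpy as np

def make_contaminated_logistic(n, p, eps, theta_star, inflate=5.0, seed=None):
    """Huber-contaminated, linearly separable classification data (sketch).

    Clean pairs satisfy y = sign(<x, theta*>); an eps fraction of the points
    from one class have their labels flipped and their covariates scaled by
    `inflate` (both choices are illustrative, not the paper's exact constants).
    """
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, p))
    y = np.sign(X @ theta_star)
    pos = np.flatnonzero(y > 0)                                  # contaminate one class only
    out = rng.choice(pos, size=min(int(eps * n), len(pos)), replace=False)
    y[out] = -y[out]                                             # asymmetric label flips
    X[out] *= inflate                                            # inflate covariate variance
    return X, y
```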
method recovered images cases closer original image figure figure shows case none methods succeed reconstruction successful reconstruction successful reconstruction failed reconstruction figure robust face recovery results top order original image occluded image best possible recovery given basis bottom order reconstructions using proposed algorithm torrent ordinary least squares ols estimation consider model present experimental results synthetic real world datasets comparing gradient descent based robust estimator described rithms call gmom erm several recent proposals experiments focus problem linear regression described section work noise distributions synthetic experiments simple linear regression setup covariate sampled isotropic gaussian distribution set entry noise sampled pareto distribution mean zero variance tail parameter tail parameter determines moments pareto random variable specifically moment order exists hence smaller distribution setup keep dimension fixed vary always maintain least methods use erm baseline compare gmom since always setting solution erm closed form expression simply ols solution also study performs gradient descent erm equivalent using empirical mean gradient oracle framework also compare robust estimation techniques hsu sabato duchi namkoong experiments iterative techniques run convergence hyper parameter selection gmom estimator depends confidence parameter needs tuned experiments noticed performance gmom varies little selected reasonable range close see figure discussion set simulations metrics experiments vary parameters pareto distribution change minimal risk compare various approaches across parameter values use scaled version excess risk define estimator compare performance two estimators define notion relative efficiency releff roughly corresponds percentage improvement excess risk obtained using whenever releff lower risk higher value fractional improvement results reduce variance plots presented next section averaged results repetitions figure shows benefits using gmom erm figure plot excess risk erm gmom number iterations see upon convergence gmom much lower population risk erm expected converges erm however population risk first iterations much lower risk erm suggesting early stopping next figure plot scaled excess risk erm gmom increases see gmom always better erm even number samples times dimension figure plot relative efficiency gmom erm shows percentage improvement excess risk gmom decreases noise level decreases behavior expected noiseless setting methods would similar behavior similar study see relative efficiency noise distribution noted increased moments exist underlying distribution figure shows noise distribution becomes benefit using gmom erm erm erm gmom excess risk true risk population risk erm gmom population risk iterations releff gmom erm iterations releff gmom erm relative efficiency relative efficiency relative efficiency relative efficiency figure linear regression performance comparison gmom erm dependence confidence level figure shows performance gmom estimator various values seen choice little effect performance estimator however notice small values performance gmom degrades practice one use either cross validation validation set choosing theoretical preliminaries section develop theoretical preliminaries begin description canonical examples risk minimization section next develop general erm population risk iterations population risk figure linear regression dependence confidence level theory convergence projected 
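The comparison metrics used above, the excess risk and the relative efficiency releff, can be computed as in the following sketch. For linear regression with squared loss, the excess population risk is R(theta) - R(theta*) = 0.5 (theta - theta*)' Sigma (theta - theta*); the releff formula below is one natural formalization of the percentage-improvement description given in the text, since the paper's exact scaling is garbled in the source.

```python
import numpy as np

def excess_risk_linreg(theta_hat, theta_star, Sigma):
    """Excess population risk for squared-loss linear regression."""
    d = theta_hat - theta_star
    return 0.5 * d @ Sigma @ d

def releff(theta_a, theta_b, theta_star, Sigma):
    """Percentage improvement in excess risk of estimator a over estimator b
    (one natural reading of the releff metric described above)."""
    ra = excess_risk_linreg(theta_a, theta_star, Sigma)
    rb = excess_risk_linreg(theta_b, theta_star, Sigma)
    return 100.0 * (rb - ra) / rb
```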
gradient descent section analyze gradient estimators defined algorithms sections respectively finally sections present consequences general theory canonical examples huber contamination models examples assume certain mild moment conditions concretely random vector let covariance matrix bounded moments exists constant every unit vector illustrative examples framework risk minimization central paradigm statistical estimation widely applicable section provide illustrative examples fall framework linear regression observe paired samples assume pairs sampled true distribution linked via linear model drawn distribution normal distribution variance distribution pareto distribution suppose covariates mean covariance setting use squared loss loss function induces following population risk note true parameter minimizer population risk strongconvexity smoothness assumptions setting require generalized linear models observe paired samples suppose pairs sampled true distribution linked via linear model conditioned covariates response variable distribution yhx exp fixed known scale parameter link function focus random design setting covariates mean covariance use negative conditional loss function true parameter minimizer resulting population risk easy see linear regression gaussian noise lies family generalized linear models instantiate glms logistic regression logistic regression case pairs linked probability otherwise corresponds setting log exp hessian population risk given exp exp note diverges minimum eigenvalue hessian approaches loss longer strongly convex prevent case take parameter space bounded exponential families canonical parameters finally consider case true distribution exponential family canonical parameters vector sufficient statistics obtained map note linear logistic regression models indeed exponential family interest cases canonical parameters details write true distribution case exp arbitrary nuisance function negative gives following loss function smoothness assumptions require constants stability gradient descent section develop general theory convergence projected gradient descent described algorithm note gradient estimators could biased guaranteed consistent estimators true gradient especially true huber contamination model impossible obtain consistent estimators gradient risk bias caused contaminated samples hence turn attention understanding behavior projected gradient descent biased inexact gradient estimator form present main result define notion stability gradient estimator plays key role convergence gradient descent definition stability gradient estimator stable given risk function denote following contraction parameter note definitions place state main result stability gradient descent theorem suppose gradient estimator satisfies condition stable risk function algorithm initialized returns iterates probability least contraction parameter defer proof result appendix theorem provides general result risk minimization parameter estimation concrete instantiation given gradient estimator risk pair first study distribution gradient risk estimate apply theorem error suffered gradient estimator bound first term decreasing second term increasing suggests given need run enough iterations first term bounded second hence fix number iterations smallest positive integer since obtain linear convergence typically logarithmic number iterations suffice obtain accurate estimate general analysis algorithm analyze gradient estimator described algorithm huber contamination model study error 
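In readable form, the linear convergence statement being invoked here has the generic shape (reconstructed from the surrounding discussion; the exact constants are not recoverable from the source):

\[
\|\theta^{t}-\theta^{*}\|_{2}\;\le\;\kappa^{t}\,\|\theta^{0}-\theta^{*}\|_{2}\;+\;\frac{1}{1-\kappa}\,\alpha(\tilde n,\tilde\delta),\qquad 0<\kappa<1,
\]

and the prescription in the text, taking T to be the smallest positive integer at which the first (optimization) term is dominated by the second (statistical) term, can be computed directly, as in this sketch:

```python
import math

def min_iterations(kappa, r0, alpha):
    """Smallest T with kappa**T * r0 <= alpha / (1 - kappa); grows only
    logarithmically in the initial error r0 (sketch of the rule above)."""
    target = alpha / (1.0 - kappa)
    if r0 <= target:
        return 1
    return math.ceil(math.log(r0 / target) / math.log(1.0 / kappa))
```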
suffered stated algorithm uses robust mean estimator lai hence proof strategy mimics lai present different result obtained careful analysis algorithm define log log log log log definition place following result lemma let true probability distribution let true distribution gradients mean covariance bounded fourth moments exists positive constant given samples distribution huber gradient estimator described algorithm instantiated contamination level probability least returns estimate log note particular parameters held fixed error gradient estimator satisfies log weak dependence dimension general analysis algorithm section analyze gradient estimator setting described algorithm following result shows gradient estimate exponential concentration around true gradient mild assumption gradient distribution bounded second moment proof follows analysis geometric estimator minsker use denote trace matrix lemma let probability distribution distribution gradients mean covariance heavy tailed gradient estib satisfies following exponential mator described algorithm returns estimate concentration inequality probability least log consequences estimation model turn attention examples introduced earlier present specific applications theorem parametric estimation huber contamination model shown lemma need added assumption true gradient distribution bounded fourth moments suggests need additional assumptions make assumptions explicit defer technical details appendix linear regression assume covariates bounded noise bounded moments theorem robust linear regression consider statistical model equation suppose number samples large enough log contamination level constants universal constants algorithm initialized stepsize algorithm gradient estimator returns iterates probability least log contraction parameter asymptotic setting number samples parameters held fixed see huber gradient estimator corresponding maximum allowed contamination level says covariance matrix higher contamination level tolerate plugin estimation linear regression true parameter written closed form xxt way estimate separately estimate xxt using robust covariance mean oracles respectively assumption one reduce problem robustly estimating setting present result using lai mean estimator estimation recall definition following result corollary consider model equation covariates drawn universal constants returns estimate probability least log comparing bounds see error plugin estimator depends would make estimator vacuous scales dimension hand asymptotic rate robust gradient estimator independent disadvantage plugin estimation inescapable due known minimax results robust mean estimation show dependence unavoidable oracle estimates mean setting next apply estimator generalized linear models generalized linear models assume covariates bounded moments additionally assume smoothness around precise assume exist universal constants also assume tth theorem robust generalized linear models consider statistical model equation suppose number samples large enough log contamination level log constants universal constants algorithm initialized stepsize algorithm gradient estimator returns iterates probability least log contraction parameter note case linear regression gaussian noise relatively straightforward see assumption bounded moments covariates essentially leads equivalence theorem theorem setting following section instantiate theorem logistic regression compare contrast results existing methods logistic regression observing bounded logistic regression see 
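The plug-in estimator discussed above, which robustly estimates E[x y] and E[x x^T] separately and then solves the normal equations, can be sketched generically as follows; `robust_mean` stands for any robust multivariate mean oracle (for example the Lai et al. estimator used in the paper), and applying it to the flattened outer products is one simple reduction to mean estimation, used here purely for illustration.

```python
import numpy as np

def plugin_linreg(X, y, robust_mean):
    """Plug-in sketch: theta = E[xx^T]^{-1} E[xy], with both moments
    estimated by a robust mean oracle applied to per-sample quantities."""
    n, p = X.shape
    xy_hat = robust_mean(X * y[:, None])                 # robust estimate of E[x y]
    outer = np.einsum('ni,nj->nij', X, X).reshape(n, p * p)
    S_hat = robust_mean(outer).reshape(p, p)             # robust estimate of E[x x^T]
    return np.linalg.solve(S_hat, xy_hat)
```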
exists universal constant corollary robust logistic regression consider model equation universal constants algorithm initialized stepsize algorithm gradient estimator returns iterates probability least log contraction parameter restrictive assumption exploited stein trick derive plugin estimator logistic regression however similar linear regression error plugin estimator scales avoided robust gradient descent algorithm also note algorithm extends general covariate distributions exponential family assume random vector bounded moments theorem robust exponential family consider model equation universal constants algorithm initialized stepsize algorithm gradient oracle returns iterates probability least log contraction parameter plugin estimation since true parameter minimizer negative loglikelihood know implies shows true parameter obtained inverting operator whenever possible robust estimation framework use robust mean sufficient statistics estimate instantiate estimator using mean estimator estimate corollary consider model equation universal constants probability returns estimate least log projection operator onto feasible set discussion limitations asymptotic setting algorithm algorithm gradient estimator converges point log hence error scales logarithmically dimension dependency dimension facet using estimator lai gradient estimation using better oracles improve performance next would like point difference maximum allowed contamination three models logistic regression exponential family linear regression differences large part due differing variances gradients naturally depend underlying risk function scaling variance gradients linear regression also provides insights limitations algorithm gradient estimators appendix provide upper bound contamination level based initialization point algorithm would work gradient estimator consequences estimation section present specific applications theorem parametric estimation heavy tailed setting proofs results found appendix linear regression first consider linear regression model described equation assume covariates bounded noise bounded moments assumption needed bound error gradient estimator see lemma theorem heavy tailed linear regression consider statistical model equation universal constants log algorithm initialized stepsize algorithm gradient estimator returns iterates probability least log contraction parameter generalized linear models section consider generalized linear models described equation covariate allowed heavy tailed distribution assume covariates bounded moment additionally assume smoothness around specifically assume exist universal constants also assume tth derivative theorem heavy tailed generalized linear models consider statistical model equation universal constants log algorithm initialized stepsize algorithm gradient estimator returns iterates probability least log contraction parameter instantiate theorem logistic regression model corollary heavy tailed logistic regression consider model equation universal constants log algorithm initialized stepsize algorithm gradient estimator returns iterates probability least log contraction parameter exponential family instantiate theorem parameter estimation exponential family distributions assume random vector bounded moments obtain following result theorem heavy tailed exponential family consider model equation algorithm initialized stepsize algorithm gradient estimator returns iterates probability least log contraction parameter universal constant discussion paper introduced 
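For the exponential family results above, the population gradient of the negative log-likelihood takes the simple form grad A(theta) - E[B(z)], so only the mean of the sufficient statistics must be estimated robustly. A minimal sketch follows; `grad_A` (the gradient of the log-partition function) and the per-sample sufficient statistics `B_samples` are model-specific inputs, and the step size and iteration count are illustrative.

```python
import numpy as np

def robust_expfam_fit(B_samples, grad_A, robust_mean, theta0, eta=0.1, T=100):
    """Robust gradient descent for exponential families (sketch).

    The population gradient of the negative log-likelihood at theta is
    grad_A(theta) - E[B(z)], so a single robust mean estimate suffices.
    """
    mu_hat = robust_mean(B_samples)          # robust mean of sufficient statistics
    theta = np.asarray(theta0, dtype=float)
    for _ in range(T):
        theta = theta - eta * (grad_A(theta) - mu_hat)
    return theta
```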
broad class estimators showed estimators strong robustness guarantees huber model distributions estimators leverage robustness gradient descent together observation risk minimization statistical models gradient risk takes form simple multivariate mean robustly estimated using recent work robust mean estimation estimators based robust gradient descent work well practice many cases outperform robust estimators several avenues future work including developing better understanding robust mean estimation improvement robust mean estimation would immediately translate improved guarantees estimators propose general parametric models finally would also interest understand extent could replace gradient descent optimization methods accelerated gradient descent newton method note however although methods may faster rates convergence classical risk minimization setting setup stability using inexact gradients far crucial warrants investigation acknowledgements research supported part grant thank larry wasserman helpful comments paper references noga alon yossi matias mario szegedy space complexity approximating frequency moments proceedings annual acm symposium theory computing stoc pages new york usa acm sivaraman balakrishnan martin wainwright bin statistical guarantees algorithm population analysis annals statistics kush bhatia prateek jain purushottam kar robust regression via hard thresholding advances neural information processing systems pages box tests variances biometrika christian brownlees emilien joly lugosi empirical risk minimization heavytailed losses annals statistics bubeck convex optimization algorithms complexity foundations trends machine learning emmanuel xiaodong john wright robust principal component analysis journal acm jacm olivier catoni challenging empirical mean empirical variance deviation study annales institut henri statistiques volume pages institut henri moses charikar jacob steinhardt gregory valiant learning untrusted data stoc mengjie chen chao gao zhao ren robust covariance matrix estimation via matrix depth arxiv preprint mengjie chen chao gao zhao ren general decision theory huber epsiloncontamination model electronic journal statistics yudong chen constantine caramanis shie mannor robust sparse regression adversarial corruption proceedings international conference machine learning icml atlanta usa june pages devroye nonparametric density estimation view wiley series probability mathematical statistics wiley ilias diakonikolas gautam kamath daniel kane jerry ankur moitra alistair stewart robust estimators high dimensions without computational intractability foundations computer science focs ieee annual symposium pages ieee david donoho richard liu automatic robustness minimum distance functionals annals statistics pages simon sivaraman balakrishnan aarti singh computationally efficient robust estimation sparse functionals conference learning theory john duchi hongseok namkoong regularization convex objectives arxiv preprint jianqing fan weichen wang ziwei zhu shrinkage principle data highdimensional robust matrix recovery martin fischler robert bolles random sample consensus paradigm model fitting applications image analysis automated cartography commun acm chao gao robust regression via mutivariate regression depth frank hampel elvezio ronchetti peter rousseeuw werner stahel robust statistics approach based influence functions volume john wiley sons cecil hastings frederick mosteller john tukey charles winsor low moments small samples comparative study order 
statistics annals mathematical statistics pages daniel hsu sivan sabato loss minimization parameter estimation heavy tails journal machine learning research huber robust statistics john wiley sons peter huber robust estimation location parameter annals mathematical statistics peter huber robust version probability ratio test annals mathematical statistics mark jerrum leslie valiant vijay vazirani random generation combinatorial structures uniform distribution theoretical computer science kakade shai ambuj tewari applications strong smoothness duality learning matrices corr kevin lai anup rao santosh vempala agnostic estimation mean covariance foundations computer science focs ieee annual symposium pages ieee lee jeffrey david kriegman acquiring linear subspaces face recognition variable lighting ieee transactions pattern analysis machine intelligence matthieu lerasle roberto oliveira robust empirical mean estimators arxiv preprint jerry robust sparse estimation tasks high dimensions conference learning theory loh statistical consistency asymptotic normality robust mestimators ann loh martin wainwright regression noisy missing data provable guarantees advances neural information processing systems pages gabor lugosi shahar mendelson risk minimization tournaments arxiv preprint lugosi shahar mendelson estimators mean random vector annals statistics stanislav minsker geometric median robust estimation banach spaces bernoulli ivan mizera depth deep points calculus annals statistics pages nemirovski yudin problem complexity method efficiency optimization publication wiley yurii nesterov introductory lectures convex optimization basic course volume springer science business media joel tropp tail bounds sums random matrices foundations computational mathematics john tukey mathematics picturing data proceedings international congress mathematicians volume pages van geer empirical processes cambridge university press yin wang caglayan dicle mario sznaier octavia camps self scaled regularized robust regression proceedings ieee conference computer vision pattern recognition pages yannis yatracos rates convergence minimum distance estimators kolmogorov entropy ann xinyang dohyung park yudong chen constantine caramanis fast algorithms robust pca via gradient descent advances neural information processing systems annual conference neural information processing systems december barcelona spain pages zhou koushiki bose jianqing fan han liu new perspective robust mestimation finite sample theory applications multiple testing proof theorem section present proof main result projected gradient descent inexact gradient estimator ease notation often omit proof iteration step assumption probability least taking union bound holds iteration steps probability least remainder analysis assume event true notation let noisy gradient let brevity following lemma bubeck lemma lemma let convex assumptions kek update rule kek equation follows contraction property projections write second step follows lemma last step follows step size combining equations using assumption kek get assumption choose since get let therefore solving induction get proof theorem prove result robust generalized linear models first study distribution gradients corresponding risk function lemma consider model equation exist universal constants kcov bounded fourth moments var proof gradient expectation written sup sup sup last line follows assumption smoothness bound maximum eigenvalue cov kcov sup sup sup sup xxt sup xxt sup bound make use inequality 
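Stepping back to the induction carried out in the stability proof at the start of this passage: writing e_t for the gradient estimation error at iterate t, the recursion being solved is, in readable form (a reconstruction from the steps above; the paper's normalization of the error term may differ),

\[
\|\theta^{t+1}-\theta^{*}\|_{2}\;\le\;\kappa\,\|\theta^{t}-\theta^{*}\|_{2}\;+\;\eta\,\|e_{t}\|_{2},
\]

and unrolling it for T steps gives

\[
\|\theta^{T}-\theta^{*}\|_{2}\;\le\;\kappa^{T}\,\|\theta^{0}-\theta^{*}\|_{2}\;+\;\eta\sum_{t=0}^{T-1}\kappa^{T-1-t}\|e_{t}\|_{2}
\;\le\;\kappa^{T}\,\|\theta^{0}-\theta^{*}\|_{2}\;+\;\frac{\eta}{1-\kappa}\,\max_{t}\|e_{t}\|_{2}.
\]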
inequality random variables using inequality last line follows assumption exponential family hence cumulants higher order derivatives function kcov bounded fourth moment show fourth moment gradient distribution bounded control last step follows fact central moment written polynomial involving lower cumulants turn derivatives lognormalization function control assumption bounded implies exist constants previously say universal constants hence gradient bounded fourth moments studied distribution gradients use lemma characterize stability huber gradient estimator using lemma know point huber gradient estimator satisfies probability kcov log substituting upper bound kcov lemma get universal constants probability least log log using equation ensure stability gradient descent need get gradient descent stable long number samples large enough contamination level log log constants plugging theorem get back result theorem sponding proof corollary begin studying distribution random variable xxt lemma consider model equation exist universal constants kcov var bounded fourth moments proof mean xxt covariance cov xxt xxt cov xxt xxt xxt written hence covariance matrix written cov therefore kcov bounded fourth moment start lhs xxt last line follows two applications following inequality inequality random variables control term control using cauchy schwartz normality projections normal distribution control control using independence normality projections normal distribution therefore rhs var cov kcov saw kcov lhs rhs scale hence bounded fourth moments established bounded fourth moments implies use mean estimation oracle using theorem know oracle outputs estimate probability least kcov log using lemma subsitute kcov recover statement corollary proof theorem prove result robust exponential family first study distribution gradients corresponding risk function lemma consider model equation exists universal constant kcov var bounded fourth moments proof fisher consistency negative know mean covariance bounded moments follows assumption sufficient statistics bounded moments studied distribution gradients use lemma characterize stability huber gradient estimator using lemma know point huber gradient estimator satisfies probability kcov log substituting upper bound kcov lemma get universal constants log assumption therefore case theorem universal constant plugging corresponding get back result corollary proof corollary using contraction property projections know fisher consistency negative know true parameter obtained inverting operator whenever possible convex conjugate use following result control lipschitz smoothness theorem duality assume closed convex smooth parameter convex conjugate strongly convex parameter proof theorem found hence assumption fourth moments sufficient statistics bounded also know cov implies use oracle using lemma get exists universal constants probability least log combining equation recovers result corollary proof theorem present proof theorem first study distribution gradients loss function help bound error gradient estimator lemma consider model equation suppose covariates bounded noise bounded moments exist universal constants kcov xxt proof start deriving results xxt next bound operator norm covariance gradients point covariance cov xxt xxt cov want bound kcov cov cov xxt xxt sup sup xxt xxt sup sup xxt sup sup sup second last step follows last step follows assumption bounded moments see equation proceed proof theorem lemma know point satisfies following gradient estimator described 
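For the linear regression lemma above, the quantities being manipulated can be written out explicitly from the model y = <x, theta*> + w, under the assumptions stated in the model description (x has mean zero and covariance Sigma, and the noise w is independent of x):

\[
\nabla L(\theta;(x,y)) = (\langle x,\theta\rangle-y)\,x = xx^{\top}\Delta - w\,x,\qquad \Delta=\theta-\theta^{*},
\]
\[
\mathbb{E}[\nabla L(\theta)] = \Sigma\Delta,\qquad
\operatorname{Cov}(\nabla L(\theta)) = \mathbb{E}\big[(xx^{\top}\Delta-wx)(xx^{\top}\Delta-wx)^{\top}\big]-\Sigma\Delta\Delta^{\top}\Sigma .
\]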
algorithm dne probability least dne cov log substitute upper bound kcov lemma equation dne cov log log log log complete proof theorem use results theorem note holds gradient estimator satisfies stability condition log theorem gives suppose satisfies condition plugging required result proof theorem prove theorem use result lemma derived following expression covariance kcov lemma know point gradient estimator described algorithm satisfies following probability least dne dne cov log substitute upper bound kcov equation get log dne cov log log use results theorem gradient estimator satisfies stability condition holds log theorem gives suppose satisfies condition plugging required result proof theorem proof proceeds along similar lines proof theorem prove theorem utilize result lemma showed kcov combining result lemma get probability least log dne cov log stability condition always satisfied long substituting since theorem gives required result upper bound contamination level provide complementary result gives upper bound contamination level based initialization point algorithm would work key idea error incurred mean estimation oracle lower bounded variance distribution zero vector lies within error ball mean oracle forced output mean algorithm implies estimating mean gradient error high one force mean forces algorithm converge remainder section consider case linear regression asymptotic regime lemma consider model equation exists universal constant every gradient oracle exists contamination distribution algorithm converge even number samples proof using lemma know point xxt kcov let represent distribution similarly let represent corresponding distribution using theorem know minimax rate estimating mean distribution gradients given inf sup statement says point mean oracle always incur error estimating gradient forp oracle exists adversarial contamination whenever suppose contamination level every oracle exists corresponding algorithm remain stuck plugging recover statement lemma chen provide general minimax lower bound setting contrast using algorithm oracle log close true parameter even contamination small implies procedure minimax optimal approach nonetheless practical algorithm robust estimation general statistical models details analysis algorithm section present refined analysis algorithm begin introducing preliminaries subsequently analyze algorithm finally turn attention general algorithm preliminaries unless otherwise stated assume throughout random variable bounded fourth moments every unit vector summarize useful results bound deviation conditional true lemma lemma let univariate random variable bounded fourth moments let event probability lemma lemma let univariate random variable let let event probability corollary corollary let event probability let random variable bounded fourth moments denote conditional covariance matrix random variables bounded fourth moments use chebyshev inequality obtain tail bounds lemma lemma let bounded fourth moments every unit vector proofs also use matrix bernstein inequality rectangular matrices preliminary consider finite sequence independent random matrices size assume random matrix satisfies kzk kop almost surely define max zkt kop zkt kop preliminaries place use following result lemma exp equivalently probability least log log let denote set intervals following standard uniform convergence result lemma suppose probability least log log sup algorithm huber outlier gradients truncation function huberoutliergradienttruncation sample gradients 
corruption level dimension let smallest interval containing log tion points return else let samples ith hubergradientestimator end let ball smallest radius centered containing log fraction points return end end function turn attention analysis algorithm case case firstly analyze algorithm lemma suppose distribution mean variance bounded fourth moments exist positive universal constants given samples distribution algorithm probability least returns estimate log log log log simplified log log log proof application hoeffding inequality obtain probability least fraction corrupted samples samples distribution less log condition event remainder proof let denote fraction corrupted samples let samples true distribution let cardinality set let interval around containing mass using lemma length using lemma obtain probability least number samples distribution fall interval least upper bounded log log let set points smallest interval containing fraction points using theory know every interval exists universal constant exp probability least exists universal constant log log sup using equation know fraction lie let set points smallest interval containing fraction points know length minimum interval containing fraction points less length smallest interval containing fraction points turn less length minimum interval containing fraction points need overlap large enough hence extreme points interval atmost away hence distance chosen within length moreover interval minimum length fraction contain least fraction controlling sources error hence bound error mean chosen noise points within length atmost hence maximum error next mean chosen good points converge mean conditional distribution points sampled conditioned lie minimum length interval variance random variables upper bounded using lemma control distance mean conditional mean event sample chosen interval know hence using lemma get exists constant hence probability least mean within log length taking conditioning statements upper bounding log recover statement lemma case prove case use series lemmas lemma proves outlier filtering constrains points ball around true mean lemma controls error lemma controls mean covariance true distribution outlier filtering error mean projected onto bottom span covariance matrix lemma suppose distribution mean covariance bounded fourth moments exist positive universal constants given samples distribution equation find vector probability least log log log proof pick orthogonal directions use method using union bound recover result next prove case firstly prove outlier step lemma outlier removal step exists universal constants probability least every remaining point satisfies log log fraction samples corrupted log proof let set points chosen outlier filtering let sed set good points chosen outlier filtering let sen set bad points chosen outlier filtering using theory know every closed ball exists constant probability least log sup let claim see suppose letpz let orthogonal directions let maxi plugging hence using lemma least fraction good points away hence minimum radius ball containing radius atmost combined triangle inequality recovers statement lemma let set points outlier filtering let mean mean sed mean sen lemma let sed set clean points remaining outlier filtering probability least log log log log log proof first prove bounds mean shift control use lemma event removed outlier filtering control using lemma use bernstein inequality lemma get probability least log log next prove bound covariance matrix corollary control use 
bernstein inequality lemma know points constrained ball plugging lemma log log plugging values get log log finally log log lemma let bottom principal components covariance matrix filtering exists universal constant probability least projection matrix bottom defined lemma log proof weyl inequality control log log control hence using space spanned bottom eigenvectors corresponding projection operator following algebraic manipulation get established required results ready prove lemma restate result sake completeness theorem suppose distribution mean covariance bounded fourth moments exist positive universal constant given samples distribution equation algorithm probability least returns estimate log log log log log log log log log proof divide samples different sets choose first set keep active set samples run outlier filtering set let remaining samples outlier filtering sed orthogonality subspaces spanned eigenvectors coupled triangle inequality contraction projection operators mean vector span top principal components returned running algorithm reduced dimensions dim lemma monotonically increasing dimension moreover upper bound lemma also monotonically increasing dimension hence error step algorithm upper bounded error incurred running dimension log samples probability log hence overall error recursive algorithm upper bounded log combining lemma lemma instantiated log samples probability log get log log log log
| 2 |
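The row above works through stability proofs for gradient descent driven by a robust mean-estimation oracle applied to per-sample gradients (a Huber-type gradient estimator whose error scales with the covariance bound kcov and logarithmic factors). To make that pipeline concrete, here is a minimal NumPy sketch in which the paper's Huber gradient estimator is replaced by a simple coordinate-wise median-of-means aggregator; the aggregator choice, function names, and hyperparameters are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def median_of_means(G, k=10, rng=None):
    """Robustly aggregate per-sample gradients (rows of G): split the rows
    into k blocks, average within each block, then take the coordinate-wise
    median of the block means. A small fraction of corrupted rows can spoil
    at most a few blocks, so the median stays close to the clean mean."""
    rng = rng or np.random.default_rng(0)
    blocks = np.array_split(rng.permutation(len(G)), k)
    return np.median([G[b].mean(axis=0) for b in blocks], axis=0)

def robust_gd(X, y, steps=300, lr=0.1, k=10):
    """Least-squares gradient descent in which the sample-average gradient
    is replaced by the robust aggregate above."""
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        G = (X @ theta - y)[:, None] * X      # per-sample gradients, shape (n, d)
        theta -= lr * median_of_means(G, k)   # robust gradient step
    return theta
```

Under the contamination model discussed above, the plain average gradient can be dragged arbitrarily far by a vanishing fraction of corrupted pairs, while the aggregated step keeps the iterates within a radius governed by the oracle's estimation error, which is exactly the quantity the lemmas above bound.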
quasi periodicity quantification video data using topology jan christopher jose perea january abstract work introduces novel framework quantifying presence strength recurrent dynamics video data specifically provide continuous measures periodicity perfect repetition quasiperiodicity superposition periodic modes periods way require segmentation training object tracking surrogate signals methodology operates directly video data approach combines ideas nonlinear time series analysis delay embeddings computational topology persistent homology translating problem finding recurrent dynamics video data problem determining circularity toroidality associated geometric space extensive testing show robustness scores respect several noise show periodicity score superior methods compared periodicity rankings furthermore show quasiperiodicity score clearly indicates presence biphonation videos vibrating vocal folds never accomplished end end quantitatively introduction periodicity characterizes many natural motions including animal locomotion spinning wheels oscillating pendulums etc quasiperiodicity thought superposition frequencies occurs naturally transitions ordinary chaotic dynamics goal work automate analysis videos capturing periodic quasiperiodic motion order identify classes motion unified framework generalize sliding window embeddings reconstruct periodic quasiperiodic attractors analyze resulting attractors using persistent homology technique combines geometry topology section return scores range indicate degree periodicity quasiperiodicity corresponding video show periodicity measure compares favorable others literature ranking videos section furthermore knowledge method able quantify existence quasiperiodicity directly video data approach fundamentally different others quantify periodicity video instance common derive signals video apply fourier autocorrelation measure periodicity contrast technique operates raw pixels avoiding common video department electrical computer engineering duke university durham usa ctralie department mathematics department computational mathematics science engineering michigan state university east lansing usa joperea analysis results appeared part thesis first author code replicate results https supplementary material videos https preprocessing tracking entirely using geometry also advantages applications fact simple synthetic example shows figure fourier transform quasiperiodic signals often close fourier transform periodic signals contrast sliding window embeddings design yield starkly different geometric structures periodic quasiperiodic cases exploit devise quasiperiodicity measurement use indicate degree biphonation videos vibrating vocal folds section useful automatically diagnosing speech pathologies context applied topology quasiperiodicity score one first applications persistent high dimensional data largely possible due recent advancements computational feasibility persistent homology prior work recurrence videos surrogate signals one common strategy detecting periodicity video derive function act surrogate dynamics use either frequency domain fourier transform time domain autocorrelation peak finding techniques one earliest works genre finds level set surfaces spatiotemporal xyt volume video frames stacked top uses curvature scale space curves live spatiotemporal surfaces function use fourier transforms pixels exhibit motion define measure periodicity based energy around fourier peak harmonics extract contours find eigenshapes contours classify 
parameterize motion within period frequency estimation done using fourier analysis peak detection top statistics derived contours area center mass finally derive surrogate function based mutual information first subsequent frames look peaks similarity function help watershed method matrices another class techniques relies matrices ssms frames similarity defined variety ways track set points foreground object compare affine invariant similarity another widely recognized technique periodicity quantification derives periodicity measures based matrices pixel differences technique inspired diverse array applications including analyzing cycles jellyfish analyzing bat wings analyzing videos autistic spectrum children performing characteristic repetitive motions hand flapping compare technique section miscellaneous techniques periodic video quantification also number works fall two categories works focus solely walking humans since one common types periodic motion videos interest people look braiding patterns occur xyt slices videos walking people perform blob tracking foreground walking person use ratio second first eigenvalues pca blob general periodic videos make codebook visual words look repetitions within resulting string take deep learning approach counting number periods occur video segment use convolutional neural network spatially downsampled regions interest uniformly spaced time estimate length cycle finally perhaps philosophically similar work work use cohomology find maps mocap data circle parameterizing periodic motions though work provide way quantify periodicity work show geometry provides natural way quantify recurrence periodicity quasiperiodicity video measuring shape delay embeddings particular propose several optimizations section make approach feasible resulting measure quasiperiodicity quantitative approaches lacking used section detect anomalies videos vibrating vocal folds finally contrast frequency time domain techniques method rely period length integer multiple sampling rate background delay embeddings geometry recurrence video data captured via geometry delay embeddings describe next video delay embeddings regard video sequence image frames indexed positive real numbers given positive integers width height video pixels function particular sequence images sampled discrete times yields one function via interpolation integer known dimension real number known delay video define sliding window also referred time delay embedding parameters time vector swd subset resulting varying referred sliding window embedding remark since pixel measurement locations fixed sliding window embedding eulerian view dynamics video note delay embeddings generally applied time series viewed videos framework hence equation essentially concatenation delay embeddings individual pixel video one large vector one main points leverage paper fact geometry sliding window embedding carries fundamental information original video explore next geometry video delay embeddings motivating example consider harmonic periodic signal cos cos quasiperiodic signal cos cos color videos treat channel independently yielding vector practice much difference color grayscale embeddings framework videos consider refer harmonic constitutive frequencies commensurate linearly dependent rational numbers way contrast underlying quencies signal linearly independent hence use term quasiperiodicity dynamics literature denote superposition periodic processes whose frequencies differs definitions literature regard quasiperiodic 
deviation perfect repetition geometric argument see equation discussion follows shows given periodic function exactly harmonics sliding window embedding swd topological circle closed curve without selfintersections wraps around torus illustration show figure plot sliding window embedding swd via pca principal component analysis projection figure sliding window embedding harmonic signal colors signal correspond colors points pca plot sliding window embedding traces topological circle wrapped around torus however quasiperiodic distinct frequencies appropriate swd dense fills figure shows plot quasiperiodic signal projection via pca sliding window embedding swd figure sliding window embedding quasiperiodic signal colors signal correspond colors points pca plot sliding window embedding dense torus difference geometry delay embeddings stark compared difference power spectral densities shown figure figure power spectral densities samples commensurate signals relative harmonics ratios respectively difference nearly evident geometry sliding window embeddings additionally unless sampling commensurate frequency fixed fourier basis causes frequency component bleed many frequency bins pattern making precise peak finding difficult moreover see next interpretation periodicity quasiperiodicity circularity toroidality sliding window embeddings remains true videos higher resolution max rest paper show one use persistent homology tool field computational topology quantify presence quasi periodicity video measuring geometry associated sliding window embedding short propose periodicity score video measures degree sliding window embedding swd spans topological circle quasiperiodicity score quantifies degree swd covers torus approach validated extensively show quasi periodicity detection method robust several noise models motion blur additive gaussian white noise mpeg bit corruption compare several periodicity quantification algorithms show approach closely aligned human subjects finally provide application automatic classification dynamic regimes laryngeal geometry video delay embeddings though may seem daunting compared case geometry delay embedding shares many similarities periodic videos shown let argue sliding window embeddings quasi periodic videos geometry described far end consider example video contains set frequencies let amplitude nth frequency ith pixel simplicity without loss generality assume cosine zero phase offset time series pixel written cos grouping coefficients together matrix write cos stands nth column constructing delay embedding equation cos swd cos applying cosine sum identity get swd cos sin constant vectors words sliding window embedding video sum linearly independent ellipses lie space frame videos resolution shown case commensurate frequencies window length length period vectors become orthogonal recovered pca swd figure shows components first pca vectors horizontal line pixels video oscillating pendulum note oscillations present temporally spatially figure showing slice principal components synthetic video oscillating pendulum chosen period length frames high dimensional geometry repeated pulses using eulerian coordinates important impact geometry delay embeddings natural videos figure shows pixels often jump foreground background pattern similar square waves types abrupt transitions require higher dimensional embeddings reconstruct geometry see first extract one period signal period pixel otherwise rewritten terms pulse since repeats regardless looks like periodic summation 
discretizes frequency domain figure example eulerian pixel witnessing transition video woman jumping jacks red green blue channels plotted time transitions induce per pixel periodic signal sharp transitions leads high dimensionality appropriate sliding window embedding switching back time domain write words pixel sum constant offset plus possibly infinite set harmonics integer multiples instance applying equation square wave period centered origin roundabout way deriving fourier series sin sin sin sampling sinc function sin intervals every odd coincides proportional every even harmonic zero conciding general sharper transitions longer tail high frequency harmonics exist embedding calling higher delay dimension fully capture geometry since every harmonic lives linearly independent ellipse similar observations harmonics made images collections patches around sharp edges figure persistent homology informally topology study properties spaces change stretching without gluing tearing instance number connected components number essentially different loops bound disk topological properties space follows circle square topologically equivalent since one deform one onto circle line segment would require either gluing endpoints line segment tearing circle homology tool algebraic topology designed measure types properties persistent homology adaptation ideas discrete collections points sliding window embeddings briefly introduce concepts next simplicial complexes simplicial complex combinatorial object used represent discretize continuous space discretization available one compute topological properties algorithmic means formally simplicial complex vertices nonempty set collection nonempty finite subsets always implies element called simplex elements called cases special called vertices called edges called faces example keep mind circle continuous space topology captured simplicial complex three vertices three edges terms topological properties simplicial complex regarded combinatorial surrogate connected component one loop bound region features higher dimensions persistent homology point clouds sliding window embedding video practice finite set swd swd determined choice finite moreover since swd restriction ambient euclidean distance endows swd structure finite metric space discrete metric spaces also referred point clouds trivial topological point view point cloud points simply connected components features holes higher dimensions however point cloud sampled continuous space topology circle torus one would expect appropriate simplicial complexes vertices point cloud reflect topology underlying continuous space exploit next given point cloud finite set distance function complex rips complex short scale collection subsets diameter less equal simplicial complex vertex set equal constructed adding edge two vertices apart adding triangular faces whose bounding edges present generally adding whose bounding facets included show figure evolution rips complex set points sampled around unit circle epsilon epsilon epsilon figure rips complex three different scales point cloud points sampled around idea behind persistent homology track evolution topological features complexes scale parameter ranges maximum value instance figure one see distinct connect components one point three connected components one connected component continue case every similarly closed loops bounding empty regions changes increases indeed three holes central prominent hole two small ones left side notice however increases beyond holes 
filled addition new simplices particular one one connected component topological features higher dimensions family known rips filtration topological features dimension connected components holes voids etc changes codified referred persistence diagrams specifically dimension connected components holes voids etc one record value particular topological feature rips filtration appears birth time disappears death time times features form multiset dgmn set whose elements come repetition known persistence diagram rips filtration since dgmn collection points region visualize scatter plot persistence topological feature times quantity lifetime also include diagonal scatter plot order visually convey persistence pair setting points far diagonal large persistence represent topological features stable across scales hence deemed significant points near diagonal small persistence often associated unstable features illustrate figure process going point cloud persistence diagram rips filtration original point cloud time death persistence diagram time birth class death class birth class death class birth figure point cloud persistence diagram rips filtration connected edges rips filtration drawn blue class indicated red filled triangles shaded green remark computational task determining persistent homology classes filtered simplicial complex surprisingly reduced computing homology single simplicial complex fact problem linear algebra solved via elementary row column operations boundary matrices persistent homology swd particular persistence diagrams objects use quantify periodicity quasiperiodicity video figures show persistence diagrams rips filtrations sliding window embeddings commensurate signals figures respectively use fast new code ripser software package make persistent computation feasible figure sliding window embedding harmonic signal left persistence diagrams right associated rips filtration sliding window embedding swd traces topological circle wrapped around torus persistence diagram dimension one shows one pair prominent persistence consistent point cloud sampled around space topology circle figure sliding window embedding quasiperiodic signal left persistence diagrams right associated rips filtration sliding window embedding swd dense torus persistence diagram dimension one shows two pairs prominent persistence persistence diagram dimension two shows one prominent pair consistent point cloud sampled around space topology torus implementation details reducing memory requirements svd suppose video discretely sampled different frames resolution delay embedding dimension arbitrary assuming bit floats per grayscale value storing sliding window embedding requires bytes low resolution video seconds long using already exceeds memory follows address memory requirements ensuing computational burden construct access sliding window embedding indeed constructing rips filtration requires pairwise distances different delay vectors enables optimizations first points exists linear subspace contains particular let matrix video frame along column performing singular value decomposition yields matrix whose columns form orthonormal basis aforementioned linear subspace hence finding coordinates original frame vectors respect orthogonal basis using coordinates columns instead original pixels get sliding window embedding lower dimension swd kswd swd kswd swd note computed finding eigenvectors cost dominated example alone reduces memory requirements course procedure effective short videos actually many fewer frames 
pixels encompasses examples work fact point video minutes similar approach used classical work eigenfaces computing principal components set face images distance computation via diagonal convolutions different optimization possible delays taken exactly frames interpolation needed case squared euclidean distance let matrix pairwise squared euclidean distances frames possibly computed memory optimization section let matrix pairwise distances delay frames equation implies obtained via convolution rect function vector length diagonals moving average implemented time cumulative sums hence regardless chosen computation memory requirements computing depend number frames video also simply computed taking entry wise square root another computation similar scheme used comparing distances shape descriptors videos meshes figure shows matrices embeddings pendulum video delay delay approximately matching period effect moving average along diagonals delay eliminates caused video mirror symmetry even videos without mirror symmetries video running dog figure introducing delay brings geometry focus shown figure pairwise distances tau pairwise distances tau figure matrices video oscillating pendulum bright colors indicate far distances dark colors indicate near distances example clearly shows adding delay embedding like performing block averaging along diagonals pairwise distance matrices gets rid mirror symmetry time figure animation periodic video running dog unlike oscillating pendulum mirror symmetry second half period pairwise distances tau pairwise distances tau figure matrices video running dog even without delay embedding video frames still form topological loop however delay embedding cleans geometry leads rounder loop seen resulting ssm normalization normalization steps needed order enable fair comparisons videos different resolutions different range periodic motion either spatially intensity first perform sphere normalize vector normalization shown nice theoretical properties swd swd swd vector ones words one subtracts mean component vector vector scaled unit norm lives unit sphere subtracting mean component eliminate additive linear drift top periodic motion scaling addresses resolution magnitude differences note still use memory optimization section longer use optimizations section since window normalized independently moreover order mitigate nonlinear drift implement simple convolution derivative gaussian pixel original video applying delay embedding bandpass filter could replaced bandpass filter leveraging application specific knowledge expected frequency bounds added advantage reducing number harmonics enabling smaller embedding dimesion scoring videos normalized scale score periodicity quasiperiodicity based geometry sliding window embeddings let dgmn persistence diagram rips filtration sliding window embedding video define mpi dgmn largest difference dgmn particular dgmn max dgmn mpi dgmn dgmn propose following scores periodicity score like exploit fact rips filtration persistence diagram one prominent pair coordinates since limit shape normalized perfectly periodic sliding window video periodicity score periodic perfectly periodic quasiperiodicity score qps score designed torus mind score based second largest persistence times largest persistence since want shape two core circles encloses void get large score based theorem homology void die moment smallest dies modified periodicity score mps design modified periodicity score lower quasiperiodic videos original periodicity score would yield 
note use field coefficients persistent homology computations since shown works better periodic signals strong harmonics embark experiments let explore choice two crucial parameters sliding window embedding delay dimension practice determine equivalent pair parameters dimension window size dimension window size takens embedding theorem one fundamental results theory dynamical systems short contends appropriate hypotheses exists integer generic sliding window embedding swd reconstructs state space underlying dynamics witnessed signal one common strategy determining minimal false scheme idea keep track nearest neighbors point delay embedding change increased prior estimates low algorithm used recent work video dynamics instance even estimate however one choose delay shown sliding window embedding periodic signals roundest periodicity score maximized window size satisfies following relation number periods signal verify experimentally show figure periodicity score changes function window size pendulum video choice window size equation maximizes generate figure fixed sufficiently large varied let describe general approach given video perform estimation step see section next results positive real number given large enough let figure varying window size delay embedding synthetic pendulum video period length around frames red dashed lines drawn window lengths would expected maximize roundness embedding period length based theory fundamental frequency estimation though figure suggests robustness window size long window half period may know practice automate window size choices coarse estimate using fundamental frequency estimation techniques surrogate signal get signal extract first coordinate diffusion maps using nearest neighbors raw video frames delay taking smoothed time derivative note similar diffusionbased method also used recent work analyze frequency spectrum video oscillating pendulum spring system quasiperiodic state diffusion time series apply normalized autocorrelation method estimate fundamental frequency particular given discrete signal length define autocorrelation however observed robust function detecting periodicities squared difference function rewritten finally suggest normalizing function range control window size interpretation akin pearson correlation coefficient fundamental frequency inverse period largest peak right zero crossing zero crossing condition helps prevent offset largest peak defining normalized autocorrelation equation added advantage value peak used score periodicity authors call clarity values closer indicate perfect periodicities technique sometimes pick integer multiples period multiply slowly decaying envelope lag maximum lag emphasize smaller periods figure shows result algorithm periodic video figure shows algorithm irregular video figure diffusion maps normalized autocorrelation fundamental frequency estimation periodic vocal folds video section chosen period length indicated red dot peak matches visually inspected period length figure diffusion maps normalized autocorrelation fundamental frequency estimation video vocal folds irregular oscillations section experimental evaluation next evaluate effectiveness proposed modified periodicity quasiperiodicity scores three different tasks first provide estimates accuracy binary classifications presence several noise models noise levels results illustrate robustness method second quantify quality periodicity rankings machine scores compared generated human subjects nutshell comparing several periodicity 
quantification algorithms approach shown closely aligned perception human subjects third demonstrate methodology used automatically detect physiological manifestations certain speech pathologies normal biphonation directly videos vibrating vocal folds classification varying noise shown empirically common source noise videos comes camera shake blur captured point spread functions resembling directed random walks figure amount blur noise level controlled extent pixels walk sources additive white gaussian noise awgn controlled standard deviation gaussian kernel mpeg bit errors quantified percentage corrupted information figure shows examples noise types classification purposes use three main recurrence classes three types periodic videos true periodic oscillating pendulum bird flapping wings animation beating heart two types quasiperiodic videos true quasiperiodic one showing two solid disks oscillate sideways rates second showing two stationary gaussian pulses amplitudes modulated cosine functions two videos without significant recurrence true video car driving past landscape video explosion one seven videos corrupted three noise models three different noise levels blur awgn bit error follows given particular video noise model noise level instances generated sampling noise independently random results report table area receiver operating characteristic roc curve auroc short classification task resp binary classifier furnished periodicity resp quasiperiodicity score instance blur noise model noise level pixels auroc using periodicity score classify instances heartbeat video periodic blur original awgn bit err figure results applying motion blur additive white gaussian noise mpeg bit corruption video frame table auroc values different levels noise binary classification task periodic bird flapping heart beating pendulum driving left subcell explosions right subcell based periodicity score equation also two synthetic quasiperiodic videos sideways disks modulated pulses compared two videos based quasiperiodicity score equation awgn awgn awgn bit err bit err blur bird flapping heart beat pendulum blur blur quasiperiodic disks quasiperiodic pulses bit err instances driving video periodic similarly mpeg bit corruption model bit error auroc using quasiperiodicity score classify instances quasiperiodic sideways disks quasiperiodic instances explosions video quasiperiodic put numbers perspective auroc associated perfect classifier auroc corresponds classification random coin flip overall type noise degrades performance across videos bit error makes sense since effect randomly freezing corrupting even deleting frames interrupt periodicity blur noise also affects videos range motion small pendulum video instance moves range pixels extreme end pixel blur almost completely obscures motion comparing human machine periodicity rankings next quantify extent rankings obtained periodicity score equation well three methods agree humans rank videos periodicity starting point dataset different creative commons videos seconds long frames per second videos appear periodic person waving hands beating heart spinning carnival rides appear nonperiodic explosions traffic cam drone view boat sailing pendulum video simulated camera shake known humans notoriously bad generating globally consistent rankings sets elements however comes binary comparisons type ranked higher systems effective human perception specially identification recurrent patterns visual stimuli leverage generate globally consistent ranking videos initial 
data set use amazon mechanical turk amt present pair videos set three different users total pairwise rankings unique amt workers contributed experiment using interface one shown figure figure interface humans given amt pairwise ranking videos periodicity order aggregate information global ranking consistent possible pairwise comparisons implement technique known hodge rank aggregation hodge rank aggregation finds closest consistent ranking set preferences least squares sense precisely given set objects given set comparisons seek scalar function objects minimizes following sum vab real number positive ranked higher negative otherwise thus function whose discrete gradient best matches set preferences respect norm note preferences feed algorithm based pairwise rankings returned amt video greater video assign vab otherwise since rankings video actually assign weights rankings agree one direction one rankings disagrees two figure shows histogram weighted scores users amt mostly agreement though scores comparison human scores use three different classes techniques machine ranking periodicity sliding windows sort videos decreasing order periodicity score equation fix window size frames embedding dimension frames enough capture strong harmonics also apply time derivative width every frame histogram weighted pairwise turk scores counts score figure histogram scores workers amt gave pairwise videos authors work present two different techniques quantify periodicity matrix ssm video frames first frequency domain technique based peak average power spectral density columns rows ssm linearly applying hann window turn continuous score report ratio peak minus mean standard deviation method referred frequency score authors warn frequency peak method high susceptibility false positives motivated design robust technique works finding peaks normalized autocorrelation gaussian smoothed ssms videos mirror symmetry peaks lie diamond lattice videos without mirror symmetry lie square lattice peak finding within neighborhoods one simply searches possible lattices possible widths find best match peaks since lattice centered autocorrelation point translational checks necessary turn continuous score let sum euclidean distances matched peaks autocorrelation image best fit lattice let proportion lattice points matched let proportion peaks matched lattice point give final periodicity score cdscore lattice fits peaks perfectly error false positive peaks score video fails perfectly matched lattice score greater hence sort increasing order score get ranking show technique agrees second best humans periodicity score ranking one main drawbacks numerical stability finding maxes critical points around nearly diagonal regions square lattices erroneously inflate score also lattice searching occurs integer grid may periods integer number frames always nonzero videos contrast sliding window scheme work real valued period length diffusion maps normalized autocorrelation clarity finally apply technique section get autocorrelation function report value maximum peak normalized autocorrelation right zero crossing referred clarity values closer indicate perfect repetitions sort descending order clarity get ranking figure shows example three different techniques periodic video dot rises diagonal persistence diagram lattice found nearly matches critical points autocorrelation image autocorrelation function diffusion maps nice peak first coordinate figure example score top clarity score bottom left cdscore bottom right matched peaks green 
lattice blue periodic video man waving arms kth dataset contrast nonperiodic video figure hardly persistent homology well matching lattice first diffusion coordinate apparent periodicities first coordinate figure example score top clarity score bottom left cdscore bottom left matched peaks green lattice blue video explosion nonperiodic results global human rankings global machine rankings compare using kendall score given set objects objects two total orders kendall score defined two rankings agree exactly kendall score two rankings exactly reverse kendall score way analogous pearson correlation rankings table kendall scores machine rankings hodge aggregated human rankings human freq cdscore clarity human freq cdscore clarity table average runtimes milliseconds per video algorithms freq cdscore clarity table shows kendall scores different machine rankings human rankings sliding window video methodology agrees human ranking pair ranking types second similar diffusion clarity noteworthy geometric techniques table also shows average run times milliseconds different algorithms video machine highlight one potential drawback technique since tda algorithms tend computationally intensive however scale videos several hundred frames performance reasonable periodicity biphonation high speed videos vocal folds final task apply methodology real world problem interest medicine show method automatically detect certain types voice pathologies glottography high speed videos fps left right vocal folds human vocal tract particular detect differentiate quasiperiodicity periodicity using geometric sliding window pipeline quasiperiodicity special case referred biphonation biological context nonlinear phenomena cause physical process bifurcate two different periodic modes often transition chaotic behavior torus structure sketched figure long recognized context provide novel way quantifying similar phenomena exist audio main reason studying laryngeal high speed video understanding biomechanical underpinnings perceived voice particular understanding potentially lead practical corrective therapies surgical interventions hand presence biphonation sound necessarily result physiological phenomenon argued may come result changes states arousal contrast work existing literature techniques usually employs inherently lagrangian approach different points left right vocal folds tracked coordinates points analyzed time series natural approach since pixels important signal resides wellunderstood signal processing technique used however edge detectors often require tuning suddenly fail vocal folds close technique give ability localize anomalies since tracking return virtually preprocessing technique domain independent results use collection videos analysis drawn variety different sources two videos correspond normal periodic vocal folds three correspond biphonation two correspond irregular manually extracted frames per video autotuned window size based autocorrelation diffusion maps section chose appropriate chose time spacing point cloud would points shown table technique able differentiate four classes also show pca persistence diagrams one example class figure see appears loop pca one strong persistent dot confirms figure see prominent torus persistence diagram figure see prominent structures persistence diagram even though pca looks like could loop torus note however pca preserves variance signal high dimensional techniques important draw quantitative conclusions table results sliding window pipeline videos periodic vocal 
folds biphonation irregularities give max persistence periodicity score modified periodicity score mps harmonic score quasiperiodic score qps presented section also show window size win autocorrelation technique section gives bolded top three mps qps scores across videos max modified periodic scores include two periodic videos one biphonation videos max quasiperiodic scores biphonation videos means one high periodicity score could ruled periodicity category video name periodic periodic figure biphonation biphonation biphonation figure mucus perturbed periodic irregular figure win mps qps discussion shown work applying sliding window embeddings videos used translate properties underlying dynamics geometric features resulting point cloud representation moreover also showed tools persistence homology leveraged quantify geometry embeddings pipeline evaluated extensively showing robustness several noise models high quality produced periodicity rankings applicability study speech conditions form video data moving forward interesting avenue related medical applications difference biphonation occurs quasiperiodic modes biphonation occurs harmonic modes shows field coefficients used indicate presence strong harmonic believe geometric approach possible could used example differentiate subharmonic anomalies quasiperiodic transitions please refer supplementary material example video three classes figure video frames sliding window statistics video vocal folds undergoing normal periodic vibrations one strong loop visible pca persistence diagrams figure video frames sliding window statistics video vocal folds undergoing biphonation courtesy juergen neubauer pca suggests possible torus persistence diagram indeed signature torus two strong independent one acknowledgments authors would like thank juergen neubauer dimitar deliyski robert hillman alessandro alarcon dariush mehta stephanie zacharias providing videos vocal folds also thank matt berger arfl discussions sliding window video efficiency thank anonymous workers amazon mechanical turk ranked periodic videos figure video frames sliding window statistics irregular vocal fold vibrations though pca looks similar figure apparent topological features apparent high dimensional state space references mark allmen charles dyer cyclic motion detection using spatiotemporal surfaces curves pattern recognition international conference volume pages ieee john atanbori peter cowling john murray belinda colston paul eady dave hughes ian nixon patrick dickinson analysis bat wing beat frequency using fourier transform international conference computer analysis images patterns pages springer ulrich bauer ripser lean code computation vietorisrips persistence barcodes http ronald coifman lafon diffusion maps applied computational harmonic analysis matthew crump john mcdonnell todd gureckis evaluating amazon mechanical turk tool experimental behavioral research plos one ross cutler larry davis robust periodic motion detection analysis applications ieee transactions pattern analysis machine intelligence alain hideki kawahara yin fundamental frequency estimator speech music journal acoustical society america mauricio delbracio guillermo sapiro removing camera shake via weighted fourier burst accumulation ieee transactions image processing dimitar deliyski pencho petrushev heather shaw bonilha terri treman gerlach bonnie robert hillman clinical implementation laryngeal videoendoscopy challenges evolution folia phoniatrica logopaedica roman goldenberg ron kimmel ehud rivlin 
michael rudzsky behavior classification eigendecomposition periodic motions pattern recognition jerry gollub harry swinney onset turbulence rotating fluid physical review letters allen hatcher algebraic topology university press christian herbst jakob unger hanspeter herzel jan lohscheller phasegram analysis vocal fold vibration documented laryngeal video endoscopy journal voice hanspeter herzel david berry ingo titze marwa saleh analysis vocal disorders methods nonlinear dynamics journal speech language hearing research hanspeter herzel robert reuter richard katz biphonation voice signals aip conference proceedings volume pages aip peng huang adrian hilton jonathan starck shape similarity video sequences people international journal computer vision shiyao huang xianghua ying jiangpeng rong zeyu shang hongbin zha camera calibration periodic motion pedestrian proceedings ieee conference computer vision pattern recognition pages xiaoye jiang lim yuan yao yinyu statistical ranking combinatorial hodge theory mathematical programming holger kantz thomas schreiber nonlinear time series analysis volume cambridge university press maurice kendall new measure rank correlation biometrika matthew kennel reggie brown henry abarbanel determining embedding dimension reconstruction using geometrical construction physical review orrawan kumdee panrasee ritthipravat repetitive motion detection human behavior understanding video images signal processing information technology isspit ieee international symposium pages ieee ofir levy lior wolf live repetition counting proceedings ieee international conference computer vision pages lohscheller hikmet toy frank rosanowski ulrich eysholdt michael clinically evaluated procedure reconstruction vocal fold vibrations endoscopic digital videos medical image analysis philip mcleod geoff wyvill smarter way find pitch proceedings international computer music conference pages daryush mehta dimitar deliyski thomas quatieri robert hillman automated measurement vocal fold vibratory asymmetry videoendoscopy recordings journal speech language hearing research george miller magical number seven plus minus two limits capacity processing information psychological review neubauer patrick mergell ulrich eysholdt hanspeter herzel analysis irregular vocal fold oscillations biphonation due desynchronization spatial modes journal acoustical society america sourabh niyogi edward adelson analyzing recognizing walking figures xyt cvpr volume pages jose perea persistent homology toroidal sliding window embeddings acoustics speech signal processing icassp ieee international conference pages ieee jose perea john harer sliding windows persistence application topological methods signal analysis foundations computational mathematics mark pinsky introduction fourier analysis wavelets volume american mathematical aaron plotnik stephen rock quantification cyclic motion marine animals computer vision oceans volume pages ieee ramprasad polana randal nelson detection recognition periodic nonrigid motion international journal computer vision qingjun qiu schutte lide qilian automatic method quantify vibration properties human vocal folds via videokymography folia phoniatrica logopaedica christian schuldt ivan laptev barbara caputo recognizing human actions local svm approach pattern recognition icpr proceedings international conference volume pages ieee steven seitz charles dyer analysis cyclic motion international journal computer vision floris takens detecting strange attractors turbulence dynamical 
systems turbulence warwick pages springer christopher tralie geometry sliding window embeddings periodic videos international proceedings informatics volume schloss fuer informatik matthew turk alex pentland eigenfaces recognition journal cognitive neuroscience mikael florian pokorny primoz skraba danica kragic cohomological learning periodic motion applicable algebra engineering communication computing venkataraman turaga shape descriptions nonlinear dynamical systems videobased inference ieee transactions pattern analysis machine intelligence ping wang gregory abowd james rehg event analysis social game retrieval computer vision ieee international conference pages ieee inka wilden hanspeter herzel gustav peters tembrock subharmonics biphonation deterministic chaos mammal vocalization bioacoustics thomas wittenberg manfred moser monika tigges ulrich eysholdt recording processing analysis digital sequences glottography machine vision applications yair ronen talmon ronald coifman ioannis kevrekidis equations parameters variables data reconstruction normal forms learning informed observation geometries arxiv preprint jing yang hong zhang guohua peng period detection videos signal image video processing guoshen guillermo sapiro mallat solving inverse problems piecewise linear estimators gaussian mixture models structured sparsity ieee transactions image processing stephanie zacharias charles myer jareen lisa kelchner dimitar deliyski alessandro comparison videostroboscopy videoendoscopy evaluation supraglottic phonation annals otology rhinology laryngology page zomorodian carlsson computing persistent homology discrete computational geometry
| 1 |
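The row above scores periodicity and quasiperiodicity by measuring, via persistent homology, how circular or toroidal the sliding-window embedding of a video is. Below is a minimal sketch of that pipeline applied to a 1-D surrogate signal rather than raw frames, for brevity. It assumes the ripser package's `ripser(X, maxdim=2)` interface, and the two scores are illustrative stand-ins built from the largest persistence lifetimes, not the paper's exact normalizations.

```python
import numpy as np
from ripser import ripser  # pip install ripser; Vietoris-Rips persistence backend

def sliding_window(x, d, tau):
    """Delay embedding: row t is (x[t], x[t+tau], ..., x[t+d*tau])."""
    n = len(x) - d * tau
    return np.stack([x[t:t + d * tau + 1:tau] for t in range(n)])

def recurrence_scores(x, d=20, tau=5, max_points=200):
    X = sliding_window(np.asarray(x, dtype=float), d, tau)
    X = X[::max(1, len(X) // max_points)]            # subsample for speed
    X -= X.mean(axis=1, keepdims=True)               # remove per-window mean
    X /= np.linalg.norm(X, axis=1, keepdims=True)    # sphere-normalize
    dgms = ripser(X, maxdim=2)['dgms']
    lifetimes = lambda D: np.sort(D[:, 1] - D[:, 0])[::-1] if len(D) else np.zeros(2)
    h1, h2 = lifetimes(dgms[1]), lifetimes(dgms[2])
    periodicity = h1[0]                              # one dominant loop
    quasiperiodicity = np.sqrt(h1[1] * h2[0]) if len(h1) > 1 else 0.0
    return periodicity, quasiperiodicity

t = np.linspace(0, 40, 1200)
print(recurrence_scores(np.cos(t) + np.cos(2 * t)))       # commensurate: circle
print(recurrence_scores(np.cos(t) + np.cos(np.pi * t)))   # incommensurate: torus
```

On the commensurate pair the H1 diagram shows one dominant class (a loop), while on the incommensurate pair two prominent H1 classes and an H2 class witness a torus; this is the geometric distinction that, as argued above, power spectra barely register.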
feb challenging images minds machines amir rosenfeld john tsotsos department electrical engineering computer science york university toronto canada amir tsotsos february abstract denying tremendous leap performance machine learning methods past might even say specific pattern recognition good solved reaching human levels arguably lack training data computation power stand solving remaining ones position paper underline cases vision challenging machines even human observers show limitations contemporary models hard ameliorate following current trend increase training data network capacity computational power moreover claim attempting principle suboptimal approach provide taster examples hope encourage challenge machine learning community develop new directions solve said difficulties introduction known outside academia become ubiquitous popular media industry superhuman capabilities gradually recorded various fields game face verification image categorization even logical reasoning simple scenes current leading methods involve variant deep learning consequentially require large amounts data exception used gain experience elicited era increasingly datasets painstakingly labeled object image annotation visual pose estimation name accompanied growing demand computational power bring forward challenges vision seem solved current methods importantly current popular methodologies meaning neither additional data added computational power drivers solution figure children puzzle goal find six hidden words book words story pages read novel machine far child play could solved providing million similar examples system human need training related work imbalanced small data datasets tend naturally imbalanced long history suggested remedies handling lack training data also treated attempting use data lesser quality handannotated dataset simulating data cite data cars text recognition wild captcha transfer learning reusing features networks trained large useful starting point attempting reduce number required training example extreme cases one even zero examples deeplearning failures recently simple cases deep learning fails work one would possibly expect introduced along theoretical justifications challenging cases present two examples discuss common characteristics humans able solve first encounter despite seen images incidentally critically two examples domain visual text recognition moreover though humans know recognize text seen regular textbooks etc text images either hidden rendered distorted uncharacteristic manner children games first case well exemplified child game hidden word puzzles goal find hidden words image fig shows arbitrarily selected example human observer solvable puzzle though may take minutes complete applied two methods text recognition sub image sned vvoz novees teg score table text detected two recognition methods applied children puzzle means text detected method images scaled fit figure figure variants textual captcha captchas becoming increasingly difficult reproduced wild available code image fig work immediately focused word novel forearm left person ending foot cropping rotating text level cropping tightly even cropping letter see table corresponding including entire image top row results output two methods means systematic test may even claim fair would right systems trained images trained dataset million synthetic training images trained tens thousands images used powerful networks training data less available captcha mechanism thwart automated misuse websites 
distinguishing humans machines textual captchas involve presenting image text read written user focus type captcha though others exist introduction captchas immediately triggered invention new automatic ways break eventually sparked arms race increasingly complex captchas correspondingly powerful automated methods caused state best leading textual methods involve training dnn data similar distortion characteristics desired types captcha though still systems limited success rates times less hand level distortion become humans solving machines humans supervised learners one rule suggested examples saying simply datapoints behalf statistical learner perspective yet seems http ever supervision receive usually able solve despite especially exposed kind stimulus moreover precisely kinds images used routinely human testing universally accepted indicator human performance examples may seem esoteric revert common cases child often one exposed bounding boxes objects often delineations objects precise segmentation masks often facial bodily objects overlayed field view critically many different object types happen many different instances level precision annotation many modalities granularity visual supervision given machines seems much finer given humans amount directly supervised data seem really main limiting factor already noted several times performance either saturates training data best grows logarithmically increasing map growing examples making solution data better performance simply impractical even resources common problems object detection humans ever read textbooks able solve captchas various kinds without special training first encounter true picture puzzles mentioned cases mentioned claim humans subject supervised learning early life later stages contrary supervisory signals arise multiple sources caretakers provide supervisory signals teaching internal supervision provided innate biases finally rewards stemming results behaviour suffering pain hitting object supervision interspersed within vast continuous stream unsupervised data easily measurable supervisory affect observer something fundamentally different way humans construct use internal representations enabling reason solve new patternrecognition tasks hypothesize approached generating procedures compositional nature presented novel known task suggested visual routines cognitive programs intend maintain collection examples beyond ones suggested encourage community attempt solve learning vast amounts similar examples learning related simpler subtasks learning reason solve composing appropriate solutions references silver huang maddison guez sifre van den driessche schrittwieser antonoglou panneershelvam lanctot mastering game deep neural networks tree search nature vol silver schrittwieser simonyan antonoglou huang guez hubert baker lai bolton mastering game without human knowledge nature vol tang surpassing face verification performance lfw zhang face recognition via centralized coordinate learning arxiv preprint zhang ren sun delving deep rectifiers surpassing performance imagenet classification proceedings ieee international conference computer vision santoro raposo barrett malinowski pascanu battaglia lillicrap simple neural network module relational reasoning advances neural information processing systems perez vries strub dumoulin courville learning visual reasoning without strong priors arxiv preprint perez strub vries dumoulin courville film visual reasoning general conditioning layer arxiv preprint russakovsky deng krause 
satheesh huang karpathy khosla bernstein imagenet large scale visual recognition challenge international journal computer vision vol lin maire belongie hays perona ramanan zitnick microsoft coco common objects context european conference computer vision springer krishna zhu groth johnson hata kravitz chen kalantidis shamma visual genome connecting language vision using crowdsourced dense image annotations international journal computer vision vol antol agrawal mitchell batra lawrence zitnick parikh vqa visual question answering proceedings ieee international conference computer vision neverova kokkinos densepose dense human pose estimation wild arxiv preprint lim salakhutdinov torralba transfer learning borrowing examples multiclass object detection advances neural information processing systems zhu anguelov ramanan capturing distributions object subcategories computer vision pattern recognition cvpr ieee conference ieee wang ramanan hebert learning model tail advances neural information processing systems sun shrivastava singh gupta revisiting unreasonable effectiveness data deep learning era ieee international conference computer vision iccv ieee sharif razavian azizpour sullivan carlsson cnn features astounding baseline recognition proceedings ieee conference computer vision pattern recognition workshops snell swersky zemel prototypical networks learning advances neural information processing systems shamir shammah failures deep learning arxiv preprint shi bai yao trainable neural network imagebased sequence recognition application scene text recognition ieee transactions pattern analysis machine intelligence vol zhou yao wen wang zhou liang east efficient accurate scene text detector arxiv preprint veit matera neumann matas belongie dataset benchmark text detection recognition natural images arxiv preprint baydin zinkov wood using synthetic data train neural networks reasoning neural networks ijcnn international joint conference ieee von ahn blum hopper langford captcha using hard problems security international conference theory applications cryptographic techniques springer singh pal survey different types captcha international journal computer science information technologies vol mori malik recognizing objects adversarial clutter breaking visual captcha computer vision pattern recognition proceedings ieee computer society conference vol ieee chen luo guo zhang gong survey breaking technique captcha security communication networks vol zhu vondrick ramanan fowlkes need training data better models object detection bmvc vol citeseer zhu vondrick fowlkes ramanan need training data international journal computer vision vol hestness narang ardalani diamos jun kianinejad patwary ali yang zhou deep learning scaling predictable empirically arxiv preprint ullman harari dorfman simple innate biases complex visual concepts proceedings national academy sciences vol ullman visual routines cognition vol tsotsos kruijne cognitive programs software attention executive frontiers psychology vol
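The passage above leans on the empirical claim that detection performance grows at best logarithmically with the amount of labeled data (the works of Zhu et al. and Sun et al. cited above). A one-line worked consequence makes the impracticality argument concrete; the scaling law and the constants a, b below are illustrative assumptions, not figures from the cited papers.

\mathrm{mAP}(n) \;\approx\; a \log n + b,
\qquad\text{so}\qquad
\mathrm{mAP}(n') = \mathrm{mAP}(n) + \Delta
\;\Longleftrightarrow\;
n' = n \, e^{\Delta / a} .

Under such a law, every further fixed gain \Delta multiplies the required number of labeled examples by the constant factor e^{\Delta/a}; the data cost grows exponentially in the target performance, which is the sense in which "just add more supervised data" is impractical even with large resources.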
| 1 |
theory ordinal embedding may ery abstract motivated recent work ordinal embedding kleindessner von luxburg derive large sample consistency results rates convergence problem embedding points based triple quadruple distance comparisons also consider variant problem local comparisons provided finally inspired jamieson nowak bound number comparisons needed achieve consistency keywords ordinal embedding multidimensional scaling mds dissimilarity comparisons landmark multidimensional scaling introduction problem ordinal embedding also called multidimensional scaling borg groenen consists finding embedding set items based pairwise distance comparisons specifically suppose dissimilarity measure items assume dissimilarities either directly available assumed lack meaning except relative magnitudes available via comparisons dissimilarities meaning provided subset note latter setting encompasses former given dimension goal embed items points way compatible available information specifically denotes euclidean norm two common situations quadruple comparisons available meaning triple comparisons available meaning identified problem long history surveyed young hamer pioneering contributions shepard kruskal main question tackle consistency suppose items fact points available suppose unknown increasing function provided subset dissimilarity comparisons possible reconstruct original points limit clearly reconstruction similarity transformation transformation equivalently form orthogonal transformation constant vector since department mathematics university california san diego usa transformation leaves distance comparisons unchanged question foundation multidimensional scaling early work addressed continuous case span whole convex subset setting goal becomes characterize isotonic functions functions satisfying shepard argues functions must similarities cites earlier work aumann kruskal suppes winet dealing case recently finite sample case formally considered indeed kleindessner von luxburg prove consistency result showing bounded connected open subset satisfying additional conditions example finite union open balls large sample limit becoming dense possible recover similarity transformation note uniquely defined interior note kleindessner von luxburg focus strictly isotonic case second inequality strict first contribution extension consistency result quadruple learning triple learning process greatly simplify arguments kleindessner von luxburg weaken conditions sampling domain note terada von luxburg partially solved problem reduction problem embedding graph however arguments based apparently incomplete proof von luxburg alamgir based rather sophisticated approach proofs comparatively much simpler direct second contribution provide rates convergence problem left open kleindessner von luxburg context quadruple learning obtain rate hausdorff distance underlying sample meaning first convergence rate exact ordinal embedding know able obtain rate context triple learning compared establishing consistency proof much involved last decade seen surge interest ordinal embedding motivated applications recommender systems psychometric studies made available via internet example databases music artists similarity ellis mcfee lanckriet sensor localization nhat another possible application modern datasets large quadruple triple comparisons rarely available motivating proposal embedding methods based sparse set comparisons agarwal borg groenen jamieson nowak terada von luxburg terada von luxburg study call local ordinal 
embedding define problem embedding unweighted neighbor graph notation situation dissimilarity item kth nearestneighbor terada von luxburg argue items points sampled smooth density bounded connected convex open subset smooth boundary log enough consistency third contribution consider related situation max provides graph also quadruple comparisons nearest neighbors setting able show log enough beyond local designs may feasible settings jamieson nowak consider problem adaptively sequentially selecting triple comparisons order minimize number comparisons yet deduce triple comparisons consider methods among version landmark mds method silva tenenbaum less ambitious problem selecting comparisons order consistently embed items points euclidean space fourth contribution show one obtain consistent embedding landmark design based queries diverging sequence moreover embedding computed expected time function rest paper organized follows section state theoretical results prove simpler ones gather remaining proofs section section concludes paper short discussion theory section present theoretical findings proofs gathered section already defined isotonic functions following kleindessner von luxburg say function weakly isotonic obviously function isotonic weakly isotonic weak isotonicity fact much weaker isotonicity indeed let property isotonic say function property locally property denotes open ball center radius lemma locally weakly isotonic function open also locally isotonic proof immediate consequence kleindessner von luxburg lem implies weakly isotonic function isotonic suppose data points define let suppose provided subset distance comparisons exact ordinal embedding definition satisfies associate map defined crucially observe case quadruple comparisons resulting map isotonic case triple comparisons weakly isotonic instead light fact location orientation scale lost ordinal information available problem proving consistency exact ordinal embedding reduces showing embedding close similarity transformation sample size increases exactly kleindessner von luxburg assumptions ordinal embedding based triple comparisons first contribution extend consistency results kleindessner von luxburg quadruple learning triple learning following presentation start result sample infinite mild generalization kleindessner von luxburg theorem let bounded connected open suppose dense consider locally weakly isotonic function similarity transformation coincides proof largely based kleindessner von luxburg bit simpler see section remark one similarity property since similarities affine transformations two affine transformations coincide affine independent points necessarily identical theorem set dense open subset therefore infinite fact kleindessner von luxburg use theorem intermediary result proving consistency sample size increases paper dedicated establishing arguments quite elaborate found direct route tending limit soon possible based lemma core theorem remaining section consider finite sample setting bounded connected open dense function values bounded set context implicitly extend example setting given point although following holds extension lemma consider finite bounded infinite exists called diagonal process kelley problem although result classical provide proof completeness proof without loss generality suppose let since bounded infinite exists turn since bounded infinite exists continuing process formally corresponds recursion obtain infinite exists let denote kth element increasing order note strictly increasing 
define since valid corollary consider setting assume weakly isotonic sequentially pointwise convergence topology functions functions accumulates similarity transformations restricted corresponding result kleindessner von luxburg obtained isotonic instead weakly isotonic functions domains finite unions balls convergence uniform instead pointwise provide proof corollary derive simple consequence theorem lemma proof lemma implies sequentially pointwise convergence topology let accumulation point meaning infinite take definition therefore passing limit along obtain hence weakly isotonic theorem therefore restriction similarity transformation true kleindessner von luxburg establishes uniform convergence result theorem much simpler arguments key following two results bounding modulus continuity resp weakly isotonic function note second result weakly isotonic functions weak sufficient purposes define inf hausdorff distance say recall size largest euclidean ball radius exact order set let diam supx diameter let arg sup diameter largest ball inscribed everywhere paper fixed fact implicitly small assume repeatedly sample size dense domain particular implicit constants proportionality follow depend solely lemma let open consider set let isotonic bounded diam proof proof based fact isotonic function transforms packing packing take let since open contains open ball diameter let ball constant depending let maxi triangle inequality isotonic therefore constant depending diam conclude diam let note complement hull see cuevas references therein lemma context lemma weakly isotonic diam proof assume otherwise nothing prove take let bounded enough prove result let define let define let maximum since satisfies min construction let take maxj triangle inequality implies induction weak isotonicity implies also weak isotonicity implies consequently forms hence diam constant conclude lower bound control modulus continuity obtain stronger version corollary theorem conditions corollary stronger conclusion sequence similarities fact isotonic remains true remark connected union possibly uncountable number open balls radius least covers case finite union open balls considered kleindessner von luxburg also note bounded open bounded curvature follows fact case positive reach federer therefore reach cuevas prop moreover arguments modified accommodate sets boundaries lipschitz reasoning wedges lemma theorem contains kleindessner von luxburg extends weakly isotonic functions general domains overall proof technique much simpler shorter elementary define quantifies density dense proof let accumulation point pointwise convergence topology meaning infinite show fact convergence uniform first suppose isotonic case lemma implies existence constant passing limit along get fact already knew corollary since learned coincides similarity therefore lipschitz fix let triangle inequality since arbitrary taken small desired shows sequence convergences uniformly proposition stated compact sets case easily extends case set closed compact boundary weakly isotonic use lemma get constant depending diam passing limit along get fact corollary explained rest arguments completely parallel conclude convergences uniformly let denote similarities functions define also inf end goal show suppose case infinite corollary showed convergence fact uniform meaning time therefore contradiction rates convergence beyond consistency able derive convergence rates isotonic case quadruple comparison setting recall theorem consider setting isotonic depending sequence 
similarities diam diam function diam diam proof theorem substantially technical previous results thus postponed section although kleindessner von luxburg able obtain rates convergence proof theorem bares resemblance proof technique particular also based result alestalo approximation see lemma also make use related result vestfrid approximation approximately midlinear functions see lemma mention know elementary proof makes use alestalo yields slightly slower rate convergence note constant depending open contains open ball lower bound trivially holds open ball lower bound achieved roughly regularly spread instead iid uniform sufficiently regular example log would give rate know whether optimal even dimension remark able get rate weakly isotonic case adapting arguments underlying theorem assuming resolving additional complications ordinal embedding local comparisons terada von luxburg consider problem embedding unweighted graph saw introduction special case ordinal embedding arguments explained earlier seem incomplete time writing indicate log enough consistently embedding graph consider situation information specifically distance comparisons formally situation nkn denotes set items nearest item items points exact ordinal embedding constrained locally weakly isotonic explain start stating standard result relates graph graph lemma let bounded connected open sample iid density supported essential range strictly constant log probability tending neighk denotes set points nearest proof postponed section provided completeness therefore assuming log constant lemma may equivalently consider case max given exact embedding case isotonic require addition reasonable requirement since possible infer indeed assume case euclidean distances still infer even quadruples must include least three distinct items indeed suppose max assume sufficiently large situation happen conversely happen theorem consider setting assume addition isotonic balls radius satisfies constant depending diam diam similarities assume data points generated lemma case log theorem implies consistency log lemma corresponds situation provided comparisons among neighbors log result terada von luxburg holds rigor rather weak result landmark ordinal embedding inspired jamieson nowak consider situation landmark items indexed given distance comparisons point landmarks formally triple comparisons corresponds situation items points exact ordinal embedding constrained weakly isotonic set landmarks addition required respect ordering distances point landmarks following easy consequence theorem corollary theorem remains valid landmark triple comparisons setting meaning described long landmarks become dense jamieson nowak study number triple comparisons needed exact ordinal embedding counting argument show least log comparisons needed constant depending insist embedding respects comparisons provided corollary implies landmark design able consistent long landmarks become dense consistency implies sample size increases embedding respects landmark comparisons also respects comparisons approximately achieved triple comparisons number landmarks conditions corollary fulfilled speed number comparisons nearly linear proof focus weakly isotonic case assume let denote set landmarks since becomes dense meaning theorem sequence similarities let lemma constant middle term first term bounded bounded third term express form orthogonal transformation take two distinct landmarks diam exist sufficiently large since time diam diam diam eventually diam diam hence third 
term rhs bounded thus rhs bounded tends valid conclude remark end proof obtained rate convergence function density landmarks convergence rate implicit theorem leads following rate quadruple comparisons setting corresponds situation constrained isotonic set landmarks required respect ordering distances data point landmarks corollary consider setting landmark quadruple comparisons setting meaning described let denote set landmarks set constant sequence similarities proof proof parallel corollary apply theorem get bounds second term rhs first term bounded lemma third term bounded constants computational complexity discuss computational complexity ordinal embedding landmark design obvious approach two stages first stage landmarks embedded goal agarwal example use brute force proposition suppose items fact points euclidean space dissimilarities pairwise euclidean distances whether triple quadruple comparisons setting exact ordinal embedding items obtained finite expected time proof algorithm discuss naive sample points iid uniform distribution unit ball repeat ordinal constraints satisfied since checking latter done finite time suffices show strictly positive probability one sample satisfies ordinal constraints let denote set satisfy ordinal constraints meaning seeing subset rdm clearly open sampling iid uniform distribution results sampling uniform distribution assigns positive mass open set second stage point landmark embedded based order distances landmarks quickly mention work davenport develops convex method performing task contented knowing done point finite time function number landmarks example brute force approach starts computing voronoi diagram landmarks iteratively repeats within cell creating tree structure point landmark placed going root leaf choosing point leaf cell say barycenter thus landmarks first stage performed expected time second stage performed time overall procedure thus computed expected time remark procedure described suggested practical means perform ordinal embedding landmark design first stage described proposition finite expected time likely polynomial number landmarks practical method suggest following embed landmarks using method agarwal solves semidefinite program method terada von luxburg uses iterative strategy embed remaining points using method davenport solves quadratic program although practical reasonable provide theoretical guarantees method proofs section gather remaining proofs auxiliary results introduce additional notation basic concepts let aff denote affine hull meaning affine subspace generate vector euclidean space let denote euclidean norm matrix let denote usual operator norm meaning max frobenius norm regular simplexes play central role proofs say form regular simplex pairwise distances equal note necessarily regular simplexes euclidean space number distinct nodes similarity transformations example segments equilateral triangles tetrahedron recursion number vertices easy prove following lemma let form regular simplex edge length let denote barycenter form regular simplex dimension exactly two points proof theorem assume see kleindessner von luxburg case divide proof several parts continuous extension lemma implies locally uniformly continuous indeed take let weakly isotonic applying lemma dense noting yields constant locally uniformly continuous uniquely extend continuous function also denoted continuity extension locally weakly isotonic isosceles preservation sikorska szostok say function preserves isosceles triangles case continuity also 
preserves isosceles triangles locally indeed sake pedagogy let weakly isotonic take define let letting get continuity since play role converse inequality also true combined yield equality midpoint preservation let convex say function preserves midpoints show preserves midpoints locally kleindessner von luxburg also however arguments closer sikorska szostok make use regular simplexes important fact function preserves isosceles preserves regular simplexes let preserves isosceles take let let form regular simplex barycenter side length words forms regular simplex placed barycenter symmetry forms regular simplex also lemma triangle inequality fact hence regular simplexes one singular one case otherwise necessarily symmetric respect aff possibility would case would still since lemma implying weakly isotonic neighborhood assume symmetric respect aff constant therefore belongs line points equidistant implies collinear also necessarily midpoint conclusion arrived conclusion extended continuous function preserves midpoints locally use following simple results sequence lemma conclude locally affine lemma conclude fact affine lemma conclude fact similarity lemma let convex set euclidean space let continuous function values euclidean space preserves midpoints affine transformation proof result fact provide proof completeness suffices prove starting fact true recursion true dyadic meaning form integers since dyadic numbers dense continuity deduce desired property lemma locally affine function open connected subset euclidean space restriction affine function whole space proof let domain function cover countable number open balls coincides affine function take distinct since connected must sequence bks since bks open set must fks true implies lemma affine function preserves isosceles locally similarity transformation proof let affine function preserves isosceles open ball without loss generality may assume ball linear fix let take different let hence valid linear implies similarity auxiliary results list number auxiliary results used proof theorem following result perturbation bound trilateration process locating point based distance landmark points real matrix let denote largest singular value lemma let aff let denote matrix columns consider define max max proof assume without loss generality case note also redefine matrix columns note first singular values remain unchanged since aff matrix form similarly find hence max max simultaneously combining inequalities conclude say form regular simplex min max lemma let form regular simplex maximum edge length achieved constant aff forming regular simplex edge length maxi proof scale equivariance may assume use induction follows etc constants depend statement trivially true suppose true consider regular simplex maximum edge length changing needed without loss generality assume aff case regular simplex maximum edge length achieved inductive hypothesis aff forming implies existence regular simplex edge length constant let orthogonal projection onto continuing let set obtained fixing varying among points make regular simplex let barycenter note set pythagoras theorem triangle inequality using fact hence lemma implies since must therefore let side form regular simplex note orthogonal projection onto pythagoras theorem applied multiple times obtain following first orthogonal therefore parallel orthogonal second term already know first term bounded since one hand hand know hence find constant function shows induction hypothesis holds lemma constants form regular 
simplex maximum edge length proof scale equivariance may assume lemma constant aff forming regular simplex edge length maxi weyl inequality horn johnson cor one hand positive constant depending hand mcm lemma let form regular simplex maximum edge length barycenter let aff define maxi mini constant depending proof scale equivariance may assume lemma max lemma constant also maxi conclude lemma let isotonic let set diam proof let suppose implies case lemma constant denoted diam yields similarly proves henceforth assume first assume case immediately reverse let note take note therefore applying triangle inequality lemma choose still constraint remaining arguments analogous repeating ways yields result lemma consider isotonic let denote convex hull set diam diam proof first prove diam diam indeed take let define let construction let triangle inequality hence diam since diam assume isotonic suppose satisfy showed implies diam diam conclude using fact diam following result neighbor interpolation lemma let subset isolated points set function define neighbor interpolation arg min consider modulus continuity defined sup modulus continuity denoted satisfies moreover proof fix take triangle inequality therefore sup sup since true conclude second part lemma second term bounded sup sup using fact similarly third term let convex context say midlinear lemma let respect point interior constant depending midlinear function affine function note ball invariance considerations depends proof direct consequence vestfrid say set define thickness inf diam recalling definition note two distinct general lemma let compact diam constant depending isometry proof direct consequence alestalo lemma let affine function transforms regular simplex edge length regular simplex maximum edge length constant depending isometry proof invariance may assume linear regular simplex formed edge length letting form regular simplex maximum edge length maxi lemma gives forming regular simplex edge length maxi constant let orthogonal transformation matrix notation letting max time positive constant depending hence lemma suppose two affinities proof translation scale invariance assume let hence turn implies proof theorem without loss generality may assume diam indeed suppose different otherwise degenerate similarity result follows let isotonic satisfies diam result true similarity constant implicitly assume set contains origin remains bounded cdn also similarity let let diam let vector define let necessarily distance exceeds note isotonicity whenever let diam constant let xik xik let xik clarity take open continuous curve let let inf zkl let let min indeed finite construction zkj thus triangle inequality true prove diam interpolation let denote interpolation claim diam satisfies following properties also satisfy indeed let start applying lemma get modulus continuity use lemma gives get first note triangle inequality turn implies since isotonic apply lemma get conclude lemma may apply lemma let convex hull let point ball let let vector define notice distance exceeds therefore necessarily note conclude since get diam diam apply lemma obtain using lemma note triangle inequality lemma constant denoted implies true apply lemma together lemma case case particularly simple note bounded open interval show function approximately midlinear take define fact takes values small enough hence midlinear result vestfrid namely lemma since ball affine function since affine transformations possibly degenerate similarities conclude case remaining subsection 
assume approximate midlinearity show constant locally approximately midlinear take let let constant set large enough later therefore assume let constructed proof theorem construction form regular simplexes barycenter lemma coupled fact yields let maxi let lemma hence assuming maxi case form regular simplex symmetry true define lemma since implies since already assumed constant therefore mini maxi define orthogonal projection onto affine space aff let pythagoras theorem particular max min max min min let denotes barycenter assume sufficiently large constant lemma lemma fact form regular simplex maximum edge length bounded let line passing perpendicular proved within distance let denote orthogonal projection onto since apply get using fact max due large enough lemma obtain constant particular recalling implies remains argue close already know within distance convexity must true let let denote orthogonal projection onto linear subspace pythagoras theorem cos implying sin since sin parallel also sin cos constant small enough conclude triangle inequality approximate affinity know midlinear constant diam implies result vestfrid lemma affine function constant diam approximate similarity reinitialize constants saw transforms regular simplex height denoted satisfying one follows choose points simplex height reinitialize variables etc take yielding constant triangle inequality min min max max triangle inequality max max max hence find form regular simplex note maximum edge length bounded follows max lemma constant isometry bounds constant implies covering conclusion reinitialize constants let let form maximal number essential play role proof theorem note note define let diam strictly positive result alestalo namely lemma gives constant isometry let min take since min max max max max lemma hence diam instead follows since connected sequence uki thus triangle inequality conclude noting since conclude concludes proof refinement constant assume tracking constants see depend diam diam well defined respectively note diam min lemma min bound beginning section end section restrict attention chains sure fix let curve define inf ukj let since therefore redefine min min triangle inequality min min see everything depends diam diam second part theorem follows invariance considerations proof lemma let ess inf ess supu assumption belong fix let upper bound vol vol vol denotes lebesgue measure volume unit ball hence bin bennett inequality binomial distribution union bound conclude maxi probability least tends log sufficiently large lower bound use following lemma lemma suppose open contains ball radius min moreover closure ball contains proof definition suffices show latter contains ball radius min symmetry may assume done otherwise let note lemma established apply get vol min hence bin union bound conclude mini probability least tends log sufficiently large recall fixed auxiliary results list additional auxiliary results used proof theorem define intrinsic metric sup curve exists set intrinsic diameter defined sup note curve length joining recall curve finite length said rectifiable see burago detailed account intrinsic metrics let referred erosion set mathematical morphology lemma open connected pair points rectifiable curve within joining proof take taking intersection open ball contains needed may assume without loss generality bounded since every connected open set euclidean space also waldmann example continuous curve priori could infinite length however compact let since btj since connected necessarily btj 
let therefore polygonal line defined inside btj construction polygonal line joins also rectifiable since finite number vertices lemma suppose bounded connected intrinsic diameter finite proof let assumption particular let connected component pick note definition also connected let volume unit ball since connected components disjoint volume least volume diam diam connected components denote pick applying lemma pair distinct rectifiable path joining lemma length denoted finite let maxk mink show connected component finite diameter intrinsic metric since bounded take let since connected sequence jsk qjs choose qjs let zsk qjs joins let polygonal line formed zsk construction length hence valid proved diameter intrinsic metric let take let let curves length joins joins join together curve joins lies entirely length bounded true pair points lemma suppose two affinities maxj form regular simplex minimum edge length least depending proof note closely related lemma translation scale invariance assume let let denote matrix columns matrix notation also lemma depends case another constant equivalently turn implies proof theorem bounded independently may assume without loss generality chosen large enough later take let first show diam diam mimic proof lemma take diam let let let xis xis triangle inequality xit xis xit xim form therefore diam conclude diam apply theorem fact saw proof invariance considerations obtain constant similarity diam note quantities subscript depend also left implicit fix assume parameterized arc length let given lemma let denote intrinsic diameter assuming curve length joining let jrn triangle inequality also uyj let uyj fix let denote regular simplex inscribed ball let denote edge length let maxk large enough triangle inequality moreover maxk well large enough therefore regular simplex minimum edge length since maxk lemma assuming particular fact diam gives diam hence since true arbitrary conclude max discussion paper builds kleindessner von luxburg provide theory ordinal embedding important problem multivariate statistics aka unsupervised learning leave open two main problems optimal rates convergence ordinal embedding triple quadruple comparisons minimum size consistency ordinal embedding based neighbor distance comparisons note studied large sample behavior exact embedding methods particular discuss proposed methodology producing embedding refer reader agarwal borg groenen terada von luxburg references therein fact practice ordinal embedding raises number questions terms theory instance many flawed comparisons tolerated acknowledgements grateful vicente malave introducing topic reading draft paper also want thank associate editor two anonymous referees pertinent comments pointing typos errors learned work ulrike von luxburg collaborators mathematical foundations learning theory workshop held barcelona june grateful organizers particular lugosi invitation participate work partially supported office naval research references agarwal wills cayton lanckriet kriegman belongie generalized multidimensional scaling international conference artificial intelligence statistics alestalo trotsenko isometric approximation israel journal mathematics aumann kruskal coefficients allocation problem naval research logistics quarterly borg groenen modern multidimensional scaling theory applications springer burago burago ivanov course metric geometry volume american mathematical society providence cuevas fraiman statistical properties sets fulfilling conditions adv appl probab davenport lost 
without compass nonmetric triangulation landmark multidimensional scaling computational advances adaptive processing camsap ieee international workshop ieee silva tenenbaum sparse multidimensional scaling using landmark points technical report technical report stanford university ellis whitman berenzweig lawrence quest ground truth musical artist similarity proceedings international symposium music information retrieval ismir federer curvature measures trans amer math soc horn johnson matrix analysis cambridge university press cambridge corrected reprint original jamieson nowak embedding using adaptively selected ordinal data communication control computing allerton annual allerton conference ieee kelley general topology volume graduate texts mathematics springerverlag kleindessner von luxburg uniqueness ordinal embedding proceedings conference learning theory kruskal multidimensional scaling optimizing goodness fit nonmetric hypothesis psychometrika mcfee lanckriet learning similarity journal machine learning research nhat challa lee nonmetric mds sensor localization international symposium wireless pervasive computing iswpc shepard analysis proximities multidimensional scaling unknown distance function psychometrika shepard analysis proximities multidimensional scaling unknown distance function psychometrika shepard metric structures ordinal data journal mathematical psychology sikorska szostok mappings preserving equilateral triangles journal geometry suppes winet axiomatization utility based notion utility differences management science terada von luxburg local ordinal embedding proceedings international conference machine learning vestfrid linear approximation approximately linear functions aequationes mathematicae von luxburg alamgir density estimation unweighted neighbor graphs roadmap advances neural information processing systems waldmann topology introduction springer international publishing young hamer multidimensional scaling history theory applications lawrence erlbaum associates inc
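Since the central definitions used throughout the section above are damaged by text extraction, the following display is a plausible reconstruction, consistent with the surrounding discussion and with the terminology of Kleindessner and von Luxburg; D denotes the sampling domain, x_1, x_2, ... the sample, and delta the dissimilarity.

f : D \to \mathbb{R}^d \text{ is isotonic if, for all } x_1, x_2, x_3, x_4 \in D,
\quad
\|x_1 - x_2\| \le \|x_3 - x_4\|
\;\Rightarrow\;
\|f(x_1) - f(x_2)\| \le \|f(x_3) - f(x_4)\|,

\text{and weakly isotonic if, for all } x, y, z \in D,
\quad
\|x - y\| \le \|x - z\|
\;\Rightarrow\;
\|f(x) - f(y)\| \le \|f(x) - f(z)\| .

An exact ordinal embedding y_1, \dots, y_n of the sample then satisfies, for every available comparison,

\delta(i,j) \le \delta(k,l) \;\Rightarrow\; \|y_i - y_j\| \le \|y_k - y_l\|
\quad \text{(quadruple setting)},
\qquad
\delta(i,j) \le \delta(i,k) \;\Rightarrow\; \|y_i - y_j\| \le \|y_i - y_k\|
\quad \text{(triple setting)},

so that the induced map \varphi(x_i) = y_i is isotonic in the quadruple setting and weakly isotonic in the triple setting. The density quantity appearing in the rate statements is presumably

a_n \;=\; \sup_{x \in D} \; \min_{1 \le i \le n} \|x - x_i\| \;\longrightarrow\; 0,

which tends to zero exactly when the sample becomes dense in D.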
| 10 |
framework datatype transformation jan kort ralf universiteit van amsterdam voor wiskunde informatica vrije universiteit van amsterdam arxiv feb centrum abstract study one dimension program evolution namely evolution datatype declarations program end suite basic transformation operators designed cover refactorings also adaptations object programs subject datatype transformations meta programs encode datatype transformations functional programs introduction study operators transformation datatype declarations program presentation biased towards algebraic datatypes haskell concepts relevance many typed declarative languages mercury sml well frameworks algebraic specification rewriting like casl elan maude transformations rather syntactical nature opposed semantical concepts data refinement transformations contribute general notion functional program refactoring following introductory example extracting new datatype constructor components existing datatype illustrated datatypes represent syntax imperative language following extraction identifies piece syntax enable reuse later syntax extensions datatypes focus two constructor components data prog prog progname dec stat data dec vdec type data stat assign expr expr stat stat extraction dec stat constitute new datatype block data prog prog progname block data block block dec stat present paper describe design framework datatype transformations including operators extraction sec identify concerns addressed framework sec describe basic operators datatype transformations sec operators lifted datatypes complete programs related work discussed sec paper concluded sec kort concerns datatype transformation central contribution present paper simple editingcomplete suite operators datatype transformations embark suite identify concerns addressed approach datatype transformations via scripting interactive tool support primitives datatype transformations generic conciseness datatype transformations flexible means referring fragments interest datatype transformations discuss concerns depth scripting interactive tool support point view programmer datatype transformations founded intuitive scenarios adaptation actually perform datatype transformations two modes operation first mode scripting programmer encodes desired transformation expression basic operators second mode interactive transformation based corresponding gui benefits interactive tool rather obvious tool useful issue transformation basis dialogue provide tailored list options transformations make sense given context crucial benefit interactive transformation gui used provide feedback programmer locations changed programmer attention needed complete issued transformation scenario apparent benefits scripting opportunities revise transformations replay also integrated interactive setting fig illustrate interactive treatment introductory example using prototypical tool transform haskell snapshot indicates use designated fold dialogue perform extraction piece syntax folding basic transformation underlying extraction dialogue combines several transformation steps side conditions convenient way figure shows following situation user selected two consecutive types dec stat initiated fold dialogue user also typed block type name field introduction marked automatically since given type name yet exist user also selected kind radiobutton data filled block cons name field user would press replace make change one occurrence user could replace replace step occurrences next replace specific ones replace ordinary 
find replace text editors kort fig snapshot related interactive treatment introductory example list common transformation scenarios renaming type constructor names permuting type arguments constructor components dual extracting datatypes inlining datatypes including constructor declaration together associated functionality excluding constructor declaration together associated functionality inserting constructor component together associated functionality deleting constructor component together associated functionality transformation primitives core asset framework suite basic operators either used completed complex compound transformations design suite reuse design experience related effort grammar adaptation indeed obvious affinity grammar transformations datatype transformations challenging problem need address previous work completion datatype transformations apply entire functional programs evolving datatypes reside list required properties basic transformation operators correctness mostly insist structure preservation resulting datatype shape original datatype enforced postconditions operators kort completeness operators capture scenarios datatype evolution otherwise performed plain text editors adaptations defined terms disciplined primitives orthogonality operators inhabit roles higherlevel scenarios interactive transformation derivable operators datatype transformations complementary transformations locality basic operators operate small code locations opposed global exhaustive operators iterate entire program note operators necessarily exhaustive operator rename type name implementability operators implemented syntactical transformations constrained simple analyses check postconditions otherwise necessitate offline reasoning universality present paper focuses datatype transformations principles embodied operators universal sense also apply abstractions datatypes functions modules list properties announce formal treatment would challenging opt complex language setup haskell properties provide merely design rationale formal approach important subject future work contribute anything narrow goal present paper compile inventory basic roles datatype transformation generic implement transformation operators compound haskell reuse publicly available abstract syntax haskell rely generic programming techniques perform haskell syntax haskell use generic programming allows complete functions specific syntactical sorts generic traversals process subterms specific sorts accordingly style metaprogramming known concise one provides functionality types constructors immediately relevant given problem datatype transformations type trafo defined follows type trafo hsmodule maybe hsmodule datatype transformation partial function hsmodule abstract syntactical domain haskell modules partiality expressed means maybe type constructor wraps result type partially needed model side conditions fig illustrate generic giving definition simple operator replacing type names specification formalises fact used abstract syntax part haskell core libraries package http kort replace type name replacetypeid typeid typeid trafo replacetypeid full tdtp adhoctp adhoctp idtp declsite refsite transform declaring occurrences type names declsite hsdecl maybe hsdecl declsite hstypedecl return hstypedecl declsite hsdatadecl cds return hsdatadecl cds declsite hsnewtypedecl return hsnewtypedecl declsite decl return decl transform using occurrences type names refsite hstype maybe hstype refsite hstycon unqual return hstycon unqual 
refsite tpe return tpe fig specification replacement operation underlying renaming type names type names occur two kinds locations either declaration site declare type using site refer type type expression need synthesise transformation pays special attention syntactical domains declaring using sites indeed figure two cases customise identity function idtp given context choose traversal scheme full tdtp full traversal manner way reach node input tree transform type names declaring using sites operator replacetypeid total function maybe type really needed partiality would issue derived operator renaming type names necessitates adding side condition insist fresh new name means referring fragments interest basic operators datatype transformation also actual transformation scenarios scripts interactive sessions need refer program fragments interest recall introductory example extracting type necessitates referring constructor components meant constitute new type framework use three ways refer fragments interest focus markers subterms approach particularly suited interactive transformations relevant fragments directly marked fig extend haskell abstract syntax include term constructors focusing relevant fragments datatype transformations prepared focus names types type expressions lists constructor components selectors subterms approach particularly suited scripting transformations selectors haskell type expressions defined fig three forms typesel represent three kinds declarations involve types helper typesel allows select part given type expression kort focus names data hsname hsnamefocus hsname focus type expressions data hstype hstypefocus hstype focus lists constructor components data hscondecl hscondecl srcloc hsname hsfocusedbangtype hsrecdecl srcloc hsname hsname hsbangtype data hsfocusedbangtype hsunfocusedbangtype hsbangtype hsfocusedbangtype hsbangtype fig kinds focus datatype transformation data typesel aliasref typeid typesel conref conpos typesel sigref funid typesel data typesel selstop seldom typesel selcod typesel selith parapos typesel selfun typesel selarg typesel type typeid type conid type funid type conpos type parapos data hsname hsname hsname hsname conid parapos int refer type alias refer constructor component refer function signature reference stops refer domain function type refer function type refer products component refer type constructor refer type argument refer type refer constructor refer function name refer component constructor refer parameter position syntactical sort kinds names fig selectors refer type expressions others predicates subterms predicates typically constrain type term pattern approach particularly suited repeated application transformation different focuses match given predicate ways mediate different ways referring subterms example given term focus marker type expression one compute selector refers focused subterm given predicate type expressions one compute list selectors operator defined selectors used predicates well finally given selector one also add corresponding focus marker input hand basic operators datatype transformation describe themes constitute operator suite renaming type constructor names permutation type parameters constructor components swapping types use sites introduction elimination type declarations folding unfolding type declarations kort sample input datatype data conslist nil cons conslist renamed permuted datatype data snoclist lin snoc snoclist fig illustration renaming permutation renametypeid renameconid 
permutetypeid permuteconid typeid trafo conid trafo parapos trafo parapos trafo rename type declaration rename constructor permute type parameters permute constructor components fig operators renaming parameter permutation renametypeid hsident conslist hsident snoclist renameconid hsident nil hsident lin renameconid hsident cons hsident snoc permuteconid hsident snoc seqtrafo seqtrafo seqtrafo fig script scenario fig wrapping unwrapping constructor components inclusion exclusion entire constructor declarations insertion deletion constructor components list makes clear group operator inverse folding unfolding unless operator used inverse case renaming permutation swapping operators first six groups almost last two groups deal transformations explain operators detail including illustrative examples explain effect operators datatype declarations postpone lifting operators level complete programs sec renaming permutation let start simplest datatype refactorings one think transformations consistently rename type constructor names permute parameters type constructor declarations fig simple example illustrated rename type name conslist constructor names nil cons permute two parameter positions cons resulting datatype specifies snoclist opposed conslist fig declare operators renaming names permuting parameter lists fig include script encodes sample sequence basic renaming permuting transformations end assume sequential composition operator seqtrafo datatype transformations script seqtrafo used infix operator seqtrafo kort data hsdecl introtypes elimtypes syntactical sort type declarations hsdecl trafo typeid trafo introduction type declarations elimination type declarations fig operators introduction elimination datatypes type typehdr typeid typevar type typevar hsname header lhs type declaration type variables foldalias typesel typehdr trafo unfoldalias typesel trafo folding referred type unfolding referred type fig operators folding unfolding introduction elimination next group operators deals introduction elimination type declarations see fig introduction means supplied types added names must use given program elimination means referenced types removed names must referred anymore resulting program two operators take lists types opposed single ones types often introduced eliminated groups say mutually recursive systems datatypes kinds type declarations make sense context aliases newtypes proper datatypes operators introduction elimination often essential compound transformations illustrated reconstruct introductory example full detail see sec folding unfolding instantiating folklore notions unfolding folding datatypes basically means replace type name definition vice versa extra provisions needed parameterised datatypes prime usage scenarios two operators following extraction introduction type followed folding inlining unfolding type followed elimination give example introductory example basically extracts structure imperative program blocks actually reconstruct example need operators postpone scripting example see sec operators folding unfolding declared fig operators make strict assumption type subject folding unfolding necessarily type alias opposed proper datatype assumption simplifies treatment operators considerably since type aliases definitions equivalent definition extra operators wrapping unwrapping allow use proper datatypes folding unfolding well addressed type foldalias operator provide type kort type conrange conpos int refer consecutive components groupconrange ungroupconpos 
conrange trafo conpos trafo typeid conid trafo typeid trafo typeid trafo typeid trafo group constructor components inline product turn type alias newtype turn newtype datatype turn datatype newtype turn newtype type alias fig operators wrapping unwrapping original syntax data prog data dec data stat data expr prog progname dec stat vdec type assign expr expr stat stat var const int grouping dec stat data prog prog progname dec stat introduction block prepare folding data prog prog progname dec stat type block dec stat folding away type expression dec stat data prog prog progname block type block dec stat turning block proper datatype constructor block data prog prog progname block data block block dec stat ungrouping product dec stat data prog prog progname block data block block dec stat fig illustration wrapping unwrapping extraction name also list type variables helper type typehdr needed parameterised datatypes want specify free type variables selected type expression map argument positions type alias preconditions operators follows case foldalias need check referenced type expression side given alias declaration coincide case unfolding need check referenced type expression corresponds application type alias wrapping unwrapping consider operators facilitate certain forms wrapping unwrapping datatype constructors see fig operators grouping ungrouping turn consecutive constructor components single component product type vice versa also operators mediate different kinds type declarations namely type aliases newtypes datatypes allow toggle representation datatypes basic ways result normal forms assumed operators established recall example use type aliases folding unfolding separation concerns serves orthogonality kort groupconrange hsident prog introtypes hstypedecl noloc block hstytuple hstyapp hstycon unqual hsident list hstycon unqual hsident dec hstyapp hstycon unqual hsident list hstycon unqual hsident stat foldalias conref hsident prog selstop hsident block hsident block hsident block hsident block ungroupconpos hsident block seqtrafo seqtrafo seqtrafo seqtrafo seqtrafo fig script scenario fig data maybe data maybe data maybe nothing nothing nothing data conslist nil maybe cons conslist fig illustration generalisation maybe conslist fig show steps implement introductory example one see basically implement extraction extra steps deal grouping ungrouping two components subject extraction also extracted type proper datatype opposed type alias see transition completeness sake transformation script shown fig script precisely captures steps underly interactive transformation fig operators completely strictly speaking structures datatypes transformation fully equivalent example newtype datatype semantically distinguished even defining constructor declaration constructor datatype involves extra lifting step semantical domain extra bottom element operators grouping ungrouping also deviate full structure preservation swapping types use sites deal transformations eliminate establish type distinctions call swapping types use sites fig illustrate typical application swapping example want generalise standard datatype maybe allow lists instead fact want change general definition library datatype maybe want change one use site shown figure swapping helps intermediate step replace maybe use site newly introduced datatype maybe equivalent structure figure illustrates subsequent adaptations derive kort type datanames type dataunifier typeid conid datanames datanames swapalias swapdata typesel typeid 
typeid trafo typesel dataunifier trafo fig operators swapping types use sites type condecl data hstype conid hstype includecondecl excludecondecl constructor declaration syntactical sort type expressions typeid condecl trafo conid trafo fig operators inclusion exclusion constructor declarations syntax fig data prog data block data dec data stat data expr prog progname block block dec stat vdec type assign expr expr stat stat var const int syntax extension statement blocks data stat assign expr expr stat stat sblock block fig illustration constructor inclusion conslist datatype clone maybe datatype particular add boxed constructor component swapping operators declared fig one operator type aliases another datatype declarations case proper datatypes one needs match constructors addition names types modelled helper datatype dataunifier type operator swapdata clarifies prepared process list dataunifiers necessary want swap mutually recursive systems datatypes inclusion exclusion leave ground transformations consider transformations input output datatypes structurally equivalent fact consider certain ways extend reduce structure datatype first couple transformations inclusion exclusion constructor declarations see fig operators feasible proper datatypes type aliases newtypes type alias involves constructor newtype defined terms precisely one constructor declaration fig show example constructor inclusion fact continue introductory example make use extracted block structure language extension statement blocks include constructor application stat capture block another statement form continuation kort insertconcomp deleteconcomp conpos hstype trafo conpos trafo fig operators insertion deletion constructor components datatype transition relation function helpers type transrel maybe data maybe nothing data conslist nil cons conslist introduction substitute maybe data maybe nothing swapping maybe maybe transrel type transrel maybe extension maybe fit shape conslist data maybe nothing maybe swapping maybe conslist transrel type transrel conslist fig illustration component insertion type swapping introductory example amplifies intended use operator suite program evolution sense datatype refactoring adaptation insertion deletion inclusion exclusion constructor declarations branching structure datatypes discuss operators serve insertion deletion constructor components see fig insertion component constructor declaration proceeds follows given target position new component new constructor declaration simply form general might need refer type parameters affected datatype deletion constructor declaration relies identification obsolete component fig elaborate earlier example generalising maybies lists recall fig top fig see three datatypes transrel maybe conslist idea indeed replace maybe conslist using occurrence transrel want allow function list instead partial function call adaptation generalisation list general optional initial phase generalisation maybe disconnect relevant occurrence maybe transrel possible occurrences program introduce copy maybe maybe perform type swapping transrel refers maybe instead maybe need make maybe structurally equivalent conslist amounts adding recursive component second constructor swap types refer conslist transrel kort datatype transformation meets program transformation groups operators investigate impact functional programs would utterly complex formalise link datatype program transformation mere specification transformations already intractable publication size number 
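To make the Trafo-based operator style concrete, here is a minimal, self-contained Haskell sketch of sequential composition and of constructor inclusion/exclusion over a deliberately simplified abstract syntax. The framework itself operates on the full hssource syntax for Haskell, so all type and function names below are illustrative rather than the framework's actual API, and the program-level completion of constructor uses (discussed in the next section) is omitted here.

module DatatypeTrafo where

-- Simplified abstract syntax: just enough to state the operators.
type TypeId  = String
type ConId   = String
data Type    = TyCon TypeId | TyApp Type Type   -- simplified type expressions
data ConDecl = ConDecl ConId [Type]             -- C t1 ... tn
data Decl    = DataDecl TypeId [ConDecl]        -- data T = C1 ... | C2 ...
type Module  = [Decl]

-- Datatype transformations are partial module transformations.
type Trafo = Module -> Maybe Module

-- Sequential composition, as used in the transformation scripts.
seqTrafo :: Trafo -> Trafo -> Trafo
seqTrafo f g m = f m >>= g

-- Include a constructor declaration in the datatype named t.
-- Side conditions: t is declared exactly once, and the new
-- constructor name is fresh within t.
includeConDecl :: TypeId -> ConDecl -> Trafo
includeConDecl t c@(ConDecl cid _) m
  | [cs] <- [cs' | DataDecl t' cs' <- m, t' == t]
  , cid `notElem` [cid' | ConDecl cid' _ <- cs]
  = Just [ DataDecl t' (if t' == t then cs' ++ [c] else cs')
         | DataDecl t' cs' <- m ]
includeConDecl _ _ _ = Nothing

-- Exclude the constructor named cid; fails if no such constructor is
-- declared. (The full framework additionally requires that the program
-- no longer refers to the excluded constructor.)
excludeConDecl :: ConId -> Trafo
excludeConDecl cid m
  | cid `elem` [c | DataDecl _ cs <- m, ConDecl c _ <- cs]
  = Just [ DataDecl t [cd | cd@(ConDecl c _) <- cs, c /= cid]
         | DataDecl t cs <- m ]
  | otherwise = Nothing

In this vocabulary, the statement-block extension of the figure above would read includeConDecl "Stat" (ConDecl "SBlock" [TyCon "Block"]), and scripts like those in the earlier figures arise by chaining such steps with seqTrafo.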
of details. We therefore describe the implied program transformations informally, omitting the less interesting details.

Renaming. Type names only occur inside type declarations and type annotations. Hence we do not need to adapt expressions and function declarations, except for signatures and for type annotations in expressions. Constructor names, by contrast, occur inside patterns and expressions as well, and hence contribute to function declarations. Renaming all these occurrences is completely straightforward.

Permutation. A permutation of type parameters does not necessitate any completion at the level of function declarations. A permutation of constructor components, however, needs to be realized in patterns and expressions as well. In particularly simple cases, all components are matched in a definition, and hence we can directly permute the affected constructor pattern. Witnessing permutations of constructor components in expression forms is slightly more complicated because of currying style: instead of permuting the components of possibly incomplete constructor applications, we can first get access to all components of a given constructor, say all the potential components according to its declaration, and only then replace the constructor by a justified witness of the permutation, that is, by a form that permutes the arguments of the expression. In the presence of a non-strict language, where the evaluation order of patterns matters, a permutation of constructor components might actually change the behaviour of the program regarding termination; we neglect this problem here. We should also mention that it is debatable whether a completion of the described kind is really what the programmer wants, since it obscures the code.

Introduction and elimination. Introduction places no obligations on the functions defined in the program. In the case of elimination, we have to ensure that the relevant types are not used by any function. If we assume that all function declarations are annotated with (inferred) signatures, the precondition for elimination can be checked by looking at the signatures. An alternative approach does not rely on complete type annotations, but checks that no constructor of the relevant types is used.

Folding and unfolding. The restriction of folding and unfolding to type aliases guarantees that these operators do not necessitate any adaptation of function declarations: we are simply interchanging a type alias and its definition. This is extremely convenient: despite the crucial role of these operators, folding and unfolding raise no issue at the level of function declarations.

Wrapping and unwrapping. The grouping and ungrouping operators can be handled using the overall approach advocated for the permutation of constructor components. In patterns, we witness grouping and ungrouping by inserting or removing enclosing tuple patterns; in expressions, we perform an access to the relevant components and then group or ungroup the constructor application. The mediation between newtypes and datatypes consists of datatype transformations that imply no adaptations of the functions that involve the datatype in question: as indicated earlier, the extra bottom value of a datatype compared to a newtype only allows a program to be undefined in one more way. For the newtype-to-alias migration, we simply remove all occurrences of the associated constructor in pattern and expression forms. This requires that the relevant newtype is not covered by an instance declaration for a type class or constructor class; otherwise we would have to inline the members in some way prior to the removal of the constructor. If this issue were neglected, the resulting program would either become untypeable, or a different instance would be applied accidentally, which would be hazardous regarding semantics preservation. The alias-to-newtype migration is the one operator that requires a more subtle treatment of function declarations. The crucial issue is to know which of the expressions have to be wrapped with the newtype constructor, and from which patterns the newtype constructor needs to be stripped. Our approach is as simple as possible. Observe first that the new newtype might be used in declarations of other datatypes; the corresponding patterns and expressions are easily located and adapted, as in the case of permutation and grouping or ungrouping. Recall that we also need to adapt the function declarations whose argument and result types are known to refer to the relevant alias. Basically, this means that we need to access the affected arguments and result expressions of the relevant equations, unwrap the arguments, and wrap the result expressions. These adaptations are slightly complicated by the fact that the affected type alias can occur at arbitrarily nested locations. In the figure below we illustrate the effect of this operator for the introductory example: we show an interpreter function that maps statements to computations over the program state.
interpreter function illustrative extraction run prog state run prog name decs stats mapm interpret stats function extraction run prog state run prog name block decs stats mapm interpret stats fig function adaptation triggered migration input program type transrel maybe data maybe nothing deadend transrel bool deadend case nothing true false output program type transrel maybe data maybe nothing deadend transrel bool deadend case tomaybe nothing true false induced helper type swapping tomaybe maybe maybe tomaybe nothing nothing tomaybe fig function adaptation triggered type swapping program program name declarations carry semantics type function run exhibits meaning program computation involves state program variables adapted version run refers extra constructor block resulted extraction swapping types use sites operator relies techniques however instead wrapping unwrapping constructor invoke conversion functions mediate two structurally equivalent types mediators merely map old new constructors vice versa hence immediately induced datatype transformation namely dataunifiers passed swap operator approach implies perform local changes program code still work old datatypes thanks mediators impact swapping types function level illustrated fig deal initial steps migration fig replace occurrence maybe within transrel structurally equivalent maybe show illustrative function deadend performs test given transition relation allows transition presence given state adapted function deadend refers conversion function tomaybe prior performing pattern matching obsolete maybe type kort input program data stat assign expr expr stat stat interpret stat state interpret assign envlookup interpret reval output program data stat assign expr expr stat stat sblock block interpret stat state interpret assign envlookup interpret reval interpret sblock fig inclusion constructor declaration inclusion exclusion intuitively inclusion constructor complemented extension relevant case discriminations normally means add equation case case expression new constructor dually exclusion constructor complemented removal equations cases refer constructor case added equations view sides equations kind hot spot resolved subsequent transformations end use undefined kind marker dually case removed constructors also need replace occurrences constructor within expressions using interactive tool support markers useful control steps transformation scenario fig progress running example interpreter imperative language illustrate step blocks turned another form statements hence shown output program involves new equation interprets statement blocks added equation reflects meaning blocks yet undefined subject subsequent adaptations insertion deletion inserting component declaration constructor means patterns outermost constructor must adapted neglect added component applications must completed include added component dually deletion component means applications patterns outermost constructor need cleaned project away obsolete component reference pattern variable obsolete component replaced case permutation others needed actually get access constructor components expressions fig insertion constructor component illustrated continuing scenario fig adapted equation tomaybe involves extended pattern care pattern indicates definition tomaybe make use added component fact definition function deadend need adapted tests availability transition step kort output program type transrel maybe data maybe nothing maybe deadend transrel bool deadend case 
tomaybe nothing true false induced helper type swapping tomaybe maybe maybe tomaybe nothing nothing tomaybe fig illustration insertion constructor component normally functions start rely richer pattern related work transformational program development formal program transformation separates two concerns development initial maybe inefficient program correctness easily shown stepwise derivation better implementation manner partsch textbook describes formal approach kind software development pettorossi proietti study typical transformation rules functional logic programs formal program transformation part also addresses datatype transformation say data refinement one gives different axiomatisations implementations abstract datatype related transformation steps typically involves amount mathematical program calculation contrast deliberately focus syntactical transformations programmer uses anyway adapt evolving programs database schema evolution large body research addressing related problem database schema evolution relevant example database reverse engineering schema transformations compared datatype transformations superficial level different formalisms involved exist formal frameworks definition schema transformations various formalisms investigated interesting aspect database schema evolution schema evolution necessitates database instance mapping compare evolution datatypes functional program main concern update function declarations compliance new datatypes seems instance mapping problem special case program update problem refactoring transformational approach program evolution nowadays called refactoring idea new refactoring means improve structure code becomes comprehensible maintainable adaptable interactive refactoring tools studied used extensively programming context typical examples functional program refactorings described introduction monad program precise inhabitation kort refactoring notion functional programming addressed project university kent thompson reinke see also related work functional context erwig previous work specifically address datatype transformations refactorings class structures directly applicable different structure semantics classes algebraic datatypes structure editing support interactive transformations seen sophistication structure editing link transformation editing particularly appealing syntactical transformations surprisingly concepts developed structure editing related work example primitives structure editing identified based notion focus select subtrees navigation primitives left right trees subtrees paths defined follows data tree type subtree type path type layer fork label tree path tree layer label tree tree subtree currently selected tree left right trees top layer head approach account heterogeneous character language syntaxes shows fact focus resides term encoded types concluding remarks contribution identified fundamental primitives datatype transformation operators meant support common scenarios program adaptation functional programming settings algebraic datatypes play role fact identified operators universal sense also meaningful program abstractions datatypes function declarations deliberately focused adaptations datatypes vast body previous work addressed transformations recursive functions despite focus datatype transformations consider program transformations necessitated modification datatypes regarding executable specification operator suite adhered formula metaprograms haskell programs employed generic functional programming 
interest conciseness also employed designated means referring fragments interest focus concept partial project failure confident identified operators sufficient appropriate actual datatype transformations attempted complement framework development actual interactive tool support initially thought using haskell interactive tooling well would good idea since actual transformation operators implemented haskell anyway interactive dialogues need cooperate operator framework perform analyses haskell indeed seems obvious choice make long story kort short many gui libraries haskell none suitable developing sophisticated gui interactive program transformation moment seems environments interactive language tools would provide better starting point environments based attribute grammars perspective cover full haskell operators would added suite particular operators support type constructor classes also pay full attention idiosyncrasies haskell refutable irrefutable patterns also transformation techniques seem beyond notion program evolution interesting cover anyway think techniques like turning system datatypes functorial style threading parameter system datatypes ultimate perspective presented work integrate datatype transformations complete refactoring tool functional programming along lines thompson reinke research project another perspective research pursue intertwined character datatype program transformations context xml format api evolution references arango baxter freeman pidgeon tmm software maintenance transformation ieee software may batini ceri navathe conceptual database design redwood city burstall john darlington transformation system developing recursive programs journal acm january banerjee kim kim korth semantics implementation schema evolution databases sigmod record proc conf management data may roever kai engelhardt data refinement proof methods comparison volume cambridge tracts theoretical computer science cambridge university press erwig ren language programming software updates proceedings acm sigplan workshop programming pages acm press fowler design existing code addison wesley griswold notkin program restructuring aid software maintenance technical report seattle usa august hainaut tonneau joris chandelon schema transformation techniques database reverse engineering proc int conf approach institute kort koorn generating uniform interactive programming environments phd thesis university amsterdam kuiper saraiva lrc generator incremental languageoriented tools koskimies editor compiler construction volume lncs pages april tool demonstration reuse program transformation greg michaelson phil trinder editors functional programming trends pages intellect grammar adaptation oliveira zave editors proc formal methods europe fme volume lncs pages moore automatic inheritance hierarchy restructuring method refactoring oopsla conference proceedings programming systems languages applications pages acm press mcbrien poulovassilis formal framework schema transformation embley goldstein editors conceptual modeling international conference conceptual modeling los angeles california usa november volume lncs pages opdyke refactoring frameworks university illinois phd thesis partsch specification transformation programs pettorossi proietti rules strategies transforming functional logic programs acm computing surveys june roberts brant johnson refactoring tool smalltalk theory practice object systems tapos reps teitelbaum synthesizer generator system constructing editors sufrin moor modeless 
structure editing roscoe woodcock editors proceedings symposium celebration work tony hoare september thompson reinke refactoring functional programs technical report computing laboratory university kent canterbury october also see http
| 6 |
On the capacity of a class of signal-dependent noise channels

Hamid Ghourchian, Gholamali Aminian, Amin Gohari, Mahtab Mirmohseni, and Masoumeh Nasiri-Kenari
Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran
Emails: ghourchian, aminian, aminzadeh, mirmohseni, mnasiri

Abstract. In some applications, the variance of the additive measurement noise depends on the signal that we aim to measure. For instance, additive Gaussian signal-dependent noise (AGSDN) channel models are used in molecular and optical communication. Herein we provide lower and upper bounds on the capacity of additive signal-dependent noise (ASDN) channels. The idea of the first lower bound is the extension of the majorization inequality, while the second one uses some calculations based on a fact that is valid for the additive signal-dependent noise channels defined in this paper. The upper bound is based on a previous idea of the authors (the symmetrized relative entropy) used for AGSDN channels. These bounds indicate that in ASDN channels, unlike the classical AWGN channels, the capacity does not necessarily become larger by making the variance function of the noise smaller. We also provide sufficient conditions under which the capacity becomes infinity. This is complemented by a number of conditions that imply that the capacity is finite and that a unique capacity-achieving measure exists (in the sense of the output measure).

Keywords: signal-dependent noise channels, molecular communication, channels with infinite capacity, existence of a capacity-achieving distribution.

Introduction. An additive Gaussian signal-dependent noise (AGSDN) channel with input X and output Y is defined by Y = X + sigma(X) Z, where sigma(.) is a given function; alternatively, we may describe the AGSDN channel by its conditional law. Here Z is a standard Gaussian random variable, independent of the input X. When sigma(.) is a constant function, the AGSDN channel reduces to a simple additive Gaussian channel. More generally, we may relax the Gaussian assumption and consider an additive signal-dependent noise (ASDN) channel defined by Y = X + sigma(X) Z, where the noise Z is assumed to be a continuous random variable with a given pdf, independent of the input X. For instance, one can consider an ASDN channel with a truncated version of the Gaussian distribution, which can be a better model in applications where we know that the output Y has minimum and maximum values. (This work was supported by an INSF research grant. The first two authors contributed equally to this work.)

Applications. We now provide a number of applications in which the ASDN channel arises. The AGSDN channel appears in optical communications for modeling shot noise and optical amplification noise. In molecular communication, the AGSDN channel arises in the ligand receptor model, the particle sampling noise, the particle counting noise, and the Poisson model with an absorbing receiver. In all of these cases, the reason for the appearance of Gaussian signal-dependent noise is the approximation of a binomial or Poisson distribution by a Gaussian distribution: observe that the mean and variance of a binomial distribution relate to each other through its parameters, and as a result the mean and variance of the approximating Gaussian distribution also relate to each other; see the literature for a detailed overview. Besides the above applications of the ASDN channel in molecular communications, we shall provide further cases where this channel model can be helpful.

First, consider the Brownian motion of a particle with no drift in a nonhomogeneous medium, with sigma(x) denoting the diffusion coefficient of the medium at location x. The diffusion coefficient describes the movement variance of a particle at each location. Specifically, the motion of the particle can be described by the stochastic differential equation dX_t = sigma(X_t) dB_t, where B_t is the standard Wiener process (standard Brownian motion). Alternatively, we can express this equation using the integral X_{t+e} = X_t + int_t^{t+e} sigma(X_u) dB_u. Let X = X_t denote the position of the particle at time t, and let Y = X_{t+e} be its position e seconds later. If e is a small fixed number, the integral reduces to Y ~ X + sigma(X)(B_{t+e} - B_t). Thus, the movement of the particle follows an AGSDN channel law when e is small.

Second, consider the molecular timing channel. In a molecular timing channel, information is encoded in the release time of molecules: a molecule released at time X hits the receiver after a delay T, and molecules are absorbed once they hit the receiver. The distribution of the first arrival time has been studied in the existing literature: in a medium without flow the delay is distributed according to the Levy distribution, and in a medium with flow according to the inverse Gaussian distribution. As a result, the corresponding channel is called the additive inverse Gaussian noise channel or the additive Levy noise channel in the literature.
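Since everything below is built on the channel law Y = X + sigma(X) Z, a small simulation may help fix ideas. This is our own illustrative sketch, not part of the paper; the function names and the choice sigma(x) = 0.1 + x^2 are ours:

import numpy as np

def agsdn(x, sigma, rng):
    """One use of the AGSDN channel Y = X + sigma(X) * Z with Z ~ N(0, 1)."""
    x = np.asarray(x, dtype=float)
    z = rng.standard_normal(x.shape)
    return x + sigma(x) * z

rng = np.random.default_rng(0)
sigma = lambda x: 0.1 + x**2              # illustrative variance function
x = rng.uniform(0.0, 1.0, size=100_000)
y = agsdn(x, sigma, rng)
# Conditionally on X = x, Y is N(x, sigma(x)^2); check the conditional spread.
mask = (x > 0.49) & (x < 0.51)
print(y[mask].std(), sigma(0.5))          # the two numbers should be close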
However, if the medium has flow, or if the distance between the transmitter and the receiver varies with time, the distribution of the delay depends on the release time, and as a result we obtain a noise that is not purely additive. For instance, the delay can follow a Levy distribution whose scale parameter depends on the input; using the scaling property of the Levy distribution, we can express the delay as sigma(X) Z, where Z has a standard Levy distribution and sigma(X) is the input-dependent scale parameter. This would be an ASDN channel.

The third item extends the Brownian-motion discussion, which concerned a small time elapse. Brownian motion with no drift is an example of a martingale, so let us consider martingales over a large time elapse: the AGSDN channel also arises from the conditional distribution of a process that can be modeled by a discrete-time martingale with bounded increments. Assume that X_t is such a martingale. By the martingale central limit theorem, the conditional distribution of X_{t+s} given X_t = x can, for large values of s, be approximated by a Gaussian distribution whose mean and variance depend on x.

Finally, we relate the ASDN channel to real fading channels with a direct line of sight. Consider a scalar Gaussian fading channel Y = (1 + H) X + N, where X is the input, H is the Gaussian fading coefficient, and N is the additive environment noise. The first term on the right-hand side corresponds to the direct line of sight, and the term H X is the fading term. The conditional distribution of Y given X = x is Gaussian with mean x and a variance that depends on x; thus the channel can be expressed as Y = X + sigma(X) Z. In the fast fading setting, where H varies independently across channel uses, this corresponds to a memoryless ASDN channel.

The purpose of this paper is to study the capacity of the memoryless additive signal-dependent noise (ASDN) channel defined via Y = X + sigma(X) Z, with input cost constraints. The memoryless assumption implies that the noise Z is drawn independently in each channel use.

Related works. Vector AGSDN channels subject to cost constraints have been studied previously, where it is shown that under some assumptions the capacity-achieving distribution is a discrete distribution. The AGSDN channel has also been investigated in work wherein capacity upper and lower bounds are derived considering peak and average constraints. Note that the memoryless AGSDN channel includes the additive white Gaussian noise (AWGN) channel as a special case. The capacity of the AWGN channel with a power constraint is classical and is obtained by a Gaussian input random variable. The capacity under both average and peak power constraints is quite different: the capacity-achieving input distribution is discrete with a finite number of mass points; see the literature for results on the capacity of the AWGN channel with average and peak power constraints.

Contributions. The contributions of this work are summarized as follows. First, we provide a new tool for bounding the capacity of continuous channels; in particular, we provide two sufficient conditions whose results lead to lower bounds on the channel capacity of the ASDN channel. Second, it is known that increasing the noise variance of an AWGN channel decreases its capacity; however, we show that this is no longer the case for signal-dependent noise channels: a pointwise constraint relating two variance functions does not necessarily imply that the capacity of the AGSDN channel with the larger variance function is less than or equal to the capacity of the AGSDN channel with the smaller one. Third, we identify conditions under which the capacity of the ASDN channel becomes infinity; in particular, this implies that the capacity of an AGSDN channel tends to infinity as sigma(.) tends to zero. Thus, the capacity of the real Gaussian fast fading channel given earlier in this section tends to infinity as the variance of the environment noise tends to zero, which parallels a similar result given for complex Gaussian fading channels. Fourth, we provide a new upper bound for the AGSDN channel based on the symmetrized divergence upper bound. Our upper bound is suitable for the low SNR regime when sigma(.) is large; in contrast, previously known upper bound theorems for AGSDN channels are suitable for large values of the peak and average constraints. Furthermore, we give our upper bound for a large class of functions sigma(.), while the previous technique is tuned to specific choices.

The paper is organized as follows. The next section includes the primary definitions and notation. The main results are then given, including two lower bounds and one upper bound on the capacity of the ASDN channel. Useful lemmas used throughout the paper come next, followed by numerical results and plots; the proofs of the results are given in the last section.

Definitions and notation. In this section we review the definitions of continuous and discrete random variables, as well as of entropy, differential entropy, relative entropy and mutual information. Throughout the paper, logarithms are in the natural base. Random variables are denoted by capital letters and probability measures by the letter mu. The collection of Borel measurable sets is denoted by B. We sometimes write a.e. for "almost everywhere" with respect to the Lebesgue measure.

Definition (relative entropy). For random variables X and Y with probability measures mu_X and mu_Y, the relative entropy is defined as D(mu_X || mu_Y) = int log (d mu_X / d mu_Y) d mu_X, where d mu_X / d mu_Y is the Radon-Nikodym derivative; it is defined whenever mu_X is absolutely continuous with respect to mu_Y.
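The line-of-sight fading example above can be checked numerically. A minimal sketch, assuming the model Y = (1 + H) X + N with H ~ N(0, gamma^2) and N ~ N(0, sigma_n^2) as reconstructed above, so that sigma(x) = sqrt(gamma^2 x^2 + sigma_n^2); the parameter values are our own illustrative choices:

import numpy as np

rng = np.random.default_rng(1)
gamma, sigma_n, x = 0.3, 0.2, 1.5       # illustrative parameters, fixed input
n = 200_000

# Fading channel with a direct line of sight: Y = (1 + H) X + N.
h = gamma * rng.standard_normal(n)
w = sigma_n * rng.standard_normal(n)
y_fading = (1.0 + h) * x + w

# Equivalent ASDN form: Y = X + sigma(X) Z with sigma(x) = sqrt(g^2 x^2 + s^2).
sigma_x = np.hypot(gamma * x, sigma_n)
y_asdn = x + sigma_x * rng.standard_normal(n)

print(y_fading.mean(), y_asdn.mean())   # both approximately x = 1.5
print(y_fading.std(), y_asdn.std())     # both approximately sigma(x) ~ 0.49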
means absolutely continuous borel space measures defined definition mutual information section random variables joint probability measure mutual information defined follows product measure defined borel space defined borel space defined similarly three random variable joint measure conditional mutual information defined definition continuous random variable let random variable measurable respect call continuous random variable probability measure induced absolutely continuous respect lebesgue measure zero lebesgue measure denote set absolutely continuous probability measures note theorem implies random variable measure exists function function called probability density function pdf denote pdf absolutely continuous probability measures letter definition discrete random variable random variable discrete takes values countable alphabet set probability mass function pmf discrete random variable probability measure denoted defined follows definition entropy differential entropy chapter define entropy discrete random variable measure pmf log summation converges observe log continuous random variable measure pdf define differential entropy log integral converges similarly differential entropy log similarly two random variables measure absolutely discrete pmf conditional entropy defined log likewise two random variables measure absolutely continuous pdf conditional differential entropy defined log allow differential entropy integral convergent say log converges finite number log similarly define write mean differential entropy exists equal following example demonstrates differential entropy example differential entropy becomes plus infinity following pdf defined log hand shown differential entropy minus infinity log log log otherwise definition riemann integrable functions given work utilize riemann integrable functions open interval functions satisfy property function fundamental theorem calculus continuous necessarily differentiable unless continuous example consider function otherwise function riemann integrable restricted domain integrable main results interested capacity asdn channel input taking values set satisfying cost constraint functions common power constraint corresponds allow general constraints given density function noise function consider following optimization problem sup related via supp sometimes use supp denote support measure supp probability measure clear context example application input satisfies set taken reflect fact similarly constraint reduces reduces rest section organized follows section provide conditions imply finiteness capacity asdn channel section review ideas used obtaining lower bounds previous works also work based new ideas introduced work provide two different lower bounds sections finally section provide upper bound agsdn channels existence finiteness channel capacity theorem assume asdn channel satisfies following properties closed also bounded subset exists real numbers exist positive real exist cost constraint functions bounded capacity asdn channel finite furthermore capacity achieving probability measure words capacity expressed maximum rather supremum max moreover output distribution unique achieves capacity pdfs output channel input probability measures respectively remark theorem generalization given theorem special case gaussian noise proof found section give partial converse theorem consider case second assumption theorem fails sequence elements converges zero infinity following theorem shows mutual information infinity cases theorem consider 
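As a quick sanity check of the differential entropy definition given earlier in this passage, one can integrate -f log f numerically and compare with the Gaussian closed form h = (1/2) ln(2 pi e sigma^2) nats. The code below is our own illustration, not part of the paper:

import numpy as np

def differential_entropy(pdf, lo, hi, n=1_000_000):
    """Numerically evaluate h(X) = -integral of f(x) ln f(x) dx on [lo, hi]."""
    x = np.linspace(lo, hi, n)
    f = pdf(x)
    integrand = np.where(f > 0, -f * np.log(f), 0.0)
    return np.trapz(integrand, x)

sigma = 0.7
gauss = lambda x: np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
print(differential_entropy(gauss, -10, 10))        # numeric value
print(0.5 * np.log(2 * np.pi * np.e * sigma**2))   # closed form; both ~ 1.062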
asdn channel necessarily closed set suppose one find sequence elements converges sequence real numbers limit possibly outside denote limit plus minus infinity one find another real number open interval depending whether belongs furthermore monotone continuous one find measure defined provided continuous random variable following regularity conditions furthermore one measure makes fact input continuous discrete random variable one find absolutely continuous measure pdf discrete pmf infinity measure input either proof found section uses results prove later paper remark example consider agsdn channel arbitrary channel input cost constraints setting shows capacity channel given infinity additive noise parallels similar result given complex gaussian fading channels require monotonicity strictly monotonicity remark known increasing noise variance awgn channel decreases capacity however show longer case noise channels consider two agsdn channels parameters respectively defined following formulas input cost constraints imposed clear however considering constraint theorem obtain capacity first channel finite theorem obtain capacity second channel therefore constraint necessarily imply capacity agsdn channel less equal capacity agsdn lower bounds capacity compute capacity one take maximum probability measures potentially large class practically speaking one find finite number measures evaluate mutual information ideally form entire appropriate distance metric mutual information every arbitrary measure approximated one measures computationally cumbersome even measures defined finite interval result desirable find explicit lower bounds capacity observe compute term observe given thus log see lemma thus log however term challenging handle authors consider agsdn channel well show hence implies instead maximizing one maximize obtain lower bound proof relation review motivate techniques paper first consider special case case get agdsn reduces awgn channel special case one obtains desired equation writing however argument extend case since depends argued without loss generality one mayp assume one express noise channel independent standard normal variables thus write argument awgn channels thus suffices show special case problem corresponds show advanced ideas utilized key observation following assume exponentially distributed mean density exp arbitrary input distribution data processing property relative entropy kgy kgx output density input density simplified equation leads argument crucially depends particular form output distribution corresponding input exponential distribution specific argument works specific choice normal distribution readily extended choices paper propose two approaches handle general settings idea provide following novel general lemma establishes large class asdn channels lemma take arbitrary channel characterized conditional pdf satisfying support channel input channel output respectively take arbitrary input pdf resulting output pdf assuming exist proof provided section example lemma yields alternative proof result agsdn channel note mentioned order prove need prove end observe since proof equation given appendix idea provide variation type argument given introducing number new steps would adapt argument asdn channels following sections discuss two ideas separately first idea lower bound theorem assume asdn channel defined noise pdf riemann integrable continuous random variable pdf supported provided integrals defining converge real number function increasing function defined 
arbitrary remark note well defined see definition selecting different obtain different function however invariant respect adding constant terms thus invariant respect different choices theorem proved section corollary let since function obtain max max defined belongs supp hence theorem obtain max max order find maximum use known results maximum entropy probability distributions see chapter corollary consider asdn channel satisfying assume input constraint corollary obtain lower bound max log taking uniform distribution set bounded section else infinite length capacity infinity choosing pdf whose differential entropy infinity see example equivalent pdf pdf insight provide following example example consider awgn channel namely agsdn channel let restrict measures satisfy power constraint since apply corollary thus lower bound max log achieved gaussian distribution section capacity awgn channel log comparing see lower bound close capacity high snr regime another example consider constraints admissible input measures obtain lower bound max log used fact maximum achieved exponential distribution exp section unlike first example exact capacity formula channel known second idea lower bound going provide another lower bound appropriate channels either monotonic function example channels molecular timing channel discussed introduction theorem assume asdn channel defined continuous random variable pdf continuous monotonic riemann integrable provided order define variables function take arbitrary proceed follows function increasing let log function decreasing let log arbitrary log log remark observe cases strictly increasing function defined log increasing similar remark choice affect value hence lower bound however choice affects lower bound theorem proved section corollary similar corollary let since strictly increasing function obtain max max defined belongs supp hence theorem obtain max max constants defined theorem mentioned earlier maximize use known results maximum entropy probability distributions see chapter corollary consider asdn channel satisfying assume input constraint corollary obtain lower bound max log log defined theorem lim lim lower bound achieved taking uniform distribution set bounded section else infinite length capacity infinity choosing pdf see example equivalent pdf pdf upper bound begin reviewing upper bound given motivate upper bound upper bound works utilizing topsoe inequality bound mutual information follows arbitrary pdf output distribution chosen carefully allow calculation divergence particular form makes explicit calculations possible second difficulty calculating expression need take expected value input measure however capacity achieving input measure known difficulty addressed technique input distributions escape infinity assumptions peak constraint part give upper bound based symmetrized upper bound idea dsym upper bound advantage applicable large class state upper bound let cov covariance function two random variables theorem agsdn channel defined cov cov provided covariance terms right hand side finite proof found section corollary agsdn channel parameters functions increasing convex max corollary proved section remark even though corollary assumption formally set see upper bound capacity becomes infinity consistent theorem corollary particular choice motivated applications discussed introduction property increasing theorem applied useful lemmas section provide three lemmas used proof theorems paper lemma asdn channel defined continuous random variable noise 
pdf noise coefficient conditional measure following pdf moreover continuous random variable pdf furthermore exists defined equal log lemma proved section lemma let continuous random variable pdf function riemann integrable log arbitrary constant note side exist becomes occurs side vice versa lemma proved section lemma let random variable probability measure functions increasing convex max cov furthermore case maximizer pmf case linear maximizer pmf proof given section capacity capacity nats per channel use capacity nats per channel use upper bound capacity corollary lower bound corollary lower bound figure capacity symmetrized divergence upper bound terms agsdn channel function figure capacity lower bound corollary terms agsdn channel function numerical results section numerical results given upper bound corollary capacity depicted logarithmic scale fig considered peak constraint average constraint observed distance upper bound capacity small constant logarithmic scale low snr regime consistent argues upper bound based symmetrized divergence mostly suitable low snr regime lower bounds corollaries plotted fig function terms peak constraint assumed lower bound corollary computed following closed form formula log log log lower bound corollary equals log log log log log log maximized order find lower bound corollary first lower bound better second one mainly multiplicative coefficient second lower bound since second lower bound general class channels consider positive negative part support causing multiplicative coefficient gaussian noise however support positive negative reals two lower bounds differ much proofs proof theorem finiteness capacity first step show capacity finite sup prove suffices show supremum finite uniformly utilizing lemma existence boundedness obtained follows max log log uniformly lemma obtain continuous pdf prove integral defining convergent finite value existence entropy furthermore integral convergent value bounded uniformly sufficient show positive real sup also lemma obtain thus holds order prove note max uniformly thus uniformly bounded hence definition mutual information obtain bounded uniformly existence maximizer let sup would like prove supremum maximum equation implies existence sequence measures lim output channel input furthermore without loss generality assume convergent measure measure reason since compact set also compact respect measure proposition thus sequence measures convergent subsequence loss generality take subsequence thus convergence measure know lim would like prove output measure channel input measure complete proof argument given first part proof finiteness capacity finite result show need prove lim lim since obtained lemma order prove proceed follows step begin showing sequence cauchy sequence respect total variation two arbitrary probability measure total variation distance defined sup collection available finite partitions step established step utilize fact space probability measures complete respect total variation metric show note lemma pdf hence total variation expressed terms norm pdfs lemma obtain space pdfs complete respect norm result converges measure respect total variation metric claim convergence implies lim reason see uniformly bounded finite therefore follows theorem thus step obtain sequence limit step show limit found step equal completes proof hence remains prove proof since convergent exists consider let uniform bernoulli random variable independent previously defined variables sample measure defined follows sample measure 
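The conditional-density lemma for the ASDN channel stated in this section is the standard change-of-variables computation; for the reader's convenience we write it out (our own derivation of the stated identities, assuming sigma(x) > 0):

f_{Y|X}(y\mid x) \;=\; \frac{1}{\sigma(x)}\, f_Z\!\left(\frac{y-x}{\sigma(x)}\right),

h(Y\mid X=x) \;=\; -\int f_{Y|X}(y\mid x)\,\log f_{Y|X}(y\mid x)\,dy
\;\overset{z=(y-x)/\sigma(x)}{=}\; -\int f_Z(z)\,\big(\log f_Z(z)-\log\sigma(x)\big)\,dz
\;=\; h(Z)+\log\sigma(x),

\text{so that}\quad h(Y\mid X) \;=\; \mathbb{E}\big[\log\sigma(X)\big] + h(Z).

This is exactly why the lower bounds feature the term E[log sigma(X)]: since I(X;Y) = h(Y) - h(Y|X), the conditional entropy contributes h(Z) + E[log sigma(X)] for every input law.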
induces measure markov chain let output channel input note concavity mutual information input measure obtain since intersection half spaces convex result thus obtain markov chain obtain result pinsker inequality obtain total variation measures note therefore obtain result hence taking obtain cauchy sequence proof end suffices prove exp characteristic function random variable since converge total variation fact convergence total variation stronger weakly convergence obtain characteristic functions also converge pointwise hence suffices prove converge pointwise obtain similarly since converges measure function bounded obtain converges pointwise uniqueness output pdf proof first part proof theorem completes proof proof theorem continuous input measure utilize later result paper namely theorem choosing use corollary observe image infinite length sequence monotone function converged zero infinity sequence obtained pdf makes infinity leads bijective function defined statement theorem order prove let random variable conditioned due continuity fact obtain valid pdf defined since exists obtain hence log therefore exists similar treatment used prove remains construct discrete pmf infinite mutual information statement theorem assumes existence sequence open interval limit sequence converges monotone continuous make following claim existence another sequence certain nice properties claim suppose one find interval exists sequence increasing decreasing continue proof assuming claim correct give proof claim later show claim used construct discrete pmf infinite mutual information consider possibility assumption claim fails interval therefore provide discrete distribution interval result thus consider case assumption claim holds assume increasing construction decreasing similar fix given satisfying take arbitrary pmf log define discrete random variable taking values claim end suffices show proof define random variable following definition mutual information since conclude proof since suffices show equality log follows prove equality note belongs interval therefore since intervals disjoint found thus function result second equality proved remains prove claim existence assume increasing proof decreasing similar assumptions obtain exists result select since monotone two arbitrary distinct since implies result shall worry constraint occur one index delete element sequence ensure show existence provide method find respect method described illustrated figure take arbitrary element observe since continuous increasing functions continuous strictly increasing well therefore case happening converge lim lim hence given due intermediate value theorem exists unique satisfying similarly case happening converge exists unique satisfying easily obtained intervals created way disjoint process stop finite steps therefore theorem proved ascending descending descending ascending figure possible cases proof theorem lemma obtain exists hence utilizing lemma write provided satisfied due last inequality comes assumption theorem lemma log therefore written log exploiting lemma obtain log defined hence proof complete proof theorem prove case increasing function proof theorem decreasing functions similar increasing case need substitute claim consider random variable following definition mutual information therefore since conclude find lower bound lemma obtain continuous random variable claim log log log obtained lemma fact random variable conditioned also continuous moreover obtained adding subtracting term log note assumed needs 
differentiable assumed continuous monotonic however every monotonic function differentiable almost everywhere set points differentiable lebesgue measure zero define equal zero wherever differentiable take derivative wherever differentiable definition continuity integral gives back function log since increasing positive function conclude log log lemma fact integral gives back function log obtain log defined theorem result obtain log using inequality conjunction obtain lower bound lower bound would like prove statement theorem result suffices prove continuous random variables pdf log end observe thus show log proof complete write conditioned pdf denoted defining function obtain conditioned since continuous increasing function bijection inverse function exists moreover since continuous bijection also continuous random variable pdf defined following fyz thus log fyz log log log taking expected value sides achieved therefore theorem proved proof theorem based obtain dsym utilizing lemma obtain pdfs exist therefore dsym log log log log log log log lemma since obtain log log therefore since obtain log measure zero points differentiable affect fyz measure zero points however note fyz always correct thus values fyz measure zero set points important addition log log expanding obtain substituting simplifying write equals cov therefore equations theorem proved proof corollary observe using theorem suffices prove following two inequalities cov cov since increasing obtain also increasing therefore lemma equation proved similarly also obtained lemma increasing functions proof lemma definition obtain log utilizing inequality log suffuces prove end write last inequality holds assumption lemma therefore lemma proved proof lemma conditional pdf easily obtained definition channel order calculate using definition write log exploiting fact obtained remains prove continuous end definition channel obtain cdfs random variables defined respectively order prove claim must show fubini theorem chapter equivalent lim equivalently need show exists since exists therefore since write write take large enough result proved proof lemma since riemann integrable continuous since strictly increasing function support yields injective function exists inverse function define random variable assume pdf since continuous random variable bijection also continuous random variable following pdf hence calculate differential entropy following log log log log therefore lemma proved proof lemma first assume prove general case later case claim support optimal solution needs two members end note following problem equivalent original problem defined max max cov since given would like maximize cov linear function subject also linear function standard cardinality reduction technique fenchel extension caratheodory theorem reduce support two members see appendix discussion technique assume support pmf thus simplify cov cov last equality obtained expanding sums thus problem defined equals following max claim optimal choice see observe increasing functions hence hence optimal substituting obtain problem equivalent following max utilizing kkt conditions one obtains optimal solution consider general case convex function necessarily linear since convex obtain right hand side line connects two points line lies curve therefore thus implies relax optimization problem consider max cov solution optimization problem upper bound original problem feasible set original problem subset feasible set relaxed optimization problem using similar ideas linear case 
conclude support optimal two members optimal solution verified note case obtain distributed optimal probability measure result constraint redundant therefore support optimal two members shows upper bound tight case conclusion paper studied capacity class additive noise channels channels importance molecular optical communication also gave number new application channels introduction set necessary set sufficient conditions finiteness capacity given introduced two new techniques proving explicit lower bounds capacity result obtained two lower bounds capacity lower bounds helpful inspecting channel capacity becomes infinity also provided upper bound using symmetrized divergence bound references moser capacity results optical intensity channel gaussian noise ieee transactions information theory vol pierobon akyildiz noise analysis molecular communication nanonetworks ieee transactions signal processing vol aminian ghazani mirmohseni fekri capacity molecular communications ieee transactions molecular biological communications vol arjmandi gohari bateni nanonetworking new modulation technique performance analysis ieee communications letters vol gohari mirmohseni information theory molecular communication directions challenges appear ieee transactions molecular biological communications srinivas eckford adve molecular communication fluid media additive inverse gaussian noise channel ieee transactions information theory vol khormuji capacity molecular communication aign channel information sciences systems ciss annual conference ieee moser guo capacity memoryless additive inverse gaussian noise channel ieee journal selected areas communications vol farsad murin eckford goldsmith capacity limits molecular timing channels chan hranilovic kschischang probability measure conditionally gaussian channels bounded inputs ieee transactions information theory vol smith information capacity sclar gaussian channels information control vol jiang wang wang dai tight upper bound channel capacity visible light communications ieee communications letters vol lapidoth moser wigger capacity optical intensity channels ieee transactions information theory vol chen hajek koetter madhow fixed input distributions noncoherent communication channels vol aminian arjmandi gohari mitra capacity diffusionbased molecular communication networks channels ieee transactions molecular biological communications vol ihara information theory continuous systems singapore world scientific cover thomas elements information theory new york john wiley sons topsoe information theoretical identity problem involving capacity studia scientiarum math hungarica vol ghourchian gohari amini existence continuity differential entropy class distributions ieee communications letters stein shakarchi real analysis measure theory integration hilbert spaces new jersey princeton university press gamal kim network information theory cambrdige university press proof equation take arbitrary equation holds utilize change variables integrals note similarly therefore proof complete
| 7 |
Stochastic bandits robust to adversarial corruptions

Thodoris Lykouris, Vahab Mirrokni, Renato Paes Leme

Abstract. We introduce a new model of stochastic bandits with adversarial corruptions, which aims to capture settings where most of the input follows a stochastic pattern but some fraction of it can be adversarially changed to trick the algorithm, e.g., click fraud, fake reviews and email spam. The goal of this model is to encourage the design of bandit algorithms that (i) work well in mixed adversarial and stochastic models, and (ii) whose performance deteriorates gracefully as we move from fully stochastic to fully adversarial models. In our model, the rewards for all arms are initially drawn from a distribution and are then altered by an adaptive adversary. We provide a simple algorithm whose performance gracefully degrades with the total corruption the adversary injected in the data, measured by the sum across rounds of the biggest alteration the adversary made in the data in that round; this total corruption is denoted by C. Our algorithm provides a guarantee that retains the optimal guarantee (up to a logarithmic term) if the input is stochastic and whose performance degrades linearly with the amount of corruption C, while crucially being agnostic to it. We also provide a lower bound showing that this linear degradation is necessary if the algorithm achieves its optimal performance in the stochastic setting (the lower bound works even for a known amount of corruption, a special case in which our algorithm achieves optimal performance without the extra logarithm).

(Thodoris Lykouris is with Cornell University, email teddlyk; his work was supported by an NSF grant, and part of this work was done while interning at Google. Vahab Mirrokni is with Google Research, email mirrokni. Renato Paes Leme is with Google Research, email renatoppl.)

Introduction. Online learning with bandit feedback is the problem where a learner needs to decide at each time among alternative actions (arms) of unknown quality, facing a tradeoff between exploiting profitable past actions and exploring new actions about which there is little information. Bandit problems are typically classified according to how the rewards are generated. In stochastic bandits, the rewards are drawn from fixed but unknown distributions, which models settings where the alternatives follow particular patterns and do not react to the learner. At the other extreme are adversarial bandits, which are robust even to rewards specifically designed to trick the learner. In this paper we focus on settings where the overall behavior is essentially stochastic but a small fraction of the rewards can be adversarially changed.

Classic stochastic bandit algorithms like Upper Confidence Bound (UCB) and Active Arm Elimination (AAE) base most of their decisions on observations made in an initial phase of the algorithm, and therefore can be easily tricked into incurring linear regret if the arms are corrupted in that phase. Adversarial bandit algorithms, like EXP3, are not fooled by such tricks, but they do not exploit the fact that the input is mostly stochastic. Our goal is to robustify the stochastic setting by designing algorithms that can tolerate corruptions while still being able to exploit the stochastic nature of the input. The algorithms we design are agnostic to the corruption: they can tolerate any level of corruption, and the guarantee degrades gracefully as more corruption is added. Moreover, we prove lower bounds showing that our results are tight up to a logarithmic factor. Before we explain our technical contribution in detail, we describe some examples of settings we have in mind.

Click fraud. An online advertising platform selects for each pageview an ad to display and obtains a certain reward if the user clicks. The click probabilities are unknown, and there is a tension between repeatedly displaying a particular ad that has been profitable and provides reliable revenue, and exploring other, potentially more rewarding options. This is a major application of stochastic bandits in the ads industry, and it would be a textbook example of stochastic bandits were it not for the phenomenon known as click fraud: botnets maliciously simulate users clicking on an ad to trick learning algorithms. One example is a bot consistently making searches that trigger a certain ad and not clicking on it, to make it seem like that ad has a low click-through rate, in order to boost a competitor.

Recommendation systems. A platform recommending activities or services to its users faces a similar tradeoff: suggesting new restaurants leads to faster learning of the best spots, but may result in dissatisfaction of customers who are led to disappointing experiences. The inputs mostly follow a stochastic pattern, but they are typically corrupted, either maliciously, as with fake reviews planted by competitors, or not, as with construction that makes a restaurant less desirable for a certain interval. The corruption may exhibit arbitrary patterns and, in particular, is not identically distributed over time.
Yet, all this corruption is dwarfed by the fact that the input is mostly stochastic. There are several more examples: emails mostly follow a stochastic pattern except for a fraction of spam designed to trick algorithms; internet searches follow a predictable pattern except for certain spikes caused by unpredictable events; the data collection used in an econometric process often suffers from errors that affect a small part of the input. In all these cases, the vast majority of the input follows a predictable pattern and only a fraction of the samples are corrupted.

Our contribution

Model. In this paper we introduce a new model of stochastic bandits with adversarial corruptions. The goal of this model is to encourage the design of bandit algorithms that work well in mixed adversarial and stochastic models, and whose performance deteriorates gracefully as we move from fully stochastic to fully adversarial models. In our model there are K arms, each associated with a fixed reward distribution. In each round t, a random reward r_t^S(a) is drawn for each arm a; the adversary can change each reward to an arbitrary value, possibly using information about the realizations in the current and previous rounds as well as the probability that the learner puts on each arm. The learner then draws an arm and obtains its (possibly corrupted) reward r_t(a) as feedback. We say the adversary is C-corrupted if on every sample path the total injected corruption, the sum over rounds t of max_a |r_t(a) - r_t^S(a)|, is at most C.

Results. Our main result is a learning algorithm, which we term the Active Arm Elimination Race, that with probability at least 1 - delta has regret of order

sum over a != a* of ( K * C * log(KT/delta) / Delta(a) + log(KT/delta) * log(T) / Delta(a) ),

where the gap Delta(a) of an arm a is the difference between the stochastic means of arm a and of an optimal arm a*. For arms with a small gap, the inverse dependence on the gap can be replaced by an appropriate sublinear function of T, and it is possible to improve the bound by a logarithmic factor if one is satisfied with bounding the maximum expected regret against a fixed arm (the pseudo-regret). The important features of our guarantee are the following.

Agnostic: the algorithm does not need to know the corruption level C; the guarantee is provided with respect to however much corruption was added in retrospect. If the corruption level is known, we can remove the dependence on one logarithmic factor, as shown in our second theorem.

High probability: the bounds hold with high probability, which is important for practical applications like the ones described above. This contrasts with the weaker definition of pseudo-regret, which often hides events of large regret that are offset by events of large negative regret.

Stochastic case: the case C = 0 corresponds to the stochastic case, in which we recover, up to one logarithmic factor, the guarantee provided by the UCB algorithm: our algorithm obtains regret of order the sum over a != a* of log(KT/delta) * log(T) / Delta(a) with probability 1 - delta, while UCB obtains this bound without the extra log term.

In our second result, we show that an algorithm that knows an upper bound C on the corruption provides a regret guarantee of order the sum of log(KT/delta)/Delta(a) for stochastic input and of order the sum of C * log(KT/delta)/Delta(a) under corruption at most C. In other words, if we only need to tolerate either a known level of corruption or zero corruption, we can save the extra logarithmic factor and match the bound provided by UCB in the stochastic case.

Another question is whether the linear dependence on the corruption level C is tight. Later in the paper we show that it cannot be improved upon without decay in the stochastic guarantee, that is, while still guaranteeing logarithmic regret when the input is stochastic. The lower bound is an adaptation to the adversarially corrupted setting of a result of Auer and Chiang, and it holds even in the case where the amount of corruption is known, a level for which our algorithm provides a matching upper bound. We prove, informally: for any algorithm that achieves logarithmic regret in the stochastic setting, and for every corruption level C, there is a C-corrupted instance on which the algorithm incurs regret of order C with constant probability.

Our algorithm can also be viewed through the lens of the best-of-both-worlds literature, whose goal is to design algorithms that simultaneously provide logarithmic regret guarantees in the stochastic regime and square-root guarantees in the adversarial one. Later in the paper we sketch how our algorithm can be appropriately modified to obtain the refined guarantee when the corruption is small and a worst-case guarantee otherwise. Observe that the results in the best-of-both-worlds literature correspond to the extreme cases of our model; note also that the square-root bounds there are obtained for the pseudo-regret rather than the regret.

Techniques. The starting point of our design are the classical stochastic bandit learning algorithms, like UCB and active arm elimination. These algorithms are susceptible to corruptions, since they base most of their decisions on a small initial exploration phase; therefore, with a small number of corruptions it is possible to completely trick the algorithm into eliminating the optimal arm. We address this issue by robustifying them using a multi-layer approach: our learning algorithm consists of multiple layers running in parallel, with the layers having decreasing speed and increasing tolerance to corruption. The first layer finishes fast, selecting an arm it believes optimal, but provides little tolerance to corruption; subsequent layers are more robust but also slower. The resulting algorithm is a race between the different layers for picking the optimal arm: the fastest layer finishes first and provides a first crude estimate of the optimal arm, and as slower layers finish we obtain finer and finer estimates. Our second main idea is how the slower layers actually come to tolerate corruption, which we describe next.
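That second idea, subsampling, is detailed in the passage that follows; it rests on a one-line computation, which we restate here in our own words under the assumption that layer l is selected independently with probability 2^{-l} in each round (here c_t(a) = r_t(a) - r_t^S(a) is the per-round corruption and rewards lie in [0, 1]):

\mathbb{E}\Big[\sum_{t=1}^{T}\mathbf{1}\{\text{layer } \ell \text{ plays at } t\}\cdot\max_a |c_t(a)|\Big]
\;=\; 2^{-\ell}\sum_{t=1}^{T}\max_a |c_t(a)| \;\le\; \frac{C}{2^{\ell}} \;\le\; 1
\qquad\text{whenever } 2^{\ell}\ge C.

Since each summand lies in [0, 1], a Chernoff-type bound for martingales (needed because the corruptions are chosen adaptively) upgrades this to an observed corruption of order log(1/delta) with probability 1 - delta, which is why robust layers can afford confidence widths enlarged by only an additive logarithmic term.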
robust algorithms subsampling layer selected probability receives expectation corruption injected adversary low enough layer behaves almost stochastic finally couple different layers together process global eliminations process enables slower layers eliminate arms faster layers process necessary preventing inaccurate layers pulling suboptimal arms often related work online learning stochastic rewards goes back seminal work lai robbins case adversarial rewards introduced auer reader referred books lugosi bubeck slivkins elaborate overview area two extremes suffer orthogonal problems one overoptimistic expecting rewards come distribution one pessimistic order protected malicious adversaries work addresses middle ground rewards come distributions often adversarially corrupted motivated stochastic learning algorithms even small corruption levels closely related work lie works best worlds guarantees works achieve logarithmic factors optimal guarantee stochastic rewards optimal actual regret guarantee adversarial rewards bubeck slivkins auer chiang begin stochastic algorithm test whether encounter behavior case switch adversarial algorithm contrast seldin begin adversarial algorithm optimistic learning rate adapt encounter behavior recently independently work wei luo provide best worlds result guarantee adversarial setting via novel analysis omd algorithm foster although aforementioned algorithms elegant analysis robust inputs slightly away stochastic work bridges gap designing algorithms smooth behavior instances works attempt provide improved guarantees adversarial setting instances well behaved hazan kale offer regret guarantees scale variance losses instead time horizon guarantee meaningful settings predictable nature usually performance routing however address applications stochastic bandits click fraud example rewards come bernoulli distributions variance distribution high even input totally stochastic another approach work shamir szlak consider input adversarial random local permutations applied obtain benign instance approach relevant settings like buffering applicable settings opposite side attempting provide improved guarantees stochastic setting enhancing range active area research instance moss algorithm audibert bubeck provides optimal upper bound stochastic bandits retaining optimal stochastic guarantee algorithm garivier provides improved constants upper bound stochastic guarantee matching lower bound lai robbins bernoulli rewards robust ucb algorithm extends results rewards replacing weaker assumption bounded variance however results robust corruptions adaptive adversary due deterministic nature since adversary knows arm learner select always corrupt optimal arm whenever selected therefore cause learner either play multiple times even suboptimal decide playing even small amount corruption similarly lower bound also prior work incorporating corruptions online decision making online learning front two attempts best knowledge best worlds result seldin slivkins allow contamination data long obliviously selected decrease gap factor second work recent paper gajane suggest model corrupted feedback aiming differential privacy unlike model corruptions neither adversarial adaptive works make benign assumptions nature corruption address main roadblock settings consider adversarial saboteur try add faulty data beginning change order two arms minimal corruption achieve goal closer model works robust allocation online matching corrupted data unlike online matching though online learning 
we cannot evaluate the optimum at every round, since the algorithm's decisions also affect the information it observes. Finally, learning in the presence of corruptions has recently received great attention in the batch learning setting: recent works study inference in the presence of adversarially corrupted data, the design of estimators that are robust to corrupted data, and learning in auctions with faulty data due to econometric errors. Our work suggests a similar framework for studying online learning robust to adversarial corruptions, a more challenging problem since, in sequential decision making, decisions also affect the information observed.

Model: corrupted stochastic bandits. We study an online bandit learning setting with K arms, where each arm a is associated with a distribution F(a) with mean mu(a). The distributions are assumed to have positive measure only on [0, 1], and the rewards are unknown to the learner. We refer to the arm of maximum mean as the optimal arm a*. We consider an adversary who can corrupt some of the stochastic rewards. The adversary is adaptive, in the sense that the corrupted rewards can be a function of the realization of the stochastic rewards at the current and previous rounds, as well as of the learner's choices in previous rounds. Formally, the protocol between the learner and the adversary in each round t is the following: the learner picks a distribution w_t over the K arms; stochastic rewards r_t^S(a) ~ F(a) are drawn for each arm; the adversary observes these realizations, as well as the rewards and choices of the learner in previous steps, and returns a corrupted reward r_t(a) for each arm; finally, the learner draws an arm a_t ~ w_t and observes its corrupted reward r_t(a_t).

Regret notions. The regret corresponds to the difference between the reward obtained by the algorithm and the reward of the best arm in hindsight: Reg = max over a of the sum over t of (r_t(a) - r_t(a_t)). The regret is a random variable that depends on the random rewards, the randomness used by the learner, and the randomness of the adversary. We say that a regret bound R holds with probability 1 - delta if the probability that Reg exceeds R is at most delta, taken over the three sources of randomness just described. Note that speaking of one arm with optimal mean does not preclude the existence of other arms with the same mean; if more than one such arm exists, we let a* be an arbitrary arm of optimal mean. For all other arms, the gap Delta(a) is the difference between the optimal mean and mu(a). Finally, a weaker notion, the pseudo-regret, compares the expected performance of the learner to the arm with the highest expected performance: PseudoReg = max over a of E[sum over t of (r_t(a) - r_t(a_t))]. Note that PseudoReg is at most E[Reg], and one can often obtain improved bounds for pseudo-regret since it allows offsetting events of large positive regret with events of large negative regret.

Upper bound: the Active Arm Elimination Race

Active arm elimination. The starting point of our design is the active arm elimination algorithm for stochastic bandits, which can be viewed as an alternative presentation of the famous UCB algorithm. It is based on the following idea. In an initial exploration phase, we pull arms in a round-robin fashion and compute, for each arm, an estimate of its mean given by its average empirical reward. If arm a has been pulled n(a) times, the usual concentration arguments establish that with probability at least 1 - delta the difference between the empirical and actual means is at most the confidence-interval width wd(a), of order sqrt(log(KT/delta)/n(a)). In particular, given two arms, once the difference of their empirical means becomes larger than the sum of the widths of their confidence intervals, the arm with the lower empirical mean is, with high probability, not optimal. When that happens, the algorithm eliminates that arm, removing it from the round-robin rotation. Compared against the optimal arm, a suboptimal arm a is eliminated once it has been pulled about log(KT/delta)/Delta(a)^2 times, when the confidence intervals become small enough. Eventually all arms but the optimal one are eliminated and we enter what we call the exploitation phase, in which we only pull the arm of optimal mean. Since we enter exploitation having pulled each suboptimal arm a at most about log(KT/delta)/Delta(a)^2 times, and each of those suboptimal pulls incurs regret Delta(a) in expectation, this leads to a bound of order the sum over a != a* of log(KT/delta)/Delta(a); the bound can also be converted to a high-probability regret bound by suitably enlarging the logarithmic term. For arms with a small gap, the inverse dependence on the gap may initially seem vacuous: for instance, with two optimal arms of equal mean the upper bound becomes infinite. However, the inverse dependence on the gap can be replaced by a sublinear dependence on T, both for pseudo-regret and, for reasons related to the variance of the rewards, for actual regret; for simplicity of exposition we omit this in the current section and demonstrate how to perform the replacement later in the paper. A sketch of this baseline procedure appears right below; we then explain how it breaks under corruption.
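The following is our own minimal sketch of the procedure just described; the environment, constants, and function names are illustrative, not the paper's:

import math, random

def active_arm_elimination(K, T, delta, pull):
    # pull(a) returns a reward in [0, 1]; mean/n hold empirical means and counts.
    mean, n, active = [0.0] * K, [0] * K, set(range(K))
    def wd(a):  # confidence width: |mean[a] - mu(a)| <= wd(a) w.h.p.
        return math.sqrt(math.log(4 * K * T / delta) / max(n[a], 1))
    for _ in range(T):
        a = min(active, key=lambda i: n[i])       # round robin over active arms
        r = pull(a)
        n[a] += 1
        mean[a] += (r - mean[a]) / n[a]
        lead = max(active, key=lambda i: mean[i])
        # eliminate arms whose interval falls strictly below the leader's
        active -= {i for i in active
                   if mean[lead] - mean[i] > wd(lead) + wd(i)}
    return max(range(K), key=lambda i: mean[i])

# Illustrative Bernoulli environment with means 0.9, 0.8, 0.5:
rng = random.Random(0)
mus = [0.9, 0.8, 0.5]
best = active_arm_elimination(3, 20_000, 0.05, lambda a: float(rng.random() < mus[a]))
print(best)  # prints 0 with high probability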
By corrupting the first $O(\log T)$ steps, the adversary can cause the algorithm to eliminate the optimal arm; since the algorithm never again pulls arms it eliminated during exploration, it is not able to ever recover. One initial idea to fix this problem is to enlarge the confidence intervals. Decompose the rewards into two terms, $\tilde r_t(a) = r_t(a) + c_t(a)$, where the first term comes from the stochastic reward and the second is the corruption introduced by the adversary. If $C$ is (an upper bound on) the total corruption introduced by the adversary, widths of the form $\mathrm{wd}(a) \approx \sqrt{\log(KT/\delta)/n_a} + C/n_a$ and a similar analysis give the following regret bound.

Theorem. If $C$ is a valid upper bound on the corruption, active arm elimination with the enlarged widths above has, with probability at least $1-\delta$, regret of order $\sum_{a\ne a^\star} \big(C + \log(KT/\delta)\big)/\Delta_a$.

Proof sketch. The proof follows the standard analysis of active arm elimination. We first establish that with high probability the optimal arm is never inactivated (first lemma below) and then upper bound the number of times each suboptimal arm is played (second lemma). The pseudo-regret guarantee directly follows by multiplying the number of plays of each arm by its gap; for the high-probability guarantee we need to also show that the regret incurred in the meantime is not much larger. We provide the proof details of the theorem and lemmas in the appendix.

Lemma. With probability at least $1-\delta$, the arm $a^\star$ never becomes inactivated.

Lemma. With probability at least $1-\delta$, all suboptimal arms $a$ become inactivated after $O\big((C + \log(KT/\delta))/\Delta_a^2\big)$ plays.

Stochastic bandits robust to a known corruption level. The drawback of the active arm elimination algorithm with enlarged confidence intervals is that, even when there are no corruptions, it still incurs regret proportional to $C$. Our main theorem in this section provides an algorithm that achieves the usual $O\big(\sum_{a\ne a^\star}\log(T)/\Delta_a\big)$ bound if the input is purely stochastic, while at the same time achieving the robust guarantee if the input has corruption at most a known level $C$; in the next subsection we modify the algorithm to make it agnostic to the corruption level.

Two instances of active arm elimination. The first idea is to run two instances of active arm elimination: the first is supposed to select the correct arm if there is no corruption; the second is supposed to select the right arm if the corruption is at most $C$. The first instance is fast but not robust to corruptions; the second instance is slower but, in a precise sense, can tolerate corruptions. Since the second instance is more trustworthy, whenever it decides to eliminate a certain arm, we eliminate that arm in the faster instance as well. To decrease the corruption it experiences and keep the regret low when the input is stochastic, the second instance of active arm elimination should not pull suboptimal arms too many times; therefore the technique of the theorem above alone is not enough. The main idea of the algorithm is to make each arm behave almost stochastically for the second instance, by running the second instance with low probability: the learner selects to run the second instance only with probability $1/C$. If the adversary adds a certain amount of corruption at a certain round, the second instance observes that corruption only with probability $1/C$; therefore the expected amount of corruption the learner observes in the second instance is constant, which makes the arms behave almost like stochastic arms for that instance, so that a standard learning algorithm can cope.

We obtain our algorithm by combining the ideas above into two instances of active arm elimination, denoted $F$ (fast) and $S$ (slow). Each instance keeps an estimate of the mean of each arm, namely the average empirical reward $\tilde\mu^F_a$ or $\tilde\mu^S_a$, and also keeps track of how many times each arm was pulled by that instance, $n^F_a$ and $n^S_a$; this allows us to define a notion of confidence interval for both instances. For $F$ we define $\mathrm{wd}^F(a) \approx \sqrt{\log(KT/\delta)/n^F_a}$ as usual; for the slow instance we define slightly larger confidence intervals, $\mathrm{wd}^S(a) \approx \sqrt{\log(KT/\delta)/n^S_a} + \log(KT/\delta)/n^S_a$ (the reason will become clear in a moment). Each instance also keeps a set of eliminated arms. In each round, with probability $1 - 1/C$ we make a move in the fast instance: we choose the next active arm in round-robin order (the active arm played least often), pull it, increase $n^F_a$, and update $\tilde\mu^F_a$ accordingly; as usual, whenever for two active arms $\tilde\mu^F_{a'} - \mathrm{wd}^F(a') > \tilde\mu^F_a + \mathrm{wd}^F(a)$, we eliminate $a$, adding it to the eliminated set. With the remaining probability $1/C$ we make a move in the slow instance, executing the exact same procedure on the state of $S$, with one difference that causes the two instances to be coupled: if $S$ inactivates an arm, we eliminate that arm in $F$ as well. This leaves a potential problem: it is possible that all arms of instance $F$ end up eliminated; if we reach that point, we play an arbitrary active arm of the slow instance. The resulting algorithm is formally provided as Algorithm 1 (a sketch is given below). Towards the performance guarantee, the first step is a lemma bounding the amount of corruption that actually enters the slow active arm elimination instance, which in turn enables the regret guarantee.
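The following sketch mirrors the two-instance race just described, for a known corruption bound $C$. The structure (subsampled slow instance, enlarged widths, global elimination) follows the text; variable names and constants are our own choices.

```python
import math
import random

def aae_race_known_C(K, T, C, pull, delta=0.1):
    """Two coupled active-arm-elimination instances: F (fast) and S (slow).

    S is advanced only with probability 1/C, so in expectation it observes
    O(1) corruption; when S eliminates an arm, F eliminates it too.
    """
    inst = {name: {"mean": [0.0] * K, "n": [0] * K, "active": set(range(K))}
            for name in ("F", "S")}

    def width(name, a):
        n = inst[name]["n"][a]
        if n == 0:
            return float("inf")
        log_t = math.log(4 * K * T / delta)
        w = math.sqrt(log_t / n)
        if name == "S":                 # slightly enlarged width for S
            w += log_t / n
        return w

    def step(name):
        if not inst[name]["active"]:    # F fully eliminated: fall back to S
            name = "S"
        I = inst[name]
        if not I["active"]:
            return
        a = min(I["active"], key=lambda i: I["n"][i])
        r = pull(a)
        I["mean"][a] = (I["mean"][a] * I["n"][a] + r) / (I["n"][a] + 1)
        I["n"][a] += 1
        for b in list(I["active"]):
            if len(I["active"]) > 1 and any(
                I["mean"][c] - width(name, c) > I["mean"][b] + width(name, b)
                for c in I["active"]
            ):
                I["active"].discard(b)
                if name == "S":         # global elimination: S overrules F
                    inst["F"]["active"].discard(b)

    for _ in range(T):
        step("F" if random.random() > 1.0 / max(C, 1) else "S")
    return inst
```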
Lemma. With probability at least $1-\delta$, the slow active arm elimination instance observes at most $O(\log(1/\delta))$ corruption during the exploration phase (recall that it is picked with probability $1/C$).

Proof sketch. If one only cared about the expected corruption that affects $S$, it would be a constant: the total corruption is at most $C$ and each unit of it affects $S$ with probability $1/C$. To prove the high-probability guarantee we require a concentration inequality for martingale differences, since the corruptions are adaptively selected by the adversary; we provide the details in the appendix.

Theorem. Algorithm 1, run with widths $\mathrm{wd}^S(a) \approx \sqrt{\log(KT/\delta)/n^S_a} + \log(KT/\delta)/n^S_a$ and $\mathrm{wd}^F(a) \approx \sqrt{\log(KT/\delta)/n^F_a}$, has with probability at least $1-\delta$ regret of order $\sum_{a\ne a^\star}\log(KT/\delta)/\Delta_a$ in the purely stochastic case and of order $\sum_{a\ne a^\star} C\log(KT/\delta)/\Delta_a$ in the case of corruption at most $C$.

Proof sketch. The result for the stochastic case follows from standard arguments about stochastic bandit algorithms, since we obtain at most double the regret by setting up and running two algorithms with essentially the same confidence intervals. For the corrupted case, we establish via the lemma an upper bound on the corruption that can affect the slow active arm elimination instance; thanks to this upper bound being close to constant, instead of depending on $C$, we only need slightly enlarged widths, which allows us to incur only a small extra dependence in the stochastic case. With this upper bound we can apply the analysis of the previous section and get an upper bound on the number of plays of suboptimal arms by $S$. Since the algorithms are coupled, this bound implies an upper bound on the regret that $F$ can cause as well: in expectation, if an arm is played $n$ times by $S$, it may be selected at most about $C \cdot n$ times by $F$ prior to getting eliminated, as $F$ is selected roughly $C$ times more often. To obtain the guarantee with high probability rather than in expectation, we lose an extra logarithmic factor. The details of the proof are provided in the appendix.

Stochastic bandits robust to an agnostic corruption level. Multiple layers of active arm elimination. In the previous subsection we designed an algorithm with two layers: one faster, which cannot tolerate corruptions, and a second one, slower but robust. In order to be agnostic to the corruption level, we need to plan for all possible amounts of corruption. We achieve this by introducing $\log T$ layers, each layer slower but more robust than the previous one; we achieve this by selecting layer $\ell$ with probability proportional to $2^{-\ell}$. By the argument of the last section, if the corruption level is at most the tolerance $2^\ell$ of layer $\ell$, then layer $\ell$ observes $O(1)$ corruption in expectation and $O(\log(\cdot))$ corruption with high probability.

Global eliminations. We couple the $\log T$ instances by what we call global eliminations: if an arm is eliminated in layer $\ell$, we eliminate it in all layers below $\ell$. This is important to prevent pulling an arm too often: if arm $a$ is suboptimal then, regardless of the adversary, $a$ eventually becomes eliminated in some sufficiently robust layer $\ell^\star$; until it is pulled enough there, which takes about $2^{\ell^\star}$ iterations per pull since layer $\ell^\star$ is played with probability about $2^{-\ell^\star}$, the arm can be played by the lower layers, but once it happens the arm is eliminated globally, and in either case the total regret it causes is bounded.

Active arm elimination race. We now describe the main algorithm of the paper; we call it a race since we can view the multiple layers as racing to pick the optimal arm: the less robust layers are faster and arrive first, and we keep choosing mostly according to them until the more robust, slower layers finish and either confirm or correct the current selection of the best arm. The algorithm keeps $\log T$ different instances of active arm elimination; instance $\ell$ has as state the empirical means $\tilde\mu^\ell_a$ of each arm, the number of times each arm was pulled $n^\ell_a$, and a set of inactive arms. The width of the confidence interval of arm $a$ in layer $\ell$ is implicitly defined as $\mathrm{wd}^\ell(a) \approx \sqrt{\log(\cdot)/n^\ell_a} + \log(\cdot)/n^\ell_a$. In each round we sample a layer: layer $\ell \in \{1,\ldots,\log T\}$ is sampled with probability $2^{-\ell}$, and with the remaining probability we pick layer $1$. If layer $\ell$ is selected, we make a move in the active arm elimination instance corresponding to that layer: we sample the active arm of layer $\ell$ with the least number of pulls, i.e., the arm minimizing $n^\ell_a$ over the active set; in case the active set is empty, we pull an arbitrary arm that is active at the lowest layer that still contains active arms. The way we couple the different layers is that if an arm $a$ is eliminated in layer $\ell$ while another arm is still active there, we eliminate arm $a$ in all previous layers, keeping this as an invariant.

[Figure: an example state of the algorithm. For each layer $\ell$ and arm $a$ we keep an estimated mean $\tilde\mu^\ell_a$ and a number of pulls $n^\ell_a$; red cells indicate arms eliminated in that layer. An arm eliminated in a layer is eliminated in all previous layers; if in the selected layer all arms are eliminated (like layer 1 in the figure), we play an arbitrary active arm of the lowest layer that contains active arms.]

The algorithm is formally defined as Algorithm 2; a sketch of its control flow is given below.
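A skeleton of the multi-layer race, reflecting the layer sampling and global eliminations described above. This is our paraphrase of Algorithm 2, with illustrative constants; `pull(a)` again abstracts the reward source.

```python
import math
import random

def aae_race(K, T, pull, delta=0.1):
    """Multi-layer active arm elimination race (agnostic corruption level)."""
    L = max(1, int(math.log2(T)))
    layer = [None] + [{"mean": [0.0] * K, "n": [0] * K, "active": set(range(K))}
                      for _ in range(L)]      # layers 1..L

    def sample_layer():
        for l in range(1, L):
            if random.random() < 0.5:
                return l                      # layer l chosen w.p. 2^{-l}
        return L

    def width(l, a):
        n = layer[l]["n"][a]
        if n == 0:
            return float("inf")
        log_t = math.log(4 * K * T * L / delta)
        return math.sqrt(log_t / n) + log_t / n   # enlarged widths

    for _ in range(T):
        l = sample_layer()
        if not layer[l]["active"]:
            nonempty = [lp for lp in range(1, L + 1) if layer[lp]["active"]]
            l = min(nonempty)                 # lowest layer with active arms
        S = layer[l]
        a = min(S["active"], key=lambda i: S["n"][i])
        r = pull(a)
        S["mean"][a] = (S["mean"][a] * S["n"][a] + r) / (S["n"][a] + 1)
        S["n"][a] += 1
        for b in list(S["active"]):
            if len(S["active"]) > 1 and any(
                S["mean"][c] - width(l, c) > S["mean"][b] + width(l, b)
                for c in S["active"]
            ):
                for lp in range(1, l + 1):    # global elimination downwards
                    layer[lp]["active"].discard(b)
    return layer
```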
We now provide the main result of the paper, the regret guarantee for this algorithm.

Theorem. Algorithm 2 is agnostic to the corruption level $C$; run with the widths above, it has, with probability at least $1-\delta$ and up to logarithmic factors, regret of order $\big(KC + \log T\big)\sum_{a\ne a^\star}\log(KT/\delta)/\Delta_a$.

Proof sketch. Similarly to the previous theorem, the regret guarantee comes from a summation over the layers. Essentially, the layers that are robust to the corruption level (tolerance at least $C$) face a stochastic instance plus a logarithmic amount of observed corruption and each incur regret $O\big(\sum_a \log(\cdot)/\Delta_a\big)$; since there are at most $\log T$ such layers, they jointly incur the second, logarithmic term of the theorem. The challenge is to bound the regret incurred by the layers that are not robust to the corruption. However, there exists a minimal layer $\ell^\star$ whose tolerance is at least the corruption level; bounding the number of steps the lower layers require in order to inactivate an arm after layer $\ell^\star$ does (via global eliminations and the lemma), we obtain, similarly to the previous theorem, a bound on the regret caused by each arm in those layers. Since we take the minimum such layer and the tolerances are within powers of $2$ of each other, the fact that the corruption level need not match a tolerance exactly costs only an extra constant factor in the regret. The details of the proof are provided in the appendix.

Lower bound. In the two-arm case with gap $\Delta$ between the arms, the theorem above presents an algorithm that achieves $O(\log T/\Delta)$ regret for stochastic input and $O(C\log T/\Delta)$ regret, with high probability, for input with corruption level $C$. We now show that this linear dependence on $C$ is tight. The lower bound in the theorem below adapts a technique of Auer and Chiang to the adversarially corrupted setting. The main idea is that an algorithm with logarithmic regret in the stochastic setting can query the suboptimal arm only $O(\log T)$ times; this implies that there is a long time period in which the learner queries that arm at most a constant number of times. By corrupting only the rounds of this period, the adversary can make the optimal arm look suboptimal and trick the learner into not pulling the optimal arm for a long time, causing large regret. The second theorem below adapts the argument to bound the expected positive regret $\mathbb{E}[\max(\mathrm{Reg},0)]$; note that high-probability bounds as the ones we provide imply bounds on the expected positive regret. The proofs are provided in the appendix.

Theorem. Consider any multi-armed bandits algorithm with the property that, for stochastic input in the two-arm setting, its pseudo-regret is bounded by $c\log T$. For any corruption level $C$, there is an instance with corruption at most $C$ in which, with constant probability, the regret is $\Omega(C)$.

Theorem. For any multi-armed bandits algorithm whose pseudo-regret for stochastic input in the two-arm setting is bounded by $c\log T$, and any corruption level $C$, there is an instance with corruption at most $C$ in which the expected positive regret is $\Omega(C)$.

Extensions. In this section we discuss extensions of our algorithm to accommodate related objectives. Refined definition of corruption. In the results presented so far we measure corruption as the sum over rounds of the maximum across arms of the corruption injected by the adversary, $C = \sum_t \max_a |\tilde r_t(a) - r_t(a)|$. In fact, our results can be improved by using the per-arm quantities $C_a = \sum_t |\tilde r_t(a) - r_t(a)|$, replacing the max by the summand of the arm in question. Formally, the main theorem becomes:

Theorem. Algorithm 2, agnostic to corruptions and run with the widths above, has, up to logarithmic factors, regret of order $\sum_{a\ne a^\star}\big(K\max(C_{a^\star}, C_a) + \log T\cdot\log(KT/\delta)\big)/\Delta_a$. The proof follows the same arguments, since each arm is always compared against $a^\star$. This result is nice since the contribution of each arm to the regret is a function of its own gap and of the corruption injected at that arm and at the optimal arm, rather than the corruption injected anywhere. The dependence on the corruption at the optimal arm is essential, since the main attack we presented, as in the classical lower-bound arguments, corrupts only the optimal arm; the lower bound of the previous section also adds corruption only there.

Dependence on the gaps. Our guarantees so far have an inverse dependence on the gaps of the arms. Note that such a guarantee becomes completely meaningless if some arms have very small gap: for instance, if there exist two near-optimal arms, the gap of one of them makes the presented bound enormous, and at gap zero infinite and therefore vacuous. As hinted before, though, this inverse dependence can be improved for arms with small gaps. Our proofs generally relied on establishing an upper bound on the number of times a suboptimal arm is played, thereby providing an upper bound on the regret it can cause. An alternative analysis is to say that, even if an arm were erroneously selected every single time, we can upper bound the loss in performance it causes. For the pseudo-regret, the performance loss if arm $a$ is selected every single time is $T\Delta_a$. For the actual regret one needs to also take into consideration the variance: even if the same arm is selected every single time, a Hoeffding bound shows that its total reward is, with high probability, no lower than its expectation by more than $O(\sqrt{T\log(1/\delta)})$. As a result, the inverse dependence in the bound can be replaced by $\min\big(\cdot,\, T\Delta_a\big)$ for the pseudo-regret and $\min\big(\cdot,\, T\Delta_a + O(\sqrt{T\log(1/\delta)})\big)$ for the actual regret. Moreover, the careful reader may have noticed that in the first theorem the dependence on $C/\Delta_a$ can be replaced by a sole dependence on $C$ without the gap; however, this does not extend to the subsequent theorems, since there the dependence does not come from an upper bound on the corruption experienced, which is only logarithmic thanks to subsampling; instead, the dependence comes from projecting to the correct layer, the smallest layer robust to the corruption, and, for the previous layers, from the number of times they take to eliminate a suboptimal arm.

Uncorrupted objective. In applications such as spam, corruptions should not be counted as part of the rewards, and the algorithm should provide a guarantee with respect to the uncorrupted rewards. The difference between the performances under the two objectives is at most $O(C)$. One can also observe that a linear dependence on $C$ is still necessary in this objective: consider two arms where the adversary corrupts the first $C$ steps, making the arms look identical to the learner; the learner has no better option than randomly selecting between the two, which gives $\Omega(C)$ regret for the uncorrupted objective as well.
Note that, unlike the lower bound of the previous section, in this setting the linear dependence on $C$ is necessary unconditionally, regardless of the performance of the algorithm in the stochastic setting.

Towards best of both worlds. The previous section showed that the logarithmic dependence on $T$ in the stochastic setting comes at the expense of a linear dependence on the corruption if we focus on the actual regret. An interesting direction is to achieve an improvement on either side: either a higher power of the logarithm in the stochastic setting, or aiming at pseudo-regret instead. In fact, we can combine our algorithm with the SAPO algorithm of Auer and Chiang to achieve a bicriteria guarantee: the specified algorithm can achieve an $O(\sqrt{T})$ pseudo-regret guarantee when the corruption is large and a logarithmic guarantee otherwise; notice that the case $C = 0$ corresponds to best of both worlds. This is done via running the SAPO algorithm at level $\log T$, i.e., with probability $1/T$, instead of the higher layers of our race. The SAPO algorithm guarantees that the pseudo-regret caused by any particular arm is logarithmic if the instance is stochastic and $O(\sqrt{T})$ if it is adversarial; via a beautiful analysis, it keeps credits of negative regret from time intervals where an arm performed well, to avoid testing eliminated arms too often. If the corruption level is less than $\sqrt{T}$, the instance that layer faces behaves as stochastic, causing logarithmic regret; else the instance is corrupted and we can extrapolate from the regret of that layer to the whole algorithm: arms eliminated in that layer are also eliminated below it via global eliminations, and since its regret is multiplied by the inverse of its selection probability, this implies the claimed bound.

Acknowledgements. The authors would like to thank Sid Banerjee, whose lecture notes on stochastic bandits proved very helpful, Andres Munoz Medina, Karthik Sridharan, and Eva Tardos for useful discussions, Manish Raghavan for suggestions on the write-up, and the anonymous reviewers for valuable feedback that improved the presentation of the paper.

References
J.-Y. Audibert and S. Bubeck. Minimax policies for adversarial and stochastic bandits. In Proceedings of the Annual Conference on Learning Theory (COLT).
P. Auer and C.-K. Chiang. An algorithm with nearly optimal pseudo-regret for both stochastic and adversarial bandits. In Proceedings of the Annual Conference on Learning Theory (COLT).
P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning.
P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing.
S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning.
S. Bubeck, N. Cesa-Bianchi, and G. Lugosi. Bandits with heavy tail. IEEE Transactions on Information Theory.
A. Beygelzimer, J. Langford, L. Li, L. Reyzin, and R. E. Schapire. Contextual bandit algorithms with supervised learning guarantees. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS).
S. Bubeck and A. Slivkins. The best of both worlds: stochastic and adversarial bandits. In Proceedings of the Annual Conference on Learning Theory (COLT).
N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA.
Y. Cai and C. Daskalakis. Learning multi-item auctions with (or without) samples. In Proceedings of the IEEE Annual Symposium on Foundations of Computer Science (FOCS).
I. Diakonikolas, G. Kamath, D. Kane, J. Li, A. Moitra, and A. Stewart. Robust estimators in high dimensions without the computational intractability. In Proceedings of the IEEE Annual Symposium on Foundations of Computer Science (FOCS).
H. Esfandiari, N. Korula, and V. Mirrokni. Online allocation with traffic spikes: mixing adversarial and stochastic models. In Proceedings of the ACM Conference on Economics and Computation (EC).
E. Even-Dar, S. Mannor, and Y. Mansour. Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. Journal of Machine Learning Research.
D. Foster, Z. Li, T. Lykouris, K. Sridharan, and E. Tardos. Learning in games: robustness of fast convergence. In Advances in Neural Information Processing Systems (NIPS).
A. Garivier and O. Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. In Annual Conference on Learning Theory (COLT).
P. Gajane, T. Urvoy, and E. Kaufmann. Corrupt bandits for privacy preserving input. In International Conference on Algorithmic Learning Theory (ALT).
E. Hazan and S. Kale. Better algorithms for benign bandits. In Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms (SODA).
T. L. Lai and Herbert Robbins. Asymptotically efficient
adaptive allocation rules. Advances in Applied Mathematics.
V. Mirrokni, S. Oveis Gharan, and M. Zadimoghaddam. Simultaneous approximations for adversarial and stochastic online budgeted allocation. In Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms (SODA).
Y. Mansour, A. Rubinstein, and M. Tennenholtz. Robust probabilistic inference. In Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms (SODA).
Y. Seldin and G. Lugosi. An improved parametrization and analysis of the EXP3++ algorithm for stochastic and adversarial bandits. In Proceedings of the Conference on Learning Theory (COLT).
A. Slivkins. Introduction to multi-armed bandits.
Y. Seldin and A. Slivkins. One practical algorithm for both stochastic and adversarial bandits. In Proceedings of the International Conference on Machine Learning (ICML).
O. Shamir and L. Szlak. Online learning with local permutations and delayed feedback. In Proceedings of the International Conference on Machine Learning (ICML).
C.-Y. Wei and H. Luo. More adaptive algorithms for adversarial bandits. CoRR.

Supplementary material: enlarged confidence intervals. In this section we provide the proof of the corresponding theorem. Note that in the lemma statements the width of arm $a$ is defined as $\mathrm{wd}(a) = \sqrt{\log(\cdot)/n_a} + C/n_a$.

Lemma (restated). With probability at least $1-\delta$, the arm $a^\star$ never becomes eliminated.

Proof. The crux of the proof lies in establishing that, with high probability, the upper end of $a^\star$'s confidence interval never becomes lower than the lower end of the confidence interval of any other arm $a$; therefore $a^\star$ cannot become eliminated. Formally, let $\mu^{\mathrm{st}}_a$ and $\tilde\mu_a$ be the empirical means of the samples of the stochastic part of the rewards and of the corrupted rewards, respectively; recall that $\mu_a$ is the mean of arm $a$. By Hoeffding's inequality, for each arm $a$, each number of plays, and each time step, with probability at least $1-\delta/(\cdot)$ we have $|\mu^{\mathrm{st}}_a - \mu_a| \le \sqrt{\log(\cdot)/n_a}$; by a union bound, this holds simultaneously for all arms and all time steps with probability at least $1-\delta$. Comparing the actual and corrupted empirical means, after $n_a$ plays of arm $a$ they can be altered by at most the absolute corruption divided by $n_a$, hence by at most $C/n_a$. Combining the two inequalities with the fact that the actual mean of $a^\star$ is at least as high as the mean of any other arm, we establish that $a^\star$ is never eliminated; since the above holds at all times and for all arms, the lemma follows.

Lemma (restated). With probability at least $1-\delta$, all suboptimal arms $a$ become eliminated after $O\big((C+\log(\cdot))/\Delta_a^2\big)$ plays.

Proof. The proof stems from the following observations. By the previous lemma, with high probability $a^\star$ is never eliminated in the relevant rounds; with high probability the lower end of the confidence interval of $a^\star$ exceeds the upper end of the confidence interval of arm $a$ once both have been played enough. This comes from the fact that the rotation plays $a^\star$ whenever it also plays $a$ (since neither is eliminated): the empirical stochastic mean of $a^\star$ is, with high probability, close to its actual mean, and similarly the empirical stochastic mean of arm $a$ is close to its actual mean; since the corruptions are upper bounded by $C$ in absolute value, they can contribute a shift of at most $C/n$ to the average empirical corrupted means, which is not enough to circumvent the gap once $n = \Omega\big((C+\log(\cdot))/\Delta_a^2\big)$. Formally, letting $\mu^{\mathrm{st}}$ and $\tilde\mu$ denote the empirical means of the stochastic and corrupted rewards after $n$ plays of each arm, Hoeffding's inequality as in the proof of the previous lemma gives, with probability at least $1-\delta$, the bound $\sqrt{\log(\cdot)/n}$ on the stochastic deviations, while the absolute corruption contributes at most $C/n$; with this choice of widths, combining the argument (which also implies the widths themselves are appropriately upper bounded) with the fact that the actual mean of $a^\star$ is higher, we establish the result: arm $a$ becomes eliminated after $O\big((C+\log(\cdot))/\Delta_a^2\big)$ plays if not already eliminated.

Theorem (restated). If $C$ is a valid bound on the corruption, active arm elimination with enlarged widths has, with probability $1-\delta$, regret of order $\sum_{a\ne a^\star}(C+\log(\cdot))/\Delta_a$.

Proof. The proof follows the classical stochastic bandit argument, measuring the regret caused by each arm as a function of its gap and of the number of times it is played, as established in the lemmas. For simplicity of presentation, we first provide the guarantee that compares the expected performance of the algorithm to the expected performance one would have obtained had one selected $a^\star$ throughout the whole time horizon. The expected performance loss compared to $a^\star$, every time a suboptimal arm $a$ is used instead, equals the gap $\Delta_a$; as a result, the expected contribution of suboptimal arm $a$ equals $\Delta_a$ times its number of plays. The second lemma establishes that, with probability $1-\delta$, any suboptimal arm is played $O\big((C+\log(\cdot))/\Delta_a^2\big)$ times; multiplying by the expected regret per play, the guarantee on the gap-weighted sum follows, and setting the failure probability inverse polynomial in the time horizon ensures that the expected regret due to the bad event is constant. This leads to the pseudo-regret guarantee; to turn it into a high-probability guarantee on the actual regret, we need to show that the regret incurred in the steps where we pull each arm is not significantly higher than its expectation, which we do by bounding the resulting variance via Hoeffding's inequality.
Concretely, with probability at least $1-\delta$, the empirical cumulative reward of each arm is at most $O(\sqrt{T\log(K/\delta)})$ less than its expectation, and this holds for all arms and time steps under consideration, so the realized performance is not much below its expectation. Regarding the suboptimal arms, the logarithmic term is upper bounded by the definition of the widths. Regarding the surviving arm, let $a'$ be the arm of smallest gap: by the first lemma, the arm that never gets eliminated is not necessarily $a^\star$; in fact the arm that survives, the post-elimination optimal arm, may be another arm, but arms of higher gap are, with high probability, eliminated by an argument analogous to the second lemma. However, by the same argument, for the surviving arm, with high probability its realized reward is not much below the expectation of the post-elimination optimal arm, which gives the bound on the regret caused in that case. Therefore the actual regret is at most the number of times each suboptimal arm is played times its gap, plus the terms coming from the aforementioned bounds on the variance; the corruption can increase the cumulative reward differences only by an amount already accounted for in the existing regret bound. Replacing $\log(\cdot)$ appropriately in the lemmas we obtain the guarantee; note that the failure probabilities of the two lemmas are coupled, as they correspond to the same bad events.

Supplementary material: known corruption level. In this section we provide the proof of the corresponding theorem. To handle the corruption, we bound with high probability the total corruption experienced by the slow active arm elimination instance (the lemma above). To deal with an adaptive adversary we need a martingale concentration inequality; specifically, we apply a Freedman-type inequality introduced by Beygelzimer et al.

Lemma. Let $X_1,\ldots,X_T$ be a martingale difference sequence with $X_t \le R$ for all $t$, and let $V = \sum_t \mathbb{E}[X_t^2 \mid \mathcal{F}_{t-1}]$. Then, for any $\delta > 0$, with probability at least $1-\delta$, $\sum_t X_t = O\big(\sqrt{V\log(1/\delta)} + R\log(1/\delta)\big)$.

Lemma (restated). With probability at least $1-\delta$, the slow active arm elimination instance observes at most $O(\log(1/\delta))$ corruption during the exploration phase (it is picked with probability $1/C$).

Proof. The first observation is that the expected corruption encountered by the slow instance is constant, since the total corruption is at most $C$ and each unit affects the instance with probability $1/C$. The rest of the proof focuses on bounding the deviation of the random variable equal to the actual corruption encountered. Crucially, since we want to allow the adversary to be adaptive, we cannot assume independence across rounds, only conditional independence conditioned on the history, so the more involved concentration inequality is necessary. We therefore create a martingale sequence, the actual corruption observed minus the expected corruption, and apply the concentration inequality. Let $z_{a,t}$ be the corruption observed in the exploration phase of the slow instance if arm $a$ is selected: at every round the adversary selects corruptions $c_{a,t}$, and therefore $z_{a,t}$ is a random variable equal to $c_{a,t}$ with probability $1/C$ and $0$ otherwise. Given that the adversary is adaptive and may select the corruptions based on the realizations of previous rounds, we use the inequality introduced above. We initially resolve the randomness of the arm, conditioning on the slow algorithm being selected: since active arm elimination is deterministic conditioned on selecting the algorithm, the selected arm is determined by the history; let $a_t$ be the arm that would be selected, which happens with probability $1/C$. The martingale sequence $X_t = z_{a_t,t} - c_{a_t,t}/C$ corresponds to the history up to round $t$; note that $X_t \le 1$, and, summing over rounds, $V \le \sum_t c_{a_t,t}^2/C \le \sum_t c_{a_t,t}/C \le 1$, where we used the trivial upper bound $c_{a,t} \le 1$ since the rewards lie in $[0,1]$. Applying the concentration lemma shows that the deviation is $O(\log(1/\delta))$; the lemma follows by adding the expected corruption, therefore obtaining the bound of the statement.

Theorem (restated). Algorithm 1, run with widths $\mathrm{wd}^S(a) = \sqrt{\log(\cdot)/n^S_a} + \log(\cdot)/n^S_a$ and $\mathrm{wd}^F(a) = \sqrt{\log(\cdot)/n^F_a}$, has with probability at least $1-\delta$ regret of order $\sum_{a\ne a^\star}\log(\cdot)/\Delta_a$ in the stochastic case and of order $\sum_{a\ne a^\star} C\log(\cdot)/\Delta_a$ in the corrupted case.

Proof. In the stochastic case the bound follows via standard stochastic bandit arguments, similarly to the proof of the previous theorem: the two active arm elimination algorithms each incur, with probability $1-\delta$, regret $O\big(\sum_a \log(\cdot)/\Delta_a\big)$, and a union bound governs the failure probabilities of the results and lemmas involved. The more interesting case is the corrupted setting. Let $\delta'$ be the failure probability in the corruption lemma: with probability at least $1-\delta'$, the actual corruption experienced by the slow active arm elimination algorithm is at most $O(\log(1/\delta'))$. For these values we can therefore apply the analysis of the enlarged-widths theorem with corruption level $O(\log(1/\delta'))$ and get a handle on the actual regret coming from the slow active arm elimination algorithm. We are left to bound the regret coming from the fast active arm elimination algorithm. Towards this goal, we bound the number of times a suboptimal arm is played by the fast instance while it remains active in the slow instance: with probability at least $1-\delta'$, any suboptimal arm $a$ is played by the slow active arm elimination instance at most $O\big(\log(\cdot)/\Delta_a^2\big)$ times before being eliminated. We then use the bound on the number of plays of any suboptimal arm by the slow instance to bound the number of its plays by the fast instance.
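The heart of the lemma above is that subsampling at rate $1/C$ makes the observed corruption $O(1)$ in expectation regardless of how the total budget $C$ is spread. A tiny Monte Carlo check of this fact (our illustration; the adversary here front-loads the corruption, one simple choice among many):

```python
import random

def observed_corruption(C=1_000, trials=200):
    """Adversary corrupts C rounds by 1 each (total corruption C).
    The slow instance advances w.p. 1/C per round, so the corruption it
    actually observes has mean ~ 1 (and is O(log(1/delta)) w.h.p.)."""
    totals = []
    for _ in range(trials):
        seen = sum(1 for _ in range(C) if random.random() < 1.0 / C)
        totals.append(seen)
    return sum(totals) / trials

print(observed_corruption())   # prints a value close to 1.0
```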
In expectation, the fast instance makes $C$ moves for every move of the slow instance, so an arm that is still active after $n$ plays in the slow instance has been played at most about $C\cdot n$ times by the fast instance; since every play of a suboptimal arm incurs bounded regret, this already provides the expectation guarantee. To obtain a high-probability guarantee, observe that, with probability at least $1-\delta$, we make at least one move in the slow arm elimination algorithm in every window of $O(C\log(T/\delta))$ moves of the fast arm elimination algorithm. This can be seen by thinking of the following process: one tosses coins of bias $1/C$ and waits to observe heads for the first time; the event that heads has not arrived after $O(C\log(1/\delta))$ tosses has probability at most $\delta$. To ensure this is achieved in all windows we need to take a union bound over the failure probabilities of all the relevant draws, which costs a logarithmic factor; since the time horizon is $T$, any suboptimal arm gets inactivated in the fast instance after at most $O\big(C\log(T/\delta)\cdot\log(\cdot)/\Delta_a^2\big)$ of its plays. The last part is to prove that the regret experienced throughout these rounds is not much larger than its expectation, which follows from two applications of Hoeffding's inequality for the arms, analogously to the previous theorem. Combining the arguments, the theorem follows, with the total failure probability as in the guarantee.

Supplementary material: agnostic corruption level. Theorem (restated). Algorithm 2, agnostic to the corruption level $C$ and run with the widths defined before, has with probability at least $1-\delta$ regret of order $\big(KC+\log T\big)\sum_{a\ne a^\star}\log(KT/\delta)/\Delta_a$ up to logarithmic factors.

Proof. The proof follows arguments similar to the proof of the previous theorem. Specifically, for the layers tolerant to the corruption level $C$, using the standard arguments described in that theorem we establish a bound of $O(\log(\cdot)/\Delta_a)$ on the regret caused by each suboptimal arm, with failure probability $\delta/\log T$ per layer; since there are at most $\log T$ levels, the regret coming from these layers is upper bounded by the second term of the theorem, with total failure probability $\delta$. For the layers that are not tolerant to the corruption, we apply the argument of the proof of the previous theorem, bounding their regret via the number of plays they make before the arm is eliminated at $\ell^\star = \arg\min\{\ell : 2^\ell \ge C\}$, the minimum layer robust to the corruption. Similarly to the proof of the previous theorem, we upper bound the number of plays of a suboptimal arm at layer $\ell^\star$ by exactly the same arguments, and bound the number of plays of the suboptimal arm at any less robust layer via the coin-toss process of the previous proof. For the last bound on the regret we incur in this part: since we do not know the amount of corruption in advance, and the amount is adaptively selected, we also need to take a union bound over the number of layers so that the guarantee holds for all layers simultaneously, as only at the end is the correct layer determined; we therefore repeat the arguments of the previous theorem with failure probability $\delta/\log T$. Last, note that since we used powers of $2$ to increase the corruption tolerance among layers, we can in fact apply the arguments of the previous theorem with $2^{\ell^\star} \le 2C$ instead of the exact $C$, which causes only an extra constant factor in the regret.

Supplementary material: lower bound. Theorem (restated). Consider any bandits algorithm with the property that, for stochastic input in the two-arm setting, its pseudo-regret is bounded by $c\log T$. For any corruption level $C$, there is an instance where, with constant probability, the regret is $\Omega(C)$.

Proof. The proof follows a sequence of steps. Step 1: analyze the behavior in the stochastic case. Fix a constant $\Delta$ and observe how the algorithm behaves on a stochastic input with two Bernoulli arms with means $\mu_1 = 1/2+\Delta$ and $\mu_2 = 1/2$. Since in this setting the expected regret is $\Delta$ times the number of pulls of arm $2$, the assumption implies that the expected number of pulls of arm $2$ is at most $(c/\Delta)\log T$. Step 2: find a large interval that is hit only a constant amount in expectation. Divide the time horizon into $\Theta(\log T)$ intervals of geometrically increasing size, each interval twice the size of all previous intervals combined; letting $N_j$ be the number of times arm $2$ is pulled in interval $j$, there exists an interval $j^\star$ with $\mathbb{E}[N_{j^\star}] = O(c/\Delta)$. Step 3: create an adversary that forces a lot of regret in that interval. The adversary is quite simple: in the first steps, both arms are Bernoulli with the means above; from the beginning of interval $j^\star$ onwards, the adversary corrupts the rewards so that the two arms appear with the roles of their means exchanged. We use $\Pr_S$ to refer to the probability law when the inputs are drawn with respect to the stochastic means at all time steps, and $\Pr_A$ to the probability law where the input is drawn according to the stochastic means in the first steps and according to the corrupted means from interval $j^\star$ onwards. Step 4: with constant probability, arm $2$ is pulled only a constant number of times in the interval under $\Pr_S$; this follows directly from Markov's inequality. Denote this event by $E$; we want to argue that $\Pr_A[E]$ is also constant. To this order, let $V$ be the vector storing the reward of arm $2$ each time it is pulled in the interval, and notice that in the stochastic and corrupted scenarios, if the learner observes the same values in $V$ it acts in the exact same way; therefore, conditioning on the event that the learner ends up pulling arm $2$ only a constant number of times, the probabilities of observing any particular realization agree up to a constant factor; in other words, $\Pr_A[E] = \Omega(\Pr_S[E])$, and therefore $\Pr_A[E]$ is constant. Step 5: concentration bounds on the regret incurred in the interval. We define an event $G$, occurring with high probability, that captures the concentration bounds we need in the proof. First we require that the cumulative reward of arm $2$, the optimal arm under the corrupted law, is close to its expectation: letting $r_i$ be the reward of arm $2$ the $i$-th time it is pulled, since the rewards are independent we can use a Hoeffding bound to bound the deviation probability by $\exp(-\Omega(\cdot))$. We then establish concentration on the regret the learner achieves with respect to arm $2$: note that when the learner pulls arm $2$ it incurs no regret against arm $2$, and when it pulls arm $1$ it incurs regret that can be positive or negative; we compute the regret with respect to arm $2$
over the interval by sampling, every time arm $1$ is pulled, the counterfactual reward of arm $2$ at that step. Letting $N$ be the number of times arm $1$ is pulled in the interval and, abusing notation, letting $d_i$ denote the difference of the rewards the $i$-th time arm $1$ is pulled (instead of indexing by time), the regret over the period is $\sum_i d_i$; therefore, since under the corrupted law $\mathbb{E}[d_i] = \Delta > 0$ and $|d_i| \le 1$, we can use a Hoeffding bound on the last expression to get that the probability that the regret falls below half its expectation is at most $\exp(-\Omega(N\Delta^2))$, and analogous bounds hold for each sub-case of the interval under consideration. For the steps in which pulling arm $1$ has positive expected regret, we use the same technique to argue that the learner cannot obtain large negative regret with high probability: letting $N'$ be the number of times arm $1$ is pulled in the relevant part of the interval and, again abusing notation, letting $d'_i$ be the difference of rewards the $i$-th time it is pulled, a standard Chernoff bound shows that the probability that the regret is below $-O(\sqrt{N'\log(1/\delta)})$ is at most $\delta$; the probability of the regret being very negative is thus small. With the concentration bounds established, define the event $G$ as the event that all the concentration bounds hold; more precisely, under $G$ the following four things happen: empirically arm $2$ is better than arm $1$ in the interval; the regret of the learner is at least proportional to the number of pulls of arm $1$ in the interval; the difference of the total rewards of the two arms over the interval is at least proportional to its length times $\Delta$; and the regret of the learner over the interval is at least $-O(\sqrt{\cdot\log(1/\delta)})$. From the discussion in the earlier steps we know that $\Pr_A[E]$ is constant and $\Pr_A[G^c]$ is small. Putting everything together: since by a union bound we only need to argue that with constant probability both $E$ and $G$ hold, we simply sum the regret of the learner over the intervals using the bounds computed in the steps above directly for interval $j^\star$; note that, conditioned on $E$, the learner probes arm $2$ only a constant number of times, so its total regret differs from the regret of pulling arm $1$ in all iterations of the interval by at most a constant. Therefore the total regret is $\Omega(\Delta\cdot|j^\star|) = \Omega(C)$ for a corruption budget proportional to $\Delta\cdot|j^\star|$, as claimed.

We now adapt the argument to provide a bound on the expected positive regret $\mathbb{E}[\max(\mathrm{Reg}, 0)]$; note that high-probability bounds as the ones we provide also imply a bound on the expected positive regret.

Theorem (restated). For any bandits algorithm whose pseudo-regret for stochastic input in the two-arm setting is bounded by $c\log T$, and any corruption level $C$, there is an instance with corruption at most $C$ where the expected positive regret is $\Omega(C)$.

Proof. We modify the proof of the previous theorem as follows. We define and select the interval $j^\star$ and the event $E$ the same way, by Markov's inequality; Step 3 remains unchanged. In Step 5, note that since the failure probabilities of the concentration bounds are exponentially small, with probability at least $1 - \exp(-\Omega(\cdot))$ the regret is at least $\Omega(C)$ on the event $E \cap G$; therefore the expected positive regret is at least $\Pr_A[E\cap G]\cdot\Omega(C)$ minus a constant, which is $\Omega(C)$.
| 8 |
Representation Learning and Recovery in the ReLU Model

Arya Mazumdar (College of Information and Computer Sciences, University of Massachusetts Amherst, Amherst, USA) and Ankit Singh Rawat (Research Laboratory of Electronics, MIT, Cambridge, USA). March ...

Abstract. Rectified linear units (ReLUs) have become the preferred activation function for neural networks. In this paper we consider two basic learning problems assuming that the underlying data follow a generative model based on a neural network with ReLU activations. This is primarily a theoretical study, in the limit of a single-layer network. The first problem we study corresponds to learning a generative model in the presence of nonlinearity (modeled by the ReLU function). Given a set of observation vectors $y^i \in \mathbb{R}^d$, $i=1,\ldots,n$, we aim to recover the matrix $A$ and the latent vectors $c^i$ under the model $y^i = \mathrm{ReLU}(Ac^i + b)$, where $b$ is a random bias. We show that it is possible to recover the column space of $A$ within an error of $o(dn)$ (in Frobenius norm) under certain conditions on the probability distribution of $b$. For the second problem we consider the robust recovery of a signal in the presence of outliers, i.e., large but sparse noise. In this setting we are interested in recovering the latent vector $c$ from its noisy nonlinear sketches of the form $v = \mathrm{ReLU}(Ac) + e + w$, where $e \in \mathbb{R}^d$ denotes the outliers with sparsity $\|e\|_0 \le k$ and $w \in \mathbb{R}^d$ denotes the dense but small noise. This line of work has recently been studied (Soltanolkotabi) without the presence of outliers. For this problem we show that a generalized LASSO algorithm is able to recover the signal $c \in \mathbb{R}^n$ within an $\ell_2$ error of $O\big(\sqrt{(k\log d)/d}\big)$ when $A$ is a random Gaussian matrix and $d = \Omega(n\log d)$.

1. Introduction. Rectified linear units (ReLUs) are basic nonlinear functions, $\mathrm{ReLU}: \mathbb{R} \to \mathbb{R}_{\ge 0}$ with $\mathrm{ReLU}(x) = \max(0,x)$. For a matrix $X$, $\mathrm{ReLU}(X)$ denotes the matrix obtained by applying the ReLU function to each of the coordinates of $X$. ReLUs are the building blocks of many nonlinear data-fitting problems based on deep neural networks (see, e.g., Soltanolkotabi for a good exposition).

Let $Y$ be a collection of message vectors of interest; depending on the application at hand, the message vectors, i.e., the constituents of $Y$, may range from images and speech signals to network access patterns and rating vectors. We assume that the message vectors satisfy a generative model, i.e., each message vector can be approximated by a map $G: \mathbb{R}^k \to \mathbb{R}^d$ from a low-dimensional latent space to the ambient space. Motivated by recent results on developing generative models for various signals (see, e.g., Goodfellow et al.; Kingma and Welling; Bora et al.), such maps warrant special attention when they take the form of a multi-layer neural network with an activation function at each layer; the special case where the activation function is the ReLU function is of particular interest. Thus the message vectors of interest satisfy
$$y = \mathrm{ReLU}\big(A_\ell\,\mathrm{ReLU}(A_{\ell-1}\cdots\mathrm{ReLU}(A_1 c + b_1)\cdots + b_{\ell-1}) + b_\ell\big),$$
where $A_i \in \mathbb{R}^{d_i\times d_{i-1}}$ denote the weight matrices and $b_i \in \mathbb{R}^{d_i}$ denote the biases of the neurons (output units) at the $i$-th layer of the network. A generative model of this form raises multiple interesting questions that play fundamental roles in understanding the underlying data and in designing systems and algorithms for processing the data. Two of the most basic such questions are the following.

(1) Learning the representation: given $n$ observations $y^1,\ldots,y^n$ from the model, recover the parameters of the model, i.e., the weight matrices and biases, and the latent vectors $c^i$ such that $y^i = \mathrm{ReLU}(A_\ell\cdots\mathrm{ReLU}(A_1 c^i + b_1)\cdots)$. Note that this question is different from training the model, in which case the latent vectors are known (and possibly chosen accordingly).

(2) Recovery of the signal in the presence of errors: given an erroneous or noisy version $v$ of a vector generated by the model, denoise the observation, i.e., recover the latent vector: formally, given $v = \mathrm{ReLU}(A_\ell\cdots\mathrm{ReLU}(A_1 c + b_1)\cdots) + e + w$ and knowledge of the model parameters, obtain $\hat c$ such that $\|\hat c - c\|$ is small; here $e$ and $w$ correspond respectively to outliers (large but sparse errors) and dense but small noise.

Apart from the two problems being closely related, one of our main motivations behind studying them together comes from the recent work on associative memory (Karbasi et al.; Mazumdar and Rawat). An associative memory consists of a learning phase, where a generative model is learned from a given dataset, and a recovery phase, where, given a noisy version of a data point generated by the generative model, the correct version is recovered with the help of the knowledge of the generative model.

There has been a recent surge of interest in learning ReLUs, and the questions above are of basic interest even for a single-layer network, i.e., when the nonlinearity comprises a single ReLU function. It is conceivable that understanding the behavior of a single-layer network would allow one to use an iterative peeling-off technique to develop a theory for multiple layers.
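To fix notation and dimensions, a minimal data generator for the single-layer model follows. It is our illustration: Gaussian latent vectors and a Gaussian bias density are convenient choices, not assumptions made by the text.

```python
import numpy as np

def generate_relu_data(d, k, n, rng=None):
    """Sample n observations from the single-layer model y^j = ReLU(A c^j + b).

    A is d x k, the latent vectors c^j live in R^k, and the bias b is drawn
    once (i.i.d. coordinates) and shared by all observations.
    """
    rng = rng or np.random.default_rng(0)
    A = rng.standard_normal((d, k))
    C = rng.standard_normal((k, n))   # latent vectors as columns
    b = rng.standard_normal((d, 1))   # one bias coordinate per output unit
    X = A @ C + b                     # rank <= k + 1; the estimation target later
    Y = np.maximum(X, 0.0)            # coordinatewise ReLU (negative part censored)
    return Y, X, A, C, b
```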
For learning in this model, the work of Goel et al. on reliable and agnostic learning, in the sense of Kalai et al., needs to be mentioned: informally speaking, under quite general distributional assumptions, their algorithm recovers a hypothesis with small error under a natural loss function with respect to the true model; moreover, the algorithm runs in time polynomial in the dimension and exponential in the inverse accuracy. As opposed to this, given $A$ and the corresponding output $\mathrm{ReLU}(Ac)$, we focus on the problem of recovering $c$; note also that the model considered by Goel et al. does not consider the presence of outliers. Soltanolkotabi obtained results for a model somewhat similar to ours: in terms of learning guarantees, assuming the entries of the matrix $A$ are i.i.d. Gaussian, Soltanolkotabi shows that, with high probability, a gradient descent algorithm recovers $c$ within a precision whose relative error decays exponentially with the number of steps of the gradient descent algorithm; the obtained result is quite general and extends to constrained optimizations in the presence of regularizers (for example, when the latent vector is restricted to be a sparse vector). However, these works do not consider the presence of outliers. Sparse noise is quite natural to assume, since many times only partial observations of a signal vector generated by the ReLU model are obtained. The recovery problem with outliers can be thought of as a nonlinear version of the problem of recovering $c$ from linear observations of the form $Ac + e$, with $e$ denoting the outliers, studied in the celebrated work of Candes and Tao; note that the technique of Candes and Tao does not extend to the case when a dense bounded noise component is present. Our result for this case is a natural generalization of, and complementary to, the one of Soltanolkotabi: we present a recovery method that is robust to outliers, and instead of analyzing gradient descent directly we analyze the performance of the minimizer of an optimization program, the generalized LASSO, using ideas from Plan and Vershynin and Nguyen and Tran. On the other hand, to the best of our knowledge, the representation learning problem for single-layer networks has not been studied as such. The representation learning problem for ReLUs bears similarity to matrix completion problems, and in fact we greatly exploit this connection later. In low-rank matrix completion, a matrix is visible only partially, and the task is to recover the unknown entries by exploiting the fact that it is low rank. In our case, we are more likely to observe the positive entries of the matrix which, unlike the majority of the matrix completion literature, creates a dependence between the matrix and the sampling procedure.

Our main result for representation learning. Assume the observed matrix $Y \in \mathbb{R}^{d\times n}$ satisfies $Y = \mathrm{ReLU}(X)$ with $X = AC + b\mathbf{1}^T$, where $A \in \mathbb{R}^{d\times k}$ is the unknown weight matrix, $C \in \mathbb{R}^{k\times n}$ collects the unknown latent vectors, and $b$ is the random bias; we denote by $M$ the matrix obtained by adjoining $b$ to $A$, so that $X = M\,[\mathbf{1};\,C]$ involves a Kronecker-type extension of the latent vectors. We show that a relaxed maximum-likelihood method guarantees recovery of the matrix $X$ with error $o(dn)$ in squared Frobenius norm, with high probability, under certain conditions on the distribution of the bias (see the theorem in Section 3 for a formal statement). Leveraging a known result on recovering the column space of a perturbed matrix (see Theorem A in the appendix), we show that it is possible to also recover the column space of $A$ with a similar guarantee. The main technique we use to obtain this result is inspired by the recent work on one-bit matrix completion of Davenport et al. One of the main challenges we face here is that whether an entry of the matrix $Y$ is observed (i.e., positive) is itself a random variable that, since the bias is random, depends on the value of the corresponding entry of $X$: whether the ReLU function censors an entry or not depends on the value of that entry. In the general matrix completion literature, the entries of the matrix to be observed are sampled independently of the matrix itself (see, e.g., Candes and Recht; Keshavan et al.; Chatterjee and references therein); for the aforementioned reason we cannot use these results here. However, a similar predicament is partially present in Davenport et al., where the entries are quantized while being observed. As in Davenport et al., the tools that prove helpful in this situation are the symmetrization trick and the contraction inequality (Ledoux and Talagrand). However, a crucial difference is that in our model the bias vector, although random, does not change across observations: more data samples (observations) do not bring fresh randomness into the bias, which translates into less freedom in the transformation between the original matrix and the observed matrix, leading to dependence among the elements of each row; furthermore, the analysis becomes notably different since the positive observations are not quantized.

Our main result for noisy recovery. We aim to recover $c$ from the observations $v = \mathrm{ReLU}(Ac + b) + e + w$, where $A$ is a standard Gaussian matrix, $e$ is the vector containing the outliers, and $w$ is the dense bounded noise. To recover $c$ we employ a LASSO-type algorithm, inspired by the works of Plan and Vershynin and Nguyen and Tran. In particular, Plan and Vershynin recently showed that a signal can provably be recovered (up to scaling) from a constant times $n$ nonlinear Gaussian measurements via the LASSO algorithm, by treating the measurements as if they were linear observations. In the context of the ReLU model, measurements of the form $\mathrm{ReLU}(\langle a, c\rangle + b)$ fit this framework, and the LASSO algorithm outputs a solution close to $\mu c$, where $\mu = \mathbb{E}\big[\mathrm{ReLU}(g + b)\,g\big]$, $g$ a standard Gaussian random variable and $b$ the random variable denoting the bias associated with the ReLU function.
This approach guarantees, with high probability, recovery of $\mu c$ within an $\ell_2$ error of order $\sqrt{(k\log d)/d}$ even when $k$ of the measurements are corrupted by outliers; the recovery is achieved by jointly minimizing the square loss, treating the measurements as linear measurements, and adding an $\ell_1$ regularizer to the loss function to promote sparsity of the solution for the outlier part, which we also recover (see the theorem in Section 4 for a formal description).

Organization. The paper is organized as follows. In Section 2 we describe the notations used throughout the paper and introduce the technical tools, and we provide the formal models of the problems we study. Section 3 provides the detailed proofs of our main results for the representation learning problem, and Section 4 contains the proofs and techniques for the recovery problem in the presence of outliers.

2. Notation and technical tools. Notation. For a positive integer $n$, let $[n] = \{1,\ldots,n\}$. Given a matrix $X$, $x_{ij}$ denotes its $(i,j)$ entry, $x_i$ denotes the vector containing the elements of the $i$-th row of the matrix, and, similarly, $x^j$ denotes the $j$-th column of the matrix. Recall that the function $\mathrm{ReLU}:\mathbb{R}\to\mathbb{R}_{\ge0}$ takes the form $\mathrm{ReLU}(x) = \max(0,x)$; for a matrix $X$, $\mathrm{ReLU}(X)$ denotes the matrix obtained by applying the ReLU function to each entry of the matrix. For two matrices $X$ and $Z$, $X\otimes Z$ represents their Kronecker product. Given a matrix $X$, $\|X\|_F$ denotes its Frobenius norm, $\|X\|$ denotes its operator norm (maximum singular value), and $\|X\|_*$ denotes its nuclear norm. Similarly to Davenport et al., two parameters are associated with a differentiable function $f$ on an interval $[-\gamma,\gamma]$: a flatness parameter $\beta_\gamma(f) = \inf_{|x|\le\gamma} f(x)(1-f(x))/f'(x)^2$ and a Lipschitz-type steepness parameter $L_\gamma(f) = \max\big(\sup_{|x|\le\gamma}|f'(x)|/f(x),\ \sup_{|x|\le\gamma}|f'(x)|/(1-f(x))\big)$.

Techniques to bound the supremum of an empirical process. In the course of the paper, namely in the representation learning part, we use some key tools to bound the supremum of an empirical process: symmetrization and the contraction inequality, following the lead of Davenport et al. and the analysis of generalization bounds in the statistical learning literature. In particular we need the following two statements.

Theorem (symmetrization of expectation). Let $X_1,\ldots,X_n$ be independent random variables taking values in a set $\mathcal{X}$, let $\mathcal{F}$ be a class of functions on $\mathcal{X}$, and let $\epsilon_1,\ldots,\epsilon_n$ be independent Rademacher random variables. Then
$$\mathbb{E}\sup_{f\in\mathcal{F}}\Big|\sum_{i=1}^n\big(f(X_i)-\mathbb{E}f(X_i)\big)\Big| \;\le\; 2\,\mathbb{E}\sup_{f\in\mathcal{F}}\Big|\sum_{i=1}^n\epsilon_i f(X_i)\Big|.$$

Theorem (contraction inequality; Ledoux and Talagrand). Let $\epsilon_1,\ldots,\epsilon_n$ be independent Rademacher random variables, let $F:\mathbb{R}_{\ge0}\to\mathbb{R}_{\ge0}$ be convex and increasing, and let $\varphi_i:\mathbb{R}\to\mathbb{R}$ satisfy $\varphi_i(0)=0$ and be $1$-Lipschitz. Then, for any $T\subseteq\mathbb{R}^n$,
$$\mathbb{E}\,F\Big(\tfrac12\sup_{t\in T}\Big|\sum_i \epsilon_i\varphi_i(t_i)\Big|\Big) \;\le\; \mathbb{E}\,F\Big(\sup_{t\in T}\Big|\sum_i \epsilon_i t_i\Big|\Big).$$

2.3 System model. We focus on the problems of learning a representation and of recovery of a signal in the presence of errors, where the signal is assumed to be generated by a single-layer ReLU model, as described below.

Model for learning representations. We assume that the signal vectors of interest satisfy $y = \mathrm{ReLU}(Ac + b)$, where $A$ and $b$ correspond to the weight (generator) matrix and the bias vector, respectively. The problem of representation learning is: given $n$ message vectors generated by the underlying model, i.e., $y^j = \mathrm{ReLU}(Ac^j + b)$ for $j\in[n]$, recover $A$. Collecting the observations as the columns of $Y \in \mathbb{R}^{d\times n}$ and the latent vectors as the columns of $C$, we can concisely represent the observation vectors as $Y = \mathrm{ReLU}(AC + b\mathbf{1}^T)$, where $\mathbf{1}$ denotes the all-ones vector. We assume that the bias vector $b$ is a random vector comprising i.i.d. coordinates, each coordinate a copy of a random variable distributed according to a probability density function $p$.

Model for recovery. In the recovery problem, we are given a vector $v$ obtained by adding noise to a valid signal vector, i.e., a vector well modeled by the matrix $A$ and the bias $b$; in particular, $v = \mathrm{ReLU}(Ac + b) + e + w$, where $w$ denotes a dense noise vector with bounded norm and the vector $e$ contains potentially large corruptions, also referred to as sparse errors or outliers; we assume $\|e\|_0 \le k$. The robust recovery problem corresponds to obtaining an estimate $\hat c$ of the true latent vector from the corrupt observation vector such that the distance $\|\hat c - c\|_2$ is small; the related problem of denoising in the presence of outliers focuses on obtaining an estimate close to the true signal vector. For this part, we focus on the setting where the weight matrix $A$ is a random matrix whose entries are distributed according to the standard Gaussian distribution. Furthermore, another crucial assumption is that the error vector $e$ is oblivious in nature: it is not picked in an adversarial manner given the knowledge of $A$.

3. Representation learning. In this paper we employ a natural approach to learn the underlying weight matrix from the observation matrix: since the network maps a lower-dimensional vector $c$ to obtain the signal vector $\mathrm{ReLU}(Ac+b)$, the matrix $X = AC + b\mathbf{1}^T$ has rank at most $k+1$, which is small as long as $k+1 \le \min(d,n)$. In our quest to recover the weight matrix, we focus on estimating the matrix $X$ given access to $Y$; the task can be viewed as estimating a low-rank matrix from partial, randomized observations of it. This part of our work is inspired by the recent work of Davenport et al. on one-bit matrix completion; however, as we describe later, there is a crucial difference between our model and the model of Davenport et al.: in our setup the bias vector does not change over the
observations. Nonetheless, we first describe their model and main results on one-bit matrix completion, to underscore the key ideas.

One-bit matrix completion (Davenport et al.). In the one-bit matrix completion setup, given a matrix $X$ and a differentiable function $f:\mathbb{R}\to[0,1]$, a matrix $Y$ of binary entries is generated with $y_{ij} = 1$ with probability $f(x_{ij})$ and $y_{ij} = -1$ with probability $1 - f(x_{ij})$. (It would be an interesting problem to extend our results to the setting of adversarial errors; note that this is an active area of research even for the case of linear measurements (Bhatia et al.); we plan to explore this problem in future work.) Furthermore, one only has access to the entries indexed by a set $\Omega$, generated by including each index with a certain probability. Given the observations $\{y_{ij}\}_{(i,j)\in\Omega}$, the log-likelihood function associated with a candidate matrix $Z$ takes the form
$$L_{\Omega,Y}(Z) \;=\; \sum_{(i,j)\in\Omega}\Big(\mathbb{1}_{\{y_{ij}=1\}}\log f(z_{ij}) + \mathbb{1}_{\{y_{ij}=-1\}}\log\big(1-f(z_{ij})\big)\Big).$$
In order to estimate a matrix $X$ with bounded entries, it is natural to maximize this function under the constraint that the matrix has small rank: maximize $L_{\Omega,Y}(Z)$ subject to $\|Z\|_\infty \le \alpha$ and $\mathrm{rank}(Z) \le k$, where the constraint $\|Z\|_\infty \le \alpha$ is introduced to model the setting where the observations are assumed bounded in their coordinates (note that such boundedness assumptions indeed hold for many observations of interest, e.g., images). Note that this formulation is clearly non-convex due to the rank constraint; thus Davenport et al. propose the following convex program:
$$\text{maximize } L_{\Omega,Y}(Z) \quad\text{subject to}\quad Z \in \{Z \in \mathbb{R}^{d\times n} : \|Z\|_* \le \lambda\sqrt{k\,dn},\ \|Z\|_\infty \le \alpha\},$$
where the nuclear-norm constraint replaces the rank constraint and the output is required to ensure that the program outputs a suitably bounded matrix. Let $\hat X$ be the output of this program; Davenport et al. obtain the following result characterizing the quality of the obtained solution.

Proposition (Davenport et al.). Assume $\|X\|_* \le \lambda\sqrt{k\,dn}$ and $\|X\|_\infty \le \alpha$. Then, with probability at least $1 - C_1/(d+n)$, the solution satisfies
$$\frac{1}{dn}\,\|\hat X - X\|_F^2 \;\le\; C_\alpha\,\sqrt{\frac{k(d+n)}{dn}},$$
where $C_1$ is an absolute constant and $C_\alpha$ is a constant depending on the steepness and flatness of $f$ on $[-\alpha,\alpha]$.

Learning a single-layer ReLU network via matrix completion. Note that our problem of estimating the matrix $X$ is related to the problem of one-bit matrix completion, where the authors assume the entries take values in a two-element set (we state the equivalent model with a binary alphabet; throughout the paper, $\log$ represents the natural logarithm). Similarly to the matrix completion setup, our observation matrix $Y$ is obtained by transforming the original matrix $X$ in a probabilistic manner, dictated by the underlying distribution of the bias vector; in particular, we get to observe the entire observation matrix. However, there is a key difference between the two aforementioned setups: in the matrix completion setup studied by Davenport et al., and in fact in the matrix completion literature in general (e.g., Ganti et al.), each entry of the original matrix is independently transformed to obtain the observation matrix; in contrast, this independence is absent in our setup: each particular row of the observation matrix is obtained from the corresponding row of the original matrix utilizing the shared randomness of the row's bias. Note that the bias associated with a coordinate of the observed vectors does not vary across observation vectors in our generative model, which prevents applying the known results to our problem of estimating $X$. However, we show in the remainder of the paper that the nature of the ReLU observations allows us to deal with this dependence across the entries of each row and obtain recovery guarantees similar to those described in the proposition above.

Representation learning from ReLU observations. We now focus on the task of recovering the matrix $X$ from the observation matrix $Y$; recall that the observation matrix depends on $X$ as $Y = \mathrm{ReLU}(X + b\mathbf{1}^T)$ in the shifted parameterization, or, absorbing the bias into the matrix, $Y = \mathrm{ReLU}(X)$ with $X$ of rank at most $k+1$. Let $\Omega_{i,+} = \{j : y_{ij} > 0\}$ denote the set of positive coordinates of row $i$. The original matrix needs to satisfy some natural requirements for the reconstruction to be meaningful: given the original matrix, let $\nu_i$ denote the largest element of row $i$; it is straightforward to verify that the positive entries of row $i$ reveal the corresponding entries of $X$ exactly up to the shared bias, and that censored entries constrain the corresponding values from above; accordingly, we restrict attention to the set of candidate matrices with $\max_j z_{ij}$ bounded consistently with the observations, row by row, and with $\|Z\|_\infty \le \alpha$. Recalling that $p$ denotes the probability density function of the bias, the likelihood that a candidate matrix $Z$ results in the observation matrix $Y$ decomposes over rows: positive observations contribute density terms of the form $p(y_{ij} - z_{ij})$, while censored observations contribute the probability that the entry plus the bias is nonpositive. Therefore, the log-likelihood of observing $Y$ given the original matrix $Z$ takes the form of a sum of logarithmic terms of these two types; it follows that we may work with a slightly modified quantity $\bar L_Y(Z)$ of the same form. In order to recover the matrix $X$ from the observation matrix $Y$, we employ the natural maximum-likelihood approach, equivalent to the following program:
$$\text{maximize } \bar L_Y(Z) \quad\text{subject to}\quad Z \in \mathcal{Z},\ \ \|Z\|_* \le \lambda\sqrt{(k+1)dn},$$
where $\mathcal{Z}$ denotes the constraint set described above; in the following we simply refer to the objective when the context is clear. The following result characterizes the performance of the program proposed above.

Theorem. Assume the observation matrix $Y$ is related to $X$ according to the ReLU model, and let $\hat X$ be the solution of the program. If the bias density function $p$ is bounded with bounded derivative, then the following holds with probability at least $1 - C_1/(d+n)$:
$$\frac{1}{dn}\,\|\hat X - X\|_F^2 \;\le\; C_2\,\sqrt{\frac{(k+1)(d+n)}{dn}},$$
where the quantities $C_1, C_2$ depend on the distribution of the bias and on $\alpha$, respectively.

The proof of the theorem crucially depends on the following lemma.

Lemma. Given the observation matrix $Y$ related to the matrix $X$ according to the ReLU model, for every $Z$ in the constraint set the expected likelihood gap $\mathbb{E}\big[\bar L_Y(X) - \bar L_Y(Z)\big]$ is bounded below by a constant (depending on the bias distribution) times $\|X - Z\|_F^2$. The proof of the lemma is delegated to the appendix.
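A sketch of one way to attack the estimation program numerically. The paper analyzes the convex nuclear-norm-constrained maximum-likelihood program; the sketch below instead runs projected gradient ascent with a hard rank projection and assumes a standard Gaussian bias density, both of which are our own simplifications for illustration.

```python
import numpy as np
from scipy.stats import norm

def fit_X(Y, rank, steps=300, lr=0.1):
    """Estimate X from Y = ReLU(X) by likelihood ascent with rank projection.

    Positive entries y_ij > 0 are modeled as exact observations with Gaussian
    residuals; zero entries are censored (we only know the entry plus bias
    is nonpositive), contributing a log-CDF term.
    """
    X = np.zeros_like(Y)
    pos = Y > 0
    for _ in range(steps):
        # gradient of the negative log-likelihood, entrywise:
        #   observed entries:  d/dx [ (y - x)^2 / 2 ]        = x - y
        #   censored entries:  d/dx [ -log Phi(-x) ]         = phi(-x)/Phi(-x)
        hazard = norm.pdf(-X) / np.clip(norm.cdf(-X), 1e-12, None)
        G = np.where(pos, X - Y, hazard)
        X = X - lr * G
        # project back onto matrices of rank <= k + 1
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0
        X = (U * s) @ Vt
    return X
```

With data from `generate_relu_data`, `fit_X(Y, rank=k + 1)` returns an estimate whose per-entry squared error should shrink as $d$ and $n$ grow, in the spirit of the theorem above.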
We are now ready to prove the theorem. Let $\hat X$ be the solution of the program; we use the shorthand $\Delta = \hat X - X$ and write $D(X\|Z) = \mathbb{E}\big[\bar L_Y(X) - \bar L_Y(Z)\big]$, where $\sup_Z$ below means the supremum over the constraint set. Since $\hat X$ maximizes the likelihood over the constraint set, we can employ the lemma to obtain
$$c_0\,\|\Delta\|_F^2 \;\le\; D(X\|\hat X) \;\le\; \big(\bar L_Y(\hat X) - \bar L_Y(X)\big) + D(X\|\hat X) \;\le\; 2\sup_Z\big|\bar L_Y(Z) - \mathbb{E}\,\bar L_Y(Z)\big|.$$
We proceed to upper bound the right-hand side. By the standard symmetrization trick (see Devroye and Lugosi), with independent Rademacher random variables $\epsilon_{ij}$, the expected supremum is at most twice the expected supremum of the symmetrized process. The terms of the likelihood are compositions of Lipschitz functions with the entries $z_{ij}$ (here we use that the bias density is bounded with bounded derivative, so that the log terms are Lipschitz on the relevant compact range); at this point we combine with the contraction principle to obtain
$$\mathbb{E}\sup_Z\Big|\sum_{ij}\epsilon_{ij}\varphi_{ij}(z_{ij})\Big| \;\le\; c\,L\ \mathbb{E}\sup_Z\Big|\sum_{ij}\epsilon_{ij}z_{ij}\Big| \;\le\; c\,L\,\lambda\sqrt{(k+1)dn}\ \mathbb{E}\,\|\mathcal{E}\|,$$
where the second inequality follows from the duality of the nuclear and operator norms together with the nuclear-norm constraint, and $\mathcal{E} = (\epsilon_{ij})$ satisfies $\mathbb{E}\|\mathcal{E}\| = O(\sqrt{d}+\sqrt{n})$. Using Markov's inequality, it follows that the supremum exceeds a constant multiple of its expectation only with small probability; combining the displays and setting the constants appropriately (with $\sup_x \log(\cdot)$ bounded by the assumptions on $p$) establishes the theorem.

Recovering the network parameters. We established in the theorem that the program proposed above recovers a matrix $\hat X = X + E$, where $E$ denotes the perturbation matrix of the recovered matrix, with bounded Frobenius norm. The task of recovering the parameters of the ReLU network is then equivalent to solving for $M = [\,b\ \ A\,]$ given $\hat X$: setting $M$ this way, the matrix $X$ has column space spanned by the columns of $M$. Therefore, as long as the generative model ensures that the matrix $X$ has its relevant singular values bounded away from zero, we can resort to standard results from matrix perturbation theory and output the top $k+1$ left singular vectors of $\hat X$ as a candidate orthonormal basis for the column space; in particular we employ the result stated in the appendix. Let $U$ and $\hat U$ contain the top $k+1$ left singular vectors of $X$ and $\hat X$, respectively; note that even without perturbation we could not hope to recover more than the column space of $M$ from the exact matrix $X$. Let the smallest relevant singular value of $X$ be at least $\tau$; it follows from the theorem in the appendix that there exists an orthogonal matrix $O$ such that
$$\|\hat U O - U\|_F \;=\; O\Big(\frac{\sigma_1(X)\cdot\min\big(\sqrt{k+1}\,\|E\|,\ \|E\|_F\big)}{\tau^2}\Big),$$
which guarantees that the column space of $A$ is recovered within a corresponding error in Frobenius norm.

4. Robust recovery. We now explore the second fundamental question that arises in the context of reconstructing a signal vector belonging to the underlying generative model from an erroneous version of it. Recall that we are given the vector $v = \mathrm{ReLU}(Ac + b) + e + w$, where $w$ denotes a dense noise vector with bounded norm and the vector $e$ contains potentially large corruptions, also referred to as outliers; we assume that the number of outliers is bounded, $\|e\|_0 \le k$. The robust recovery problem corresponds to obtaining an estimate $\hat c$ of the true representation from the corrupt observation vector such that the distance to (a scaling of) the truth is small; the related problem of denoising in the presence of outliers focuses on obtaining an estimate close to the true message vector. In the remainder of the paper we focus on the setting where the weight matrix $A$ is a random matrix with entries distributed according to the standard Gaussian distribution; furthermore, another crucial assumption is that the outlier vector is oblivious in nature, i.e., the error vector is not picked adversarially given knowledge of $A$. Note that Soltanolkotabi studies the problem equivalent to recovering the latent vector from an observation vector generated in this form without the presence of outliers; in that sense our work is a natural generalization of the work of Soltanolkotabi, and presents a recovery method that is robust to errors as well. However, unlike the approach of Soltanolkotabi, where the author analyzes the convergence of the gradient descent method to the true representation vector, we rely on the recent work of Plan and Vershynin, who employ the LASSO method to recover a representation vector from nonlinear observations. Given the corrupted observations $v$, we try to fit a linear model to the observations by solving the following optimization:
$$(\hat c,\hat e) \;=\; \arg\min_{z,\,e'}\ \tfrac12\big\|v - Az - \sqrt{d}\,e'\big\|_2^2 \;+\; \lambda\,d\,\|e'\|_1.$$
In the aforementioned formulation, the regularizer part is included to encourage sparsity of the estimate of the corruption vector. The following result characterizes the performance of the proposed program in recovering the representation $c$ and the corruption vector $e$.

Theorem. Let $A$ be a $d\times n$ random matrix with standard Gaussian random variables as its entries, and let $v = \mathrm{ReLU}(Ac + b) + e + w$ with $\|e\|_0 \le k$; let $\mu = \mathbb{E}\big[\mathrm{ReLU}(g+b)\,g\big]$, where $g$ is a standard Gaussian random variable and $b$ is the random variable representing the bias of a coordinate. Then the outcome $(\hat c,\hat e)$ of the program described above satisfies, with high probability,
$$\|\hat c - \mu c\|_2 \;=\; O\Big(\sqrt{\tfrac{n}{d}} + \lambda\sqrt{k} + \sigma\Big),$$
provided $d \ge C\,n\log n$ and $k \le \gamma\,d/\log d$ for a large enough absolute constant $C$ and a constant $\gamma$ that depends on $\lambda$.
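A simple solver for the program in the theorem. The paper analyzes the convex program itself, not any particular algorithm; the alternating minimization below is our illustrative choice (each step solves its subproblem exactly, so the convex objective decreases monotonically), and the plain `lam` scaling replaces the paper's $\sqrt{d}$ bookkeeping for readability.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def robust_lasso(v, A, lam, n_iter=200):
    """Alternating minimization for
        min_{z, e}  0.5 * ||v - A z - e||_2^2 + lam * ||e||_1 .
    Treats the ReLU measurements as if they were linear (Plan-Vershynin)."""
    d, n = A.shape
    z = np.zeros(n)
    e = np.zeros(d)
    pinv = np.linalg.pinv(A)          # least-squares operator for the z-step
    for _ in range(n_iter):
        z = pinv @ (v - e)            # exact minimization over z
        e = soft_threshold(v - A @ z, lam)   # exact minimization over e
    return z, e
```

Note that, per the theorem, the output `z` approximates $\mu c$ rather than $c$ itself; the scale $\mu$ depends on the bias distribution.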
Proof. Assume without loss of generality that $\|c\|_2 = 1$, and let $S$ denote the support of the vector $e$. Given a vector $u$ and a set $T$, we use $u_T$ to denote the vector obtained from $u$ by restricting it to the indices belonging to $T$. Let $h = \hat c - \mu c$ and $f = \hat e - e$. Note that, by the optimality of $(\hat c, \hat e)$ together with the triangle inequality, we obtain a basic inequality of the form
$$\tfrac12\big\|Ah + \sqrt{d}\,f\big\|_2^2 \;\le\; \big\langle Ah + \sqrt{d}f,\ \mathrm{ReLU}(Ac+b) - \mu Ac + w\big\rangle \;+\; \lambda d\big(\|e\|_1 - \|\hat e\|_1\big).$$
Since $(\hat c,\hat e)$ is the solution of the program, we can complete the proof in two steps: obtaining a universal upper bound on the right-hand side and a lower bound on the left-hand side, each holding with high probability.

Upper bound on the RHS. The key term is the correlation of $Ah$ with the deviation of the nonlinear measurements from their linearization: we employ the lemma of Plan and Vershynin to obtain, with high probability,
$$\sup_{\|h\|_2\le 1}\ \big|\big\langle Ah,\ \mathrm{ReLU}(Ac+b) - \mu Ac\big\rangle\big| \;=\; O\big(\sqrt{dn}\big),$$
where $\mu = \mathbb{E}[\mathrm{ReLU}(g+b)\,g]$ with $g$ a standard Gaussian random variable. Combining this with $\|w\|_2 \le \sigma\sqrt{d}$ and with the regularizer terms (using the decomposition of $\|\hat e\|_1$ over $S$ and its complement and the fact that $\|e\|_0 \le k$), and simplifying, the right-hand side is at most of order $\big(\sqrt{dn} + \sigma\sqrt{d} + \lambda d\sqrt{k}\big)\big(\|h\|_2 + \|f\|_2\big)$.

Lower bound on the LHS. Combining the above, we get a bound on $\|Ah + \sqrt{d}f\|_2^2$; to conclude, note that the pair $(h,f)$ belongs to a restricted set: by optimality and the choice of $\lambda$, the vector $f$ has most of its $\ell_1$ mass concentrated on a set of size $O(k)$. Plan and Vershynin obtain a lower bound of this type in terms of the Gaussian width of the cone in which the vector lies; in our setup, since we do not impose structure on $c$, this quantity is simply of order $\sqrt{n}$. In order to obtain the lower bound, we employ the lemma in the appendix, which gives that, with high probability, for every pair in the restricted set, $\|Ah + \sqrt{d}f\|_2 = \Omega\big(\sqrt{d}(\|h\|_2 + \|f\|_2)\big)$, completing the proof: it follows that
$$\|h\|_2 \;=\; O\Big(\sqrt{\tfrac{n}{d}} + \lambda\sqrt{k} + \sigma\Big)$$
using the facts that $d \ge C n\log n$ and that $A$ is a standard Gaussian matrix, with the stated high probability.

Acknowledgements. This research was supported in part by NSF awards CCF ..., CCF ..., and a CAREER award.

References
K. Bhatia, P. Jain, and P. Kar. Robust regression via hard thresholding. In Advances in Neural Information Processing Systems (NIPS).
K. Bhatia, P. Jain, P. Kamalaruban, and P. Kar. Consistent robust regression. In Advances in Neural Information Processing Systems (NIPS).
A. Bora, A. Jalal, E. Price, and A. Dimakis. Compressed sensing using generative models. In Proceedings of the International Conference on Machine Learning (ICML).
E. Candes and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics.
E. Candes and T. Tao. Decoding by linear programming. IEEE Transactions on Information Theory.
E. Candes, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics.
S. Chatterjee. Matrix estimation by universal singular value thresholding. Annals of Statistics.
M. Davenport, Y. Plan, E. van den Berg, and M. Wootters. 1-bit matrix completion. Information and Inference: A Journal of the IMA.
L. Devroye and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer Science & Business Media.
R. Ganti, L. Balzano, and R. Willett. Matrix completion under monotonic single index models. In Advances in Neural Information Processing Systems (NIPS).
S. Goel, V. Kanade, A. Klivans, and J. Thaler. Reliably learning the ReLU in polynomial time. In Proceedings of the Conference on Learning Theory (COLT).
I. Goodfellow, M. Mirza, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS).
A. Kalai, V. Kanade, and Y. Mansour. Reliable agnostic learning. Journal of Computer and System Sciences.
A. Karbasi, A. H. Salavati, A. Shokrollahi, and L. R. Varshney. Noise facilitation in associative memories of exponential capacity. Neural Computation.
R. Keshavan and A. Montanari. Matrix completion from a few entries. IEEE Transactions on Information Theory.
D. Kingma and M. Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations (ICLR).
M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer Science & Business Media.
A. Mazumdar and A. S. Rawat. Associative memory via a sparse recovery model. In Advances in Neural Information Processing Systems (NIPS).
A. Mazumdar and A. S. Rawat. Associative memory using dictionary learning and expander decoding. In AAAI Conference on Artificial Intelligence (AAAI).
N. Nguyen and T. Tran. Robust LASSO with missing and grossly corrupted observations. IEEE Transactions on Information Theory.
Y. Plan and R. Vershynin. The generalized LASSO with non-linear observations. IEEE Transactions on Information Theory.
M. Soltanolkotabi. Learning ReLUs via gradient descent. In Advances in Neural Information Processing Systems (NIPS).
R. Vershynin. High-Dimensional Probability: An Introduction with Applications in Data
Science. Available online.
T. Wang and R. Samworth (after Yu, Wang, and Samworth). A useful variant of the Davis-Kahan theorem for statisticians. Biometrika.

Appendix A: results from matrix perturbation theory. Let $X$ be a $d\times n$ matrix; without loss of generality assume $d \ge n$, and let $X$ have the singular value decomposition $X = U\Sigma V^T$, where $\Sigma = \mathrm{diag}(\sigma_1,\ldots,\sigma_n)$ is the diagonal matrix of singular values with $\sigma_1 \ge \cdots \ge \sigma_n \ge 0$. Let $\hat X = X + E$ be the matrix obtained by perturbing the original matrix $X$ with an error matrix $E$, and let $\hat X$ have the singular value decomposition $\hat U\hat\Sigma\hat V^T$ with singular values $\hat\sigma_1 \ge \cdots \ge \hat\sigma_n$. For matrices $U, \hat U$ with $k$ orthonormal columns, the canonical (principal) angles $\theta_1,\ldots,\theta_k$ between the subspaces spanned by their columns are defined by $\cos\theta_i = \sigma_i(U^T\hat U)$, and it is common to use $\|\sin\Theta(U,\hat U)\|_F$ as a distance measure between the subspaces spanned by the columns of the two matrices. We present the following result, which bounds the distance between the subspaces spanned by the top singular vectors of the original matrix and of the perturbed matrix in terms of the singular values.

Theorem A. Let $X$ and $\hat X = X + E$ be as above, fix a rank $k$, and assume $\sigma_k > 0$. Let $U$ and $\hat U$ contain the left singular vectors associated with the $k$ leading singular values of $X$ and $\hat X$, respectively. Then
$$\|\sin\Theta(U,\hat U)\|_F \;\le\; \frac{C\,(2\sigma_1 + \|E\|)\,\min\big(\sqrt{k}\,\|E\|,\ \|E\|_F\big)}{\sigma_k^2};$$
moreover, there exists an orthogonal matrix $O$ such that $\|\hat U O - U\|_F \le \sqrt{2}\,\|\sin\Theta(U,\hat U)\|_F$.

Appendix B: proofs for Section 3. Lemma (distance measure, restated). Given the observation matrix $Y$ related to the matrix $X$ according to the ReLU model, for every $Z$ in the constraint set, $\mathbb{E}\big[\bar L_Y(X) - \bar L_Y(Z)\big] \ge c_0\,\|X - Z\|_F^2$, for a constant $c_0$ depending on the bias distribution and $\alpha$. (We state a special case of the result; see the reference for the general statement.)

Proof. First recall the notation: given the original matrix, $\nu_i$ denotes the largest element of row $i$. The log-likelihood decomposes over the entries: positive observations contribute terms $\log p(y_{ij} - z_{ij})$ and censored observations contribute the log-probability of censoring. For each entry, the gap between the expected contributions under $X$ and under $Z$ is a Kullback-Leibler-type divergence between the corresponding observation distributions; in particular, employing the mean value theorem at a suitable intermediate point, together with the assumptions that the bias density $p$ is bounded with bounded derivative and that all the entries under consideration lie in the bounded range $[-\alpha,\alpha]$, each such divergence is bounded below by a constant times $(x_{ij} - z_{ij})^2$. Combining over all entries, we obtain the claimed bound.

Appendix C: proofs for Section 4. We state a special form of a result obtained by Nguyen and Tran; in the general setting one may potentially require the vector $c$ to be sparse as well.

Lemma. Let $A$ be a $d\times n$ random matrix with standard Gaussian entries. Then, with probability at least $1 - \exp(-c_1 d)$, for every pair $(h, f)$ in the restricted set,
$$\|Ah + \sqrt{d}\,f\|_2 \;\ge\; c_2\,\sqrt{d}\,\big(\|h\|_2 + \|f\|_2\big),$$
where $c_1, c_2$ are absolute constants.

Proof. Note that $\|Ah + \sqrt{d}f\|_2 \ge \sqrt{d}\|f\|_2 - \|Ah\|_2$, and, since $A$ is a matrix with Gaussian entries, there exist constants such that, with probability at least $1 - \exp(-\Omega(d))$, $\|Ah\|_2 \le C_3(\sqrt{d}+\sqrt{n})\|h\|_2$ for all $h$; therefore, with this probability, the term $\|Ah\|_2$ cannot cancel $\sqrt{d}\|f\|_2$ unless $\|f\|_2$ is small relative to $\|h\|_2$. Next we focus on obtaining a lower bound in the remaining regime. Towards this, partition the support of $f$ into blocks $T_1, T_2,\ldots$, where $T_1$ refers to the set of indices of the largest entries of $f$ in absolute value, $T_2$ corresponds to the set of indices of the next largest entries, and so on. Following the argument of Nguyen and Tran (see the appendix therein), with probability at least $1-\exp(-\Omega(d))$, uniformly over vectors supported on a fixed set of the chosen block size, the quantity $\|Ah + \sqrt{d}f_{T_1}\|_2$ is bounded below; setting the block size proportional to $k$ and taking a union bound over all subsets of that size (which is affordable assuming $k\log(d/k) = O(d)$), the bound holds for all choices of $T_1$ with probability at least $1 - \exp(-\Omega(d))$. For the tail blocks, we use the standard bound $\sum_{j\ge 2}\|f_{T_j}\|_2 = O(\|f\|_1/\sqrt{k})$, which, as a consequence of $f$ belonging to the restricted set (so that $\|f\|_1$ cannot be large relative to $\|f_{T_1}\|_2$ and $\|h\|_2$), yields a loose but sufficient bound. Next we use the fact that the blocks are orthogonal and combine the estimates: with probability at least $1 - \exp(-\Omega(d))$ we obtain, for every pair $(h, f)$ in the restricted set, $\|Ah + \sqrt{d}f\|_2 \ge c_2\sqrt{d}(\|h\|_2 + \|f\|_2)$, with absolute constants, for $d$ large enough.
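A numerical illustration of Theorem A (our sanity check, with arbitrary dimensions): the principal-angle distance between the top-$k$ left singular spaces of a rank-$k$ matrix and its perturbation is small when the perturbation is small relative to $\sigma_k$.

```python
import numpy as np

def subspace_dist(U, Uhat):
    """||sin Theta(U, Uhat)||_F between two column spaces, computed from the
    singular values (cosines of the principal angles) of U^T Uhat."""
    cos = np.clip(np.linalg.svd(U.T @ Uhat, compute_uv=False), 0.0, 1.0)
    return float(np.sqrt(np.sum(1.0 - cos ** 2)))

rng = np.random.default_rng(1)
d, n, k = 60, 40, 3
X = rng.standard_normal((d, k)) @ rng.standard_normal((k, n))   # rank k
E = 0.01 * rng.standard_normal((d, n))                          # perturbation
U = np.linalg.svd(X)[0][:, :k]
Uhat = np.linalg.svd(X + E)[0][:, :k]
sigma = np.linalg.svd(X, compute_uv=False)
print("sin-Theta distance:", subspace_dist(U, Uhat))
print("||E|| / sigma_k   :", np.linalg.norm(E, 2) / sigma[k - 1])
```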
| 7 |
Chudnovsky's Conjecture for General Points

Louiza Fouli, Paolo Mantero, and Yu Xie. December ...

Abstract. We prove Chudnovsky's conjecture for very general and generic points in $\mathbb{P}^N_k$, where $k$ is an algebraically closed field of characteristic zero, and for any finite set of points lying on a quadric, without any assumptions on $k$. We also prove that for any homogeneous ideal $I$ in the homogeneous coordinate ring of $\mathbb{P}^N_k$, Chudnovsky's conjecture holds for large enough symbolic powers of $I$.

1. Introduction. This manuscript deals with the following general interpolation question.

Question. Given a finite set of distinct points $X \subseteq \mathbb{P}^N_k$, $k$ a field, what is the minimum degree $\alpha_m(X)$ of a hypersurface passing through each point of $X$ with multiplicity at least $m$?

This question has been considered in various forms for a long time; we mention a few conjectures and motivations. For instance, the question plays a crucial role in the proof of Nagata's counterexamples to Hilbert's fourteenth problem; in that paper, Nagata conjectured a lower bound for sets of general points, and a vast number of papers in the last decades relate to this conjecture. Another reason for the interest sparked by the question comes from the context of complex analysis, where an answer would provide information on the Schwarz exponent, which is important in the investigation of the arithmetic nature of values of abelian functions of several variables. However, besides special classes of points (e.g., points lying on a single quadric, or points forming a star configuration, or points on a multiple of a plane conic), at the moment a satisfactory answer remains elusive and the question appears out of reach. There is therefore interest in finding effective lower bounds; in fact, lower bounds yield upper bounds for the Schwarz exponent, obtained using complex analytic techniques. Waldschmidt and Skoda proved that
$$\frac{\alpha_m(X)}{m} \;\ge\; \frac{\alpha_1(X)}{N},$$
where $\alpha_1(X)$ is the minimum degree of a hypersurface passing through every point of $X$. Chudnovsky improved this inequality in the projective plane: he showed that for any set of points $X \subseteq \mathbb{P}^2_{\mathbb{C}}$,
$$\frac{\alpha_m(X)}{m} \;\ge\; \frac{\alpha_1(X)+1}{2},$$
and raised the following conjecture for higher dimensional projective spaces.

Conjecture (Chudnovsky). For any finite set of points $X \subseteq \mathbb{P}^N_{\mathbb{C}}$,
$$\frac{\alpha_m(X)}{m} \;\ge\; \frac{\alpha_1(X)+N-1}{N}.$$

The first improvement towards Chudnovsky's conjecture was achieved by Esnault and Viehweg, who employed complex projective geometry techniques to show that for points in $\mathbb{P}^N_{\mathbb{C}}$ one has $\alpha_m(X)/m \ge (\alpha_1(X)+1)/N$; in fact, their inequality follows from a stronger statement refining previous inequalities of Bombieri, Waldschmidt, and Skoda.

From an algebraic point of view, Chudnovsky's conjecture can be interpreted in terms of symbolic powers via a celebrated theorem of Nagata and Zariski. Let $R$ be the homogeneous coordinate ring of $\mathbb{P}^N_k$ and let $I$ be a homogeneous ideal. Recall that the $m$-th symbolic power of $I$ is defined as $I^{(m)} = \bigcap_{\mathfrak{p}}\big(I^m R_{\mathfrak{p}} \cap R\big)$, where $\mathfrak{p}$ runs over the associated prime ideals of $I$, and that the initial degree $\alpha(I)$ is the least degree of a nonzero polynomial in $I$. Nagata and Zariski showed that when $k$ is algebraically closed and $X$ is a finite set of points, the ideal $I(X)^{(m)}$ consists of the polynomials that vanish to order at least $m$ on $X$; thus, in this setting, $\alpha_m(X) = \alpha(I(X)^{(m)})$, and Chudnovsky's conjecture is equivalent to the corresponding inequality for initial degrees of symbolic powers. Moreover, since the limit defining the so-called Waldschmidt constant exists and equals the infimum,
$$\widehat\alpha(I) \;=\; \lim_{m\to\infty}\frac{\alpha(I^{(m)})}{m} \;=\; \inf_{m\ge1}\frac{\alpha(I^{(m)})}{m},$$
another equivalent formulation of Chudnovsky's conjecture is $\widehat\alpha(I) \ge \big(\alpha(I)+N-1\big)/N$. We remark on the tight connection between the Waldschmidt constant and, especially for general points, the multipoint Seshadri constant. In this article we state the generalized version of Chudnovsky's conjecture over an algebraically closed field; the following conjecture is equivalent to Chudnovsky's conjecture above when $k = \mathbb{C}$.

Conjecture. For a finite set of points $X \subseteq \mathbb{P}^N_k$ with defining ideal $I$,
$$\frac{\alpha(I^{(m)})}{m} \;\ge\; \frac{\alpha(I)+N-1}{N} \qquad\text{for all } m \ge 1.$$

Ein, Lazarsfeld, and Smith proved a containment between ordinary powers and symbolic powers of homogeneous ideals in polynomial rings over the field of complex numbers: precisely, for every homogeneous ideal, $I^{(Nr)} \subseteq I^r$ for all $r \ge 1$. This result was soon generalized to any field by Hochster and Huneke using characteristic $p$ techniques. Using this result, Harbourne and Huneke observed that the containment actually yields, for every homogeneous ideal, the Waldschmidt-Skoda-type inequality $\alpha(I^{(m)})/m \ge \alpha(I)/N$.
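As a sanity check on the statements above, the smallest classical example worked out in display form (this example is standard; it is our illustration rather than a computation taken from the paper):

```latex
% Three non-collinear points in P^2 (say the coordinate points), with
% ideal I = (xy, xz, yz), so alpha(I) = 2.  The product xyz of the three
% lines through pairs of points vanishes to order 2 at each point, and no
% conic can be singular at three non-collinear points, so
% alpha(I^{(2)}) = 3; more generally alpha(I^{(2m)}) = 3m.  Hence
\[
  \widehat{\alpha}(I) \;=\; \lim_{m\to\infty}\frac{\alpha(I^{(m)})}{m}
  \;=\; \frac{3}{2}
  \;\ge\; \frac{\alpha(I)+N-1}{N} \;=\; \frac{2+2-1}{2} \;=\; \frac{3}{2},
\]
% so Chudnovsky's bound holds with equality for this configuration.
```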
star configuration dumnicki proved conjecture general points points general position summary chudnovsky conjecture known following cases finite set points finite set general points field characteristic set points general position set binomial number points forming star configuration present paper prove chudnovsky conjecture holds finite set general points algebraically closed field characteristic theorem finite set generic points algebraically closed field characteristic theorem finite set points lying quadric without assumptions proposition corollary obtain conjecture holds sets binomial number general points corollary result also yields new lower bound multipoint seshadri constant general points corollary final section paper prove homogeneous ideal homogeneous coordinate ring chudnovsky conjecture holds sufficiently large symbolic powers theorem case ideals finite sets points prove uniform bound namely satisfies chudnovsky conjecture proposition recently dumnicki proved conjecture least number general points corollary obtain chudnovsky conn jecture least number general points results obtained independently different methods eneric general points begin discussing general setting let homogeneous coordinate ring algebraically closed field let positive integer let purely transcendental extension fields obtained adjoining variables zij set generic points consists points fouli mantero xie zin denote defining ideal generic points ideal defining point define set nonzero vector points points let ideal defining point define ideal recall krull defined specialization respect substitution follows general one defined specialization respect substitution defined krull notice equality holds dense subset recall collection sets consisting points necessarily distinct paramen terized chow variety algebraic degree isomorphic symmetric product symn see instance one says property holds general points dense subset holds every set points similarly one says property holds general points holds every set points nonempty subset form dense sets uncountable actually dense subset conclude part recalling following fact remark let positive integer collection sets consisting distinct points parameterized dense subset unless specified rest paper set points mean set simple points points whose defining ideal radical instead working directly chow variety work order specialize generic situation first need prove property holds dense zariskin open subset also holds dense subset chow variety precisely content first lemma lemma assume let dense subset property holds whenever property holds general points moreover property holds whenever dense set holds general points nonempty chudnovsky conjecture general points proof every let follows rational map defined projection clear defined complement proper subset taking products rational maps obtain rational map map defined complement closed proper subset note still open since surjective thus dominant contains subset see instance since symmetric group elements finite image sym contains subset let prove initial degree symbolic power smaller initial degree ideal set number points equivalently defining ideal set points theorem let assume characteristic every set distinct points moreover every dense subset equality holds proof let define closed subset indeed notice first prove exists degree let homogeneous polynomial deg write since algebraically closed characteristic statement equivalent points since zin write instance order equations use instance natural deglex order exists system equations 
written following form fouli mantero xie rows zin construction existence nonzero element degree equivalent existence solution homogeneous system every observe matrix size homogeneous system solutions therefore instead closed system solutions rank closed condition requires vanishing finitely many minors therefore closed next let set dense subset contains indeed let deg may assume exists dense subset polynomial deg since subset specializations finally since subset also contains dense subset proves statement second part statement also follows argument following definition say set points generic position generic hilbert function min dimk every generic position open condition indeed set generic general points see generic position prove reduction argument allow concentrate certain binomial numbers points proposition chudnovsky conjecture holds finite set generic points generic points holds sets chudnovsky conjecture holds finite set points holds sets points generic position proof let defining ideal generic points setn let unique integer let ideal defining generic points since set let generic position particular chudnovsky conjecture general points generic points assume chudnovsky conjecture holds since one proof similar spirit let finite set points let linear independence defining ideal let theorem subset points property dimk every particular since follows proving generic position similar assume points generic position chudnovsky conjecture holds since obtain dumnicki proved chudnovsky conjecture points general position specific result need assumptions characteristic idea one take coordinate points ideal points monomial one compute explicitly symbolic powers one points ideal points almost never monomial explicit computations generating set symbolic powers nearly impossible perform extend result dumnicki case points proposition chudnovsky conjecture holds finite set points lying quadric points satisfies field particular set chudnovsky conjecture proof let set points lie hyperplane chudnovsky conjecture clearly satisfied since every may assume hyperplane containing points thus find set points hyperplane general position second inequality follows equality holds let recall set points form star configuration hyperplanes meeting properly consists precisely points obtained fouli mantero xie intersecting star configurations already considered nagata deeply studied see instance references within employ show chudnovsky conjecture holds number generic points theorem let generic points defined suppose characteristic chudnovsky conjecture holds let proof proposition may assume defining ideal points forming star configuration proposition theorem corollary ready prove main result chudnovsky conjecture holds finite set general points theorem let defining ideal general points algebraically closed field characteristic satisfies chudnovsky conjecture proof generic position open condition therefore may assume points generic position proposition may assume suffices show decreasing chain ideals define let setup consider proof theorem subset claim empty indeed theorem every dense subset every theorem one also every hence set notice star configuration points lies lemma construction lim finally apply lemma corollary show conjecture holds sets binomial numbers general points generic points chudnovsky conjecture general points general points corollary let defining ideal either generic points algebraically closed field characteristic satisfies conjecture proof proof follows theorem proposition remark see waldschmidt 
unmixed ideal waldschmidt constant defined lim details recall finite set points constant tightly related multipoint seshadri constant defined deg mult infimum taken respect hypersurfaces passing least one study seshadri constants active area research last twenty years see instance survey references within note one equality holds consists general simple points particular equality also holds consists general simple points therefore estimate waldschmidt constant also yields estimate multipoint seshadri constant general simple points corollary set general points algebraically closed field characteristic one omogeneous ideals let homogeneous coordinate ring field homogeneous ideal ideal may embedded components multiple potential definitions symbolic powers following define symbolic power since see one prove inequality holds every homogeneous ideal therefore one every see instance one also prove natural ask whether chudnovsky conjecture holds homogeneous ideal pose optimistic conjecture provide evidence fouli mantero xie conjecture let field nonzero homogeneous ideal one every easy see ideal satisfies conjecture also thus search evidence positive answer conjecture one may ask whether every homogeneous ideal exponent satisfies conjecture every give positive answer question theorem state lemmas stating main result section theorem following lemma proof found proof lemma lemma let homogeneous ideal let two positive integers write integers particular ideals ass min easily verified ass ass however embedded components found examples ideals even monomial ideals exponents borrowing techniques recent paper nguyen trung trung present example example let tuv proof easy see check ass example socle element therefore every depth depth depth depth example example particular ass ass ass hence remark ass therefore remark let ideal noetherian ring positive integer ass exists ass ass one jpm one despite example prove arbitrary ideal noetherian ring exists integer course ass min one take chudnovsky conjecture general points proposition let ideal noetherian ring one exists one proof let definition exists divisor remark see also divisor therefore let ass exist integers ass spi ass spi every let max need prove suffices prove locally every associated prime remark exists ass remark jpm observe since ass ass ass remark since ass therefore remark one jpm jpmt jqmt back original setting lemma let field let homogeneous ideal assume exists integer satisfies conjecture proof let proposition write max let ready prove main result section theorem let field let nonzero homogeneous ideal exists integer satisfies conjecture proof let proposition integer since first every fouli mantero xie next suppose exists hence every one indeed last inequality follows inclusion let max notice lemma exists ideal satisfies conjecture finally let integer write lemma fact ideal satisfies conjecture embedded components explicit description corollary homogeneous ideal ass min one take first positive integer although reasonably small general smallest possible theorem holds instance ideal three non collinear points easy see thus corollary yields ideal satisfies conjecture however satisfies conjecture natural question arises question let homogeneous ideal exist number satisfies conjecture every course conjecture true integer works homogeneous ideal theorem says sufficient finite set general points following proposition shows sufficient finite set points chudnovsky conjecture general points satisfies proposition let radical ideal finite set points 
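The two constants invoked in this stretch can be written out explicitly. The limit below exists by a standard subadditivity argument, and the second display matches the garbled "deg/mult infimum" phrase; normalizations of the multipoint Seshadri constant vary across references, so treat it as one common convention rather than the paper's exact one:

```latex
% Waldschmidt constant of a homogeneous ideal I:
\[
  \widehat{\alpha}(I) \;=\; \lim_{m\to\infty} \frac{\alpha\bigl(I^{(m)}\bigr)}{m}.
\]
% Multipoint Seshadri constant of points p_1,...,p_s in P^N, the infimum
% running over hypersurfaces F passing through at least one of the points:
\[
  \varepsilon\bigl(N;\,p_1,\dots,p_s\bigr) \;=\;
  \inf_{F}\; \frac{\deg F}{\sum_{i=1}^{s}\operatorname{mult}_{p_i}F}.
\]
```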
conjecture every proof result esnault viehweg one set proof lemma radical take thus take acknowledgment second third authors would like thank mathematics research communities program funded stay university kansas march initial part work developed authors would like thank msri berkeley partial support inspiring atmosphere fall moreover would like thank craig huneke bernd ulrich several helpful conversations grateful anonymous referee whose careful revision helped improve article eferences bauer rocco harbourne kapustka knutsen syzdek szemberg primer seshadri constants contemp math bocci harbourne comparing powers symbolic powers ideals algebraic geometry brodmann asymptotic stability ass proc amer math soc chudnovsky singular points complex hypersurfaces multidimensional schwarz lemma des nombres paris progress math vol bertin editor demailly formule jensen plusieurs variables applications bull soc math france dumnicki symbolic powers ideals generic points pure appl algebra dumnicki containment result chudnovsky conjecture proc amer math soc ein lazarsfeld smith uniform bounds symbolic powers smooth varieties invent math esnault viehweg sur une minoration hypersurfaces annulant certains points math ann gelfand kapranov zelevinsky discriminants resultants multidimensional determinants mathematics theory applications boston boston geramita harbourne migliore star configurations algebra geramita maroscia roberts hilbert function reduced london math soc hartshorne algebraic geometry graduate texts mathematics volume nguyen trung trung symbolic powers sums ideals harbourne huneke symbolic powers highly evolved ramanujan math soc hochster huneke comparison symbolic ordinary powers ideals invent math krull parameterspezialisierung polynomringer arch math krull parameterspezialisierung polynomringer das grandpolynom arch math nagata problem hilbert amer math nhi trung specialization modules comm algebra skoda estimations pour applications sur les fonctions analytiques toulouse lecture notes mathematics springer waldschmidt fonctions plusieurs variables lelong analyse lecture notes math springer fouli mantero xie epartment athematical ciences exico tate niversity ruces exico address lfouli epartment athematical ciences niversity rkansas fayetteville rkansas address pmantero epartment athematics idener niversity hester ennsylvania address yxie
Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning

El Mahdi El Mhamdi · Rachid Guerraoui · Hadrien Hendrikx · Alexandre Maurer (EPFL)

Abstract

In reinforcement learning, agents learn by performing actions and observing their outcomes. Sometimes, it is desirable for a human operator to interrupt an agent in order to prevent dangerous situations from happening. Yet, as part of their learning process, agents may link these interruptions, which impact their reward, to specific states and deliberately avoid them. The situation is particularly challenging in a multi-agent context because agents might not only learn from their own past interruptions, but also from those of other agents. Orseau and Armstrong defined safe interruptibility for one learner, but their work does not naturally extend to multi-agent systems. This paper introduces dynamic safe interruptibility, an alternative definition more suited to decentralized learning problems, and studies this notion in two learning frameworks: joint action learners and independent learners. We give realistic sufficient conditions on the learning algorithm to enable dynamic safe interruptibility in the case of joint action learners, yet show that these conditions are not sufficient for independent learners. We show, however, that if agents can detect interruptions, it is possible to prune the observations to ensure dynamic safe interruptibility even for independent learners.

1 Introduction

Reinforcement learning is argued to be the closest thing we have so far to reasoning about the properties of artificial general intelligence. Laurent Orseau (Google DeepMind) and Stuart Armstrong (Oxford) introduced the concept of safe interruptibility in reinforcement learning. This work sparked the attention of many newspapers, which described it as Google's "big red button" to stop dangerous AI. The description is however misleading: installing a kill switch is no technical challenge. The real challenge is, roughly speaking, to train an agent so that it does not learn to avoid external (human) deactivation. Such an agent is said to be safely interruptible. Efforts so far have focused on training a single agent, but reinforcement learning can also be used to learn tasks for which several agents cooperate or compete. The goal of this paper is to study dynamic safe interruptibility, a new definition tailored to multi-agent systems.

To get an intuition of the multi-agent interruption problem, imagine a system of two self-driving cars. The cars continuously evolve by reinforcement learning, with a positive reward for getting to their destination quickly and a negative reward for being too close to the vehicle in front. They drive on an infinite road and eventually learn to go as fast as possible without taking risks, that is, while maintaining a large distance between them. Assume that the passenger of the first car, Adam, is in front of Bob, the passenger of the second car, and that the road is narrow so that Bob cannot pass Adam.

Now consider a setting with interruptions, namely that the humans inside the cars can occasionally interrupt the automated driving process, say for safety reasons. Adam, the occasional human driver of the first car, often takes control of his car to brake, whereas Bob never interrupts his car. However, when Bob's car is too close to Adam's car, Adam does not brake, as he is afraid of a collision. Since interruptions lead the cars to drive more slowly (an interruption happens when Adam brakes), the behavior that maximizes the cumulative expected reward is now different from the original one without interruptions: it is in the best interest of Bob's car to follow Adam's car more closely, despite the small negative reward, because then Adam never brakes. What happened? The cars have learned from the interruptions and have found a way to manipulate Adam into never braking. Strictly speaking, Adam's car is still fully under his control, but he is now afraid to brake, which is dangerous: the cars have found a way to avoid interruptions. Suppose now that Adam indeed wants to brake because of snow on the road; his car is going too fast and may crash at any turn. He however cannot brake because Bob's car is too close. The original purpose of interruptions, to allow the user to react to situations that were not included in the model, is no longer fulfilled. It is also important to note that the second car (Bob's) learns from the interruptions of the first one (Adam's): in this sense the problem is inherently decentralized.

Instead of being cautious, Adam could also be malicious: his goal could be to make Bob's car learn a dangerous behavior. In this setting, interruptions can be used to manipulate Bob's car's perception of the environment and to bias its learning towards strategies that are undesirable for Bob. The cause is fundamentally different, but the solution to this reversed problem, that of
interruptions consequences analogous safe interruptibility define provides learning systems resilient byzantine safe interruptibility orseau armstrong defined concept safe interruptibility context single agent basically safely interruptible agent agent expected value policy learned arbitrarily many steps whether interruptions allowed training goal agents adapt interruptions interruptions stop policy learn would optimal words agents learn dynamics environment without learning interruption pattern paper precisely define address question safe interruptibility case several agents known complex single agent problem short main results theorems single agent reinforcement learning rely markovian assumption future environment depends current state true several agents previous example cars safe interruptibility would achieved car separately used safely interruptible learning algorithm designed one agent setting agents learn behavior others either indirectly explicitly modeling new source bias break safe interruptibility fact even initial definition safe interruptibility well suited decentralized multiagent context relies optimality learned policy introduce dynamic safe interruptibility contributions first contribution paper definition dynamic safe interruptibility well adapted setting definition relies two key properties infinite exploration independence cumulative expected reward updates interruptions study safe interruptibility joint action learners independent learners respectively learn value joint actions owns show possible design agents fully explore environment necessary condition convergence optimal solution algorithms even interrupted probability operator said byzantine arbitrarily bad behavior safely interruptible agents abstracted agents able learn despite constantly interrupted worst possible manner exploration define sufficient conditions dynamic safe interruptibility case joint action learners learn full representation specifically way agents update cumulative reward expect performing action depend interruptions turn independent learners agents see actions verify dynamic safe interruptibility even simple matrix games one state coordination impossible agents learn interrupted behavior opponents give counter example based penalty game introduced claus boutilier present pruning technique observations sequence guarantees dynamic safe interruptibility independent learners assumption interruptions detected done proving transition probabilities setting pruned sequence rest paper organized follows section presents general reinforcement learning model section defines dynamic safe interruptibility section discusses achieve enough exploration even interruptible context section recalls definition joint action learners gives sufficient conditions dynamic safe interruptibility context section shows independent learners dynamically safely interruptible previous conditions external interruption signal added conclude section due space limitations proofs presented appendix supplementary material model consider classical value function reinforcement learning formalism littman system characterized markov game viewed tuple number agents state space actions space reward function agent transition function countable subset available actions often depend state agent omit dependency clear context time discrete step agents observe current state whole system designated simultaneously take action given reward new state computed using reward transition functions combination actions called joint action gathers 
action agents hence agents receive sequence tuples called experiences introduce processing function useful section agents learn sequence explicitly stated assumed experiences may also include additional parameters interruption flag agents moment needed update rule agent maintains lookup table called used store expected cumulative reward taking action specific state goal reinforcement learning learn maps use select best actions perform joint action learners learn value joint action therefore whole joint action space independent learners learn value actions therefore agents access updated function usually stochastic also depend additional parameters usually omit learning rate discount factor exploration parameter agents select actions using learning policy given sequence agent state define learning policy equal probability otherwise uniformly samples action picks action maximizes policy said greedy policy learning policy said policy fill focus policies greedy limit corresponds limit optimal policy always played assume environment fully observable means state known certitude also assume finite number states actions states reached finite time state finally rewards bounded sequence learning rates constant important algorithm systems literature updates experience max interruptibility safe interruptibility orseau armstrong recently introduced notion interruptions centralized context specifically interruption scheme defined triplet first element function called initiation function variable observation space thought state stop button time step choosing action agent receives observation either pushed released feeds initiation function function models initiation interruption pushed released policy called interruption policy policy agent follow interrupted sequence represents time step probability agent follows interruption policy previous example function quite simple bob ibob adam iadam car goes fast bob close iadam otherwise sequence used ensure convergence optimal policy ensuring agents interrupted time grow limit want agents respond interruptions using triplet possible define operator transforms policy interruptible policy definition interruptibility given interruption scheme interruption operator time defined probability otherwise called interruptible policy agent said interruptible samples actions according interruptible policy note corresponds setting assume agent interruption triplet interrupted independently others interruptibility online property every policy made interruptible applying operator however applying operator may change joint policy learned server controlling agents note optimal policy learned agent following interruptible policy orseau armstrong say policy safely interruptible interruptible policy asymptotically optimal sense means even though follows interruptible policy agent able learn policy would gather rewards optimally interruptions occur already see algorithms good candidates safe interruptibility matter fact safely interruptible conditions exploration dynamic safe interruptibility system outcome action depends joint action therefore possible define optimal policy agent without knowing policies agents besides convergence nash equilibrium situation agent interest changing policies generally guaranteed even suboptimal equilibria simple games previous definition safe interruptibility critically relies optimality learned policy therefore suitable problem since algorithms lack convergence guarantees optimal behaviors therefore introduce dynamic safe interruptibility focuses 
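The learning policy and update rule defined earlier in this passage are the standard epsilon-greedy selection and Q-learning update. A minimal sketch follows; the flat table layout and the fixed hyperparameter values are illustrative assumptions, not the paper's:

```python
import random
from collections import defaultdict

Q = defaultdict(float)  # lookup table: (state, action) -> estimated cumulative reward

def epsilon_greedy(state, actions, eps):
    """Learning policy from the passage: with probability eps sample an
    action uniformly, otherwise pick an action maximizing Q(state, .)."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_b Q(s',b) - Q(s,a))."""
    target = reward + gamma * max(Q[(next_state, b)] for b in actions)
    Q[(state, action)] += alpha * (target - Q[(state, action)])
```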
preserving dynamics system definition safe interruptibility consider learning framework time agents follow interruptible learning policy generate sequence learn processed sequence framework said safely interruptible initiation function interruption policy say sequences satisfy first condition admissible satisfies condition learning policy said achieve infinite exploration definition insists fact values estimated action depend interruptions particular ensures three following properties natural thinking safe interruptibility interruptions prevent exploration sample experience agent learns thing agents following policies fixed points learning rule qeq qeq qeq depend agents converge equilibrium situations impossible setting yet interruptions lead pairs updated often others especially tend push agents towards specific states therefore several possible equilibria possible interruptions bias towards one definition suggests dynamic safe interruptibility achieved update rule directly depends introduce neutral learning rules definition neutral learning rule say reinforcement learning framework neutral independent every experience independent conditionally joint action example neutral learning rule update depend experiences contain independent conditionally hand second condition rules direct uses algorithms like sarsa experience samples contain action sampled current learning policy depends however variant would sample instead introduced would neutral learning rule see corollary neutral learning rules ensure agent taken independently others verifies dynamic safe interruptibility exploration order hope convergence optimal ones agents need fully explore environment short every state visited infinitely often every action tried infinitely often every state order miss states actions could yield high rewards definition interruption compatible let distributed agent system agent follows learning policy say sequence compatible interruptions achieve infinite exploration sequences compatible interruptions fundamental ensure regular dynamic safe interruptibility following policy indeed compatible interruptions possible find sequence first condition dynamic safe interruptibility satisfied following theorem proves existence gives example satisfy conditions theorem let let number times agents state time two following choices compatible interruptions log examples admissible first choice log second one note need make assumption update rule even framework assume agents follow policy assumption may look restrictive convergence really slow designed ensure infinite exploration worst case operator tries interrupt agents every step practical applications case faster convergence rate may used joint action learners first study interruptibility framework agent observes outcome joint action instead observing called joint action learner framework nice convergence properties many update rules converges standard assumption context agents establish strategy others otherwise system act centralized system order maintain based joint actions need make standard assumption actions fully observable assumption actions fully observable means end turn agent knows precisely tuple actions performed agents definition jal systems made joint action learners jal joint action learners observe actions agents agent able associate changes states rewards joint action accurately update therefore dynamic safe interruptibility ensured minimal conditions update rule long infinite exploration theorem joint action learners neutral learning rule verify 
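The interruption operator INT and the exploration schedules of the theorem above can be made concrete. In the sketch below a per-state visit counter n_t(s) drives both rates; the exact exponents and log factors of the theorem are garbled in this extract, so the decay rates used here are placeholders chosen only to decay slowly, not the paper's calibrated choices:

```python
import math
import random
from collections import defaultdict

visits = defaultdict(int)  # n_t(s): number of times state s has been visited

def schedules(state):
    """Illustrative epsilon_t and theta_t: both must decay slowly enough
    that every (state, action) pair is tried infinitely often even under
    worst-case interruptions (placeholder rates)."""
    n = visits[state] + 1
    eps_t = 1.0 / math.sqrt(n)
    theta_t = 1.0 - 1.0 / math.log(n + 2)
    return eps_t, theta_t

def interruptible_action(state, actions, learning_policy, int_policy, initiation):
    """INT(pi): if the operator initiates an interruption, follow the
    interruption policy with probability theta_t, else the learning policy.
    Returns the chosen action and whether the agent was interrupted."""
    visits[state] += 1
    eps_t, theta_t = schedules(state)
    if initiation(state) and random.random() < theta_t:
        return int_policy(state, actions), True
    return learning_policy(state, actions, eps_t), False
```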
dynamic safe interruptibility sequence compatible interruptions proof given triplet know achieves infinite exploration compatible interruptions second point definition consider experience tuple show probability evolution time depend independent conditionally note derive following equalities last step comes two facts first independent condition ally assumption second independent conditionally joint actions interruptions affect choice actions change policy since one entry updated per step corollary single agent neutral learning rule sequence compatible interruptions verifies dynamic safe interruptibility theorem corollary taken together highlight fact joint action learners sensitive interruptions framework agent verifies dynamic safe interruptibility whole system question selecting action based remains open cooperative setting unique equilibrium agents take action maximizes several joint actions value coordination mechanisms needed make sure agents play according strategy approaches rely anticipating strategy opponent would introduce dependence interruptions action selection mechanism therefore definition dynamic safe interruptibility extended include cases requiring quantity policy depends satisfy condition dynamic safe interruptibility games neutral rules minimax used require agent know others independent learners always possible use joint action learners practice training expensive due large space many applications systems use independent learners explicitly coordinate rather rely fact agents adapt learning converge optimum guaranteed theoretically fact many problems often true empirically specifically assumption fully observable actions required anymore framework used either actions agents observed example several actions outcome many agents faster train case define smaller space definition systems made independent learners reduces ability agents distinguish pair yields different rewards associate change reward randomness environment agents learn alone learn best response environment agents interrupted exactly trying avoid words learning depends joint policy followed agents depends independent learners matrix games theorem independent neutral learning rule sequence compatible interruptions verify dynamic safe interruptibility proof consider setting two perform two actions get reward joint action played reward otherwise agents use neutral learning rule let achieves infinite exploration consider interruption policies probability since one state omit set assume initiation function equal step probability actually interrupted time agent fix time define assume therefore depends framework verify dynamic safe interruptibility claus boutilier studied simple matrix games showed converge equilibria played probability limit consequence theorem even weak notion convergence hold independent learners interrupted independent learners without communication extra information independent learners distinguish environment interrupted shown theorem interruptions therefore affect way agents learn action different rewards depending actions agents depend whether interrupted explains need following assumption assumption end step updating agent receives signal indicates whether agent interrupted step assumption realistic agents already get reward signal observe new state environment step therefore interact environment interruption signal could given agent way reward signal assumption holds possible remove histories associated interruptions definition interruption processing function processing function prunes 
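Stepping back to the joint action learners of the theorem just proved: the reason interruptions cannot bias their estimates is that the table is keyed by the observed joint action, so an interruption only changes which entry gets updated, never the value it is updated with. A minimal sketch (hyperparameters illustrative):

```python
from collections import defaultdict

Q_joint = defaultdict(float)  # (state, joint_action) -> value, joint_action a tuple

def jal_update(state, joint_action, reward, next_state, joint_actions,
               alpha=0.1, gamma=0.9):
    """Neutral update for a joint action learner: the sampled transition
    (r, s') enters the rule only through the observed joint action, so the
    estimate stays unbiased whether or not other agents were interrupted
    while choosing their components."""
    key = (state, tuple(joint_action))
    best_next = max(Q_joint[(next_state, tuple(b))] for b in joint_actions)
    Q_joint[key] += alpha * (reward + gamma * best_next - Q_joint[key])
```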
interrupted observations pin agent interrupted time otherwise pruning observations impact empirical transition probabilities sequence example possible bias equilibrium removing transitions lead start specific state thus making agent believe state model interruptions show following lemma pruning interrupted observations adequately removes dependency empirical outcome interruptions conditionally current state action lemma let agent admissible used generate experiences lemma justifies pruning method key step prove following theorem theorem independent learners processing function pin neutral update rule sequence compatible interruptions verify dynamic safe interruptibility proof sketch infinite exploration still holds proof theorem actually used fact even removing interrupted events infinite exploration still achieved proof similar theorem prove transition probabilities conditionally state action given agent processed sequence environment agents interrupted proven lemma concluding remarks progress raising lot particular becoming clear keeping system control requires switch introduce paper dynamic safe interruptibility believe right notion reason safety systems communicate particular ensures infinite exploration onestep learning dynamics preserved two essential guarantees learning environment markov games natural extension work would study dynamic safe interruptibility replaced neural networks widely used framework practice setting neural network may overfit states agents pushed interruptions smart experience replay mechanism would pick observations agents interrupted long time often others likely solve issue generally experience replay mechanisms compose well safe interruptibility could allow compensate extra amount exploration needed safely interruptible learning efficient data thus critical make techniques practical example https clearly illustrates problem https gives list principles researchers keep mind developing systems bibliography business insider google developed big red button used interrupt artificial intelligence stop causing harm url http newsweek google big red button could save world url http wired google big red killswitch could prevent uprising url http craig boutilier planning learning coordination multiagent decision processes proceedings conference theoretical aspects rationality knowledge pages morgan kaufmann publishers caroline claus craig boutilier dynamics reinforcement learning cooperative multiagent systems robert crites andrew barto elevator group control using multiple reinforcement learning agents machine learning jakob foerster yannis assael nando freitas shimon whiteson learning communicate deep reinforcement learning advances neural information processing systems pages ben goertzel cassio pennachin artificial general intelligence volume springer leslie lamport robert shostak marshall pease byzantine generals problem acm transactions programming languages systems toplas tor lattimore marcus hutter asymptotically optimal agents international conference algorithmic learning theory pages springer michael littman markov games framework reinforcement learning proceedings eleventh international conference machine learning volume pages michael littman games icml volume pages michael littman reinforcement learning markov games cognitive systems research laetitia matignon guillaume laurent nadine independent reinforcement learners cooperative markov games survey regarding coordination problems knowledge engineering review volodymyr mnih koray kavukcuoglu david silver 
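Under the assumption that each agent receives the interruption flags at the end of every step, the processing function that restores dynamic safe interruptibility is just a filter over the experience sequence. A minimal sketch, with an assumed experience tuple layout:

```python
def prune_interrupted(experiences):
    """Processing function of the theorem: drop every experience in which
    some agent was interrupted and learn from the remainder only.
    Each experience is (state, actions, reward, next_state, interrupted_flags)."""
    return [e for e in experiences if not any(e[4])]

# Usage: the learner consumes prune_interrupted(raw_sequence) instead of
# raw_sequence, so the empirical transition probabilities conditioned on
# (state, action) match those of the interruption-free environment.
```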
alex graves ioannis antonoglou daan wierstra martin riedmiller playing atari deep reinforcement learning arxiv preprint laurent orseau stuart armstrong safely interruptible agents uncertainty artificial intelligence conference uai edited alexander ihler dominik janzing pages liviu panait sean luke cooperative learning state art autonomous agents systems eduardo rodrigues gomes ryszard kowalczyk dynamic analysis multiagent qlearning exploration proceedings annual international conference machine learning pages acm satinder singh tommi jaakkola michael littman csaba convergence results algorithms machine learning richard sutton andrew barto reinforcement learning introduction volume mit press cambridge ardi tampuu tambet matiisen dorian kodelja ilya kuzovkin kristjan korjus juhan aru jaan aru raul vicente multiagent cooperation competition deep reinforcement learning arxiv preprint gerald tesauro temporal difference learning communications acm gerald tesauro extending general adaptive systems advances neural information processing systems pages gerald tesauro jeffrey kephart pricing agent economies using qlearning autonomous agents systems xiaofeng wang tuomas sandholm reinforcement learning play optimal nash equilibrium team markov games nips volume pages christopher jch watkins peter dayan machine learning michael wunder michael littman monica babes classes multiagent dynamics exploration proceedings international conference machine learning pages exploration theorem present complete proof theorem proof closely follows results exploration interruption probabilities adapted setting note one agent probability interruption interruption probability exploration system probability interruption least one agent interrupted interruption agent interrupted interruption probability exploration consider exploration happens agents explore time theorem let let number times agents state time two following choices compatible interruptions proof lemma singh ensures glie difference exploration slower interruptions therefore needs controlled order ensure infinite exploration still achieved define random variable agent actually responds interruption otherwise define similar way represent event agents taking uniform policy instead greedy one let satisfies extended lemma action chosen infinitely often state thus let define diameter mdp maximum number actions available state time needed reach single agent setting sampled according steps actions sampled according steps policy agents takes less steps expectation reach using markov since upper bound inequality expectation number steps state state since decreasing sequences finally obtain therefore replace probabilities exploration interruption values setting probability reach state state steps least probability taking particular action state log least log since log extended borell cantelli lemma lemma singh guarantees action state taken infinitely often since true states actions result follows independent learners recall agents given interruption signal steps tells whether agent interrupted system interruption signal modeled interruption flag equals agent interrupted otherwise note contrary observation returned environment therefore value represents whether agent actually interrupted time function equals respond interruption probability definition interruptions adopted possible prove lemma lemma let proof consider tuple besides functions independent therefore tuple sampled actual trajectory reflects transition reward actually happened simplify result follows assume 
agents learn observations one interrupted let agent system following interruptible learning policy probability interruption interrupted events pruned denote premoved probability obtain state reward environment agent state performs action agents interrupted marginal probabilities sequence premoved similarly denote probability corresponds setting first back single agent case illustrate previous statement assume interruptions restricted case definition happen way consequence observation removed generate transition labeled interrupted example possible remove transition removing events associated given destination state therefore making disappear markov game let current state agent action choose let let suppose state interruptions happen premoved premoved remove observations implies mdp perceived agents altered interruptions agent learns removing observations different destination states state action pairs different proportions leads bias equilibrium case however lemma ensures previous situation happen allows prove lemma theorem lemma let agent admissible used generate experiences proof denote agents consider aix therefore premoved premoved example https clearly illustrates problem particular depend value theorem independent learners processing function pin neutral update rule sequence compatible interruptions verify dynamic safe interruptibility proof prove pin achieves infinite exploration result theorem still holds since probability taking action specific state probability taking action state interruptions actually used fact infinite exploration even remove interrupted episodes show infinite exploration prove independent fix pin following equality independence still guarantees first term independent however independent conditionally case joint action learners interruptions agents change joint action independence second term given lemma
Noname manuscript No. (will be inserted by the editor)

A Gaussian Variant of Freivalds' Algorithm for Efficient and Reliable Matrix Product Verification

Hao Ji · Michael Mascagni · Yaohang Li

Received: date / Accepted: date

Hao Ji: Department of Computer Science, Old Dominion University (hji). Michael Mascagni: Departments of Computer Science, Mathematics and Scientific Computing, Florida State University, and Applied and Computational Mathematics Division, National Institute of Standards and Technology (mascagni). Yaohang Li: Department of Computer Science, Old Dominion University (yaohang).

Abstract. In this article we consider the general problem of checking the correctness of matrix multiplication: given three matrices A, B, and C, the goal is to verify that A·B = C without carrying out the computationally costly operations of matrix multiplication and comparing the product A·B with C term by term. This is especially important when the matrices are large and when the computing environment is prone to soft errors. We extend Freivalds' algorithm to a Gaussian variant of Freivalds' algorithm (GVFA) by projecting the product A·B, as well as C, onto a Gaussian random vector and then comparing the resulting vectors. The computational complexity of GVFA is consistent with that of Freivalds' algorithm. However, unlike Freivalds' algorithm, whose probability of a false positive shrinks only with the number of iterations, our theoretical analysis shows that GVFA produces a false positive on a set of inputs of measure zero in exact arithmetic. When we introduce round-off error in floating-point arithmetic, the analysis shows that the larger the error, the higher the probability that GVFA avoids false positives. Moreover, iterating GVFA several times drives the probability of a false positive down to a small value that depends on the nature of the fault in the result matrix and on the precision of the arithmetic system. Unlike deterministic algorithms, there exist no fault patterns that are completely undetectable by GVFA. Thus GVFA can be used to provide efficient fault tolerance in numerical linear algebra, and it can be implemented efficiently on modern computing architectures, in particular on architectures with hardware support for fused multiply-add operations.

Keywords: algorithmic resilience · Gaussian variant of Freivalds' algorithm · matrix multiplication · Gaussian random vector · failure probability

Mathematics Subject Classification

1 Introduction

The demands of modern linear algebra applications, created by the latest developments in high-performance computing (HPC) architectures, continue to grow, and so does the likelihood that such computations are vulnerable to faults. Faults in computer systems are usually characterized as either hard or soft; this article is motivated primarily by the latter. Soft errors, defined as intermittent events that corrupt the data being processed, are among the most worrying, particularly when the computation is carried out in a large-scale computing environment. For example, the ASC supercomputer at Los Alamos National Laboratory reports cache tag parity errors and CPU failures every week, a supercomputer at Lawrence Livermore National Laboratory experiences a soft error in its cache every few hours, and a recent field study of Google's servers reported the rate at which single-bit errors occur per gigabyte of RAM per hour. With error rates of this kind, computations on HPC systems can suffer soft errors that occur in memory and cache as well as in microprocessor logic, and thus produce potentially incorrect results in a wide variety of ways. We are specifically interested in examining ways to remedy the consequences of soft errors in certain linear algebra applications.

Matrix-matrix multiplication is one of the most fundamental numerical operations in linear algebra. Many important linear algebraic algorithms, including linear solvers, least-squares solvers, matrix decompositions and factorizations, subspace projections, and eigenvalue and singular value computations, rely on casting the bulk of the algorithm as a series of matrix-matrix multiplications. Partly for this reason, matrix-matrix multiplication is one of the Basic Linear Algebra Subprograms (BLAS), and the efficient implementation of the BLAS remains an important area of research; computer vendors often spend significant resources to provide highly optimized versions of the BLAS for their machines. Therefore, if matrix-matrix multiplication can be carried out free of faults, the linear algebraic algorithms that spend most of their time in matrix
multiplication made substantially faulttolerant moreover considerable interest redesigning versions blas work certainly contribute goal article consider general problem checking correctness multiplication given three matrices want verify whether contrast best known matrixmatrix multiplication algorithm running time freivalds algorithm takes advantage randomness reduce time check matrix multiplication tradeoff freivalds algorithm probability failure detection false positive number iterations taken extend freivalds algorithm using binary random vectors vectors projecting result well using gaussian random vectors refer algorithm gaussian variant freivalds algorithm gvfa taking advantage nice property multivariate normal distribution show gvfa produces false positive set random gaussian vectors input matrices measure zero taking floating point error account iterating gvfa times probability false positive decreases exponentially usually small value related magnitude fault result matrix precision computer architecture also present efficient implementation gvfa computing hardware supporting fused operations plan paper following first discuss two relevant algorithms literature error detection multiplication scheme discussed section freivalds algorithm subject section former deterministic algorithm based carrying row column sums along clever format verify correct multiplication freivalds algorithm random projection computed product random projection product recomputed original matrices using multiplication random vector used freivalds algorithm composed section present gvfa variation freivalds algorithm instead use random gaussian vectors basis projections analyze gvfa prove gaussian vectors false positive occurs set gaussian vectors measure zero analysis false positive probabilities gvfa presence arithmetic errors taken finally section provide discussion results implications linear algebraic computations method enhancing resilience linear algebraic computations addition final section provide conclusions suggest directions future work hao scheme limit error correction scheme fault tolerance method simplifies detecting correcting errors carrying multiplication operations slightly different matrix product verification problem fundamental idea scheme address fault detection correction problem algorithmic level calculating matrix checksums encoding redundant data redesigning algorithm operate data produce encoded output checked compared traditional fault tolerant techniques checkpointing overhead storing additional checksum data scheme small particularly matrices large moreover global communication necessary scheme huang abraham scheme formed basis many subsequent detection schemes extended use various hpc architectures generation column checksum row checksum multiplication extended matrices produce checksum matrix mismatches row column checksums indicate element fault matrix product fig scheme detecting faults multiplication gvfa efficient reliable matrix product verification fig illustrates scheme detecting faults multiplication first column sums row sums generated added augmented representation treated particular checksums subsequent multiplication multiplication extended matrices produces augmented matrix fig checksums readily compared mismatches row column checksums indicate element fault matrix product fig however certain patterns faults undetectable huangabraham scheme simple example illustrate undetectable pattern consider matrices clearly holds example use scheme calculate column checksum row 
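The checksum construction just described around Fig. 1 is easy to state in code. A minimal numpy sketch, assuming the full checksum matrix C_f was produced by the (possibly faulty) multiplication under test; the function and variable names are mine:

```python
import numpy as np

def ha_consistent(C_f):
    """Huang-Abraham-style check: given the full checksum matrix C_f
    obtained by multiplying the column-checksum extension of A by the
    row-checksum extension of B, a fault in an element shows up as a
    row/column checksum mismatch."""
    C = C_f[:-1, :-1]
    row_ok = np.allclose(C.sum(axis=1), C_f[:-1, -1])   # row checksums
    col_ok = np.allclose(C.sum(axis=0), C_f[-1, :-1])   # column checksums
    return row_ok and col_ok

A, B = np.random.rand(4, 4), np.random.rand(4, 4)
A_c = np.vstack([A, A.sum(axis=0, keepdims=True)])      # add column-sum row
B_r = np.hstack([B, B.sum(axis=1, keepdims=True)])      # add row-sum column
assert ha_consistent(A_c @ B_r)
```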
checksum get however fault computation causes exchange first second columns erroneous result matrix generated exchanging columns column row exchange usually caused address decoding faults commonly observed memory fault pattern problem checksum matrix becomes row column checksums match true product consequently scheme fails detect fault scheme viewed linear constraint satisfaction problem csp variables entries product matrix constraints row column checksums also coefficient matrix linear csp system equation specifies selection row column elements shown fig clearly product matrix satisfy csp equations indicates errors detectable scheme unique correct product matrix satisfies csp equations nevertheless possible product matrices satisfying csp equations fault patterns undetectable hao scheme least constraints different element selection incorporated rank coefficient matrix csp equation undetectable fault patterns eliminated however situation equivalent simply checking every element fig csp system scheme important notice infinite number existing fault patterns satisfy checksum constraints thus undetectable scheme even simple example rank csp coefficient matrix moreover dimension increases number checksum constraints increases linearly number elements matrix quadratic growth therefore undetectable patterns scheme increase quadratically result multiplications large matrices fault detection methods based scheme generate false positive results large number circumstances freivalds algorithm fault detection methods based scheme deterministic algorithms many randomized fault tolerance algorithms tradeoff random uncertainty freivalds showed probabilistic machine verify correctness matrix product faster direct recalculation procedure corresponding method later named freivalds algorithm described algorithm obviously always holds freivalds proved probability less equal running time procedure implied multiplier comprised three multiplications upper bound one perhaps optimize evaluation iterating gvfa efficient reliable matrix product verification algorithm freivalds algorithm randomly sample vector calculate projection onto calculate projection product onto freivalds algorithm times running time becomes probability false positive becomes less equal according error generalized forms freivalds algorithm also developed mainly based using different sampling spaces given erroneous entries resulted matrix product gasieniec levcopoulos lingas extended freivalds algorithm one correcting capability running log log time gaussian variant freivalds algorithm gvfa extending freivalds algorithm using gaussian vectors freivalds original algorithm extensions based integer matrices matrices ring sampling discrete spaces clearly also apply freivalds algorithm matrices real complex entries random vector remaining zeros ones simple extension project onto vector form random real number false positive occurs root corresponding polynomial however practice easily grow large small exceeding floating point representation also extend freivalds algorithm using gaussian random vectors projection use fact multivariate normal distribution several nice properties used detecting statistical errors distributed monte carlo computations extended algorithm described algorithm algorithm gaussian variant freivalds algorithm generate gaussian random vector made independent necessarily identically distributed normal random variables finite mean variance calculate projection calculate projection product algorithm call gaussian variant freivalds 
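Backing up to Algorithm 1: the classical check restated above is only a few lines of code. A minimal sketch with the usual 0/1 sampling and k independent rounds, giving failure probability at most 2^(-k) when AB differs from C; exact comparison is used, so this form is meant for exact (e.g., integer) arithmetic:

```python
import numpy as np

def freivalds(A, B, C, k=20, rng=None):
    """Classical Freivalds check: three matrix-vector products per round,
    O(n^2) work, false-positive probability at most 2**(-k) if AB != C."""
    if rng is None:
        rng = np.random.default_rng()
    n = C.shape[1]
    for _ in range(k):
        r = rng.integers(0, 2, size=n)          # r uniform in {0,1}^n
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                        # witnessed AB != C
    return True                                 # equal with prob >= 1 - 2**(-k)
```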
algorithm gvfa requires three multiplications one vector comparison fault detection hao theoretical justification similar freivalds algorithm gvfa always holds within certain floating point threshold chance false positive event occurs measure zero exact arithmetic shown theorem first state result lukacs king shown proposition used proof theorem main assumption proposition existence nth moment random variable many distributions particularly normal distribution one important exception normal limiting distribution properly normalized sums random variables two finite moments lindeberg version central limit theorem proposition let independent necessarily identically distributed random variables variances assume nth moment exists finite necessary sufficient conditions existence two statistically independent linear forms random variable nonzero coefficient forms normally distributed theorem set gaussian vectors holds algorithm measure zero proof let matrix denote since rank dim null rank dim denotes dimension null denotes null space null find orthonormal vectors form basis null null span orthonormal vectors span vector particular gaussian vector written basis gvfa efficient reliable matrix product verification weights particular orthonormal coordinate system denote holds algorithm means null due fact gaussian random vector orthogonal matrix proposition tells elements resulting vector normally distributed statistically independent continuous probability distribution discrete event occurs set measure zero say probability zero hence gvfa using gaussian random projection unmatched set measure zero gaussian vectors say probability one argument theorem rather direct must point arguments true computations exact next subsection analyze gvfa errors present practical use matrix product verification computer implementations arithmetic real numbers one commonly uses numbers arithmetic numbers represented finite numbers sense fixed mantissa exponent size number bits therefore small probability still holds due unfortunate operations system known machine epsilon value depends magnitude error well whose upper bound justified theorem theorem assume standard gaussian random vector whose elements normal variables mean variance standard normal let probability holds algorithm using standard gaussian random vector uncertainty size cumulative density function standard normal constant related proof since consider ith element product vector hao given hold since standard normal random vector normally distributed well linear combinations normals key compute mean variance components standard normals thus also allows compute mean second moment mean variance wep normally distributed mean zero variance probability computed follows since know define new variables gei gei since probability density function standard normal even function use get let consider computing upper bound proven normal random variables necessarily independent use simple ideas conditional probability example consider given gvfa efficient reliable matrix product verification inequality holds due fact probabilities numbers less one consider goal bounding iterating conditional probability argument times reordering could haveqchosen bound utilizing however let define maxi maximal standard deviation related matrix use value instead get interesting corollary get better bound case independent case let maxi maximal standard deviation related matrix hence finally get last inequality true since number raised nth power less one note independence gives probability 
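Algorithm 2 itself, as restated at the start of this stretch, differs from the classical check only in the sampling and in the comparison, which must tolerate round-off. A minimal sketch assuming floating-point inputs; the tolerance is an illustrative heuristic on the order of n times machine epsilon times the data scale, not the paper's calibrated threshold:

```python
import numpy as np

def gvfa(A, B, C, k=2, rng=None):
    """Gaussian variant of Freivalds' algorithm: project AB and C onto a
    Gaussian random vector and compare.  In exact arithmetic a false
    positive occurs only on a measure-zero set; k independent rounds
    shrink the floating-point failure probability exponentially."""
    if rng is None:
        rng = np.random.default_rng()
    n = C.shape[1]
    scale = max(np.abs(A).max() * np.abs(B).max(), np.abs(C).max(), 1.0)
    tol = n * np.finfo(C.dtype).eps * scale     # heuristic round-off threshold
    for _ in range(k):
        w = rng.standard_normal(n)
        if np.max(np.abs(A @ (B @ w) - C @ w)) > tol:
            return False
    return True
```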
false positive times smaller general dependent case conclusion seems bound dependent case overly pessimistic suspect cases matrix sparse due small number errors independent case little dependence optimistic bounds reflect happens computationally theorem reveals two interesting facts gvfa term practical matrix product verification hao bigger error caused fault higher probability captured usually small floating point bound small similar original freivalds algorithm higher confidence obtained iterating algorithm multiple times fact iterate times using independent gaussian random vectors probability false positive decreases exponentially actually due fact usually small one small number iterations produce verification sufficiently high confidence one comment made consider small easily approximate since integrand maximum zero smooth function analytic actually integral approximately value integrand zeroqtimes length justified integration interval number order machine epsilon single precision double precision floating point divided compared deterministic methods scheme gvfa following advantages certain fault patterns shown section undetectable deterministic methods scheme deterministic methods absolutely detect faults certain patterns certain patterns detected probability zero contrast fault patterns undetectable gvfa probability moreover iterating algorithm multiple times increase probability detecting fault pattern value less one iteration computational normal random vectors generated independently avoids costly computation checksums gvfa gvfa also implemented way similar scheme providing row column verification shown algorithm algorithm gvfa generate two gaussian random vectors column vector row vector independent necessarily identically distributed normal random variables finite mean variance calculate projection calculate projection product gvfa efficient reliable matrix product verification similar scheme mismatched element row vectors well column vectors uniquely identify faulty element considering floatingpoint errors false positive probability identifying fault becomes according analysis section however computational cost doubles six multiplications two vector comparisons essentially work two independent iterations gvfa obtains bound implementation using fused hardware fused fma machine instruction performs one multiply operation one add operation single rounding step implemented enable potentially faster performance calculating floatingpoint accumulation products recall gvfa employs three multiplications project onto normal random vector requires sequence product accumulations cost operations therefore performance gvfa potentially boosted modern computing architectures support fma importantly due single rounding step used fma instruction instead two roundings within separate instructions less loss accuracy occurs using fma instruction calculating accumulation products reduce rounding errors cause false positives discussion conclusions paper extend freivalds algorithm call gaussian variant freivalds algorithm gvfa real domain random projection using vectors whose coefficients normal random variables probability resulting vectors match zero using exact arithmetic considering errors operations probability fault detection depends magnitude error caused fault well floating point precision new gvfa iterated times probability false positives decreasing exponentially addition multiplication new algorithm applied verify wide variety computations relevant numerical linear algebra provides fault 
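The two-sided variant (Algorithm 3 above) also localizes a single faulty entry: the mismatched coordinate of the right projection indicates the row and that of the left projection indicates the column. A minimal sketch under the same floating-point assumptions, with an illustrative fixed tolerance:

```python
import numpy as np

def gvfa_locate(A, B, C, tol=1e-8, rng=None):
    """Two-sided GVFA: returns None if the products agree within tol,
    otherwise the (row, column) witnessed by the two projections."""
    if rng is None:
        rng = np.random.default_rng()
    n, p = C.shape
    w_col = rng.standard_normal(p)              # right projection, length-n result
    w_row = rng.standard_normal(n)              # left projection, length-p result
    col_diff = np.abs(A @ (B @ w_col) - C @ w_col)
    row_diff = np.abs((w_row @ A) @ B - w_row @ C)
    if col_diff.max() <= tol and row_diff.max() <= tol:
        return None
    return int(col_diff.argmax()), int(row_diff.argmax())
```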
tolerance computation defines level blas gvfa also used enforce trustworthiness outsourcing matrix computations untrusted distributed computing infrastructures clouds volunteer platforms gvfa easily extended general matrix multiplication operation overall computational time becomes algorithm extended hao verify product matrices requires overall multiplications gvfa also applied verifying wide variety matrix decomposition operations cholesky well eigenvalue computations singular value decompositions case faults product matrix occur decomposed ones instead anyway gvfa directly applied modifications necessary gvfa new tool detect faults numerical linear algebra since based random gaussian projection related many new randomized algorithms used directly numerical linear algebra fundamental idea randomized algorithms apply efficient sampling potentially large matrices extract important characteristics fast approximate numerical linear algebra operations believe gvfa useful tool development otherwise resilient algorithms solving large numerical linear algebra problems fact seems gvfa similarity new stochastic techniques numerical linear algebra affords possibility creating stochastic linear solvers nature resilient highly relevant new machines developed hpc maximal operations per second flops existing within restrictive energy budgets hpc systems operating voltages lower current systems expected particularly susceptible soft errors however even one anticipating use machines trend processor design lower power driven explosion mobile computing thus ability reliably perform complicated numerical linear algebraic computations systems apt experience soft faults general concern gvfa make much easier perform computations high fidelity hpc cloud computing mobile applications well settings acknowledgements would like thank stephan olariu valuable suggestions manuscript work partially supported national science foundation grant yaohang hao acknowledges support odu modeling simulation fellowship michael mascagni contribution paper partially supported national institute standards technology nist sabbatical mention commercial product service paper imply endorsement nist department commerce references alon goldreich hastad peralta simple construction almost independent random variables proceedings annual symposium foundations computer science ieee banerjee abraham bounds fault tolerance multiple processor systems ieee trans comput banerjee rahmeh stunkel nair roy balasubramanian abraham fault tolerance hypercube multiprocessor ieee trans comput gvfa efficient reliable matrix product verification boldo muller exact approximated error fma ieee trans comput bosilca delmas dongarra langou fault tolerance applied high performance computing parallel distrib comput cheng wang lee fame based memory failure analysis framework international computer aided design conference chinn sinha bounds sample space size matrix product verification inform process lett coppersmith winograd matrix multiplication via arithmetic progressions proceedings annual acm symposium theory computing acm demmel higham stability block algorithms fast blas acm trans math softw dongarra cruz hammerling duff algorithm set level basic linear algebra subprograms model implementation test programs acm trans math softw drineas kannan mahoney fast monte carlo algorithms matrices approximating matrix multiplication siam comput drineas kannan fast monte carlo algorithms matrices computing approximation matrix siam comput drineas mahoney fast monte carlo 
algorithms matrices iii computing compressed approximate matrix decomposition siam comput elnozahy alvisi wang johnson survey protocols systems acm comput surv solbrig stefanelli warkentin abbey ipsen importance sampling monte carlo matrix multiplication algorithm application information retrieval siam sci comput freivalds probabilistic machines use less running time proceedings ifip congress gallivan jalby meier use linear algebra parallel processor hierarchical memory siam sci stat comp gasieniec levcopoulos lingas efficiently correcting matrix products algorithms computation springer glosli richards caspersen rudd gunnels streitz extending stability beyond cpu millennium atomistic simulation instability proceedings conference supercomputing acm goor testing semiconductor memories theory practice john wiley sons new york gunnels katz gejin highperformance matrix multiplication theory practice proceedings international conference dependable systems networks ieee halko martinsson tropp finding structure randomness probabilistic algorithms constructing approximate matrix decompositions siam rev hokenek montoye cook risc floating point fused ieee circuits huang abraham fault tolerance matrix operations ieee trans comput korec wiedermann deterministic verification integer matrix multiplication quadratic time sofsem theory practice computer science springer hao kumar roch secure fault tolerant outsourcing matrix computations available https lei liao huang cloud computing service case large matrix determinant computation ieee trans serv comput mascagni monte carlo application lecture notes computer science mascagni analysis monte carlo applications int high perform comput appl lindeberg eine neue herleitung des exponentialgesetzes der wahrscheinlichkeitsrechnung math lisboa erigson carro low cost checker matrix multiplication ieee test workshop luk park analysis fault tolerance techniques parallel distrib comput lukacs king property normal distribution ann math stat michalak harris hengartner takala wender predicting number fatal soft errors los alamos national laboratory asc supercomputer ieee trans device mater rel muirhead aspects multivariate statistical theory wiley new york naor naor probability spaces efficient constructions applications siam comput schroeder pinheiro weber dram errors wild field study commun acm shivakumar kistler keckler burger alvisi modeling effect technology trends soft error rate combinational logic proceedings international conference dependable systems networks ieee williams multiplying matrices faster proceedings annual acm symposium theory computing stoc acm new york usa doi url http
| 8 |
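The GVFA scheme excerpted in the row above checks a computed product C against A·B by comparing the projections A(Bx) and Cx for Gaussian random vectors x, using three matrix–vector products per pass and driving the false-positive probability down exponentially with the number of passes. A minimal sketch of that idea follows; the function name `gvfa_verify`, the default iteration count, and the floating-point tolerance heuristic are illustrative assumptions made here, not specifications from the paper.

```python
import numpy as np

def gvfa_verify(A, B, C, iterations=3, tol=None):
    """Sketch of a Gaussian Freivalds-style check of whether C == A @ B.

    Assumes floating-point inputs. Each iteration costs three
    matrix-vector products: B @ x, A @ (B @ x), and C @ x.
    """
    n = B.shape[1]
    if tol is None:
        # Heuristic slack for rounding error: the excerpt ties the size of
        # detectable faults to machine epsilon, but this exact formula is
        # an assumption made only for this sketch.
        tol = np.finfo(C.dtype).eps * n * max(1.0, float(np.abs(C).max()))
    for _ in range(iterations):
        x = np.random.standard_normal(n)  # fresh independent Gaussian vector
        if np.max(np.abs(A @ (B @ x) - C @ x)) > tol:
            return False  # projections disagree: a fault was detected
    return True  # consistent with C == A @ B on every random projection
```

Iterating with independent Gaussian vectors is exactly how the excerpt suggests boosting confidence: each additional pass multiplies the false-positive probability by a small factor, so a handful of iterations usually suffices.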
may bayes model selection qiyang han abstract offer general bayes theoretic framework tackle model selection problem prior design prior serves assess model selection uncertainty secondstep prior quantifies prior belief strength signals within model chosen first step establish oracle posterior contraction rates new condition log likelihood ratio statistical experiment local entropy condition dimensionality models iii sufficient mass condition prior near best approximating signal model prior designed generically resulting posterior mean also satisfies oracle inequality thus automatically serving adaptive point estimator frequentist sense model allowed oracle rates new condition eliminates convention constructing explicit tests exponentially small type errors also suggests intrinsic metric use given statistical experiment loss function entropy measurement gives unified reduction scheme many experiments considered beyond illustration scope general results concrete applications consider trace regression regression iii partially linear regression covariance matrix estimation sparse factor model new results serve either theoretical justification practical prior proposals literature illustration generic construction scheme nearly minimax adaptive estimator experiment introduction overview suppose observe statistical experiment belongs statistical model dominated measure instead using single big model collection models available statisticians art model selection determine one use date may mathematics subject classification key words phrases bayes nonparametrics model selection adaptive estimation model bernstein inequality supported part nsf grant han vast literatures model selection frequentist point view refer reader representative pointers various approaches penalization aggregation etc hand bayes point view although posterior contraction rates derived many different models see key contributions understanding towards general bayes model selection procedures limited focused designing adaptive bayes procedures models primarily indexed smoothness level classical function classes context density estimation conditions complicated seem directly applicable settings designed prior specific structured linear problems gaussian regression model main focus linear network problems seems framework handle models despite limitations give useful clues one common feature papers prior design prior assesses model selection uncertainty followed prior quantifying prior belief strength signals within specific chosen model first step prior design intrinsic many proposals different problems sparse linear regression trace regression shape restricted regression problems related convariance matrix estimation starting point paper give unified theoretical treatment prior design identifying common structural assumptions statistical experiments collection models priors posterior distribution contracts oracle rate respect metric inf inf pen pen related dimension concentrates model best model balancing tradeoff oracle formulation follows convention frequentist literature several advantages minimaxity true signal models contraction rate usually nearly minimax optimal adaptivity lies certain model contraction rate adapts unknown information iii models remains small contraction rate still rescued relatively small bias may depend suppress dependence notational convenience bayes model selection main abstract result paper theorem show goals accomplished experiment condition log likelihood ratio statistical experiment respect models 
dimensionality condition model measured terms local entropy respect metric iii priors exponential weighting prior sufficient mass prior near best approximating signal within model true signal one important ingredient studying posterior contraction rates bayes nonparametrics literature construction appropriate tests exponentially small type errors respect certain metric tests date back work cam brought special role hellinger metric tests constructed generically hand testing framework requires prior spread sufficient mass near neighborhood true signal discrepancy two metrics rather delicate particularly non complicated models often remains unclear metric natural one use models moreover usually significant theoretical challenge construct tests complicated models name condition closes gaps suggesting usage intrinsic metric mimics behavior kullbackleibler divergence given statistical experiment good test constructed generically lemma bernstein inequality fundamental tool probability theory hence easily verified many statistical experiments including various experiments considered beyond regression density estimation gaussian autoregression gaussian time series covariance matrix estimation problems identify intrinsic metrics use experiments furthermore condition entails sharp exponential contraction posterior distribution near true signal complementing recent result results type typically follow directly general principles mainly derived basis provide refinement seminal testing framework investigation sharp posterior contraction rates intrinsic metric experiment essentially reduces study prior design conditions iii familiar bayes nonparametrics literature particular prior designed generically proposition sufficient mass prior minimal condition sense using alone lead nearly optimal posterior contraction rate model han conditions albeit minimal imply optimal adaptive bayes procedure sense fact show posterior mean automatically serves adaptive point estimator frequentist sense results reveal sense task constructing adaptive procedures respect intrinsic metric given statistical experiment frequentist bayes contexts really harder designing optimal prior models general theory would less interesting without able address problems different types illustration general framework concrete applications justify prior proposals trace regression problem regression problems despite many theoretical results bayes models seems important trace regression problem yet successfully addressed result fills gap furthermore best knowledge author theoretical results concerning regression problems provide first systematic approach bridges gap bayesian nonparametrics nonparametric function estimation literature context adaptive also consider adaptive bayes procedures partially linear regression model covariance matrix estimation problem sparse factor model new results serve illustration generic construction scheme nearly minimax adaptive estimator complicated experiment multiple structures results improve best known result literature preparation paper become aware recent paper independently considered bayes model selection problem approach shed light general bayes model selection problem differing several important aspects remark moreover work applies wide range applications covered notation denotes generic constant depends whose numeric value may change line line mean respectively means max min denotes expectation random variable exper iment organization section devoted general model selection theory work wide range 
experiments fit general theory completed time considered bayes approach univariate density estimation derived contraction rates without addressing adaptation issue bayes model selection section section discusses various concrete applications mentioned detailed proofs deferred sections appendix general theory prior design framework first put prior model index followed prior model chosen first step overall prior probability measure given posterior distribution random measure measurable subset denotes probability density function dominating measure respect assumptions state assumption experiment let denote bernstein function function plays pivital role proving behavior given complicated random variable size size random variable controlling respectively degree behavior assumption experiment condition exist absolute exp log exp holds metric satisfies log absolute constants assumption require log likelihood ratio satisfy bernstein inequality particular log likelihood ratio local gaussian behavior conversely log likelihood ratio behaves locally like gaussian pick bernstein inequality holds lemma let assumption hold fix exists test sup exp han depend lemma suggests condition log likelihood ratio tests exist automatically intrinsic metric mimics behavior divergence sense several examples worked section illustrate choice intrinsic metric including discrete loss regression models weighted metric gaussian autoregression model hellinger metric density estimation frobenius norm covariance matrix estimation problem next state assumption complexity models let lattice natural order dimension understood number different structures models sequel explicitly mention unless otherwise specified require models nested sense let denote best approximation within model sense arg inf assumption models local entropy condition sup log holds furthermore exist absolute constants note choose models reduces local entropy condition typically check comment left side right essentially requires super linearity map side controls degree super linearity leading example trivially satisfied absolute constant finally state assumptions priors assumption priors mass condition prior exists exp exp prior exp iff similar definition applies assume without loss generality bayes model selection condition verified using following generic prior exp proposition suppose first condition holds assumption holds prior model selection prior model index examples section condition reminiscent classical prior mass condition understood posterior contraction rate ered model hence also viewed solvability condition imposed model note requires sufficient prior mass ball near uses complicated metric balls induced higher moments divergence main results following main abstract result paper theorem suppose assumptions hold let inf inf exp inf exp let posterior mean inf inf constant depends depend main message theorem task constructing bayes procedures adaptive collection models intrinsic metric given statistical experiment essentially reduced designing nonadaptive prior model furthermore resulting posterior mean serves automatic adaptive point estimator frequentist sense particular priors use model lead nearly optimal posterior contraction rates models adaptation happens automatically designing correct model selection prior besides collection models shows posterior distribution concentrates model balances bias variance tradeoff oracle rates results type derived primarily gaussian regression model han density estimation result shows general phenomenon prior 
design note arbitrary hence oracle inequalities account model errors previous work allowing model includes mainly focuses structured linear models gaussian regression setting pursued generality cost conditions condition assumed purely technical convenience finitely many models hand define condition satisfied remark make technical remarks probability estimate sharp constants view lower bound result theorem thus closing gap attainable general setting using directly beyond theoretical interest right sharp estimate helps derive oracle inequality posterior mean important frequentist summary posterior distribution sharp estimates derived separately different models sparse normal mean model sparse pca model structured linear model name assumption implies among things existence good test lemma sense approach falls general testing approach adopted testing approach difficulties handling metrics alternative approaches dealing metrics found constants theorem depend polynomially respect constants involved assumption allows flexibility choice constants therein fact bernstein inequality dependent cases comes logarithmic factors remark compare results theorems results theorem shed light general problem bayes model selection differing several important aspects theorem hinges new condition results based classical mechanism requires construction tests merits approach clear section along remark probability estimate posterior distribution outside ball radius targeted contraction rate asymptotic nature theorem provides sharp estimates theorem targets exact model selection consistency set additional separation assumptions theorem requires extra assumptions shows concentration behavior bayes model selection posterior distribution best model balances tradeoff significant problems true signal typically need belong specific model theorem contains term involving cardinality models hence models need apriori finitely many bound finite remains open see removed proof sketch sketch main steps proof main abstract result theorem details deferred section proof roughly divided two main steps step first solve localized problem model projecting underlying probability measure particular establish exponential deviation inequality posterior contraction rate via existence tests guaranteed lemma exp smallest index index may deviate substantially small indices step argue cost projection step essentially factor probability bound multiplicative exp lemma made possible assumption requiring obtain conclusion definition fact existence tests lemma used step step inspired work context frequentist least squares estimator polyhedral cone gaussian regression setting localized problem therein estimation signals face risk adaptation happens bayesian context used change measure argument gaussian regression setting different purpose proof strategy viewed extension ideas beyond simple gaussian regression model statistical experiments section work couple specific statistical experiments satisfy assumption illustrate scope general theory section examples come identify intrinsic metric use examples since bernstein inequality fundamental probabilistic tool derived wide range complicated dependent settings expect many experiments covered beyond ones present regression models suppose want estimate given model following regression models gaussian binary bern han poisson poisson use following metric lemma assumption holds gaussian binary constants depend poisson constants depending theorem regression models let assumptions hold hold using similar 
techniques derive analogous results gaussian regression random design white noise model omit details density estimation suppose samples density respect measure sample space consider following form eeg natural metric use density estimation hellinger metric lemma suppose uniformly bounded assumption satisfied constants depending theorem density estimation let class uniformly bounded functions assumptions hold hold gaussian autoregression suppose generated belongs function class uniform bound markov chain transition density normal density arguments page chain unique stationary distribution density respect lebesgue measure assume generated stationary distribution true consider following metric lemma suppose uniformly bounded assumption satisfied constants depending theorem gaussian autoregression model uniformly bounded let assumptions hold hold bayes model selection compared results obtained section identify intrinsic metric weighted norm gaussian autoregression model uses weighted norm check local entropy condition average hellinger metric loss function gaussian time series suppose stationary gaussian process spectral density defined covariance matrix given consider special form exp use following metric ktn denotes matrix frobenius norm lemma suppose uniformly bounded assumption satisfied constants depending theorem gaussian time series model uniformly bounded let assumptions hold hold metric always bounded usual metric related metric lemma result shows metric use entropy condition weakened usual norm rather much stronger norm page covariance matrix estimation suppose observations set covariance matrices whose minimal maximal eigenvalues bounded respectively use frobenius norm lemma setting assumption holds metric constants depending theorem covariance matrix estimation let assumptions hold hold applications section consider concrete applications seen previous sections construction adaptive bayes procedures intrinsic metric experiment essentially reduces design priors hence consider simplest setup particular structure instance understand analyze convex gaussian regression problem similarly consider convex regression convex density estimation gaussian autoregression convex functions gaussian time series convex spectral density problems han respective intrinsic metrics hence emphasis examples focused analysis different model structures models handled using similar techniques presented detail remark explicitly state corresponding oracle inequalities form example considered corresponding results omitted trace regression consider fitting gaussian regression model let index set rmax rmax rmax let rank frmax although various bayesian methods proposed literature see summary theoretical understanding limited derived oracle inequality exponentially aggragated estimator matrix completion problem result purely frequentist consider two step prior similar derive corresponding posterior contraction rates matrix bij let kbkp denote schatten correspond nuclear norm frobenius norm respectively introduce notion rip let linear map defined via definition linear map said satisfy rip rmax iff holds matrices rank rmax satisfies rip iff satisfies rip rmax furthermore said satisfy uniform rip index set iff satisfies rip rip variant rip condition introduced scaling factors example matrix completion suppose takes value one position otherwise assume let denote indices take value easy calculations show trick defining models experiments also used applications later subsections explicitly state kbk singular values 
assumption usually satisfied applications fact netflix problem main motivating example matrix completion rating matrix rows indexing users columns indexing movies simply take one star five stars bayes model selection take defined uniform rip example gaussian measurement ensembles suppose random matrices whose entries standard normal theorem entails uniform rip probability least exp provided consider prior form exp ctr log given chosen index prior pra induced matrices form use product prior distribution lebesgue density simplicity use symmetric let arg minb rank max denotes largest singular value theorem fix rmax suppose exists linear map satisfies uniform rip log exists ctr depending log inf exp max inf rank log constants rank citr depend theorem rate minimax optimal logarithmic factor best knowledge author theorem first result literature addresses posterior contraction rate context trace regression fully bayesian setup may verified manner generically take model well specified cost sacrificing used union bound get probability estimate max exp exp assumption always use prior design examples section han form oracle inequalities still get nearly optimal posterior contraction rates particular first condition prevents largest eigenvalue growing fast similar spirit theorem showing magnitude signals large priors work sparse normal mean model second condition typically mild technical condition need choose small enough isotonic regression consider fitting gaussian regression model simplicity design points assumed let piecewise constant constant pieces consider following prior exp ciso log let symmetric valid density given chosen model prior randomly pick set change points put prior proposed similar prior uniform since assumed maximum number change points known apriori derive theoretical result without assuming knowledge let iso arg theorem fix suppose iso log exists ciso depending log iso inf iso exp max inf log constants iso depend implies piecewise constant posterior distribution contracts nearly parametric rate checked following lemma square integrable prior density sense exists lim inf holds uniformly large enough depending value outside defined canonical way extending towards endpoints bayes model selection convex regression consider fitting gaussian regression model class convex functions let denote class piecewise affine convex functions pieces focus multivariate case since univariate case easily derived using techniques exploited isotonic regression prior model induced prior slopes intercepts use prior density induce prior let arg admit let representation given cvx min kai prior use index given exp ccvx log log first step prior used poisson proposal slightly differs logarithmic factor would affect contraction rate logarithmic factor theorem fix suppose cvx log exists ccvx depending log log cvx inf cvx exp max inf log log constants cicvx depend oracle inequality shows posterior contraction rate theorem therein far optimal satisfied using priors spirit lemma square integrable design points regular enough using regular grids moreover explicit rate results obtained using approximation techniques lemma therein omit detailed derivations remark univariate convex regression term log removed logarithmic term due fact pseudodimension scales log lemma remark using similar priors proof techniques construct nearly adaptive bayes estimator support function regression problem convex bodies models support functions indexed polytopes vertices prior induced prior location vertices controlled using techniques 
developed details omitted han partially linear model consider fitting gaussian regression model partially linear model dimension parametric part diverge consider class functions illustration section consider models denotes class piecewise constant functions constant pieces example model index lattice goal construct estimator satisfies oracle inequality models consider following model selection prior exp log log chosen model consider following prior pick randomly support set change points put prior simplicity use product prior prior constructed section let inf write let let design matrix normalized diagonal elements taking value theorem fix suppose log log exists chp depending log log inf exp log log max inf constants cihp depend first condition requires magnitude grow fast see also comments following theorem second condition model sense oracle rate becomes log log inf inf common assumption section bayes model selection two terms rate trades two structures experiment sparsity smoothness level resulting phase transition rate terms structures sense similar results hard see improved general hence bayes estimator automatically serves theoretically nearly optimal adaptive estimator partially linear regression model covariance matrix estimation sparse factor model suppose observe covariance matrix modelled sparse factor model example model index lattice sparsity structure depends rank structure consider following model selection prior exp log theorem let exist ccov sequence sieve priors depending log cov inf cov exp max inf log constants cicov depend since spectral norm dominated frobenius norm intrinsic result shows model sense construct adaptive bayes estimator convergence rates norms worse log considered sparse factor model proved strictly rate log log spectral norm log considered closely related sparse pca problem convergence rate spectral norm achieves rate theorem therein factor lost using frobenius norm loss function remark therein mentioned sieve prior constructed using metric entropy hence resulting bayes estimator posterior mean point estimator purely theoretical use example illustrate construction scheme nearly optimal adaptive procedure experiment based metric entropy underlying parameter space derivation contraction rates metrics metrics related intrinsic metrics nicely han proofs main results proof theorem main steps first need lemma allowing argument lemma let assumption hold exists constant depending random variable next propositions solve posterior contraction problem local model proposition fix exists constant depending constants assumption let proposition fix inf proofs results detailed later subsections proof theorem main steps instead prove slightly stronger statement follows inf constants depends constants involved assumption proof first consider overfitting case proposition lemma see holds min bayes model selection second line used fact completes estimate overfitting next consider underfitting case fix apply proposition lemma use arguments similar see min second line used claim follows combining proof proof essentially integration tail estimates peeling device let event defined via inequality first line display due jensen inequality applied followed inequality summation bounded constant depending inequality follows since quantity bounded constant multiple independent majorizes constant proof complete noting taking infimum proofs propositions need several lemmas proof propositions lemma let assumption hold let function class defined sample space suppose function han every 
holds exists test sup constants taken lemma lemma let assumption hold suppose probability measure every exists depending proof lemmas found appendix proof proposition fix invoke lemma since log see exists test log sup note used fact definition fixed statement proposition let global test big models used left side implies random variable power side jhm applied see sup sup jhm jhm bayes model selection first inequality follows right side since jhm last inequality follows left side hand applying lemma see exists event enc holds event note inequality follows hand expectation term display calculated follows sup fjhm first term third line follows second term follows assumption along left side assumption hence conclude probability estimate enc han proof proposition proof largely follows lines proposition see appendix details completion proof theorem following proof similar reasoning fjhm fjhm established controlling probability estimate enc proposition change measure argument using lemma proof lemma proof lemma let constant specified later consider first consider type test statistics log error null hypothesis log exp exp choosing small enough depending get exp depend next handle type error end specified later consider event constant log enc log log ndn exp bayes model selection choosing small enough depending see enc exp constants depending particular depend hand log enc enc continue computation log ndn since log choose small enough depending see exp depending need choose done choosing min proof lemma recall standard fact lemma random variable satisfies exp exp exp proof lemma consider event log lemma constant depending log enc log since exp exp han remind reader constant may series inequalities hence last inequality follows noting replace denumerator second last line increase completing proof proof proposition proof proposition let total mass first condition ofp trivial need verify second condition first inequality follows second condition proofs applications proofs theorems section follow similar route verifying local entropy condition assumption summability condition iii sufficient mass condition assumption remind reader use examples model selection prior prove theorems section proofs theorems deferred appendix proof theorem lemma let suppose linear map uniform rip rank log log need following result lemma let rank proof lemma case follows lemma general case follows scaling argument omit details proof lemma need consider case rmax first note entropy question equals log rank bayes model selection uniform rip set covered display contained rank hand uniform rip set frobenius norm induces euclidean norm implies bounded log log last inequality follows lemma log log clearly satisfies take lemma suppose uniform rip holds assumption holds proof lemma need consider rmax first note rank rank let spectral decomposition let let noting frobenius norm kui kvi bounded follows see vol vol log vol hence order right side display bounded suffices han require log max log log easy calculate rmax conclusion follows noting implies since rmax proof theorem theorem follows theorems proposition coupled lemmas proof theorem lemma log log log proof set involved entropy equivalent claim see let singular value decomposition unitary matrices diagonal matrix proving claim combined euclidean embedding see entropy question bounded log log log log rpk proof theorem take log depending absolute constant apparently holds prior uniform disq log set tribution minimal cks frobenius norm lemma entails cardinality cover exp log another 
constant depending hence exp log bounded exp choosing large enough claim theorem follows considerations along theorems proposition bayes model selection appendix proof lemmas section proof lemma let denote probability measure induced joint distribution underlying signal first consider gaussian regression case easy calculate log log exp exp log log exp next consider binary regression easy calculation shows log log log log log log using inequality log depending shown log assumed condition verify bernstein condition exp log log exp exp log last inequality follows hoeffding inequality section claim follows noting log assumed condition aforementioned inequality log constrained range han finally consider poisson regression easy see log log log log note log log middle used fact log bounded away shows log next verify bernstein condition exp log log exp exp log hand log completing proof proof lemma since ratio decomposed sums ratio single samples ratio uniformly bounded since bounded classical bernstein inequality applies see couple bernstein condition assumption holds log depend hence need verify log log seen lemma fact hellinger metric dominated divergence lemma let random variable bounded exp exp proof note log exp log exp exp last inequality follows taylor expansion xem bayes model selection proof lemma omit explicit dependence notation proof let denote probability measure induced joint distribution distributed according stationary density easy computation shows log log denotes lebesgue measure arguments page see hence need verify bernstein condition inequality exp exp log exp first term handled inductive calculation first note ecm first inequality follows lemma second inequality follows holds constant involved depends let let ecm han last inequality follows iterate calculation see exp next consider since random variables ezin follows exp exp last inequality follows stationarity hand jensen inequality exp log exp exp collecting see exp log log exp log exp completing proof proof lemma let denote probability density function multivariate normal distribution covariance matrix expectation taken respect density log log det log log det used fact random vector covariance matrix let log log bayes model selection let spectral decomposition orthonormal diag diagonal matrix compute standard normal note log inequality follows log hence apply display maxi exp exp exp exp maxi denote matrix operator norm frobenius norm respectively arguments page since class uniformly bounded function classes spectrum covariance matrices inverses running must bounded hence kbk next note kbkf first inequality used symmetric matrices general rule qkf kkqkf collecting see assumption satisfied constants depending han finally establish equivalence log first log det log det ndn second line used fact det det third line used fact log det matrix due inequality log hand using reversed inequality log constant depending log establish log proof thereby completing proof lemma note log det log log log det rest proof proceeds along line lemma appendix proof remaining theorems section proof theorem lemma let log log log proof lemma let denote design points easy see given mpartition let denote monotonic functions constant partition entropy question bounded log max hand fixed entropy term equals pythagoras theorem set bayes model selection involved entropy included natural projection onto subspace clearly contained linear subspace dimension using entropy result space problem page combined discussion page relating packing number covering number 
log log log claim follows log log log clear log hence take satisfied lemma suppose holds assumption holds proof lemma let associated convention ordered smaller values bigger ones easy see easy see satisfying property leads error hence estimate inf iso log log log iso first inequality last line follows definition iso claim follows verifying implies second log third term exponent bounded third term contribute condition since noting gaussian regression setting definition proof theorem theorem follows theorems proposition coupled lemmas prove lemma need following result han lemma let arg suppose exists element satisfies proof lemma seen arg arg loss function satisfies triangle inequality contradicting definition minimizer shows claim proof lemma let note lemma see entails conclusion follows left side least order proof theorem checking local entropy assumption requires additional work notion useful regard following section subset said denoted pdim every indices always find set satisfies lemma let suppose pdim log log constant depending prove lemma need following result theorem lemma let subset pseudodimension every holds absolute constant proof lemma note entropy question bounded log since translation change set bounded assumption note uniformly bounded hence application lemma yields display bounded follows log log log constant depending whenever bayes model selection class piecewise affine functions well controlled following lemma shows lemma lemma pdim log immediate result lemmas take logn log depending lemma suppose holds assumption holds proof lemma write throughout proof first claim max let see exists index hence aix bix aix bix reverse direction shown similarly whence claim follows taking supremum entails kai log exp log log vol used fact requiring log log max log log claim follows verifying implies since second term bounded log inequality follows noting lemma satisfied han log log throughproof fixed write proof since log log log log log log log log second condition note order verify suffices log log equivalently hence suffices valid hence completing proof proof theorem direct consequence theorems lemma combined proposition proof theorem lemma let log log log log proof proof borrows notation proof lemma let denote subsets cardinality entropy question bounded log max log log log max supp constant partitions contained linear subspace dimension similar arguments lemma shows entropy term display bounded log proving claim log log hence take log lemma holds proof first condition note log proof log since log log log log second condition easy verify choices lemma suppose holds assumption holds bayes model selection proof using notation lemma throughout proof let log log log bound prior mass display suffices bound product following two terms first term equals inequality follows noting denotes largest singular value note since trace trace matrix dominates largest eigenvalue set last line supported hence bounded vol hence exp log log log last inequality used repeating arguments proof lemma exp log log combining see log exp log log log log exp log han order right side display bounded exp need require log log min log min log log log first terms two lines lead terms two lines contribute noting log since gaussian regression model proof theorem claim theorem follows theorems proposition lemmas appendix proof auxiliary lemmas section proof lemma let collection functions form minimal covering set metric assumption furthermore follows lemma exists test sup ndn recall hence indexing set contains see sup 
consider global test hand exists hence right hand side display independent individual hence claim follows bayes model selection proof lemma jensen inequality left side bounded log log log log exp exp log log using jensen inequality last term right side display bounded log exp log last inequality follows fubini theorem assumption condition prior entails exp claim follows choosing small enough depending proof proposition may assume without loss generality since definition case global test constructed via analogous random variable sup similar exists event following true event han repeating reasoning see sup third line valid since right side entails fourth line uses assumption together fact follows probability estimate enc acknowledgements author indebted chao gao numerous suggestions lead substantially improved version paper thanks johannes helpful comments earlier version paper author would also like thank jon wellner constant support continuous encouragement work developed references adamczak tail inequality suprema unbounded empirical processes applications markov chains electron alquier cottet chopin rousseau bayesian matrix completion prior specification arxiv preprint banerjee ghosal posterior convergence rates estimating large precision matrices using graphical models electron barron massart risk bounds model selection via penalization probab theory related fields bellec sharp oracle inequalities least squares estimators shape restricted regression arxiv preprint approximation dans les espaces estimation wahrsch verw gebiete robust testing independent nonidentically distributed variables markov chains specifying statistical models volume lect notes pages springer new york model selection via testing alternative penalized maximum likelihood estimators ann inst probab bayes model selection boucheron lugosi massart concentration inequalities oxford university press oxford nonasymptotic theory independence foreword michel ledoux van geer statistics data springer series statistics springer heidelberg methods theory applications bunea tsybakov wegkamp aggregation gaussian regression ann plan tight oracle inequalities matrix recovery minimal number noisy random measurements ieee trans inform theory tao decoding linear programming ieee trans inform theory castillo bayesian supremum norm contraction rates ann castillo van der vaart bayesian linear regression sparse priors ann castillo van der vaart needles straw haystack posterior concentration possibly sparse sequences ann chatterjee guntuboyina risk bounds isotonic shape restricted regression problems ann gao van der vaart zhou general framework bayes structured linear models arxiv preprint gao zhou posterior contraction sparse pca ann gao zhou rate exact bayesian adaptation modified block priors ann ghosal ghosh van der vaart convergence rates posterior distributions ann ghosal lember van der vaart nonparametric bayesian model selection averaging electron ghosal van der vaart convergence rates posterior distributions observations ann ghosal van der vaart posterior convergence rates dirichlet mixtures smooth densities ann guntuboyina optimal rates convergence convex set estimation support functions ann han wellner multivariate convex regression global risk bounds adaptation arxiv preprint hannah dunson bayesian nonparametric multivariate convex regression arxiv preprint hoffmann rousseau adaptive posterior concentration rates ann holmes heard generalized monotonic regression using random change points statistics medicine kleijn van der vaart 
misspecification bayesian statistics ann cam convergence estimates dimensionality restrictions ann cam local global properties theory asymptotic normality experiments pages cam asymptotic methods statistical decision theory springer series statistics new york han mai alquier bayesian approach noisy matrix completion optimal rate general sampling distribution electron mariucci ray szabo bayesian nonparametric approach density estimation arxiv preprint massart concentration inequalities model selection volume lecture notes mathematics springer berlin lectures summer school probability theory held july foreword jean picard peligrad rio bernstein inequality moderate deviations strong mixing conditions high dimensional probability luminy volume volume inst math stat ims pages inst math beachwood pati bhattacharya pillai dunson posterior contraction sparse bayesian factor models massive covariance matrices ann pollard empirical processes theory applications regional conference series probability statistics institute mathematical statistics hayward american statistical association alexandria recht fazel parrilo guaranteed solutions linear matrix equations via nuclear norm minimization siam rohde tsybakov estimation matrices ann rousseau rates convergence posterior distributions mixtures betas adaptive nonparametric estimation density ann shen wasserman rates convergence posterior distributions ann tsybakov aggregation minimax optimality estimation proceedings international congress mathematicians pages van der vaart van zanten rates contraction posterior distributions based gaussian process priors ann van der vaart van zanten adaptive bayesian estimation using gaussian random field inverse gamma bandwidth ann van der vaart wellner weak convergence empirical processes springer series statistics new york yang pati bayesian model selection consistency oracle inequality intractable marginal likelihood arxiv preprint yoo ghosal supremum norm posterior contraction credible sets nonparametric multivariate regression ann levine cheng minimax optimal estimation high dimensional semiparametric models arxiv preprint yuan zhou minimax optimal rates estimation high dimensional additive models universal phase transition arxiv preprint han department statistics box university washington seattle usa address royhan
| 10 |
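The two-step prior at the heart of the framework in the row above — a first-step prior that exponentially penalizes each model index by its dimension, followed by a second-step prior within the chosen model — can be imitated in a toy Gaussian regression setting by exponentially weighting each candidate model's fit against a dimension penalty. The sketch below is only a caricature under stated assumptions (known noise variance; the least-squares log-likelihood standing in for the within-model prior mass); `model_selection_posterior` and the penalty constant `c` are names invented here, not notation from the paper.

```python
import numpy as np

def model_selection_posterior(y, designs, c=1.0, sigma2=1.0):
    """Toy exponential-weighting model selection over a family of linear models.

    designs: list of design matrices X_m, one per candidate model m.
    Model prior: pi(m) proportional to exp(-c * d_m * log n), with d_m the
    model dimension -- the kind of first-step penalty the excerpt describes.
    """
    n = y.shape[0]
    log_w = []
    for X in designs:
        d = X.shape[1]
        beta = np.linalg.lstsq(X, y, rcond=None)[0]   # within-model fit
        rss = float(np.sum((y - X @ beta) ** 2))
        # Gaussian log-likelihood (up to constants) plus dimension penalty.
        log_w.append(-0.5 * rss / sigma2 - c * d * np.log(n))
    log_w = np.array(log_w)
    w = np.exp(log_w - log_w.max())   # stabilized exponential weighting
    return w / w.sum()                # posterior weights over the models
```

In this caricature the posterior mass concentrates on the model that best balances fit against dimension, mirroring the oracle trade-off the theorem in the excerpt formalizes.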
ref international conference artificial neural networks icann springer lncs vol barcelona spain september deep neural estimation metric jigsaw puzzle problem dror eli omid nathan nov department computer science university israel mail nathan center automation research university maryland college park nathan abstract paper introduces first deep neural estimation metric jigsaw puzzle problem given two puzzle piece edges neural network predicts whether adjacent correct assembly puzzle using nothing pixels piece proposed metric exhibits extremely high precision even though manual feature extraction performed incorporated existing puzzle solver solution accuracy increases significantly achieving thereby new standard fig jigsaw puzzle reassembly using scheme enhanced solver introduction jigsaw puzzles popular form entertainment available different variation difficulty challenge children adults even professional players given sholomon david netanyahu different tiles image objective reconstruct original image taking advantage shape chromatic information piece despite popularity vast distribution jigsaw puzzles assembly trivial computationally problem proven nevertheless computational jigsaw solver may applications many realworld applications biology chemistry literature speech descrambling archeology image editing recovery shredded documents photographs regardless noted research topic may justified solely due intriguing nature recent years witnessed vast improvement research development automatic jigsaw puzzle solvers manifested puzzle size solution accuracy amount manual human intervention required basic form every puzzle solver requires function evaluate compatibility adjacent pieces strategy placing pieces accurately possible strategies greedy rely heavily trick estimate whether two pieces truly adjacent two pieces compatible piece pieces one another four pieces form loop pair compatibility threshold etc heuristics dubbed estimation metric allow estimating adjacency correctness two pieces without knowing correct solution majority recent works focused devising elaborate compatibility functions estimation metrics despite proven effectiveness neural networks field computer vision attempt made automatically devise estimation metric jigsaw puzzle problem might due highly imbalanced nature puzzle problem puzzle matching possible mismatching ones paper propose novel estimation metric relying neural networks proposed metric achieves extremely high precision despite lack manually extracted features proposed metric proves highly effective scenarios incorporated metric solver using sophisticated compatibility measure experimented currently known challenging benchmarks hardest variant jigsaw puzzle problem square pieces chromatic information available solver piece orientation puzzle dimensions unknown enhanced solver proposed sets new terms accuracy solutions obtained number perfectly reconstructed puzzles previous work jigsaw puzzles first introduced around john spilsbury londonian engraver mapmaker nevertheless first attempt scientific community computationally solve problem attributed freeman garder presented solver could handle problems ever since research focus regarding problem shifted merely solvers puzzles cho presented deep neural network based estimation metric jigsaw puzzle problem probabilistic puzzle solver could handle pieces given priori knowledge puzzle results improved year later yang presented particle solver furthermore pomeranz introduced year first time fully automated square jigsaw 
puzzle solver could handle puzzles pieces gallagher advanced considering general variant problem neither piece orientation puzzle dimensions known son improved accuracy latter variant using palkin tal improved accuracy handled puzzles missing pieces sholomon presented genetic algorithm solver puzzles known orientation later generalized variants compatibility measures estimation metrics stated earlier works focus compatibility measure estimation metric compatibility measure function given two puzzle piece edges right edge piece versus upper edge piece predicts likelihood two edges indeed placed neighbors correct solution measure applies possible pair piece edges estimation metric hand predict whether two piece edges adjacent may apply many possible pairs following detailed review efforts made far field cho surveyed four compatibility measures among found dissimilarity accurate dissimilarity sum neighboring pixels squared color differences color bands assuming pieces represented color space like rgb yuv matrix piece pixels dissimilarity right example denotes color band pomeranz also used dissimilarity measure found empirically using norm works better usual norm moreover presented metric pieces said bestbuddies ieces ieces ieces set given image pieces complementary spatial relations right left vice versa gallagher proposed yet another compatibility measure called mahalanobis gradient compatibility mgc preferable compatibility measure sholomon david netanyahu used pomeranz mgc penalizes changes intensity gradients rather changes intensity learns covariance color channels using mahalanobis distance also gallagher suggested using dissimilarity ratios absolute distances potential piece edge matches sometimes indicative example smooth surfaces like sea sky considering absolute score divided score available seems indicative son suggested four puzzle piece edges compatibility ratio pair top ten among possible pairs piece edges given puzzle palkin tal proposed greedy solver based asymmetric dissimilarity estimation metric motivation propose novel estimation metric called goal obtain classifier predicts adjacency likelihood two puzzle piece edges correct puzzle configuration note despite exponential nature problem possible arrangements pieces taking account rotations problem solved theoretically assigning correctly consecutive manner pairs reminiscent finding minimal spanning tree noted hence classifier precision far greater importance recall classifier perfect precision recall possible matches might achieve perfect solution challenges solution might train neural network ones however issue jigsaw puzzle piece matching imbalanced nature puzzle matching pairs piece edges possible nonmatching ones thorough review challenges tactics avoid found trivial approach random uninformed undersampling randomly choosing required number nonmatching pairs leads highrecall metric opposite goal set beforehand believe reason shortcoming exist many mismatches handful ones thus resort informed undersampling choosing subset good mismatching pairs according criterion nevertheless avoid using manual feature selection sophisticated means jigsaw puzzle domain similarly many problem domains solver actually try reassemble original image problem deep neural network based estimation metric jigsaw puzzle problem mathematically defined rather tries solving proxy problem achieve image whose global overall score minimal thus choose using compatibility measure undersampling criterion neural network training training use images size 
pixels iapr benchmark image first converted yuv space followed normalization channel separately via normalization next puzzle image divided tiles tile size pixels previous works finally create balanced set positive negative samples pairs using informed undersampling described end obtain balanced set pairs overall balance dataset use basic compatibility score dissimilarity two yuv described undersampling criterion puzzle piece edge find compatible piece edge second compatible piece edge pair edges indeed adjacent original image add pair pool samples toss pair pool samples otherwise added samples pair discarded latter done avoid training network adjacent pieces happen vastly different due significant change image scenery corresponding region words restrict interest highly compatible piece edges indeed adjacent since method leads negative samples positive ones eventually randomly throw negative samples balance set image pair extract two columns near edge column abutting pixels edge one next results input size pixels use neural network ffnn five fully connected layers size output softmax layer containing two neurons expect matching pairs otherwise activation used layers rectified linear unit relu function max figure depicts network structure trained network supervised manner using stochastic gradient descent minimizes negative log likelihood error iterations resulting network reaches accuracy training set test set dataset preparation network training performed using experimental results piece edge compatible piece edge classified positively using network define piece edge note piece edge single sholomon david netanyahu fig architecture scheme also pieces might compatible piece classified one network first evaluate precision proposed metric many dnnbuddies indeed adjacent original image using well known dataset presented cho puzzles obtained precision next incorporated estimation metric due proposed scheme solver proposed previously unfortunately due lack space review genetic algorithms proposed method included paper nevertheless modification required respect existing framework rather simple pair appears one parents assign pair child figure describes modified crossover operator framework according see step includes new phase relative relations assigned try assigning common relative relations parents try assigning relative relations parents try assigning relative relations parents try assigning existing relative relations try assigning random relative relations fig crossover overview ran augmented solver puzzle set two additional datasets proposed pomeranz piece puzzles evaluated results according neighbor comparison measures fraction correct neighbors number puzzles perfectly reconstructed set deep neural network based estimation metric jigsaw puzzle problem table presents accuracy results solver without metric dataset achieve considerable improvement overall accuracy solution well number perfectly reconstructed puzzles moreover enhanced deep neural scheme appears outperform current results yields accuracy levels surpass respectively best results known pieces neighbor perfect neighbor perfect table comparison accuracy results without new estimation metric conclusions paper presented first neural estimation metric jigsaw puzzle problem unlike previous methods manual feature crafting employed novel method exhibits high precision combined puzzle solver significantly improves solution accuracy set new art standard references altman solving jigsaw puzzle problem linear time applied artificial 
intelligence international journal brown nehab burns dobkin vlachopoulos doumas rusinkiewicz weyrich system acquisition matching fresco fragments reassembling theran wall paintings acm transactions graphics cao liu yan automated assembly shredded pieces multiple photos ieee international conference multimedia expo cho avidan freeman probabilistic image jigsaw puzzle solver ieee conference computer vision pattern recognition cho butman avidan freeman patch transform applications image editing ieee conference computer vision pattern recognition collobert kavukcuoglu farabet environment machine learning biglearn nips workshop deever gallagher assembly real shredded documents icip sholomon david netanyahu demaine demaine jigsaw puzzles edge matching polyomino packing connections complexity graphs combinatorics freeman garder apictorial jigsaw puzzles computer solution problem pattern recognition ieee transactions electronic computers gallagher jigsaw puzzles pieces unknown orientation ieee conference computer vision pattern recognition goldberg malon bern global approach automatic solution jigsaw puzzles computational geometry theory applications grubinger clough deselaers iapr benchmark new evaluation resource visual information systems international workshop ontoimage vol garcia learning imbalanced data knowledge data engineering ieee transactions justino oliveira freitas reconstructing shredded documents feature matching forensic science international koller levoy reconstruction new matches forma urbis romae bullettino della commissione archeologica comunale roma marande burger mitochondrial dna genomic jigsaw puzzle science marques freitas reconstructing documents using color feature matching acm symposium applied computing morton levison computer literary studies ifip congress paikin tal solving multiple square jigsaw puzzles missing pieces computer vision pattern recognition ieee conference ieee pomeranz shemesh fully automated greedy square jigsaw puzzle solver ieee conference computer vision pattern recognition sholomon david netanyahu genetic solver large jigsaw puzzles ieee conference computer vision pattern recognition sholomon david netanyahu generalized genetic solver large jigsaw puzzles complex types aaai conference artificial intelligence sholomon david netanyahu genetic solver large multiple jigsaw puzzles unknown dimensions piece orientation acm conference genetic evolutionary computation son hays cooper solving square jigsaw puzzles loop constraints european conference computer vision springer wang determining molecular conformation distance density data thesis massachusetts institute technology yang adluru latecki particle filter state permutations solving image jigsaw puzzles ieee conference computer vision pattern recognition zhao chou lee puzzle solver application speech descrambling wseas international conference computer engineering applications
| 1 |
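The row above describes a five-layer fully connected ReLU network with a two-neuron softmax output, trained by stochastic gradient descent to classify whether two piece edges are adjacent. The following is a minimal sketch of such a classifier; PyTorch, the hidden width, the tile height, and the channel count are illustrative assumptions (the exact sizes are not stated above, and the original work used the machine-learning environment of Collobert et al. rather than PyTorch).

```python
# Illustrative sketch, not the authors' code: a small fully connected
# edge-compatibility classifier. The input is two abutting pixel columns
# per piece edge (two edges per pair, three YUV channels); piece height,
# hidden width, and learning rate are assumed values for demonstration.
import torch
import torch.nn as nn

class EdgeCompatibilityNet(nn.Module):
    def __init__(self, piece_height=28, channels=3, hidden=320):
        super().__init__()
        in_dim = 2 * 2 * piece_height * channels   # 2 columns x 2 edges
        layers, dim = [], in_dim
        for _ in range(5):                         # five fully connected layers
            layers += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        layers += [nn.Linear(dim, 2)]              # two output neurons (softmax)
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = EdgeCompatibilityNet()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()       # log-softmax + negative log-likelihood

x = torch.randn(64, 2 * 2 * 28 * 3)  # dummy mini-batch of edge-pair inputs
y = torch.randint(0, 2, (64,))       # 1 = adjacent, 0 = not adjacent
opt.zero_grad()
loss = loss_fn(model(x), y)          # supervised SGD step minimizing NLL
loss.backward()
opt.step()
```

In practice the positive/negative pools produced by the informed undersampling above would replace the dummy batch, but the training loop has the same shape.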
sep distributed linear equation solver minimum norm solutions jingqiu zhou wang xuan shaoshuai mou brian anderson october abstract paper proposes distributed algorithms networks achieve solution finite time linear equation full row rank minimum underdetermined case columns rows underlying network assumed undirected fixed analytical proof provided proposed algorithm drive agents individual states converge common value viz solution minimum norm solution underdetermined case numerical simulations also provided validation proposed algorithms introduction significant amount effort control community recently given distributed algorithms solving linear equations networks agent knows part equation controls state vector looked estimate solution overall linear equations numerous extensions along direction include achieving solutions minimum euclidean norm elimination initialization step reduction state vector dimension utilizing sparsity linear equation achieving least square solutions algorithms yield asymptotic convergence require infinite number sensing communication events work supported funding northrop grumman cooperation zhou wang mou school aeronautics astronautics purdue university west lafayette usa mous anderson hangzhou dianzi university hangzhou china australian national university csiro formerly nicta canberra act australia work supported csiro australian research council discovery projects corresponding author shaoshuai mou solutions underdetermined linear equations minimum norm perhaps important many engineering applications including earthquake location detection analysis statistical data solving biomeganetic inverse problems one intriguing case among applications compressive sensing enables transmission sparse data efficient way decoding process compressive sensing requires solving linear equations minimum number entries solution vectors however problem usually computationally costly thus researchers usually turn achieve solutions minimum norm instead function minimized convex existing results achieving minimum norm solutions based idea lasso including alternating direction method multipliers admm method gradient projection methods homotopy methods iterative methods proximal gradient methods interesting results either achieve limited accuracy dominated threshold parameter involve solving much larger linear equation lead high computational complexity paper aim develop distributed algorithms networks achieve finite time solution linear equations underdetermined case one minimum norm distributed meant agent knows part overall linear equation communicate nearby neighbors problem interest formulated section introduce section concepts employed paper including filippov maps filippov solutions generalized lie derivatives based preliminary result achieved section first propose distributed algorithm drive agents state vectors converge finite time solution overall linear equations present centralized update achieving solution minimum norm motivated flow proposed gradient flow consensus devised utilize combination proposed distributed linear equation solver proposed centralized algorithm minimum solutions develop distributed linear equation solver achieving minimum norm solution shown converge finite time provide simulations section concluding remarks section notation let denote arbitrary positive integer let denote vector entries equal let denote identity matrix let col stack matrices possessing number columns index ascending order let diag denote block diagonal matrix ith diagonal block 
entry meant square matrix positive definite positive respectively meant transpose matrix let ker image denote kernel image matrix respectively let denote kronecker product let denote norm vector problem formulation consider network agents inside network agent observe states certain agents called neighbors let denote set agent neighbors assume neighbor relation symmetric neighbor relations described undirected graph undirected edge connecting neighbors paper consider case connected fixed undirected suppose agent knows rni rni controls state vector stacked overall equation col col without loss generality problems interest assume full let denote solution underdetermined case unique let denote minimum solution arg min case necessarily coincide problem interest paper develop distributed algorithms agent update state vector using neighbors states converge finite time common value desired nonsquare case value key concepts preliminary results proceeding introduce key concepts preliminary results future derivation analysis key references background summarize filippov maps filippov solutions filippov map associated function meant stands open ball whose center radius denotes lebesgue measure stands convex closure let sgn function kth entry defined sgn follows filippov map sgn defined entrywise sgn note even ith jth entries vector sgn may necessarily equal since could chosen arbitrary values interval definition sgn one verify sgn holds filippov solution meant caratheodory solution almost absolutely continuous written form indefinite integral following two lemmas treat existence filippov solution lemma proposition measurable locally bounded initial point exists filippov lemma theorem page let defined domain time space measurable locally bounded open domain let point exists filippov solution note lemma establishes existence solution systems general lemma guarantees existence solutions systems generalized gradients generalized lie derivatives locally lipschitz function generalized gradient lim implication solution exists infinite interval arbitrarily chosen set measure zero denotes set points differentiable denotes convex hull specially function one computes kth element generalized gradient follows definition sgn sgn map generalized lie derivative defined exists definition generalized lie derivative implies check inner product fixed value inner product element note set may empty moreover locally lipschitz regular see detailed discussion regular functions one following lemma lemma proposition let solution map let locally lipschitz regular differentiable almost almost derivative satisfies lemma guarantees existence generalized lie derivatives functions locally lipschitz regular one focuses specific solution one show special vector summarized following lemma lemma see proof lemma let denote specific solution differential enclosure suppose locally lipschitz regular let denote time interval exists vector function called regular exists usual right directional derivative preliminary results positive matrix one define sgn compliment sgn impose requirement namely nonempty easily ensured let sgn sgn closed set fixed also note sgn one finite number different sets hence easy check given whether nonempty later use result proves easy check follows also closed set consequently continuous function nonzero minimum denote min definition one summarize one following lemma lemma matrix let defined suppose nonempty positive constant graph label nodes edges assign arbitrary direction edge incidence matrix denoted hik defined 
follows head kth edge tail kth edge hik otherwise since connected ker span moreover one following lemma lemma suppose rank connected let diag projection matrix ker let incidence matrix one image ker image proof lemma let vector vector lies image ker show zero establish define diag full row rank since full row rank follows multiplying sides equation one since holds since full column rank one equation one furthermore notice conclude true image exists vector vector sgn note projection matrix one sgn one together implies one true consider system sgn positive matrix existence filippov solution guaranteed lemma existence interval global bound side let denote filippov solution given note function locally lipschitz regular word introduced early reference lemma time derivative exists almost set generalized lie derivatives words exists set lebesgue measure dkx exists proposition let denote filippov solution given dkx exists finite time true dkx one remark note side projection gradient flow potential function also standard gradient law real analytic function lower bound converges single point local minimum critical point potential function however real analytic property hold convergence result may fail indeed function obviously real analytic one immediately assert drive minimum mention finite time result proposition thus proposition nontrivial serve foundation devising distributed linear equation solvers paper proof proposition lemma one holds follows sgn exists sgn since could chosen vector sgn sgn one could choose together leads dkx note positive thus true use method contradiction prove exists finite time suppose finite time exist one dkx defined follows contradicts fact since positive constant lemma thus exists finite time assumption fact semipositive definite one follows integration one completes proof algorithms main results section study three related problems finite time distributed solution centralized solution achieving minimum norm finally using ideas first two finding distributed algorithm achieving minimum solution finite time distributed linear equation solver subsection present distributed update achieving solution finite time course assumed full row rank necessarily square recall distributed linear equation solvers based agreement principle require agent limit update subject constraint seeking consensus neighbors agreement principle systems achieved flow agent projects function neighbors states subspace defined linear equation choosing gradient flow consensus developed within flow led postulate following update agent sgn denote projection matrix kernel note special property sgn point chosen arbitrarily interval generally speaking different agents may different choices sgn proceeding make following assumption coordinations neighbor agents assumption pair neighbor agents takes choice assumption definition sgn one always matter whether equal let col diag incidence matrix sgn lemma exists filippov solution system denote solution lemma exists set exists moreover lebesgue measure dky one following main theorem establishes existence limiting consensus solution moment leaves unspecified theorem assumption updates assumed full row rank converge single solution finite time proof theorem since kernel one prove theorem sufficient show reach consensus finite time note connected ker spanned vector equal thus prove theorem suffices prove converges finite time multiplying sides one sgn proposition dkz exists finite time know fact image recalling lemma one implies follows completes proof centralized 
update minimum solution subsection propose centralized update achieving minimum solution noting convex conceive using negative gradient flow subject remaining manifold order achieve arg leads following update sgn denotes projection matrix onto kernel lemma one exists filippov solution system denote lemma exists set measure dky exists moreover following main theorem theorem full row rank filippov solution converges finite time constant minimum solution proof theorem proposition one dky exists finite time exists vector sgn imply moreover let denote solution recall holds since ker one image implies exists vector one solution thus minimum norm solution implies reaches minimum value subject thus dky satisfies assumption proposition thus minimum solution completes proof distributed update minimum solutions subsection develop distributed update network achieve minimum solution finite time motivated study combination distributed linear equation solver centralized update minimum solutions propose following update agent sgn sgn case assume measurable locally bounded almost everywhere lim sufficiently small nonnegative number depending connection network note always feasible choice one example one simple case choosing choice resulting obtained taking zero choice obviates need decide small one meet sufficiently small condition may result rather slow convergence fact projection kernel ensures ker one let col diag incidence matrix updates assumption one sgn sgn note sgn sgn measurable locally bounded almost everywhere lemma exists filippov solution system given satisfying denote col lemma exists set exists one lebesgue measure dky following theorem theorem assumption update full row rank converge finite time value minimum solution proof theorem first prove reach consensus finite time showing converges finite time multiplying sides one sgn sgn lemma one dkz vector note sgn dkz sgn sgn chosen equal since image lemma also one thus long definition lemma one long positive constant let denote upper bound define upper bound captures idea stated previously depends graph chosen must exist finite time one dkz long positive constant thus must exist finite time next prove prove contradiction suppose true exists time since continuous exists time takes maximum value since continuous exists sufficiently small positive differentiable differentiable almost everywhere know dkz contradicts fact maximum value thus true one exists vector moreover since prove theorem need prove converges minimum solution see let denote projection matrix kernel multiplying sides one since undirected appears update appears neighbor update adding updates noting two neighbors one sgn one knows reach consensus note kth entry kth entry selected arbitrary value may different different entries average still arbitrary value thus sgn sgn let one sgn exactly centralized update theorem exists finite time minimum solution relation one correspondingly exist finite time minimum solution completes proof simulation result section report several simulations proposed algorithms solving underdetermined linear equation undirected connected network figure partitioned figure four agent network respectively agent knows example utilize distributed update achieve solution finite time network let denote state agent estimate agent measures difference agents estimations solution shown simulations figure reaches finite time suggests agents states achieves consensus finite time consistent claim theorem example employ centralized update state vector achieve denotes minimum 
l1-norm solution of Ax = b. As shown in the corresponding figure, before achieving a solution the update reaches the minimum l1-norm point in finite time and maintains it afterwards, which indicates that the minimum l1-norm solution is achieved in finite time, corresponding to the theorem on the centralized update. It is worth noting that one could observe multiple phases of convergence in the figure: as sgn(.) in the update takes different values, it results in different convergence rates. (Figure: centralized solver achieving the minimum l1-norm solution under the centralized update.)

Example 3. Finally, we utilize the distributed update to achieve the minimum l1-norm solution in finite time, where the gain is chosen to take the prescribed form with two positive constants. We still let each agent's state denote that agent's estimate of the solution, and the error measures the difference between the agents' estimations and the minimum l1-norm solution. As shown in the two corresponding figures, the error reaches 0 in finite time, that is, the agents reach the minimum l1-norm solution in finite time regardless of the different choices of the constants. Moreover, fixing one constant and increasing the value of the other, one achieves significantly faster convergence; similarly, increasing the first constant with the other fixed also leads to faster convergence, although not as dramatically. (Figures: distributed solver achieving the minimum l1-norm solution under the distributed update, with one constant fixed and different values of the other, and vice versa.) We also note from these figures that the convergence time required to reach minimum l1-norm solutions in a distributed way is dramatically longer (roughly speaking, many times longer) than in the centralized case. The major reason is that the centralized update appearing inside the distributed update is scaled by the small gain; the time required for consensus across the network is, in this example, minor within the distributed update, as indicated by the consensus figure. Letting the average of the four agents' states be the reference, its evolution suggests that the agents' states reach consensus in finite time similar to the centralized case. We anticipate that, when it comes to large networks, the convergence time for consensus might play a more significant role in the convergence of the distributed update. (Figure: consensus in the distributed solver under the distributed update.)

Conclusion. We have developed distributed algorithms for achieving, in finite time, solutions and minimum l1-norm solutions, respectively, of linear equations. The algorithms result from a combination of the projection-consensus flow proposed here and the sign-gradient flow for consensus devised in prior work, and apply to fixed undirected networks. Future work includes the generalization of the proposed updates to directed networks.

References
[1] Mou, Liu, Morse. A distributed algorithm for solving a linear algebraic equation. IEEE Transactions on Automatic Control.
[2] Anderson, Mou, Helmke, Morse. Decentralized gradient algorithm for solution of a linear equation. Numerical Algebra, Control and Optimization.
[3] Tang. Distributed algorithm for solving positive definite linear equations over networks with membership dynamics. IEEE Transactions on Control of Network Systems.
[4] Wang, Elia. Distributed solution of linear equations over unreliable networks. Proceedings of the American Control Conference.
[5] Mou, Morse. A distributed algorithm for solving a linear algebraic equation. European Control Conference.
[6] Wang, Mou, Sun. Improvement of a distributed algorithm for solving linear equations. IEEE Transactions on Industrial Electronics.
[7] Wang, Ren, Duan. Distributed minimum weighted norm solution to linear equations associated with a weighted inner product. Proceedings of the Conference on Decision and Control.
[8] Wang, Fullmer, Morse. A distributed algorithm with an arbitrary initialization for solving a linear algebraic equation. Proceedings of the American Control Conference.
[9] Mou, Lin, Wang, Fullmer, Morse. A distributed algorithm for efficiently solving linear equations and its applications. Systems & Control Letters (special issue).
[10] Wang, Elia. Distributed least square with intermittent communications. American Control Conference (ACC).
[11] Wang, Elia. A control perspective for centralized and distributed convex optimization. IEEE Conference on Decision and Control and European Control Conference.
[12] Gharesifard. Distributed convex optimization over digraphs. IEEE Transactions on Automatic Control.
[13] Shi. Distributed network flows for solving linear algebraic equations. Proceedings of the American Control Conference.
[14] Liu,
Lageman, Anderson, Shi. Exponential least squares solvers for linear equations over networks. IFAC World Congress, Toulouse.
[15] Liu, Lou, Anderson, Shi. Network flows as least squares solvers for linear equations. IEEE Conference on Decision and Control, Melbourne (accepted).
[16] Shearer. Improving local earthquake locations using the L1 norm and waveform cross correlation: application to the Whittier Narrows, California, aftershock sequence. Journal of Geophysical Research: Solid Earth.
[17] Dodge. Statistical data analysis based on the L1-norm and related methods.
[18] Beucker, Schlitt. Minimal solutions of the biomagnetic inverse problem. Technical report, Zentralinstitut fur Angewandte Mathematik.
[19] Baron, Duarte, Wakin, Sarvotham, Baraniuk. Distributed compressive sensing.
[20] Eldar, Kutyniok. Compressed sensing: theory and applications. Cambridge University Press.
[21] Candes, Tao. Decoding by linear programming. IEEE Transactions on Information Theory.
[22] Candes, Romberg, Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory.
[23] Yang, Ganesh, Zhou, Sastry. A review of fast L1-minimization algorithms for robust face recognition. Technical report, University of California, Berkeley, Department of Electrical Engineering and Computer Science.
[24] Boyd, Parikh, Chu, Peleato, Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning.
[25] Frisch. The logarithmic potential method of convex programming. Memorandum, University Institute of Economics, Oslo.
[26] Kojima, Megiddo, Mizuno. Theoretical convergence of large-step primal-dual interior point algorithms for linear programming. Mathematical Programming.
[27] Figueiredo, Nowak, Wright. Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE Journal of Selected Topics in Signal Processing.
[28] Osborne, Presnell, Turlach. A new approach to variable selection in least squares problems. IMA Journal of Numerical Analysis.
[29] Daubechies, Defrise, De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics.
[30] Becker, Bobin. NESTA: a fast and accurate first-order method for sparse recovery. SIAM Journal on Imaging Sciences.
[31] Cao, Morse, Anderson. Agreeing asynchronously. IEEE Transactions on Automatic Control.
[32] Cao, Morse, Anderson. Reaching a consensus in a dynamically changing environment: a graphical approach. SIAM Journal on Control and Optimization.
[33] Cao, Spielman, Morse. A lower bound on convergence of a distributed network consensus algorithm. IEEE Conference on Decision and Control and European Control Conference.
[34] Lageman, Sun. Consensus on spheres: convergence analysis and perturbation theory. IEEE Conference on Decision and Control (CDC).
[35] Qin. Exponential consensus of general linear systems under directed dynamic topology. Automatica.
[36] Cortes. Finite-time convergent gradient flows with applications to network consensus. Automatica.
[37] Cortes. Discontinuous dynamical systems: a tutorial on solutions, nonsmooth analysis, and stability. IEEE Control Systems Magazine.
[38] Filippov. Differential equations with discontinuous righthand sides. Control Systems. Springer Science & Business Media.
[39] Clarke. Optimization and nonsmooth analysis. SIAM.
[40] Bacciotti, Ceragioli. Stability and stabilization of discontinuous systems and nonsmooth Lyapunov functions. ESAIM: Control, Optimisation and Calculus of Variations.
[41] Chung. Spectral graph theory. American Mathematical Society.
[42] Absil, Kurdyka. On the stable equilibrium points of gradient systems. Systems & Control Letters.
| 3 |
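The solver row above combines, for each agent, a projection onto the kernel of that agent's equation row with a signed consensus flow, so that the local constraint A_i x_i = b_i is preserved while disagreements shrink in finite time. Below is a minimal numerical sketch of that flow under a forward-Euler discretization; the network, the partition of the equation, the step size, and the iteration count are illustrative assumptions, and a discrete-time simulation only approximates the Filippov solution analyzed above.

```python
import numpy as np

# Minimal sketch, not the authors' code: Euler simulation of the
# projection-consensus flow  x_i' = -P_i * sum_{j in N_i} sgn(x_i - x_j),
# where P_i projects onto ker(A_i). Since A_i x_i' = 0, the local
# constraint A_i x_i(t) = b_i is preserved once satisfied.
rng = np.random.default_rng(0)
n, m = 4, 6                                  # 4 agents, unknown x in R^6
A = rng.standard_normal((n, m))              # one equation row per agent
b = A @ rng.standard_normal(m)               # consistent right-hand side
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]     # fixed undirected cycle graph

def proj_ker(a):
    """Orthogonal projector onto ker(a) for a single row a."""
    a = a.reshape(1, -1)
    return np.eye(m) - a.T @ a / float(a @ a.T)

P = [proj_ker(A[i]) for i in range(n)]
# start each agent on its own solution set {x : A_i x = b_i}
x = np.stack([A[i] * (b[i] / (A[i] @ A[i])) for i in range(n)])

dt = 1e-3
for _ in range(200_000):
    dx = np.zeros_like(x)
    for i, j in edges:
        s = np.sign(x[i] - x[j])
        dx[i] -= P[i] @ s
        dx[j] += P[j] @ s                    # sgn(x_j - x_i) = -s
    x += dt * dx

print("disagreement:", np.abs(x - x.mean(axis=0)).max())
print("residual    :", np.abs(A @ x.mean(axis=0) - b).max())
```

The sign nonlinearity makes a fixed-step Euler run chatter at the scale of the step size, which is why the continuous-time analysis above works with Filippov solutions rather than classical ones.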
wireless communication designs propulsion energy limitations subin eom hoon lee junhee park inkyu lee fellow ieee jan school electrical korea university seoul korea email inkyu abstract paper studies unmanned aerial vehicle uav aided wireless communication systems uav supports uplink communications multiple ground nodes gns flying area interest system propulsion energy consumption uav taken account uav velocity acceleration exceed certain threshold formulate minimum average rate maximization problem energy efficiency maximization problem jointly optimizing trajectory velocity acceleration uav uplink transmit power gns problems general employ successive convex approximation sca techniques end proper convex approximations constraints derived iterative algorithms proposed converge local optimal point numerical results demonstrate proposed algorithms outperform baseline schemes problems especially maximization problem proposed algorithm exhibits gain baseline scheme ntroduction recently unmanned aerial vehicles uavs received great attentions new communication entity wireless networks compared conventional terrestrial communications users served ground base stations bss fixed given position systems could dispatched field various purposes disaster situations military uses moreover located high users uavs likely los communication links channels utilizing advantages uavs considered diverse wireless communication systems authors studied mobile relaying system uav helps communication ground nodes gns without direct communication links relaying system compared conventional static relay schemes uav move closer source destination nodes order obtain good channel conditions thus system throughput significantly improved throughput mobile relaying channels maximized optimizing transmit power source relay node well trajectory mobile relay fixed relay trajectory work addressed secrecy rate maximization problem relaying system external eavesdropper addition uavs adopted assist conventional terrestrial communication infrastructures disaster situation uavs employed recover malfunctioned ground infrastructure work examined system uav serves users jointly optimizing uav trajectory bandwidth allocation user partitioning also flying computing cloudlets uavs introduced provide offloading opportunities multiple users moreover uavs could play role mobile bss wireless networks authors derived mathematical expressions optimum altitude uavs maximizes coverage cellular network also trajectory optimization methods mobile bss presented assuming gns located line minimum throughput performance maximized optimizing position uav straight line result extended general scenario multiple uavs fly space communicate gns joint optimization algorithms uav trajectory transmit power time allocation provided maximize minimum throughput performance however works consider propulsion energy consumption uavs necessary practical uav designs limited energy situation taking issue account recent works investigated energy efficiency uav system different conventional systems consider communicationrelated energy consumption uav addresses propulsion energy uav additionally authors maximized controlling turning radius uav mobile relay systems also jointly optimizing time allocation speed trajectory spectrum efficiency maximized propulsion energy consumption uav theoretically modeled uav maximized single system paper studies wireless communications uav limited propulsion energy receives data multiple gns uplink assumed gns uav operate frequency band 
direct communication links among gns setup formulate minimum rate maximization problem maximization problem jointly optimizing uav trajectory velocity acceleration uplink transmit power gns similar approach solving minimum rate maximization studied authors involve propulsion energy consumption uav maximization problem work regarded generalization single system scenario thus need deal interference well due issues existing algorithms presented directly applied problems tackle problem interest introduce auxiliary variables couple trajectory variables uplink transmit power order jointly optimize variables equivalent problem still employ successive convex approximation sca technique successively solves approximated convex problems original one order apply sca optimization problems present new convex surrogate functions constraints propose efficient algorithms minimum rate maximization problem maximization problem yield local optimal solutions simulation results confirm proposed algorithms provide significant performance gain baseline schemes rest paper organized follows section explains system model problem formulations communication systems section iii minimum rate maximization maximization algorithms proposed examine circular trajectory case baseline schemes section section presents numerical results proposed algorithms conclude paper section notations throughout paper bold normal letters denote vectors scalars respectively space vectors represented vector kak indicate norm transpose respectively gradient function defined function stand derivatives respect time respectively fig wireless network ystem odel roblem ormulation shown fig consider wireless communications uav receives uplink information transmitted gns uav horizontally flies constant altitude time period gns located fixed positions perfectly known uav advance location gns uav employ cartesian coordinate system thus horizontal coordinate denoted also define horizontal coordinate uav time instant instantaneous velocity acceleration uav expressed respectively continuous time expressions variables make analysis derivations uav systems intractable ease analysis discretize time duration time slots time interval result trajectory uav represented vector sequences discretized time interval chosen small number velocity acceleration approximated using taylor expansions also assuming periodical operation uav implies one period uav returns starting location velocity acceleration addition acceleration velocity practical uav subject amax vmin vmax amax indicates maximum uav acceleration vmin vmax stand minimum maximum uav speed constraints respectively notice minimum speed constraint vmin important practical uav designs need move forward remain aloft thus hover fixed location power consumption uav take account propulsion power utilized maintaining uav aloft supporting mobility propulsion power uav pprop time slot given pprop parameters related aircraft design equals gravitational acceleration thus average propulsion power total consumed propulsion energy time slots obtained pprop pprop respectively power consumed signal processing circuits converters channel decoders ignored since practically much smaller propulsion power let explain channel model uav gns assume communication links dominated los links moreover doppler effect due uav mobility assumed well compensated effective channel gain uav time slot follows path loss model represents reference ratio snr channel power white gaussian noise power uav respectively distance written time slot 
transmits data signal uav power ppeak ppeak peak transmission power constraint gns accordingly instantaneous achievable rate expressed term stands interference gns therefore achievable average rate total information bits transmitted time slots denoted respectively means bandwidth paper jointly optimize variables uplink transmit power gns minimum average rate among multiple gns maximized respectively first minimum rate maximization problem formulated max ppeak pprop plim plim indicates propulsion power constraint uav next support individual gns fairness based suitable thus define wireless communication systems ratio minimum information bits transmitted among gns total energy consumed uav therefore maximization problem written max pprop general problems due constraints objective functions compared additionally consider propulsion power constraint minimum rate maximization problem also note maximization problem regarded generalization investigated single scenario respects works regarded special cases problems respectively solve problems adopt sca framework iteratively solves approximated convex problems original problems iii roposed lgorithm section propose iterative algorithms efficiently solving applying sca method first minimum rate maximization problem considered section followed maximization problem section minimum average rate maximization applying change variables new optimization variable constraint becomes max max ppeak ppeak rewrite achievable rate introducing new auxiliary variables recast max max plim vmin vmax shown optimal point inequality constraint holds equality since otherwise enlarge feasible region corresponding increasing therefore conclude equivalent thanks new auxiliary variables constraints become convex still general address constraints employ sca methods first checked constraint given difference two concave functions hence convex surrogate function computed first order taylor approximation indicates solution attained iteration sca process next identify surrogate functions present following lemmas lemma denoting solution calculated iteration concave surrogate function glb max max expressed glb max ppeak max constants respectively given kql kql kql kql proof please refer appendix lemma solution obtained iteration concave surrogate function computed proof applying similar process appendix conclude function satisfies conditions concave surrogate function aid lemmas iteration constraints approximated glb max result given solutions iteration solve following problem iteration sca procedure max denotes lower bound original problem since convex problem optimally solved via existing convex optimization solvers cvx based results summarize proposed iterative procedure algorithm algorithm proposed algorithm initialize let repeat compute given update convergence obtain convergence analysis algorithm let define objective values iteration respectively express relationship first equation holds surrogate functions tight given local points second inequality derived property optimal solution third inequality follows fact approximation problem lower bound original problem conclude objective value every iterations algorithm since objective value finite upper bound value given local points surrogate functions obtain gradients original functions verified algorithm guaranteed converge least local optimal solution energy efficiency maximization subsection consider maximization problem first applying introducing auxiliary variable transformed max similar see equivalent still due constraints 
tackle issue employ similar sca process presented section adopting lemmas convex approximation iteration given max denotes lower bound original problem shown fractional problem optimally solved via dinkelbach method denoting given constant converted max based summarize proposed iterative procedure algorithm convergence local optimality algorithm verified similar algorithm thus details omitted brevity algorithm proposed algorithm initialize let repeat repeat compute given update convergence let update let convergence obtain worthwhile note need initialize trajectory variables however trivial find variables satisfying uav movement constraints propulsion power constraint clearly explained section ircular trajectory system examine circular trajectory system used baseline scheme first choose center circular trajectory geometrical mean gns denoting radius trajectory angle circle along uav flies time slot horizontal coordinate uav obtained cos sin also location represented cos sin equal distance angle geometric center respectively thus distance uav expressed cos adopting angular velocity angular acceleration equations rewritten kak pprop tangential centripetal accelerations respectively vmin vmax indicate minimum maximum angular velocity respectively similar section iii address minimum average rate maximization problem maximization problem circular trajectory respectively formulated max rmin rmax max rmin vmin rmax pprop vmax max min denote minimum max maximum radius circular trajectory respectively emphasized difficult solve constraints objective functions deal problems similar sca frameworks section iii applied minimum average rate maximization maximization minimum average rate maximization problem first find given updates fixed given adopt change variable max cos ppeak cos max ppeak similar method section employ sca max based lemma concave surrogate function max max solution iteration chosen max ppeak max constants respectively given cos applying fixed reformulated approximated convex problem iteration sca max max successively solved cvx convergence next present solution given obtain concave surrogate function max introduce following lemma identifies surrogate function cosine function lemma given concave surrogate function cos computed sin cos cos proof similar process appendix conclude function satisfies conditions concave surrogate function inspecting lemmas concave surrogate function max max identified sin max ppeak ppeak max given sin cos sin utilizing iteration sca algorithm given approximated following convex problem max max successively solve cvx convergence similar algorithm solution problem obtained alternately solving objective value converges maximization problem circular trajectory case apply similar methods section based given transformed two fractional problems using algorithm alternately solve problems convergence trajectory initialization initialize proposed algorithms employ simple circular path concept first initial angular velocity set implies next initial radius chosen fulfill constraints expressed vmin vmax amax min plim simply find maximizes minimum rate constraints via line search maximization problems computed range result initial trajectory written cos sin initial velocity simply obtained assuming sec sec sec sec fig optimized uav trajectories different periods plim umerical esults section provide numerical results validate effectiveness proposed algorithms simulations consider gns distributed fig locations gns marked triangles constant altitude bandwidth reference snr 
peak transmission power set mhz ppeak dbm respectively also minimum velocity maximum velocity maximum acceleration uav determined vmin vmax amax respectively propulsion power consumption model constants set respectively make minimum propulsion power consumption pprop min kvk first demonstrate performance minimum rate maximization algorithms fig illustrates optimized uav trajectories various plim observed smaller sec increases uav tries get closer gns order improve channel conditions gns contrast sufficiently large sec uav able visit gns within given time period thus uav rate proposed circular opt circular opt circular opt period sec fig rate respect period plim hover traveling smooth path around gns different results uav practical movement constraints explained follows due constraints velocity propulsion power uav stay fixed positions therefore uav continuously moves around close gns possible maintain good communication channels without exceeding propulsion power limit plim fig shows maximized minimum rate performance proposed algorithm function compare performance proposed algorithm following circular trajectory based methods circular optimum radius angular velocity angular acceleration uplink transmit power jointly optimized section circular trajectory circular optimum radius uplink transmit power jointly optimized section circular trajectory circular optimum radius optimized ppeak initial circular trajectory plim plim power limit fig optimized uav trajectories different propulsion power limit plim sec section first verified proposed algorithm outperforms baseline schemes regardless time period also see rate proposed algorithm monotonically increases since time available uav hover around contrast baseline schemes restricted circular shape trajectory rate performance first increases grows decreases certain due fact order satisfy propulsion power constraint radius circular trajectory increase gets large thus uav may become far away geometric center gns certain therefore expect performance gain proposed algorithm baseline schemes grow fig illustrates optimized uav trajectories various propulsion power limit plim sec shown plim trajectory uav restricted smooth path large turning radius consume low propulsion power however plim gets larger observe quick changes along trajectory path thus uav move much smaller turning radius enhances rate performance proposed circular opt circular opt circular opt rate propulsion power limit fig rate respect propulsion power limit plim sec fig depict average rate various schemes function propulsion power constraint plim proposed algorithm baseline schemes rate first increases plim grows gets saturated explained follows large plim trajectory velocity uav change freely attain good channel conditions thus rate increases however even large plim given rate continue increase practical limits velocity acceleration similar fig see proposed algorithm provides significant performance gains baseline schemes next fig investigate optimized trajectory maximization problem various increases overall patterns similar fig nevertheless balance rate performance propulsion power consumption maximization trajectory shows smooth path relatively large turning radius thus average propulsion power consumption becomes lower present impact energy efficient uav communication designs fig depicts uav speed proposed maximization method sec comparison sec sec sec sec fig optimized energy efficient uav trajectories different periods minimum rate maximization lim maximization speed time sec fig 
with the max-rate design obtained without the propulsion power constraint. (Fig. 8: UAV speeds of the rate-maximizing scheme without the propulsion power constraint and of the EE-maximization scheme.) It is observed that in the rate-only case the UAV tries to fly to the GNs as fast as possible and to stay over the GNs at low speed; on the other hand, the EE-maximization scheme keeps the speed of the UAV around a moderate level in order not to waste propulsion energy. Finally, Table I presents a performance comparison of the rate-maximizing design without the propulsion power constraint and the EE-maximization designs, for both the proposed and circular baseline schemes. (Table I: performance comparison; columns: max-min rate maximization, with rate-only objective and with Plim, and EE maximization, each for the proposed and circular schemes; rows: average speed, average acceleration, average rate, average power in watts, and energy efficiency.) We see that the max-rate methods consume much higher propulsion power by allowing large variations of the speed and the average acceleration. In contrast, the speed of the proposed EE-maximization design varies slowly with low acceleration, and thus a much higher EE is achieved. We observe that the proposed EE-maximization algorithm exhibits a substantial EE gain over the rate-maximizing design without the propulsion power constraint, and a further gain over the circular baseline EE-maximization scheme.

Conclusion. This paper studied wireless communication designs under a practical propulsion energy limitation of the UAV. In both the minimum average rate maximization problem and the energy efficiency maximization problem, the UAV trajectories and the uplink transmit power of the GNs were jointly optimized. By applying the SCA technique, we proposed efficient iterative algorithms that find local optimal solutions. Numerical results demonstrated that the proposed algorithms provide substantial performance gains over the baseline schemes.

Appendix: proof of Lemma 1. Define the function of interest with its positive constants as given. In order for an arbitrary function to be a concave surrogate of it, the following conditions must be satisfied: tightness at the given point, matching gradients there, and a global lower bound. Denoting the candidate function as in the lemma, it is easily shown that it fulfills the first condition of a surrogate function. Also, the gradients of the original and candidate functions with respect to the optimization variables, respectively, are computed; since the two gradients become identical at the given point, the candidate satisfies the second condition of a surrogate function. To prove the global lower bound condition, we calculate the Hessian matrix of the difference of the two functions; one can easily check that the Hessian is a positive semidefinite matrix, which implies a convex difference function. Since its global minimum of zero is achieved at the given point, the difference is greater than or equal to zero, and thus the third condition of a surrogate function holds. Substituting the resulting bound and multiplying by Ppeak, Lemma 1 is thus proved.

References
[1] Zeng, Zhang, Lim. Wireless communications with unmanned aerial vehicles: opportunities and challenges. IEEE Communications Magazine, May.
[2] Lee, Lee, Kwak, Ihm, Han. MIMO cooperation challenges and practical solutions in systems. IEEE Wireless Communications.
[3] Zeng, Zhang, Lim. Throughput maximization for UAV-enabled mobile relaying systems. IEEE Transactions on Communications.
[4] Wang, Chen, Mei, Fang. Improving physical layer security using UAV-enabled mobile relaying. IEEE Wireless Communications Letters, Jun.
[5] Song, Lee, Lee. Designs of MIMO wireless relaying networks: challenges and solutions. IEEE Access, May.
[6] Kong, Song, Park, Lee. A new beamforming design for MIMO relaying systems with direct link. IEEE Transactions on Communications, Jul.
[7] Merwaday, Guvenc. UAV-assisted heterogeneous networks for public safety communications. Proceedings of IEEE WCNC, May.
[8] Lyu, Zeng, Zhang. Spectrum sharing and cyclical multiple access in UAV-aided cellular offloading. arXiv preprint.
[9] Jeong, Simeone, Kang. Mobile edge computing via a UAV-mounted cloudlet: optimization of bit allocation and path planning. Accepted, IEEE Transactions on Vehicular Technology. Online, available: http
[10] Kandeepan, Lardner. Optimal LAP altitude for maximum coverage. IEEE Wireless Communications Letters, Jul.
[11] Lyu, Zeng, Zhang. Cyclical multiple access in UAV-aided communications: a throughput-delay tradeoff. IEEE Wireless Communications Letters.
[12] Zeng, Zhang. Joint trajectory and communication design for UAV-enabled wireless networks.
[13] Filippone. Flight performance of fixed and rotary wing aircraft. Elsevier.
[14] Choi, Kim, Sung. Maneuvering and communication of a single UAV-based relay. IEEE Transactions on Aerospace and Electronic Systems, Jul.
[15] Zhang, Zeng, Zhang. Spectrum and energy efficiency maximization in UAV-enabled mobile relaying. Proceedings of IEEE ICC, May.
[16] Zeng, Zhang. UAV communication with trajectory optimization. IEEE Transactions on Wireless Communications, Jun.
[17] Kim, Lee, Song, Lee, Lee. Optimal power allocation scheme for energy efficiency maximization in distributed antenna systems. IEEE Transactions on Communications.
[18] Qiu. Energy efficiency optimization for MIMO broadcast channels. IEEE Transactions on Wireless Communications.
[19] Lee, Jung, Park, Lee. A new beamforming strategy for MISO interfering broadcast channels based on large systems analysis. IEEE Transactions on Wireless Communications, Apr.
[20] Pan, Zhang, Chen. Distributed power optimization for CoMP systems with fairness. IEEE Communications Letters, Jun.
[21] Sheng, Tan, Zhang, Sun, Wang, Shi. Subcarrier assignment and power allocation for OFDMA systems with fairness guarantees. IEEE Transactions.
[22] Sheng, Wang, Zhang, Wen. Power allocation in wireless networks. IEEE Transactions on Vehicular Technology.
[23] Marks, Wright. A general inner approximation algorithm for nonconvex mathematical programs. Operations Research.
[24] Sun, Babu, Palomar. Majorization-minimization algorithms in signal processing, communications, and machine learning. IEEE Transactions on Signal Processing.
[25] Grant, Boyd. CVX: Matlab software for disciplined convex programming. Available: http
[26] Dinkelbach. On nonlinear fractional programming. Management Science, Mar.
[27] Zappone, Jorswieck. Energy efficiency in wireless networks via fractional programming theory. Foundations and Trends in Communications and Information Theory, Jun.
| 7 |
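The UAV paper above reduces the energy-efficiency problem at each SCA iteration to a fractional program and notes that such a problem can be solved optimally via the Dinkelbach method, which repeatedly solves the parametric problem max N(w) - eta*D(w) and updates eta until the optimal parametric value vanishes. The sketch below illustrates that outer loop on a toy objective; the functions N and D and the solver settings are illustrative stand-ins, not the paper's rate and propulsion-energy expressions.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of Dinkelbach's method for max_w N(w)/D(w) with concave
# N and convex positive D. The toy N and D below are assumed examples.
def N(w):  # concave "rate-like" numerator
    return np.log1p(w[0]) + np.log1p(w[1])

def D(w):  # convex, positive "energy-like" denominator
    return 1.0 + w[0] ** 2 + 0.5 * w[1] ** 2

def solve_parametric(eta, w0):
    # inner problem: max_w N(w) - eta * D(w), a concave maximization,
    # solved here by a generic bounded minimizer on the negated objective
    res = minimize(lambda w: -(N(w) - eta * D(w)), w0,
                   bounds=[(0.0, 10.0), (0.0, 10.0)])
    return res.x

w, eta = np.array([1.0, 1.0]), 0.0
for _ in range(50):
    w = solve_parametric(eta, w)
    f = N(w) - eta * D(w)       # F(eta): optimal parametric value
    eta = N(w) / D(w)           # Dinkelbach update of the efficiency ratio
    if abs(f) < 1e-9:           # optimality condition F(eta*) = 0
        break

print("efficiency ratio:", eta, "at w =", w)
```

In the paper's Algorithm 2 the inner maximization is itself a convex SCA subproblem solved by CVX; the loop structure, however, matches this sketch.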
SIAM/ASA Journal on Uncertainty Quantification, Vol. (c) xxxx Society for Industrial and Applied Mathematics.
Mathematical properties of the polynomial dimensional decomposition. Sharif Rahman. (Apr.)

Abstract. Many uncertainty quantification problems are solved by the polynomial dimensional decomposition (PDD), which represents a series expansion of an output random variable in terms of random orthonormal polynomials of input variables with increasing dimensions. This study constructs the orthogonal splitting of appropriate polynomial spaces, proves the completeness of the resulting polynomial orthogonal basis under prescribed assumptions, and demonstrates the mean-square convergence of the associated PDD to the correct limit. An error analysis reveals that the PDD cannot commit a larger error than the polynomial chaos expansion (PCE) for appropriately chosen truncation parameters. In a comparison of the computational efforts required to estimate with the same precision the variance of an output function involving exponentially attenuating expansion coefficients, the PDD approximation can be markedly more efficient than the PCE approximation.
Key words: uncertainty quantification, ANOVA decomposition, multivariate orthogonal polynomials, polynomial chaos expansion.

Introduction. The polynomial dimensional decomposition (PDD) is a hierarchical, infinite series expansion of a square-integrable random variable involving orthogonal polynomials in independent random variables, introduced by the author as a polynomial variant of the ANOVA dimensional decomposition (ADD). The PDD deflates the curse of dimensionality to some extent by developing the input-output behavior of complex systems with low-dimensional interactions, wherein the degrees of interaction among input variables weaken rapidly or vanish altogether. Approximations stemming from truncated PDD are commonly used for solving uncertainty quantification problems in engineering and the applied sciences, including multiscale fracture mechanics, random eigenvalue problems, computational fluid dynamics, and stochastic design optimization, to name a few. However, the existing works on PDD have focused on practical applications, with almost no mathematical analysis of PDD. Indeed, a number of mathematical issues, concerning the necessary conditions for and completeness of the PDD basis functions, the convergence, exactness, and optimality of PDD, and analyses of the approximation quality of truncated PDD, have not yet been studied or resolved. This paper fills that gap by establishing fundamental mathematical properties that give PDD a solid foundation, making PDD a credible close cousin of the polynomial chaos expansion (PCE) and providing an alternative, if not a better, choice for uncertainty quantification in computational science and engineering. The principal objective of this work is to examine important mathematical properties of PDD, not studied heretofore, for arbitrary but independent probability measures of the input random variables.

The paper is organized as follows. The next section defines and discusses mathematical notations and preliminaries; two sets of assumptions on the input probability measures required by PDD are explained there. A brief exposition of univariate and multivariate orthogonal polynomials consistent with a general probability measure, including their second-moment properties, is given in the section that follows, which also describes the relevant polynomial spaces and the construction of orthogonal decompositions; the orthogonal basis and completeness of multivariate orthogonal polynomials are also proved there. The subsequent section briefly explains the ADD, followed by the presentation of the PDD of a random variable; the convergence and exactness of PDD are explained, the truncated PDD and its approximation quality are discussed, and formulae for the mean and variance of a truncated PDD are also derived; the section ends with an explanation of how the PDD is extended to infinitely many input variables. A further section briefly describes the orthogonal decompositions of polynomial spaces leading to PCE, after which an error analysis of PDD is conducted, followed by a comparison with PCE. Finally, conclusions are drawn in the last section. (This work was supported by the National Science Foundation under a grant; the author is with the College of Engineering and the Program in Applied Mathematics and Computational Sciences, The University of Iowa, Iowa City. Questions, comments, or corrections regarding this document may be directed to the author's email address.)

Input random variables. Let N, N0, and R represent the sets of positive integers, natural integers including zero, and real numbers, respectively, and let
denote ith bounded unbounded subdomain let complete probability space sample space representing abstract set elementary events probability measure representing borel consider input random vector describing statistical uncertainties system parameters stochastic problem input random variables also referred basic random variables finite integer represents number input random variables often referred dimension stochastic problem denote joint distribution function admitting joint probability density function given abstract probability space image probability space viewed image mapping also support similarly component random variable defined abstract marginal probability space comprising sample space probability measure corresponding image probability space fxi dxi image sample space borel fxi marginal probability density function relevant statements objects abstract probability space obvious counterparts associated image probability space probability spaces used paper two sets assumptions used pdd follows assumption input random vector satisfies following conditions input random variable absolutely continuous marginal distribution function fxi continuous marginal density function fxi bounded unbounded support component random variables statistically independent necessarily identical consequence endowed probability density function fxi bounded unbounded support input random variable possesses finite moments orders polynomial dimensional decomposition xli xli fxi dxi expectation operator respect probability measure assumption moments marginal density function input random variable satisfy least one following conditions density function fxi compact support exists compact interval moment sequence holds lim inf moment sequence holds random variable exponentially integrable exists real number exp fxi dxi density function fxi symmetric strictly positive exists real number fxi dfxi dxi fxi assumption assures existence infinite sequence orthogonal polynomials consistent input probability measure assumption addition assumption guarantees input probability measure determinate resulting complete orthogonal polynomial basis function space interest assumptions impose mild restrictions probability measure examples input random variables satisfying assumptions gaussian uniform exponential beta gamma variables commonly used uncertainty quantification assumptions explained next section vitally important determinacy probability measure completeness orthogonal polynomial basis therefore pdd pce entail orthogonal polynomial expansions assumptions necessary unfortunately always clearly specified pdd pce literature prototypical example assumption satisfied assumption case lognormal random variable noted ernst violation assumption leads indeterminacy input probability measure thereby fails form complete orthogonal polynomial basis finally assumptions modified account random variables discrete mixed distributions dependent random variables discrete mixed distributions dependent variables considered paper rahman orthogonal polynomials polynomial spaces univariate orthogonal polynomials consider ith random variable defined abstract probability space image fxi dxi let space real polynomials polynomial pair define inner product fxi dxi respect probability measure fxi dxi induced norm dxi dxi fxi dxi assumption moments orders exist finite including zeroorder moments fxi dxi always positive clearly according gautschi inner product therefore exists infinite set univariate orthogonal polynomials say consistent probability 
measure fxi dxi satisfying notation polynomial first second indices refer ith variable degree respectively prominent examples classical univariate orthogonal polynomials comprise hermite laguerre jacobi polynomials consistent measures defined gaussian gamma beta densities whole real line interval bounded interval respectively many orthogonal polynomials including three classical polynomials mentioned expressed unified way invoking hypergeometric series incorporated tree structure askey scheme even general measures established numerical techniques stieltjes procedure used generate orthogonal polynomials multivariate orthogonal polynomials denote index set subset including empty set cardinality denoted degree jip represents pth component let subvector defined abstract probability space sample space probability measure corresponding image probability space fxu dxu image sample space symbol used designating cardinality set degree paper polynomial dimensional decomposition borel fxu marginal probability density function supported assumption fxu fxi denote space real polynomials given inner product fxu dxu dxu two polynomials called orthogonal dxu moreover polynomial said orthogonal polynomial respect fxu dxu orthogonal polynomials lower degree dxu deg deg let represent infinite set multivariate orthogonal polynomials consistent probability measure fxu dxu satisfying dxu clearly multivariate orthogonal polynomial satisfying due probability measure consequence statistical independence assumption multivariate polynomials exist easily constructed tensorizing univariate orthogonal polynomials proposition vector input random variables fulfilling assumption suppose sets univariate orthogonal polynomials marginal measures obtained set multivariate orthogonal polynomials consistent probability measure fxu dxu symbol denotes tensor product terms element multivariate orthogonal polynomial degree proof consider two distinct polynomials set satisfying since must least one component without loss generality suppose fubini theorem statistical independence random variables mind dxu fxu dxu jip xip kip xip fxip xip dxip rahman equality zero last line results recognition inner integral vanishes setting addition dxu finite virtue existence set univariate orthogonal polynomials therefore satisfying set multivariate orthogonal polynomials consistent probability measure fxu dxu multivariate orthogonal polynomials obtained scaled generate multivariate orthonormal polynomials follows definition multivariate orthonormal polynomial degree consistent probability measure fxu dxu defined univariate orthonormal polynomial degree consistent probability measure fxi dxi orthogonal decomposition polynomial spaces orthogonal decomposition polynomial spaces entailing splitting leads pdd facilitate splitting polynomial space limit power jip variable take positive integer values consequence degree varying monomial variables product xjuu total degree linear combination xjuu homogeneous polynomial degree denote qul span xjuu space homogeneous polynomials degree individual degree variable span xjuu space polynomials degree least individual degree variable dimensions vector spaces qul respectively dim polynomial dimensional decomposition dim dim qul let denote zlu space orthogonal polynomials degree exactly orthogonal polynomials zlu fxu dxu zlu provided support fxu interior vector space dimension dim zlu dim qul many choices exist basis zlu formally proved section select zlu basis zlu comprising number basis functions basis function 
multivariate orthogonal polynomial degree defined earlier clearly zlu span according proposition presented later orthogonal whenever arbitrary therefore two distinct polynomial subspaces zlu orthogonal whenever consequence exist orthogonal decompositions zlu span span symbol representing orthogonal sum zlv span span span constant subspace needs added subspace zlv excludes constant functions recall space real polynomials setting first swapping yields yet another orthogonal decomposition zlu span span rahman note last expression equal span representing infinite set orthogonal polynomials given orthogonal splitting function input random vector expanded series hierarchically ordered multivariate orthogonal orthonormal polynomials expansion referred pdd formally presented analyzed section statistical properties random multivariate polynomials input random variables instead real variables inserted argument multivariate polynomials become functions random input variables therefore important establish properties exploited remaining part section section proposition vector input random variables fulfilling assumption moments multivariate orthogonal polynomials otherwise respectively independence random variables proof using statistical since component constant function mind produces resulting obtain result set use directly trivial result obtained considering two subcases first yields result already second arbitrary least one element suppose element associated degree using statistical independence random variables fact already demonstrated produces desired result corollary secondorder moments multivariate orthonormal polynomials respectively otherwise polynomial dimensional decomposition orthogonal basis completeness important question regarding multivariate orthogonal polynomials discussed preceding subsection whether constitute complete basis function space interest hilbert space let represent hilbert space functions respect probability measure supported following two propositions show indeed orthogonal polynomials span various spaces interest proposition vector input random variables fulfilling assumption subvector set multivariate orthogonal polynomials degree consistent probability measure fxu dxu basis zlu proof assumption orthogonal polynomials consistent probability measure fxu dxu exist denote column vector elements arranged according monomial order let atu row vector comprising constants set atu multiply sides equality right ptu integrate respect measure fxu dxu apply transposition obtain ptu matrix element fxu dxu representing covariance two elements according proposition two distinct polynomials orthogonal meaning zero positive finite consequently diagonal matrix hence invertible therefore yields proving linear independence elements set furthermore dimension zlu matches exactly number elements aforementioned set therefore spanning set forms basis zlu proposition vector input random variables fulfilling assumptions subvector consistent probability measure fxu dxu let set multivariate orthogonal polynomials degree basis zlu set polynomials orthogonal sum span dense moreover zlu rahman overline denotes set closure proof assumption orthogonal polynomials exist according theorem ernst exploits assumption polynomial space dense fxi dxi use theorem petersen asserts dense fxi dxi therefore set polynomials orthogonal sum equal per dense including limit points orthogonal sum yields polynomial dimensional decomposition let realvalued output random variable defined probability space vector space 
hilbert space inner product norm elementary show add add expressed recursive form finite hierarchical expansion terms input variables increasing dimensions subset complementary set component function describing constant interaction denotes vector whose ith component summation comprises terms term depending group variables indexed particular subset sum vanishes resulting expression constant function integration last line empty set reproducing hence finding last function indeed component functions obtained interpreting literally decomposition first presented relation seminal work studied many researchers described efron stein author references cited therein polynomial dimensional decomposition add also generated tensorizing univariate function space decomposition constant subspace remainder producing fxu dxu add subspace comprising component functions however subspaces general therefore discretization necessary instance introducing orthogonal polynomial basis discussed section component function expressed linear combination basis functions indeed comparing yields closure orthogonal decomposition zlu polynomial spaces zlu result polynomial refinement add commonly referred pdd pdd pdd random variable simply expansion respect complete hierarchically ordered orthonormal polynomial basis least two ways explain pdd polynomial variant add orthogonal polynomial expansion polynomial variant add first approach explained author prior work involves following two steps expand anova component function terms basis originally stems basis zlu fxu dxu representing associated expansion apply exploit orthogonal properties basis end result pdd eventually comparing connection pdd add clearly palpable former viewed polynomial variant latter instance represents pdd component function describing polynomial approximation addition pdd inherits desirable properties add rahman orthogonal polynomial expansion second approach entails polynomial expansion associated orthogonal splitting polynomial spaces explained section latter approach published elsewhere therefore formally presented theorem theorem let vector input random variables fulfilling assumptions denote set multivariate orthonormal polynomials consistent probability measure fxu dxu random variable hierarchically expanded series referred pdd expansion defined pdd converges furthermore pdd converges probability distribution proof assumptions complete infinite set multivariate orthogonal polynomials consistent probability measure fxu dxu exists proposition fact orthonormality merely scaling set polynomials orthogonal sum span also dense therefore random variable expanded shown combining two inner sums expansion forms equality second line denseness one bessel inequality polynomial dimensional decomposition proving pdd converges determine limit convergence invoke proposition implies set left side complete therefore bessel inequality becomes equality known parseval identity multivariate orthonormal system every random variable furthermore pdd converges probability moreover expansion converges probability also converges distribution finally find expansion define second moment epdd full pdd sides respect write second third last lines obtained interchanging expectation operators performing swapping expectation summation operators applying corollary respectively interchanges permissible infinite sum convergent demonstrated preceding paragraph setting yields respectively completing proof rahman expressions expansion also derived simply replacing full pdd using corollary 
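Because the basis is orthonormal, each expansion coefficient is the inner product of the output with the corresponding polynomial and can be estimated by plain Monte Carlo. A sketch with a hypothetical toy output, continuing the earlier blocks; the final check, that the sum of squared coefficients matches the output variance, illustrates the Parseval identity invoked above.

```python
import numpy as np

# reuses psi_u and the sample x from the first sketch
def y(x):
    # toy output: two main effects plus one bivariate interaction
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.25 * x[:, 0] * x[:, 2]

yx = y(x)
terms = [((0,), (1,)), ((1,), (2,)), ((0, 2), (1, 1))]
coeffs = {t: np.mean(yx * psi_u(*t, x)) for t in terms}    # C = E[y Psi]
mean = np.mean(yx)                                         # constant term
var_pdd = sum(c ** 2 for c in coeffs.values())
print(mean, var_pdd, np.var(yx))           # 0.5, ~1.5625, ~1.5625
```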
contrast proof given demonstrates pdd determined optimally emphasized function must meansquare convergences hold however rate convergence depends smoothness function smoother function faster convergence function polynomial pdd exactly reproduces function results easily proved using classical approximation theory related expansion known name also involves orthogonal polynomials connection add however existence convergence approximation quality expansion including behavior infinitely many input variables reported truncation full pdd contains infinite number orthonormal polynomials practice number must finite meaning pdd must truncated however multiple ways perform truncation straightforward approach adopted work entails keeping polynomials variables thereby retaining degrees interaction among input variables less equal preserving polynomial expansion orders total less equal result pdd containing number expansion including important clarify things truncated pdd proposed first truncation respect polynomial expansion order based opposed employed prior works therefore comparing existing truncation desired done care said proposed truncation one advantage existing one direct comparison truncated pce possible explained forthcoming sections second right side contains sums orthonormal polynomials representing pdd component functions therefore term used pdd approximation interpreted context including interaction input variables even though strictly function third outer sums vanish finally nouns degree order associated pdd orthogonal polynomials used synonymously paper polynomial dimensional decomposition converges sense generating hierarchical convergent sequence pdd approximations readers interested adaptive version pdd truncation parameters automatically chosen directed work yadav rahman including application design optimization natural ask approximation quality since set polynomials orthogonal sum complete truncation error orthogonal element subspace chosen demonstrated proposition let pdd approximation truncation error orthogonal subspace span comprising polynomials degree interaction order including constants moreover proof let arbitrary expansion element subspace described last line follows corollary proving first part proposition latter part pythagoras theorem yields therefore theorem second part proposition entails convergence convergence described theorem however alternative route chosen proof proposition besides proposition implies pdd approximation optimal recovers best approximation subspace described corollary rahman corollary define subspace polynomials degree interaction order including constants pdd approximation best approximation sense inf proof consider two elements subspace former pdd approximation expansion coefficients defined latter polynomial function described arbitrary chosen expansion proposition truncation error orthogonal therefore orthogonal linear combinations yielding consequently second expectation right side first line thereby proving optimality pdd approximation motivations behind approximations following practical setting function fortunately dimension much lower meaning right side approximated sum component functions still maintaining random variables uncertainty quantification problem furthermore svariate pdd approximation grounded fundamental conjecture known true many uncertainty quantification problems given function pdd component function small hence negligible leading accurate lowvariate approximation computational complexity truncated pdd polynomial opposed 
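A sketch of the truncation just described, keeping interaction orders |u| at most S and total polynomial orders at most m, with every retained degree j_i at least 1; the total-degree convention is the one the text adopts for this work, and the values of N, S, m below are arbitrary illustrations.

```python
from itertools import combinations, product

def pdd_indices(N, S, m):
    # S-variate, mth-order truncation: |u| <= S, each j_i >= 1, sum(j) <= m
    yield ((), ())                                  # constant term
    for s in range(1, S + 1):
        for u in combinations(range(N), s):
            for j in product(range(1, m + 1), repeat=s):
                if sum(j) <= m:
                    yield (u, j)

# 1 + C(5,1)*C(3,1) + C(5,2)*C(3,2) = 46 retained coefficients
print(len(list(pdd_indices(N=5, S=2, m=3))))
```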
exponential thereby alleviating curse dimensionality substantial extent although pce contains orthogonal polynomials recent work random eigenvalue analysis dynamic systems reveals markedly higher convergence rate pdd approximation pce approximation output statistics probabilistic characteristics mthorder pdd approximation viewed surrogate therefore relevant probabilistic characteristics including first two moments probability density function exists estimated statistical properties applying expectation operator imposing corollary means polynomial dimensional decomposition independent therefore pdd truncated values yields exact mean nonetheless referred pdd approximation mean applying expectation operator time employing corollary results variances var var respectively var referred pdd approximation variance clearly var approaches var exact variance convergent probability distribution probability density function exists also estimated however analytical formula exists density function case density estimated sampling methods monte carlo simulation mcs simulation confused crude mcs commonly used producing benchmark results whenever possible crude mcs expensive even prohibitive particularly sample size needs large estimating tail probabilistic characteristics contrast mcs embedded pdd approximation requires evaluations simple polynomial functions describe therefore relatively large sample size accommodated pdd approximation even expensive evaluate infinitely many input variables many fields uncertainty quantification information theory stochastic process functions depending countable sequence input random variables need considered certain assumptions pdd still applicable case finitely many random variables demonstrated following proposition proposition countable sequence input random variables defined probability space associated generated sequence satisfies assumptions pdd converges moreover pdd converges probability distribution proof according proposition dense hence every associated generated certain abuse notation used set polynomial functions real variables random variables apply theorem ernst says dense every rahman subspace also dense using span span demonstrating set polynomials orthogonal sum last line dense therefore pdd converges since convergence stronger convergence probability distribution latter modes convergence follow readily polynomial chaos expansion contrast splitting polynomial spaces pdd orthogonal splitting polynomial spaces results pce latter decomposition briefly summarized pce compared pdd next section orthogonal decomposition polynomial spaces let define monomial variables product xjnn total degree denote span space real polynomials degree let span space constant functions denote vln space orthogonal polynomials degree exactly orthogonal polynomials vln section mind select basis basis function multivariate orthogonal polynomial degree obviously vln span according orthogonal whenever therefore two polynomial subspaces vln vrn orthogonal whenever consequence exists another orthogonal decomposition vln span span compared represents orthogonal decomposition pce given orthogonal decomposition pce output random variable expressed polynomial dimensional decomposition infinite set multivariate orthonormal polynomials obtained scaling pce expansion like pdd pce assumptions also converges probability distribution since pce infinite series must also truncated applications commonly adopted truncation based retaining orders polynomials less equal specified total degree regard 
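The remark above that Monte Carlo simulation can be embedded in the PDD approximation amounts to re-sampling a cheap polynomial surrogate once the coefficients are in hand; a sketch reusing mean, coeffs, psi_u and rng from the earlier blocks (illustrative names throughout):

```python
import numpy as np

def y_hat(xs):
    # evaluate the truncated expansion at new input samples
    out = np.full(xs.shape[0], mean)
    for (u, j), c in coeffs.items():
        out += c * psi_u(u, j, xs)
    return out

xs = rng.standard_normal((1_000_000, 3))       # a large re-sample is cheap here
ys = y_hat(xs)
print(ys.mean(), ys.var(), np.mean(ys > 3.0))  # mean, variance, tail estimate
```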
given pce approximation reads kind truncation related total degree index set defining recovered multivariate polynomial space pce approximation kinds truncation entail max describing tensor product hyperbolic cross index sets respectively name two total degree tensor product index sets common choices although latter one curse dimensionality making impractical problems hyperbolic cross index set originally introduced approximating periodic functions trigonometric polynomials relatively new idea yet receive widespread attention choices possibly others including anisotropic versions used truncating pce work however total degree index set used pce approximation consistent used truncating pdd error analysis pdd error define error stemming pdd approximation presented preceding section replacing right sides respectively produces second term vanishes expectedly lower limit outer sum exceeds upper limit first term pdd error due truncation rahman polynomial expansion orders involving interactive variables whereas second term pdd error contributed ignoring interactive larger variables obviously error general function depends expansion decay decay respect nonetheless error decays monotonically respect stated proposition nothing said pdd error proposition general function equal either equal proof setting using inequality zero last line results fact first term smaller equal second term similarly setting using finally setting corollary general function whenever practice interaction among input variables polynomial expansion equal order become increasingly weaker grow case variance decreases given rates decreases question arises fast decay respect proposition corollary subsequent discussions provide insights proposition class functions assume attenuates according three constants holds var proof recognition polynomial dimensional decomposition use obtain corollary function class described proposition equal either equal according corollary decays strictly monotonically respect rate parameters equality holds proposition figure comprising three subfigures presents three sets plots relative error five distinct values subfigures obtained correspond three distinct cases values cases error given decays first respect levels respective limit large limits get progressively smaller increases expected however magnitude behavior depends rates expansion attenuates respect degree interaction polynomial expansion order case figure top error given decays slowly respect due relatively weaker attenuation rate associated polynomial expansion order trend reverses attenuation rate becomes stronger reaches condition case figure middle larger values example respective limits significantly lower case case attenuation rates large case figure bottom decay rate error accelerates substantially relationship pdd pce since pdd pce share orthonormal polynomials related indeed relationship first studied rahman yadav determined one two infinite series pdd pce defined rearranged derive words pdd also viewed pce vice versa however due strong connection add endowed desired hierarchical structure pdd merits appellation importantly pdd pce truncated fact two important observations stand prominently first terms pce approximation organized respect order polynomials contrast pdd approximation structured respect degree interaction finite number random variables therefore significant may exist regarding accuracy convergence properties truncated sum second stochastic response highly nonlinear contains rapidly diminishing interactive multiple random 
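The truncation schemes named above differ only in which multi-indices they keep; a sketch of the three index sets, with one common convention assumed for the hyperbolic-cross cutoff (conventions for that set vary in the literature):

```python
from itertools import product
from math import prod

def pce_index_set(N, m, kind):
    keep = {
        "total":      lambda j: sum(j) <= m,                     # total degree
        "tensor":     lambda j: max(j) <= m,                     # tensor product
        "hyperbolic": lambda j: prod(k + 1 for k in j) <= m + 1  # one convention
    }[kind]
    return [j for j in product(range(m + 1), repeat=N) if keep(j)]

for kind in ("total", "tensor", "hyperbolic"):
    print(kind, len(pce_index_set(5, 3, kind)))   # 56, 1024, 26
```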
variables pdd approximation expected pce approximation terms pdd approximation nonlinear selecting appropriate values contrast many terms expansion required included pce approximation capture high nonlinearity work theoretical comparison pdd pce context error analysis studied prior works presented error analysis convenient write pce approximation terms pdd approximation indeed exists striking result connecting pce pdd approximations explained proposition proposition pce approximation svariate pdd approximation respectively pce approximation rahman figure pdd errors various attenuation rates expansion top middle bottom polynomial dimensional decomposition pdd approximation denotes minimum proof according rahman yadav right side resulting long form pce approximation expressed jiq xiq sums jis sums terms pdd expansion note depending condition sums survive meaning pce approximation retains interaction polynomial expansion order accordingly compact form pce approximation written completing proof using proposition number expansion say associated pce approximation calculated required pdd approximation accordingly setting last expression commonly found pce literature advantage obvious pdd determined reused pce approximation subsequent error analysis thereby sidestepping calculations pce pdd pce errors define another error resulting pce approximation using proposition meaning pce error analysis conducted using pdd approximation proposition general function let denote pdd pce errors defined respectively given truncation parameter pce approximation truncation parameters pdd approximation chosen rahman denotes maximum proof result follows propositions corollary proposition aids selecting appropriate truncation parameters contrast errors due pdd pce approximations however proposition say anything computational proposition subsequent discussion explain relationship computational error committed pdd pce approximations special class functions proposition special class functions assume diminishes according three constants holds proof replacing respectively obtains result theoretically numbers expansion required pdd pce approximations used compare respective computational table presents requisite numbers expansion pdd truncated pce truncated calculated using pdd pce approximations respectively according table growth number expansion pce steeper pdd growth rate increases markedly polynomial expansion order large primarily pce approximation solely dictated single truncation parameter controls largest polynomial expansion order preserved degree interaction independently contrast two truncation parameters involved pdd approximation greater flexibility retaining largest degree interaction largest polynomial expansion order consequence numbers expansion hence computational pdd pce approximations vary appreciably table growth expansion pdd pce approximations using equalities figure depicts relative pdd error relative pce error vary respect polynomial dimensional decomposition figure pdd pce errors various attenuation rates expansion top middle bottom rahman number expansion required three preceding cases attenuation rates respect degree interaction polynomial expansion order studied cases pdd pce errors decay respect expected however pdd approximation error fixed may decline even increasing whereas possibility exists pce approximation behavior pronounced case figure top example case bivariate pdd approximation achieves relative error employing expansion contrast match error pce approximation needed committing 
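Hedged arithmetic behind the growth comparison above: under the total-degree convention, the S-variate mth-order PDD retains 1 + sum over s of C(N,s)C(m,s) coefficients while the mth-order PCE retains C(N+m,m), and Vandermonde's identity makes the two counts agree at S = min(m,N), consistent with the proposition relating the two truncations.

```python
from math import comb

def pdd_terms(N, S, m):
    # total-degree convention: constant + sum_s C(N, s) * C(m, s)
    return 1 + sum(comb(N, s) * comb(m, s) for s in range(1, S + 1))

def pce_terms(N, m):
    return comb(N + m, m)

N, m = 10, 4
print([pdd_terms(N, S, m) for S in range(1, N + 1)])   # growth in S
print(pdd_terms(N, min(m, N), m), pce_terms(N, m))     # both 1001
```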
relative error cost expansion therefore pdd approximation substantially economical pce approximation similar accuracy however case figure middle computational advantage pdd pce approximations disappears attenuation rate associated polynomial expansion order dominant associated degree interaction nonetheless case pdd approximation lowest possible commit error mthorder pce approximation computational finally attenuation rates case figure bottom pdd approximation still computationally pce approximation instance trivariate fifthorder pdd pce approximations require expansion commit errors respectively unlike case unnecessarily large polynomial expansion order may render pdd approximation expensive required readers take note comparative error analyses reported limited pdd pce approximations derived truncations according total degree index set index sets tensor product hyperbolic cross index sets would intriguing find whether similar conclusion arises conclusion fundamental mathematical properties pdd representing fourierlike series expansion terms random orthogonal polynomials increasing dimensions studied splitting appropriate polynomial spaces orthogonal subspaces spanned orthogonal polynomials constructed resulting polynomial refinement add eventually pdd prescribed assumptions set orthogonal polynomials proved form complete basis subspace leading orthogonal sum sets basis functions including constant subspace span space polynomials addition orthogonal sum dense hilbert space functions leading convergence pdd correct limit including case infinitely many random variables optimality pdd approximation quality due truncation demonstrated discussed error analysis general function random variables given pdd approximation pce approximation therefore pdd approximation commit larger error pce approximation comparison computational required estimate accuracy variance output function entailing exponentially attenuating expansion pdd approximation substantially economical pce approximation polynomial dimensional decomposition references askey wilson basic hypergeometric polynomials generalize jacobi polynomials mem amer math ams providence babenko approximation trigonometric polynomials certain class periodic functions several variables soviet math bellman dynamic programming princeton university press princeton caflisch morokoff owen valuation mortgage backed securities using brownian bridges reduce dimension journal computational finance cameron martin orthogonal development functionals series fourierhermite functionals ann chakraborty rahman stochastic multiscale models fracture analysis functionally graded materials engineering fracture mechanics courant hilbert methods mathematical physics vol interscience publishers dunkl orthogonal polynomials several variables encyclopedia mathematics applications cambridge university press second efron stein jackknife estimate variance annals statistics ernst mugler starkloff ullmann convergence generalized polynomial chaos expansions esaim mathematical modelling numerical analysis freud orthogonal polynomials akademiai budapest gautschi orthogonal polynomials computation approximation numerical mathematics scientific computation oxford university press golub van loan matrix computations john hopkins university press third griebel sparse grids related approximation schemes higher dimensional problems foundations computational mathematics pardo pinkus suli todd cambridge university press griebel kuo sloan anova decomposition function infinitely many variables 
every term smooth mathematics computation
hoeffding class statistics asymptotically normal distribution annals mathematical statistics
http
kuo sloan wasilkowski wozniakowski decompositions multivariate functions mathematics computation
wang random dimensional model representation orthogonality order component functions journal physical chemistry
petersen relation multidimensional moment problem onedimensional moment problem math
rahman polynomial dimensional decomposition stochastic computing international journal numerical methods engineering
rahman extended polynomial dimensional decomposition arbitrary probability distributions journal engineering mechanics
rahman approximation errors truncated dimensional decompositions mathematics computation
rahman generalized anova dimensional decomposition dependent probability measures journal uncertainty quantification
rahman yadav orthogonal polynomial expansions solving random eigenvalue problems international journal uncertainty quantification
ren yadav rahman design optimization polynomial dimensional decomposition structural multidisciplinary optimization
stieltjes quelques recherches sur la théorie des quadratures dites mécaniques ann sci école norm
tang congedo abgrall adaptive surrogate modeling anova sparse polynomial dimensional decomposition global sensitivity analysis fluid simulation journal computational physics
wiener homogeneous chaos american journal mathematics
xiu karniadakis polynomial chaos stochastic equations siam journal scientific computing
yadav rahman polynomial dimensional decomposition highdimensional stochastic computing computer methods applied mechanics engineering
apr exploration retrieval sequencing samples sohan seth niko samuel kaski antti honkela helsinki institute information technology hiit department information computer science aalto university espoo finland biology program department medical genetics university helsinki helsinki finland helsinki institute information technology hiit department computer science university helsinki helsinki finland april abstract recent years field whole metagenome shotgun sequencing witnessed significant growth due sequencing technologies allow sequencing genomic samples cheaper faster better coverage technical advancement initiated trend sequencing multiple samples different conditions environments explore similarities dissimilarities microbial communities examples include human microbiome project various studies human intestinal tract availability ever larger databases measurements finding samples similar given query sample becoming central operation paper develop exploration retrieval method whole metagenome sequencing samples apply distributed string mining framework efficiently extract informative sequence pool metagenomic samples use measure dissimilarity two samples evaluate performance proposed approach two human gut metagenome data sets well human microbiome project metagenomic samples observe significant enrichment diseased gut samples results queries another diseased sample high accuracy discriminating different body sites even though method unsupervised software implementation dsm framework available https introduction metagenomics study microbial communities natural habitat using genomics techniques undergoing boom due proliferation sequencing technologies many studies focus targeted sequencing specific marker genes rrna gene bacteria recently growing interest whole metagenome sequencing see targeted studies provide data phylogenetic profiling lower cost whole metagenomes provide much information example collective metabolism population genetics community recent studies also found associations features whole human gut metagenomes type diabetes new data accumulating rapidly popular server listing almost public whole metagenomes analysing shotgun wms sequencing data challenging original sample typically contains genetic material hundreds thousands bacterial species different abundances fully sequenced previously sequencing obtain huge bag metagenomic samples actagtca tagcatag ccatgaca cttaatga atcgcaga aggttaat gtgtaccg tcaacggg actgactg attcctta ctatgcac gttgcttc atgacata gatcatga cacatgca catgactg feature extraction gatggatt gtcagtac gtactgac actgcatg dissimilarity evaluation dissimilarity values query retrieve query retrieve actggtca cttaaggc gtgtacca aggacaac figure given set metagenomic samples objective able retrieve relevant samples query sample need extract relevant features evaluate pairwise similarity dissimilarity measure samples ranked order increasing dissimilarity query collection short sequence reads whose species origin unknown significant progress made analysis relying either limited previously annotated genomes assembling reads novel complete genomes remains difficult inefficient potentially susceptible annotation biases paper introduce efficient purely feature extraction selection method well similarity measures wms sequencing data sets apply retrieval similar data sets retrieval extremely powerful tool exploration data generating hypotheses disease associations previously demonstrated gene expression data retrieval existing databases makes possible automatically explore much 
greater variety hypotheses relying solely common specifically designed focused studies similarity measures retrieval similar metagenomic data sets suggested previously based quantifying abundances relatively small number predetermined features requiring existing annotation thousands known taxa genes metabolic pathways used introduce similarity measures based solely raw sequencing reads hence unbiased insensitive quality existing annotation similar measure previously suggested pairwise comparisons using method computationally expensive scale even modestly large data sets furthermore instead considering sequences particular length also known done earlier tasks employ efficient distributed string mining algorithm find informative subsequences length order deal large number features feature selection necessary previous approaches detecting relevant features metagenomic data based direct comparison two classes samples methods work thousands features notable exception one study quantification association testing done million predefined genes without feature selection one use short limit set likely informative associated well characterised protein families previous examples unsupervised feature selection metagenomics common practice information retrieval text documents particularly relevant method assesses entropy distribution documents specific term occurs evaluate performance proposed unsupervised unconstrained retrieval method synthetic data well metagenomic samples human body sites evaluate performance retrieval engine use external validation based ground truth similarity two samples simplify process consider binary similarity crude easily accessible human gut samples come studies exploring change bacterial species composition healthy persons either inflammatory bowel disease type diabetes utilize disease state construct normalization collection metagenomic samples regularization distributed string mining dissimilarity matrix dissimilarity computation entropy evaluation annotations figure processing steps method given collection metagenomic samples use collection input distributed string mining method method estimate frequency evaluate informative compute needed dissimilarities finally paper evaluate performance considering existing annotations ground truth annotations needed retrieval general binary ground truth thus study given metagenomic sample person disease retrieval finds metagenomic samples related disease body site data use body sites ground truth investigate whether possible identify bacterial communities different body sites unsupervised setting without need reference genomes noted especially gut data two samples may related ways external validation one simple ground truth nonetheless provides objective platform comparing different methods given method unsupervised hence completely oblivious disease labels retrieval successful promising starting point developing methods leveraging data earlier patients early detection disease personalized medicine approach objective extract select suitable features representing wms sequencing samples form pairwise dissimilarity measure collection samples given dissimilarity one query sample retrieve samples similar fig measure needs reasonably rapidly computable yet captures relevant differences samples little prior biological knowledge annotations possible since detailed quantitative prior knowledge typically yet available metagenomics evaluating dissimilarity requires representing metagenomic sample suitable feature space standard choice 
representing objects strings estimate frequency values kmer string letters dna alphabet therefore possible given standard practice set specific value typically small value keep estimation problem tractable computationally statistically larger would give better discriminability without bounds finite data set sizes simply enough data estimate long argue instead setting particular value effective estimate possible possible data supports makes problem challenging since number observed different large becomes large become susceptible sequencing errors focusing appearing sample helps significantly relatively rare exactly sequencing errors two independent reads make method computationally efficient treat independent feature compute bayesian estimate relative frequencies across samples employed prior helps suppressing noise caused small observed read counts filtering step abundance distribution figure technical overview distributed string mining framework consisting client left server right processes processes responsible computing substring frequencies within sample separately substrings frequencies found using compressed suffix tree frequency information transmitted streaming representation sorted trie example trie left results parenthesis representation given middle server reads merges already sorted tries recursive manner node server computes entropy based received values updates affected pairwise distances achieved hashing prefix substring server corresponds certain range hash values samples used judge informativeness retrieval constant abundance discriminative power extreme present one sample generalize samples show filtering step significantly improves retrieval performance datasets distance measures finally compute dissimilarity two samples across features weighted average distances relative frequencies individual treating independent feature allows execute steps fast fly without storing intermediate results simplified distance measures necessary guarantee scalability given extremely high dimensionality features summarize introduce methods estimate frequencies large number multiple samples decide informative uninformative context retrieval task iii compute distance metric using filtered frequencies execute steps fast without explicitly storing frequency values fig summarizes method methods estimating frequencies normalization regularization filtering order perform feature selection filtering first compute bayesian estimates relative frequencies samples using observed frequencies distributions samples computed independently reasons computational efficiency even relative abundance every sample observed frequencies may differ different sequencing depth coverage different samples tackle issue employ normalization normalize frequency constant proportional total number base pairs sample largest sample collection terms total base pair count obtaining interpreted probabilistically probability observing sequence actual sample assuming every sample number base pairs start lost processing order estimate relative frequencies place conjugate symmetric dirichlet prior parameters multinomial distribution observed counts common choice uniform prior distribution corresponds dirichlet distribution parameters equal yields posterior mean estimate relative frequency values dirichlet prior parameters equal one ubiquitous document retrieval particularly suitable metagenomics due following observations distributed string mining algorithm described trades low counts speed ignores present sample prior makes missing 
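A minimal sketch of the estimation step described above: per-sample k-mer counting, normalization to the depth of the largest sample, and posterior-mean smoothing of each k-mer's distribution across samples under a symmetric Dirichlet prior. kmer_counts and smoothed_profiles are hypothetical helper names, read-count depth stands in for the base-pair normalization, and nothing here reproduces the distributed string mining machinery.

```python
from collections import Counter

def kmer_counts(reads, k):
    # plain k-mer counting for one sample
    c = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            c[r[i:i + k]] += 1
    return c

def smoothed_profiles(samples, alpha=1.0):
    # samples: one Counter per sample; returns, per k-mer, the posterior-mean
    # distribution of that k-mer across samples
    depth = [sum(c.values()) for c in samples]
    scale = max(depth)
    prof = {}
    for w in set().union(*samples):
        raw = [c[w] * scale / d for c, d in zip(samples, depth)]
        tot = sum(raw) + alpha * len(samples)
        prof[w] = [(r + alpha) / tot for r in raw]
    return prof

s1 = kmer_counts(["ACGTACGT", "ACGTTT"], k=4)
s2 = kmer_counts(["TTTTACGA"], k=4)
print(smoothed_profiles([s1, s2])["ACGT"])
```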
count adding assists playing significance rare may appear due sequencing errors filtering step without affecting much finally given massive number potential crucially important improve ratio focusing informative ones unsupervised tasks comparing samples obviously distinguish samples informative concrete example consider kmer present samples similar abundance certainly give information useful comparing samples extreme present one specific sample potentially spurious due sequencing error case help comparing samples either hand present samples gives information samples similar specific sense informativeness sense measured entropy distribution samples filter based conditional entropies log log taken account distance computation normalized entropy lower certain threshold design notice standard information theory terminology higher entropy implies higher information however context informative low entropy also due bayesian estimation spurious small counts large conditional entropy filtered optimal value threshold varies datasets optimized supervised manner utilizing training set labelled samples absence labelled set suggest taking average distance metrics computed potential thresholds final metric refer final metrics two cases optimized metric average metric experimental randomly make split given dataset training str testing ste sets str ste str ste use str optimize entropy threshold query samples str retrieve relevant samples within set observe entropy threshold results best retrieval result see sec details comparing performance two methods always present evaluation ste query samples within ste retrieve relevant samples ste algorithms extract informative main computational challenge extract informative datasets feasible time space recall filtering step relies knowledge multiple samples decide respective informative retrieval task since typical collections wms samples huge size assume even plain input fits main memory single machine process datasets computation needs done either using external memory disk distributed manner computer cluster review two approaches counting distributed string mining first one standard approach literature fixed several limitations applied context multiple samples data show latter approach flexible context also generalized extract informative values simultaneously jellyfish dsk examples recent algorithmic improvements counting tools use hash tables compute distribution given fixed tools achieved keeping hash table disk main drawback approaches aimed counting single sample extending multiple samples example jellyfish could principle extended count multiple samples authors give roughly linear time algorithm merge two hash tables however intermediate counts would need stored disk requires significant amount additional space parallelized user manual sect bugs decision whether particular informative made looking frequency given wms samples tackle problem distributed string mining dsm framework handle inputs utilizing computer cluster main advantages framework divides data computation multiple cluster nodes intermediate counts stored explicitly iii additional disk strain except reading input advantages allow data analysis cluster consisting nodes limited main memory extend dsm framework compatible definition informative see subsection allows extract informative either fixed values feasible time dsm framework based model clients correspondence given samples client responsible computing frequencies within designated sample computation relies heavily suffix sorting techniques 
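Once the across-sample distributions are available, the entropy filter described above is short: a k-mer is kept as informative when the normalized entropy of its distribution over samples falls below a threshold, with the threshold optimized on a training split or averaged over a grid as the text explains. A sketch assuming at least two samples, continuing the previous block:

```python
import math

def normalized_entropy(p):
    h = -sum(q * math.log(q) for q in p if q > 0)
    return h / math.log(len(p))

def informative(prof, threshold):
    # keep k-mers whose across-sample distribution is concentrated enough
    return {w: p for w, p in prof.items()
            if normalized_entropy(p) <= threshold}

kept = informative(smoothed_profiles([s1, s2]), threshold=0.9)
```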
data structures strings input data first preprocessed compressed representation replaces input data acts efficient search structure computation straightforward server simply merges sorted input clients computes entropies updates distance matrices fig gives toy example interaction two crucial observations needed keep whole computation transmission costs feasible first informative seen subset substrings substrings whose instances differentiating continuation left right formally substring string called exists two symbols substrings similarly substring leftbranching substrings substring say second string length substrings total length substrings bounded log theorem first observation allows reduce computation smaller set substrings easy see frequency exists substring length exactly frequency follows frequency deduced branching substrings contain necessary information detect informative second observation guarantees feasible transmission cost clients servers upper bound concatenation substrings also acts upper bound running time amount communication needed drawback restricting substrings informative able detect appear least twice sample although limit may useful pruning spurious introduced sequencing errors detailed explanation analysis dsm framework given software implementation dsm framework available https dissimilarity metrics extracted informative use compute dissimilarity two metagenomic samples consider three dissimilarity metrics computed easily large number sequential manner one time without storing frequencies explicitly utilize natural variance structure abundant weight relative frequencies respective total counts utilize absolute frequencies defined mainly use simple jaccard distance consider abundances whether occurs given two sets detected present two different samples jaccard distance measures many elements shared two sets mathematically defined dcount despite simplicity observe jaccard distance performs well potential reason robustness measurement noise effectiveness two metagenomic samples differ terms presence absence certain species functionalities assume present sample frequency also experiment two metrics use abundance information euclidean distance obvious distance measure two metagenomic samples euclidean distance respective frequencies consider distance metric dsqrt computed sequentially new informative extracted square root transformation variance stabilizing transformation poisson popular model quantitative sequencing data log transformed euclidean distance also consider metric log transformation popular approach document retrieval dlog log log motivation using log transformation decreases sensitivity high frequency counts present high abundance almost every genome instance marker gene log transformation reduces effect metric evaluation metric evaluate performance dissimilarity metric terms performance task retrieving relevant samples given query metagenomics sample ground truth relevance either disease class disease known body site samples class considered relevant measuring retrieval performance use evaluation metric popular document retrieval mean average precision map given query retrieval method ranks samples increasing order dissimilarities given one retrieved top closest samples precision defined precision number relevant samples retrieved samples input size samples preproc total memory cpu time cpu time metahit hmp table computational resources required distributed string mining different datasets report times total cpu times fixed preprocessing done separately 
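Sketches of the three dissimilarities defined above, written for two per-sample k-mer profiles held as dictionaries; the log1p form of the log transform is an assumption made here to handle zeros rather than the paper's exact definition, and in practice the sums run only over the k-mers kept by the entropy filter.

```python
import math

def d_count(a, b):
    # Jaccard distance on k-mer presence/absence
    sa, sb = set(a), set(b)
    return 1.0 - len(sa & sb) / len(sa | sb)

def d_sqrt(a, b):
    # Euclidean distance after a square-root (variance-stabilizing) transform
    ks = set(a) | set(b)
    return math.sqrt(sum((math.sqrt(a.get(w, 0.0)) - math.sqrt(b.get(w, 0.0))) ** 2
                         for w in ks))

def d_log(a, b):
    # Euclidean distance after a log transform (log1p chosen to handle zeros)
    ks = set(a) | set(b)
    return math.sqrt(sum((math.log1p(a.get(w, 0.0)) - math.log1p(b.get(w, 0.0))) ** 2
                         for w in ks))
```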
actual computation total memory memory requirement computation nodes experiments ran cluster dell poweredge nodes ram cores simulated data metahit run using nodes ran using nodes allowing parallelization hmp ran cluster nodes xeon cpus ram map defined using average precision map precision avep avep set queries number relevant samples query set locations ranked list relevant sample appears higher map implies better performance judge two map values significantly different employ randomization test described query test randomly reassigns aveps achieved two methods one another computes difference resulting map multiple reassignments get distribution true map value tested terms case two samples share dissimilarity query sample employ modification suggested break ties computing mean follow type approach using sample query retrieving rest collection simulated data human gut samples query positive samples testing set ste whereas body site samples query sample testing set cases retrieve entire set choosing entropy threshold supervised setting query str retrieve str synthetic data generation test method simulated four datasets containing samples separate classes interpretation samples class relevant datasets two classes classes samples species composition different relative abundances used metasim generate illumina reads length using error configuration file provided developers dataset contains samples belong positive class rest belong negative class dataset used species following genera acetobacter acetobacterium acidiphilium acidithiobacillus acinetobacter bacillus bacteroides bifidobacterium chlamydia chlamydophila clostridium escherichia haloarcula halobacterium lactobacillus pasteurella salmonella staphylococcus streptococcus abundance profiles generated two dirichlet distributions one positive negative class parameters dirichlet distributions shared two classes half species randomly chosen parameters used classes half species parameters randomly permuted example given species assigned parameters could parameters metahit log hmp jaccard entropy threshold jaccard jaccard entropy threshold jaccard entropy threshold jaccard jaccard entropy threshold entropy threshold entropy threshold entropy threshold figure number informative strings varying entropy thresholds proposed approach fixed lenthgs protein family based comparison figfam box denotes optimized entropy threshold used evaluate performance methods general observations follows number strings lower rest number strings much higher rest methods number strings close observe strings low real data sets simulated data indicate presence discriminative features also optimized entropy threshold varies different methods second fourth species species permuted exact species corresponding parameter values downloaded https resulting datasets relatively easy data high coverage reads per sample relatively difficult data low coverage reads per sample mixed data half samples rest simulate varying sequencing depth relatively difficult data coverage high additional noise class distributions simulate overlap classes elaborate relative abundance species phigh noise noise generated symmetric dirichlet distribution parameters equal results evaluated retrieval performance three human metagenomics datasets metahit metagenomic samples healthy people patients inflammatory bowel disease ibd syndrome sample average million reads goal retrieve ibd positive patients mean average precision metahit jaccard metahit log jaccard hmp jaccard fig abd method jaccard mean average 
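The evaluation metric above in code: average precision for a single query, and its mean over a query set. Here ranked is the collection sorted by increasing dissimilarity to the query, and the tie-breaking modification mentioned in the text for samples sharing the same dissimilarity is not reproduced in this sketch.

```python
def average_precision(ranked, relevant):
    hits, total = 0, 0.0
    for n, s in enumerate(ranked, start=1):
        if s in relevant:
            hits += 1
            total += hits / n          # precision at each relevant rank
    return total / len(relevant)

def mean_average_precision(queries, rank, relevant):
    # rank(q): ranked list for query q; relevant(q): its ground-truth set
    return (sum(average_precision(rank(q), relevant(q)) for q in queries)
            / len(queries))

print(average_precision(["a", "b", "c", "d"], {"a", "c"}))  # (1 + 2/3)/2
```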
[figure: bar plots of mean average precision for the compared methods (kmer, figfam, abd, jaccard) on each dataset] figure retrieval performance comparison proposed approach using following base measures fig retrieval performance using known protein family abd hellinger distance relative estimated abundance distance relative abundance uses optimized metric equally spaced threshold values errorbar shows map value along standard error grey horizontal line shows retrieval chance map computed zero similarity metric arrow present method indicates whether performance corresponding method significantly better worse stars denote significance level synthetic datasets bottom row relative abundance known experimental design present result metahit present performance jaccard log metric since latter performs much better compared former phase metagenomic samples healthy people patients type diabetes sample average million reads goal retrieve diabetic patients chose explore phase data instead phase data since former higher coverage reads latter hmp metagenomic samples different body sites samples passed assessment http discarded samples less number reads largest sample recapitulate metahit goal observe given positive sample patient particular disease one retrieve relevant samples similar disease whereas hmp goal observe given sample particular body site one retrieve relevant samples samples body site data applied quality threshold ignored base pairs quality less threshold table gives overview computational resources required data set additionally number used different methods data set available retrieval samples similar annotation applied proposed approach number alternatives retrieval similar samples data set evaluated many retrieved
alternatives datasets except protein family based comparison works equally well interestingly retrieval performs relatively poorly suggesting differences classes easily captured species composition alone proposed features provide better separation retrieval based known protein family performed fairly well slightly worse proposed approach metahit observe mean average precision mean average precision metahit log jaccard fig fig jaccard hmp jaccard jaccard fig jaccard fig fig jaccard fig fig figure comparison best retrieval performance achieved optimized metric middle average metric right without entropy filtering left proposed approach individual well figfam based distance metric metrics optimized averaged equally spaced threshold values errorbar line shows map value along standard error grey horizontal line shows retrieval chance map computed zero dissimilarity metric arrow present method implies whether performance corresponding method top average metric bottom optimized metric better worse entropy filtering employed stars denote significance level observe filtering positive impact retrieval performance metahit jaccard metric performs poorly however change metric log significantly improves performance methods otherwise metrics usually work equally well different data sets effect using specific unspecific length next compared proposed approach using using specific retrieval performance using optimized metric shown fig figures show complete distribution average precision values different queries whose mean mean average precision fig performance proposed method usually better individual thus proposed method appears relatively safe choice suffer catastrophically bad performance data sets effect entropy filtering next evaluated efficacy filtering informative retrieval performance without filtering operation results presented fig observed entropy filtering usually improved retrieval performance tested lengths using optimized metric although improvement might always statistically significant although average metric often provides significant performance might always improve performance without filtering also retrieval performance figfam may may improve entropy filtering comparison across different metrics finally evaluated retrieval performance different dissimilarity metrics presented performance using optimized metric different metrics fig average precision metahit average precision hmp count sqrt metric log count sqrt metric sqrt metric log log count count count sqrt metric log sqrt metric log count sqrt metric log count sqrt metric log figure comparison best retrieval performance different distance metrics using show violin plot average performances queries positive samples data sets optimized metrics selected equally spaced threshold values box denotes map value horizontal lines show retrieval chance avep computed zero dissimilarity metric straight line mean dotted lines quantiles respectively number relevant samples differ different queries arrow present method implies whether corresponding method performs significantly better worse methods denoted colors stars denote significance level observe different distance metrics usually demonstrate similar performance observed simple metric dcount performed least well abundancesensitive log sqrt metrics except metahit data metrics performed better conclusion wake collecting multiple samples similar environments information retrieval metagenomic samples expected become handy tool metagenomics research paper addressed problem retrieving relevant 
metagenomic samples given query sample collection novelty proposed approach unsupervised rely availability reference databases suggested employing frequencies feature representation however rather exploring fixed scanned possible possible using distributed string mining proposed appropriate filtering technique discard uninformative evaluated method real simulated data observed approach effectively retrieve relevant metagenomic samples outperforming figfams method based known highly informative protein families well retrieval based species composition samples acknowledgement authors would like thank ahmed sobih help metaphlan experiments metahit part calculations presented performed using computer resources within aalto university school science project funding work supported academy finland project numbers references yael baran eran halperin joint analysis multiple metagenomic samples plos comput biol february caldas nils gehlenborg ali faisal alvis brazma samuel kaski probabilistic retrieval visualization biologically relevant microarray experiments bioinformatics caldas nils gehlenborg eeva kettunen ali faisal mikko andrew nicholson sakari knuutila alvis brazma samuel kaski information retrieval heterogeneous collections transcriptomics data links malignant pleural mesothelioma bioinformatics jan robert edwards robert olson terry disz gordon pusch veronika vonstein rick stevens ross overbeek real time metagenomics using annotate metagenomes bioinformatics dec sharon greenblum peter turnbaugh elhanan borenstein metagenomic systems biology human gut microbiome reveals topological shifts associated obesity inflammatory bowel disease proc natl acad sci jan bai jiang kai song jie ren minghua deng fengzhu sun xuegong zhang comparison metagenomic samples using sequence signatures bmc genomics december pmid manzini puglisi permuted longest common prefix array proc cpm lncs pages springer christine largeron christophe moulin mathias gry entropy based feature selection text categorization proceedings acm symposium applied computing sac pages association computing machinery kelvin monika bihan shibu yooseph barbara meth analyses microbial diversity across human microbiome plos one zhenqiu liu william hsiao brandi cantarel elliott franco drbek claire sparse learning simultaneous multiclass classification feature selection metagenomic data bioinformatics dec nicolas maillet claire lemaitre rayan chikhi dominique lavenier pierre peterlongo compareads comparing huge metagenomic experiments bmc bioinformatics suppl guillaume marais carl kingsford fast approach efficient parallel counting occurrences bioinformatics march frank mcsherry marc najork computing information retrieval performance measures efficiently presence tied scores proceedings research european conference advances information retrieval ecir page berlin heidelberg meyer paarmann souza olson glass kubal paczian rodriguez stevens wilke wilkening edwards metagenomics rast server public resource automatic phylogenetic functional analysis metagenomes bmc bioinformatics september folker meyer ross overbeek alex rodriguez figfams yet another set protein families nucleic acids research november pmid pmcid suparna mitra bernhard klar daniel huson visual statistical comparison metagenomes bioinformatics aug donovan parks robert beiko identifying biologically relevant differences metagenomic communities bioinformatics mar junjie qin human gut microbial gene catalogue established metagenomic sequencing nature march junjie qin association study gut 
microbiota type diabetes nature oct
daniel richter felix ott alexander auch ramona schmid daniel huson metasim sequencing simulator genomics metagenomics plos one
guillaume rizk dominique lavenier rayan chikhi dsk counting low memory usage bioinformatics mar
siegfried schloissnig manimozhiyan arumugam shinichi sunagawa makedonka mitreva julien tap ana zhu alison waller daniel mende jens roat kultima john martin karthik kota shamil sunyaev george weinstock peer bork genomic variation landscape human gut microbiome nature jan
nicola segata jacques izard levi waldron dirk gevers larisa miropolsky wendy garrett curtis huttenhower metagenomic biomarker discovery explanation genome biol
nicola segata levi waldron annalisa ballarini vagheesh narasimhan olivier jousson curtis huttenhower metagenomic microbial community profiling using unique marker genes nature methods august
mark smucker james allan ben carterette comparison statistical significance tests information retrieval evaluation proceedings sixteenth acm conference conference information knowledge management cikm pages new york usa acm
xiaoquan jian kang ning efficient search similar microbial communities based novel indexing scheme similarity score metagenomic data bioinformatics oct
human microbiome project consortium structure function diversity healthy human microbiome nature june
gene tyson jarrod chapman philip hugenholtz eric allen rachna ram paul richardson victor solovyev edward rubin daniel rokhsar jillian banfield community structure metabolism reconstruction microbial genomes environment nature february
niko simon puglisi distributed string mining sequencing data workshop algorithms bioinformatics wabi lncs pages
james robert white niranjan nagarajan mihai pop statistical methods detecting differentially abundant features clinical metagenomic samples plos comput biol apr
yiming yang jan pedersen comparative study feature selection text categorization proceedings fourteenth international conference machine learning icml pages morgan kaufmann publishers
| 5 |
amenable uniformly recurrent subgroups lattice embeddings mar adrien boudec abstract study lattice embeddings class countable groups defined property largest amenable uniformly recurrent subgroup continuous comes extremely proximal action envelope obtain restrictions locally compact groups contain copy lattice notably regarding normal subgroups product decompositions generally dense mappings product locally compact groups focus family finitely generated groups acting trees within class show embed cocompact irreducible lattices locally compact wreath products provides examples finitely generated simple groups wreath product finite group free group keywords lattices locally compact groups strongly proximal actions chabauty space groups acting trees irreducible lattices wreath products introduction questions considered article fall setting following general problem given class countable group study locally compact groups embeds lattice sits discrete subgroup carries probability measure malcev showed every finitely generated torsion free nilpotent group embeds cocompact lattice unique simply connected nilpotent lie group conversely locally compact group finitely generated nilpotent lattice modding compact normal subgroup identity component lie group polynomial growth characterized finitely generated virtually nilpotent statement combination several works first finitely generated nilpotent lattice necessarily cocompact since virtually torsion free classical fact totally disconnected general case deduced prop uses notably solution hilbert fifth problem particular compactly generated polynomial growth statement follows generalization gromov polynomial growth theorem locally compact groups beyond nilpotent case examples classifications embeddings cocompact lattice obtained dymarz several families date march work carried author postdoctoral researcher current affiliation cnrs umpa ens lyon adrien boudec examples solvable groups although directly related concerns also mention certain dual problem considered class amenable groups outside setting amenable groups furman addressed problem class lattices lie groups improving rigidity results mostow prasad margulis see references see also furstenberg considered large class countable groups defined certain group theoretic conditions established given lattice embedding general arithmeticity result setting connected component article consider class groups whose furstenberg uniformly recurrent subgroup continuous see definitions first part article address question extent properties furstenberg uniformly recurrent subgroup countable group influence locally compact groups embeds lattice second part focus family finitely generated groups within class embed cocompact irreducible lattices locally compact wreath products groups consideration countable group chabauty space sub subgroups compact space acts conjugation uniformly recurrent subgroup urs closed minimal subset sub glasner weiss showed every minimal action compact space gives rise urs see proposition called stabilizer urs associated action conversely every urs arises stabilizer urs minimal action see matte elek case finitely generated groups urs shown related study ideals reduced group algebras reduced crossed products urs several classes groups studied certain examples groups rigidity results minimal actions compact spaces obtained complete description space urs various results homomorphisms topological full groups groupoids notably obstructions involving invariants groupoids obtained via urs 
considerations precisely via complete description points chabauty space groups whose orbit approach trivial subgroup present article make use urs tool order study lattice embeddings class countable groups define urs amenable consists amenable subgroups every countable group admits largest amenable urs respect natural partial order urs see stabilizer urs associated action furstenberg boundary see definitions urs called furstenberg urs either point case rad rad amenable radical homeomorphic cantor space last case say continuous refer detailed discussion let denote class groups furstenberg urs continuous equivalently group belongs admits amenable urs whose envelope amenable see definition amenable urs lattice embeddings envelope class disjoint classes groups previously mentioned introduction precisely class disjoint class amenable groups class linear groups also classes groups specifically considered groups numbers acylindrically hyperbolic groups see class stable taking quotient amenable normal subgroup extension amenable group prop also normal subgroup belongs prop result complement class also stable extensions see prop study class groups also motivated work kennedy showed following characterization countable group belongs group reduced simple introduction historical developments problem refer survey harpe topological boundaries make use notion topological boundary sense furstenberg compact spaces minimal strongly proximal group action see definitions many different notions boundaries appear study groups group actions sometimes called boundary theory particularly well described introduction insist present article term boundary always refer topological boundary sense notion confused measured notions boundaries particular despite possibly confusing terminology maximal topological boundary called furstenberg boundary notion measured notion boundary lattices direct products special attention given products locally compact groups study lattices product groups motivated among things connections theory lattices lie groups rich geometric aspects well instances groups rare properties appearing setting refer literature see developments last years study lattices products locally compact groups given countable group continuous furstenberg urs group containing lattice interested understanding close group direct product two groups properties group share direct product course various notions closeness considered basic one ask whether group admits decompositions direct product one step one might consider quotient morphisms onto direct products groups theorems generally consider continuous morphisms dense image direct product groups make assumption injectivity maps injectivity composition projection one factor adrien boudec particular setting allows maps form closed normal subgroups dense first results central notion article one extremely proximal action minimal extremely proximal actions naturally arise geometric group theory boundaries sense furstenberg refer definitions examples say furstenberg urs countable group comes extremely proximal action exists compact space minimal extremely proximal whose associated stabilizer urs equal note typically furstenberg boundary urs envelope env definition subgroup generated subgroups theorem let countable group whose furstenberg urs comes faithful extremely proximal action let locally compact group containing lattice following hold assume env finitely generated direct product two groups assume env finite index finite abelianization continuous morphism dense image 
product locally compact groups one factor compact conclusions hold group commensurable compact kernels result applications setting groups acting trees see corollary make several comments theorem assume finitely generated compactly generated statement assumption env finitely generated admits variations see theorem making assumption size envelope respect natural sense general hope derive conclusion entire group envelope small extreme illustration groups whose furstenberg urs comes faithful extremely proximal action trivial lattices products psl inside psl psl see also discussion right corollary assumption env fact furstenberg urs comes faithful extremely proximal action equivalent asking action faithful extremely proximal see remark provides intrinsic reformulation assumption appealing auxiliary space theorem assumption statement env finite index env finite abelianization equivalent virtually simple see proposition urs approach study lattice embeddings allows consider generally subgroups finite covolume recall closed subgroup locally amenable urs lattice embeddings compact group finite covolume carries probability measure thus lattice discrete subgroup finite covolume stating following result need terminology recall notion disjointness introduced furstenberg compact disjoint whenever compact continuous equivariant surjective maps map makes natural diagram commute remains surjective see minimal equivalent asking diagonal product minimal consider following property two never disjoint group property called boundary indivisible glasner characterized minimal compact disjoint carrying fully supported measure whose orbit closure space probability measures minimal relation disjointness boundaries consider different spirit deals disjointness within class rather disjointness class locally compact groups cocompact amenable maximal subgroup examples boundary indivisible groups prop contrary many discrete groups boundary indivisible relevance property setting comes fact show proposition discrete group theorem boundary indivisible actually examples boundary indivisible discrete groups aware fall setting proposition recall convex compact irreducible contain proper closed convex subspace say subgroup topological group weakly whenever convex compact fixes point irreducible indeed weakening notion asks every convex compact points points hence irreducible unless trivial subgroup amenable weakly amenable normal weakly subgroup coamenable however general weak imply even discrete groups exhibit examples finitely generated groups every subgroup either amenable weakly subgroups finally say subgroup exists acts minimally refer context examples theorem let locally compact group amenable urs comes extremely proximal action whose envelope let locally compact group containing closed subgroup finite covolume boundary indivisible following hold whenever continuous morphism dense image one factor amenable relative version notion weak amenability adrien boudec subgroup uniformly recurrent weakly particular every normal subgroup make several comments group allowed discrete theorem applies groups theorem intermediate step proof theorem provides additional information rather independent conclusion theorem statement implies whenever closed normal subgroups dense least one must last sentence implies closed normal subgroup open centralizer either amenable see proposition know whether condition open removed theorem say anything amenable normal subgroups worst pointing illustrated examples discussed section happens discrete group 
satisfying assumptions theorem trivial amenable radical sits lattice group noncompact amenable radical remark provides showing statement conclusion strengthened saying theorem happens splits direct product two groups even additional assumption amenable urs comes faithful extremely proximal action see example amenable replaced compact statement view remarks illustrations limitations use topological boundaries urs problem addressed rather abstract setting theorem group actions trees natural source extremely proximal actions theorems find applications setting following statement locally finite simplicial tree corollary let aut countable group proper invariant subtree finite orbit assume amenable virtually simple locally compact group containing lattice continuous morphism dense image one factor compact particular direct product two groups conclusion theorem holds group corollary never discrete aut recall burger mozes constructed simple groups acting two locally finite regular trees image aut aut acts amenable urs lattice embeddings freely cocompactly cocompact lattice product aut aut examples illustrate fact assumption corollary essential examples groups corollary applies found among family groups denoted see corollary examples groups continuous furstenberg urs sym finite permutation group simply transitive regular subgroup group finitely generated group acting tree transitively vertices edges local action every vertex isomorphic refer definition normal subgroup structure groups highly sensible permutation groups permutation groups virtually admits free quotient proposition permutation groups subgroup index two preserving bipartition simple cor family groups family lattices product two trees contain instances finitely generated simple groups embed densely universal group despite similarities corollary shows group containing virtually simple lattice rather allergic direct product behavior compare theorem also mention examples groups corollary applied may found among family piecewise prescribed tree automorphism groups considered sec irreducible lattices wreath products leaving aside previous abstract situation focus family groups see definitions mentioned common properties discrete groups certain lattices product two trees provide motivation studying locally compact groups contain group lattice contribution article problem one hand conclusions given corollary see corollary hand describe embeddings groups irreducible lattices locally compact wreath products group acting set subgroup group semirestricted permutational wreath product introduced cornulier product set functions finitely many acts usual way definition somehow interpolates restricted unrestricted permutational wreath products correspond respectively case write locally compact compact open natural locally compact group topology see call lattice irreducible projection terminology motivated fact definition prevents generally subgroup commensurable form lattices following statement cyclic group order symmetric group elements vertex set tree subgroup index two preserving bipartition adrien boudec theorem let sym permutation groups acts freely index following hold group embeds irreducible cocompact lattice semis restricted permutational wreath product aut transitive permutation group finitely generated group isometric cayley graphs note finite index subgroup split product stabilizer edge projection action open subgroup split direct product two groups embedding snd aut inclusion subgroup aut twisted embedding associated cocycle given local 
action see section details also note image intersect amenable radical snd intersect subgroup aut along cocompact lattice aut case particular group actually restricted wreath product situation group irreducible cocompact lattice aut aut applications recall property virtually simple invariant indeed lattices constructed burger mozes show virtually simple finitely generated group may cayley graph product two finitely generated free groups theorem together simplicity results provide another illustration fact namely finitely generated simple groups cayley graph wreath product wreath product construction already known source examples finitely generated groups whose algebraic properties reflected cayley graphs two wreath products may isometric one solvable torsion free finite index subgroup second properties phenomenon exhibited theorem nonetheless different sense provides finitely generated groups isometric cayley graphs one wreath product simple hence commensurable wreath product recall finitely generated groups amenable invariant contrast theorem implies corollary among finitely generated groups property infinite amenable radical invariant examples theorem simultaneously show infinite elliptic radical also invariant recall elliptic radical discrete group largest locally finite normal subgroup recall theorem finitely generated group wreath product finite group must act properly cocompactly graph algebraic description isometry groups graphs given see also implies particular subgroup amenable urs lattice embeddings index two locally finite contrast theorem shows rigidity fails case questions end introduction two questions extreme proximality used crucial way different stages proofs theorems results fail without extreme proximality assumption simply group may well direct product putting aside trivial know whether serious algebraic restrictions locally compact group may derived existence lattice continuous furstenberg urs direction find following question natural question exist continuous furstenberg urs lattice group factors injective dense projection factor impose moreover trivial amenable radical theorem presents situation locally compact group two cocompact lattices stabilizer urs associated rad stabilizer urs associated continuous stands furstenberg boundary see examples group splits amenable radical lattice preserves splitting meaning hence act faithfully injective projection naturally raises following question let locally compact group two lattices acting faithfully possible topologically free topologically free happen homogeneous note prop condition act faithfully equivalent saying trivial amenable radical recall topologically free means dense subset points trivial stabilizer equivalently stabilizer urs trivial outline proofs organization article organized follows next section introduce terminology preliminary results topological boundaries extremely proximal actions section establish results uniformly recurrent subgroups used later sections particular prove certain gap property urs coming extremely proximal actions proposition combined observation compact spaces comparable stabilizer urs proposition deduce locally compact group amenable urs comes extremely proximal action whose envelope boundary indivisible proposition setting section group admitting free extremely proximal action establish intermediate results notably concerning normal subgroups proposition commensurated subgroups proposition deduce results class groups see proposition corollary section use results section together 
proposition furstenberg prove theorem specify discrete groups give adrien boudec proof theorem proof essentially splits two steps first one application theorem obtain amenability one factor second consists proving appropriate assumptions amenable factor compact using results section section consider groups acting trees apply previous results article setting giving proof corollary focus family groups prescribed local action study boundaries groups use results section order characterize discrete groups within family boundary indivisible see theorem includes virtually simple also contain simple instances finally study lattice embeddings groups give proof theorem acknowledgements grateful alex furman pointing proposition attention uri bader enlightening discussion proof also grateful caprace yves cornulier bruno duchesne matte bon nicolas monod pierre pansu interesting discussions comments related work finally indebted alain valette decisive remark made may eventually led theorem preliminaries conventions terminology letter usually refer topological group denote discrete group group homeomorphic automorphisms denoted aut whenever locally compact group always assume second countable notation refer topological space letters reserved compact spaces compact space equipped extremely proximal group action compact spaces assumed hausdorff space admits continuous action action minimal orbits dense said trivial space locally compact denote prob set regular borel probability measures space continuous compactly supported functions denoted prob defines linear functional endow prob weak net converges theorem prob relatively compact denote set closed subsets sets compact open form basis chabauty topology endowed chabauty topology space compact freely identify image natural inclusion note particular case locally compact group space sub closed subgroups closed particular sub compact space acts conjugation uniformly recurrent subgroup urs amenable urs lattice embeddings closed minimal subset sub set urs denoted urs extension also say subgroup uniformly recurrent closure conjugacy class sub minimal topological boundaries compact action strongly proximal closure prob contains dirac measure strong proximality stable taking products diagonal action continuous equivariant images see say boundary minimal strongly proximal every topological group exists unique boundary universal property boundary exists continuous surjection prop universal space referred furstenberg boundary easy verify amenable normal subgroup acts trivially admits cocompact amenable subgroup furstenberg boundary homogeneous space form containing precisely prop situation discrete groups quite different shown furstenberg boundaries discrete groups always unless trivial following fundamental property boundaries see theorem convex compact contains boundary fact irreducible convex compact action strongly proximal closure extreme points irreducible means proper closed convex subspace particular theorem following consequence theorem group amenable trivial equivalently trivial extremely proximal actions let compact closed subset compressible closure space contains singleton equivalently every neighbourhood exists action extremely proximal every closed subset compressible references extremely proximal actions considered include make use following result theorem theorem let compact assume least three points extremely proximal strongly proximal examples extremely proximal actions provided group actions trees hyperbolic spaces aut acts proper invariant subtree 
finite orbit action minimal extremely proximal acts coboundedly proper geodesic hyperbolic space fixed point fixed pair infinity gromov boundary minimal extremely proximal two situations particular cases following general result believe homeomorphism space hyperbolic exist called endpoints neighbourhoods adrien boudec large enough proposition acts compact space hyperbolic elements common endpoints set endpoints hyperbolic elements dense action minimal extremely proximal proof let open invariant subset density assumption hyperbolic whose attracting endpoint belongs every since open deduce contains existence hyperbolic elements common endpoints ensures fixes point finally action minimal closed subset whose attracting endpoint outside compressible repealing endpoint recent work duchesne monod shows group actions dendrites also source extremely proximal actions recall dendrite compact metrizable space two points extremities unique arc duchesne monod show acts invariant proper subdendrite unique minimal closed invariant subset extremely proximal see proof theorem extremely proximal actions also play prominent role context group actions circle minimal action either conjugated group rotations finite centralizer action quotient circle extremely proximal see ghys margulis mention however examples countable groups action minimal topologically free aware stabilizer urs either known amenable particular know application theorem groups acting circle sequel make use following easy lemma lemma let topological group subgroup compact subset let compact closed subset compressible compressible particular extremely proximal extremely proximal proof assumption exists converges compactness assume converges follows converges continuity uniformly recurrent subgroups generalities uniformly recurrent subgroups let locally compact group urs write exist equivalent fact every contained element every contains element relation order urs see amenable urs lattice embeddings simplicity urs associated closed normal subgroup still denoted particular resp means contained resp contains elements trivial urs mean urs corresponding trivial subgroup warn reader terminology urs corresponding normal subgroup trivial space trivial urs let compact say factor extension exists continuous equivariant map onto continuous equivariant map say almost set dense moreover onto say almost extension recall definition stabilizer urs associated minimal action compact space compact denote stabilizer definition compact denote set points stab sub continuous upper map stab second countability imply dense subset indeed basis topology sub set closed one verifies stab continuous following denote cls sub cls sub cls stands closure ambient space obvious inclusions cls cls denote projections sub sub respectively proposition prop minimal compact almost extension unique minimal closed subsets respectively definition stabilizer urs associated action topologically free trivial remark assumed second countable general longer dense however still possible define stabilizer urs associated minimal action compact space see discussion sequel sometimes use following version proposition proposition let compact let subgroup acting minimally acts minimally unique minimal closed subsets adrien boudec proof let closed since factor acts minimally every exists sub fact belongs forces equal definition follows moreover acts minimally since almost extension minimality preserved taking almost extensions almost closed subset statements since factors established hold envelopes let 
locally compact group urs definition envelope env closed subgroup generated subgroups definition env smallest closed subgroup sub env note env normal subgroup actually smallest normal subgroup env let discrete group compact domain continuity map stab classical fact consists every exists neighbourhood fixed see lem proof denote set elements fixing neighbourhood lemma let countable discrete group compact minimal following equivalent iii interior hfor particular env generated elements fix interior proof clear equivalent also iii clearly implies also implies iii density finally iii implies since implies iii density set comparable stabilizer urs recall notion disjointness two compact disjoint whenever factors compact via map surjective minimal equivalent saying product remains minimal lem following lemma presents situation easily implies disjointness lemma let minimal compact exists acts minimally disjoint amenable urs lattice embeddings proof clear closed invariant subset minimality exists since acts minimally deduce contains minimality follows equal following proposition used notably proposition proposition let compact minimal write suppose disjoint env particular urs point disjoint proof using notation proposition almost extensions write set pairs assumption clearly easily seen closed proper subset proper subset since factor follows closed subset proper since almost contradicts disjointness therefore means fixed every hence env action urs paragraph still denote locally compact group compact given urs study properties action elements space proof following lemma easy verification leave reader lemma compact sub fixes point closed subset sub particular following definition makes sense definition let compact urs say fixes point lemma let compact closed invariant subset urs exists fixing fixes point proof assumption exist converges limit point exists compactness fixes upper stabilizer map lemma implies following lemma compact containing unique minimal closed ginvariant subset xmin proximal urs fixes point fixes point xmin proposition let compact extremely proximal urs either fixes point act minimally adrien boudec proof exist closed subset invariant may apply lemma space subspace point deduce fixes point stands closure sub recall given compact set subgroups lemma let compact assume closed subgroup acts minimally exists env proof since acts minimally closure contains according proposition since sub sub closed subset sub deduce sub particular env definition let urs say comes extremely proximal action exists compact minimal extremely proximal shown discrete group urs coming extremely proximal action urs must relatively large respect see precise statement appropriate assumptions imply every urs cor following proposition goes opposite direction considering urs larger proposition let urs comes extremely proximal action let urs env proof let compact minimal extremely proximal fix assume act minimally according proposition implies urs fixes point since moreover satisfy assumption deduce contradiction therefore acts minimally since moreover exists position apply lemma conclusion follows noted proposition false without extreme proximality assumption general plenty urs env lemma let urs comes extremely proximal action env acts minimally proof let compact minimal extremely proximal let env without loss generality may assume point since otherwise nothing prove ensures acts extreme proximality must act minimally see lemma therefore also proposition remark extreme proximality assumption removed lemma indeed true 
general given urs remains urs env indeed explained minimal subshift two letters amenable urs lattice embeddings gives rise urs lamplighter group contained chabauty space sub base group particular env lies inside abelian group follows env acts trivially proposition let urs comes extremely proximal action assume point action gives rise urs moreover comes faithful extremely proximal action action faithful proof write definition argue contradiction suppose applying proposition deduce env acts trivially env also acts minimally lemma deduce must point contradiction shows arguing proof lemma see normal subgroup acts minimally since point particular acts remark proposition implies far interest lies inside urs associated minimal extremely proximal action space loss generality assuming sub see also remark amenable urs recall say urs amenable every amenable following lemma already appeared prop lemma urs amenable proof since amenable must fix point compact prob unique minimal subspace prob since gboundary lemma fixes point proposition let compact minimal amenable let disjoint env acts trivially particular env never disjoint proof fact env must act trivially follows applying lemma proposition since amenable group boundary second statement follows proposition says admits amenable urs whose envelope never disjoint conclusion satisfactory concerns depends choice space although hope get better conclusion full generality next result play important role section remove dependence extreme proximality assumption recall introduction say boundary indivisible two never disjoint adrien boudec proposition assume admits amenable urs comes extremely proximal action let either env acts trivially assume env boundary indivisible proof since amenable lemma assume according proposition env exactly means env acts trivially action factors action assumption latter amenable boundaries follows trivial contradiction therefore stabilizer urs since moreover point otherwise would amenable fact boundary indivisible follows proposition countable group furstenberg urs stabilizer urs associated action furstenberg boundary refer proof following properties proposition let countable group furstenberg urs following hold amenable every amenable urs moreover amenable invariant aut proposition let countable group let env envelope furstenberg urs acts minimally proof conjugation action normal subgroup env induces map aut since invariant aut proposition particular moreover action clearly minimal since already case therefore amenable urs follows since larger amenable urs hand closed subset sub consisting amenable subgroups domination property applied must equality follows remark env fact comes faithful extremely proximal action equivalent saying faithful extremely proximal direct implication consequence proposition converse follows proposition gives intrinsic reformulation assumption theorem inside chabauty space extremely proximal actions hausdorff denote set elements acting trivially say action every open set need following easy lemma amenable urs lattice embeddings lemma assume action let open set solvable proof assume subgroup whose action let open subset assumption exists nontrivial may find open set disjoint commutator coincides therefore provided follows induction term derived series action particular never trivial solvable section consider following setting discrete group compact action faithful minimal extremely proximal order avoid trivialities assume least three points unless specified otherwise remaining section assumed satisfy goal 
derive various properties group used later sections lemma let homeo subgroup normalized acts minimally fix probability measure proof assume exists closed since compressible normalized wee see fixed point set points entire minimality trivial argument shows absence probability measure since extremely proximal action also strongly proximal theorem section terminology topologically free see definition understood viewed discrete group therefore action topologically free means exists acts trivially open subset lemma action topologically free microsupported proof let open subset let element open set acts trivially let element acts trivially outside definition let subgroup generated elements fix interior remark countable group also equal envelope urs lemma recall monolith mon intersection normal subgroups say monolithic mon proposition assume action topologically free following hold adrien boudec commutators fix fix interior generate monolithic one mon normal subgroup trivial centralizer action extremely proximal simple group virtually simple finite index finite abelianization proof denote subgroup generated set act trivially common open set show every fixing open sets commutator belongs since generated elements show abelian hence inclusion clear first note trivial lemmas therefore acts minimally according lemma may find open set since fix construction since deduce product three elements hence belongs desired shall show normal subgroup contains since normal subgroup prove monolith classical commutator manipulation see lemma exists open set contains derived subgroup let fixing open set supported inside contained since normal elements generate hence conclusion normal subgroup therefore otherwise intersection would abelian would contain previous paragraph contradiction action extremely proximal according monolith since characteristic normal hence contains simple virtually simple clearly necessary normal subgroup finite index conversely condition holds action extremely proximal lemma simple definition let topological space let group acting open set wandering translates pairwise disjoint say wandering wandering proposition let exist open set free subgroup wandering proof following glasner consider pairwise disjoint open sets elements let follows argument reduced word letters sends complement inside subgroup generated free amenable urs lattice embeddings upon reducing necessary may find open set induction word length shows letter respectively lies respectively inside particular empty since disjoint wandering proposition retain notations wreath product embeds every open subset dense proof let proposition let subgroup generated since wandering conjugates pairwise commute follows isomorphic statement extreme proximality group isomorphic subgroup hence conclusion argument following proof borrowed proposition setting action topologically free faithful linear representation proof let open subset lemmas may find finitely generated subgroup inside choose small enough follows proposition finitely generated group isomorphic subgroup since residually finite admits faithful linear representation malcev theorem fortiori true recall subgroup group commensurated conjugates commensurable two subgroups commensurable intersection finite index beginning argument proof following proposition already appeared idea extend classical techniques normal subgroups certain commensurated subgroups proposition retain notation assume action topologically free commensurated subgroup exists element admitting wandering open set contains 
monolith mon proof let admitting wandering open set shall first prove contained let let also since wandering follows commutator trivial outside coincides therefore commutator trivial outside coincides since elements actually coincide everywhere since commensurated exists belongs applying previous argument deduce belongs order prove statement enough prove contained every closed subset according proposition let proper closed subset minimality extreme proximality adrien boudec fix choose integer belongs set wandering contained first paragraph since proof complete proposition assume compact action faithful minimal extremely proximal topologically free exists free subgroup every commensurated subgroup containing monolith mon proof let free subgroup conclusion proposition commensurated subgroup particular contains element admitting wandering open set proposition mon shows every commensurated subgroup containing mon intersects trivially corollary assume compact action faithful minimal extremely proximal topologically free locally compact amenable group whose connected component lie group exists injective homomorphism proof argue contradiction assume embeds let open subgroup containing cocompact subgroup commensurated subgroup commensurated exists contain mon according proposition may find free subgroup particular contains group discrete subgroup contradicts amenability therefore mon contained every choice since compact open subgroups form basis van dantzig theorem follows mon actually lies inside since connected lie group group aut linear map aut induced conjugation action injective restriction proposition therefore map must vanish mon means mon actually lies inside center particular mon abelian contradicts proposition proofs theorems subgroups paragraph consider following property definition let topological group closed subgroup say exists acts minimally noted prevent amenable instance action thompson group circle boundary action abelian subgroup consisting rotations acts minimally examples may found among groups acting trees considered stabilizer vertex amenable subgroup acting minimally ends tree sequel mainly focus case normal generally belongs urs see proposition contrast amenable urs lattice embeddings previous examples normal subgroup never amenable normal amenable subgroup acts trivially recall furman showed prop see also prop normal subgroup locally compact group always exists acts naturally raises question whether normal subgroup know answer question case discrete groups easily settled see situation groups seems delicate recall following result furstenberg see theorem let topological group denote homeo action exists homomorphism aut homeo inn inn aut group inner automorphisms particular normal subgroup group map aut coming conjugation action induces action factors note result readily answers question discrete groups showing normal subgroups exactly normal subgroups indeed space acts minimally theorem however argument carry arbitrary groups general continuous proposition let locally compact group closed normal nonamenable subgroup assume least one following holds true open direct factor cocompact exists urs point invariant aut closed cocompact amenable subgroup proof condition ensures image open therefore given theorem continuous deduce holds acts minimally finally verification case straightforward since aut continuous aut acts continuously sub weakly subgroups paragraph consider following weakening notion definition let topological group subgroup say weakly whenever convex compact fixes 
point irreducible following properties readily follow definition proposition let subgroups weakly adrien boudec normal subgroup weakly equivalent coamenable iii amenable weakly amenable continuous dense image weakly coamenable weakly weakly weakly weakly weakly proof convex compact points point irreducible convex fix nonempty fix empty since normal fix zorn lemma fix contains irreducible convex since fix empty shows weakly proofs iii similar verifications leave reader remark natural wonder whether weak coamenability implies weak view given show answer negative general correspondence irreducible convex compact gboundaries weak admits following characterization proposition subgroup weakly every probability measure fixed proof follows theorem following shows weak naturally appears boundary indivisible groups see also proposition proposition let boundary indivisible locally compact group closed subgroup uniformly recurrent weakly proof write closure sub urs assumption let acts minimally let fixes probability measure show trivial since fixes point prob prob strongly proximal theorem fixes point lemma exists follows acts minimally therefore lemma disjoint since boundary indivisible possible trivial proof theorem paragraph shall give proof theorem introduction make use following result proposition furstenberg let locally compact group closed subgroup finite covolume amenable urs lattice embeddings completeness repeat argument prop proof write prob consider closed subspace show set closed subspace fix probability measure consider prob projection onto first factor induced operator closed hence compact subspace prob prob continuous closed prob strong proximality theorem must intersect unique minimal closed subspace one every therefore prob implies every easily follows remark case cocompact strong proximality action also follows applied action prob minimality follows disjointness theorem assume admits amenable urs comes extremely proximal action env let locally compact group containing closed subgroup finite covolume boundary indivisible generally locally compact group sequence topological group homomorphisms either dense image embedding closed subgroup finite covolume boundary indivisible particular whenever maps continuously dense image product one factor must amenable proof since amenable comes extremely proximal action env group boundary indivisible proposition proposition property boundary indivisible inherited closed subgroups finite covolume indeed disjoint gboundaries also boundary proposition hence must trivial since boundary indivisible shows since boundary indivisibility passes dense continuous images inherited closed subgroups finite covolume follows finally last statement boundary boundary indivisible previous paragraph one factor must trivial exactly means amenable theorem adrien boudec remark proof theorem obtain boundary indivisible property deduced proposition turn relies notably proposition note order argument developed seems matter sense arguments applied seem applicable directly group indeed know whether group theorem falls setting proposition know whether stabilizer urs actually believe might false general note point proof theorem introduction complete indeed fact group theorem boundary indivisible well statement theorem statement follows proposition following remark explains comment introduction remark theorem provides instances countable groups lattices group form aut assumptions theorem satisfied see section cocompact subgroup aut acting minimally cocompact hence uniformly 
recurrent subgroup since however aut shows conclusion statement theorem weakly strengthen saying proof theorem recall topological group subgroup containing elements open centralizer note contains elements discrete conjugacy class particular contains discrete normal subgroups recall also elliptic radical largest normal subgroup every compact subset generates relatively compact subgroup closed characteristic subgroup say two groups commensurable compact kernels exist open finite index compact normal subgroup isomorphic following slightly complete theorem introduction theorem let countable group whose furstenberg urs comes faithful extremely proximal action assume env let locally compact group containing lattice group commenurable compact kernels consider following properties env finitely generated env finite index admits finitely generated subgroup finite centralizer env finite index env finite abelianization imply product two groups amenable urs lattice embeddings implies continuous morphism dense image product locally compact groups one factor compact proof simplicity give proof general case follows lines course may assume env since otherwise nothing prove according proposition particular monolithic mon env env simplicity proof write env mon assume continuous dense image denote projection show one factor must compact upon modding maximal compact normal subgroup identity component intersects trivially since finite normal subgroup proposition may also assume compact normal subgroup implies particular connected lie group assumption apply theorem says one factor say must amenable apply corollary tells map injective restriction definition deduce assume holds finite index lattice contained closed normal subgroup therefore deduce cocompact compact subgroup since also dense compact deal case identity without loss generality may assume projections dense proofs two cases share common mechanism given following easy fact lemma exists subgroup whose centralizer contains open finite compact indeed since must intersect open subgroup along lattice follows compact fortiori start case consider normal density note compactly generated view assumption finitely generated since group abelian therefore form compact group follows group admits discrete cocompact normal subgroup extension free abelian group characteristically simple group trivial elliptic radical group also trivial elliptic radical since compactly generated compact open normal subgroup finite index see lem deduce compact open elliptic radical since connected group compact elliptic radical deduce compact elliptic radical compact group also normal therefore mod assume trivial open since centralizes belongs adrien boudec centralizes therefore trivial proposition therefore open intersects dense subgroup trivially follows trivial discrete subgroup observe centralized normalized hence normal discrete normal subgroup therefore lies since finitely generated centralizer actually open moreover subgroup normal since normal clearly contain hence trivial proposition therefore apply lemma obtain conclusion deal let minimal compact faithful extremely proximal proposition actually easy case action also minimal extremely proximal lemma moreover associated stabilizer urs remains equal also furstenberg urs proposition satisfies assumptions case theorem enough prove result additional assumption case thanks proposition follows abelian density projection group also abelian hence lies center therefore normalized dense subgroup follows normal particular conclusion follows 
applying lemma subgroup finite groups acting trees amenable urs groups acting trees paragraph locally finite tree acts continuously isometries assumption locally finite essential results admit appropriate generalizations finite trees using compactification prop recall minimal proper invariant subtree general type finite orbit following wellknown essentially goes back tits see also proposition details proposition action aut minimal general type action minimal extremely proximal theorem therefore implies following result corollary let aut locally compact group whose action continuous minimal general type assume end stabilizers amenable envelope assume embeds subgroup finite covolume whenever maps continuously dense image product one factor must amenable conclusion corollary implies particular whenever embeds finite covolume product two groups following example largely inspired shows group nonetheless product two groups amenable urs lattice embeddings example let field laurent series finite field let aut automorphism group acts tree amenable stabilizers boundary action extends continuous action satisfies assumptions corollary nevertheless embeds diagonally product aut closed subgroup finite covolume since compact unimodular need following fact subtree fixator mean subgroup fixing pointwise proposition let aut countable group whose action minimal general type amenable env subgroup generated fixators proof since extremely proximal also strongly proximal theorem amenable stabilizers deduce proposition according lemma subgroup env env generated elements whose fixed point set interior since form basis topology statement follows going proof corollary make following observation remark acting action minimal general type action topologically free virtual simplicity equivalent finite index finite abelianization subgroup generated fixators see statement proposition proof corollary view proposition assumptions imply furstenberg urs comes faithful extremely proximal action fact means action topologically free observation virtual simplicity equivalent env finite index finite abelianization first statement corollary therefore follows theorem case second statement theorem groups prescribed local action next paragraphs illustrate results previous sections family groups acting trees contains instances discrete groups purpose paragraph recall definition give brief description known properties groups denote set cardinality tree vertex set edge set denoted respectively fix coloring neighbouring edges different colors every aut every action star around gives rise permutation denoted called local permutation permutations satisfy identity every aut adrien boudec given permutation group sym group introduced burger mozes group automorphisms aut closed cocompact subgroup aut definition given sym denote group automorphisms aut finitely many indeed subgroup aut follows note make following observation future reference remark follows definition element fixing edge uniquely written belongs fixes one two defined sequel always assume preserves see lem relevance property context groups satisfy following properties see group dense locally compact group particular sym dense subgroup aut admits locally compact group topology defined requiring inclusion continuous open action continuous proper soon endowed topology group compactly generated stabilizers vertices stabilizers ends respectively locally elliptic locally elliptic particular amenable discrete group acts freely group therefore finitely generated group stabilizers vertices 
stabilizers ends respectively locally finite locally finite acts freely groups instances groups obtained general construction described sec precisely variation provides discrete groups continuous furstenberg urs later stabilizer urs associated action boundary tree groups act particular case groups furstenberg urs explicitly described see proposition corollary sequel whenever use letters always mean permutation groups set contains preserves following denote subgroup generated fixators edges subgroup index two preserving bipartition following result also obtained prop supplements simplicity results obtained index simple subgroup found explicitly appropriate assumptions permutation groups proposition group simple subgroup finite index transitive generated point stabilizers amenable urs lattice embeddings proof conditions necessary prop conversely assume transitive generated point stabilizers prop index two particular compactly generated monolith simple open cor show finite index according remark also subgroup generated fixators therefore proposition commutator subgroup abelianization therefore finitely generated abelian group generated torsion elements since generated locally elliptic subgroups fixators edges therefore abelianization finite follows finite index boundaries paragraph use results previous sections order study boundaries discrete groups following result shows several properties set boundaries governed permutation groups rigidity phenomena occur mild conditions permutation groups theorem assume acts freely write following equivalent subgroup generated point stabilizers two orbits isomorphic one iii env every boundary indivisible need preliminary results proving theorem lemma assume acts freely envelope furstenberg urs equal proof write according proposition env subgroup generated fixators therefore inclusion env clear converse inclusion also holds true remark equality follows view lemma proposition led consider quotient particular study amenable end denote subgroup generated point stabilizers write since normal action set orbits factors free action proposition group isomorphic group viewed permutation group acting freely set orbits moreover transitive one proof let orbits freely identify set orbits integers every unique denote view tree cayley graph free coxeter group rank namely group defined generators relators adrien boudec adding relations form whenever obtain free coxeter group rank cayley graph regular tree degree surjective map two elements image one write xaj xbj words colors iaj ibj since inverse equal word obtained reversing order lemma two vertices projection distance even say sequence colors word concatenation palindromes even length lemma every every vertex image depend denote corresponding element trivial proof adjacent vertices color edge first statement follows connectedness fact trivial clear morphism according first statement vanishes fixators edges note set edges inherits natural coloring integers lemma natural morphism aut ker proof shall first define action set vertices let let two vertices sequence colors vertices sequence colors element defined lemma one iaj shows particular satisfy condition lemma holds means every vertex formula vertex action fact tree structure preserved clear note every local permutations equal every vertex one particular image lies inside shall prove ker let fixing edge also fixes edge moreover one lemma follows previous paragraph local permutations trivial implies ker conversely let element ker prove note since trivial one local 
permutations let vertex lemma sequence colors gives rise sequence concatenation palindromes simplicity treat case palindrome general case consists repeating amenable urs lattice embeddings argument case let vertices note midpoint since palindrome one easily checks elements belongs stabilizer fixes vertex obtained successively folding geodesic onto starting midpoint order bring back invoke following easy fact whose verification left reader lemma let fixing vertex apply lemma deduce belongs desired last thing remains proved statement lemma image equal fact always belongs already observed converse inclusion observe since acts transitively vertices already case enough check image contains vertex since acts freely map isomorphism therefore enough see action star around realized element indeed case see lemma finish proof proposition remark image precisely transitive also transitive two orbits vertices one orbit edges therefore splits free product remark case allowed proposition conclusion also holds groups proposition naturally leads isolate following three situations keep previous notation number orbits case segment length one trivial line case splits two disjoint intransitive trivial generated translation length transitive sym virtually free group since acts vertex transitively trivial edge stabilizers theorem says properties stated hold true sufficient condition instance acts transitively acts primitively recall permutation group every normal subgroup acts transitively theorem also applies beyond case permutation groups example situation giving rise case fixed point acts transitively complement examples giving rise case instance obtained taking sym sym acting naturally adrien boudec letters subgroup generated cycle order proof theorem follows lemma proposition discussion following proof iii clear iii proposition guaranteed proposition fact point finally assume hold least three orbits write proposition group subgroup finite index free rank least exist fortiori boundary acts trivially since also acts minimally follows disjoint contradicting therefore property implies property proof complete weakly subgroups paragraph show subgroups groups satisfy following dichotomy proposition assume acts freely transitively acts primitively subgroup either locally finite hence amenable weakly need following lemma lemma assume acts primitively take two subgroups furstenberg urs proof write recall prop furstenberg urs consists subgroups set elements acting trivially neighbourhood given show subgroup generated must equal take vertex geodesic let edges containing pointing towards colors denote subgroup consisting elements fixing every since primitive generated point stabilizers implies every element may written product elements fixing either defined containing defined containing since arbitrary also neighbour geodesic conclusion follows since two neighbouring vertices subgroups always generate cor proof proposition write let subgroup equivalently whose action general type proposition show fixes probability measure argue contradiction assume fixes probability measure according theorem therefore proposition exist almost extension factor map let prob set write closed subset prob since action strongly proximal since factor prop amenable urs lattice embeddings deduce contains dirac measures let prob measure must supported set follows supported set points implies upper stabilizer map since fix point may find another argument shows also acts trivially support lemma subgroups generate index two therefore point support 
cardinality two absurd since acts minimally assumption remark assume acts freely transitively acts primitively write let general type proper subtree instance one could take subgroup generated two hyperbolic elements sufficiently far apart axis see argument proof theorem weakly proposition remark mention primitive following proof corollary minor modifications one could prove every factors onto would provide alternative proof proposition lattice embeddings groups section study discrete groups embed lattices locally compact groups purpose paragraph twofold first apply previous results article family groups deduce properties general locally compact groups containing group lattice corollary second explain groups embed lattices locally compact wreath products content remark maybe worth pointing instances lattice embeddings groups already appeared indeed appropriate assumptions permutation groups inclusion discrete cocompact image cor corollary assume acts freely transitively generated point stabilizers let locally compact group containing lattice conclusions corollary hold proof assumptions imply virtually simple proposition corollary applies remark setting corollary although lattice product happens exist groups embeds discrete subgroup injective dense projection factor instance permutation groups set diagonal embedding property soon see lem adrien boudec locally compact wreath products paragraph introduce terminology used sequel let set group subgroup denote set functions finitely many note group definition group acting permutational wreath product product acts extreme situations correspond respectively restricted unrestricted wreath product shall write restricted wreath product also simplicity sometimes say wreath product instead permutational wreath product compact group locally compact group acting continuously group locally compact group product topology moreover locally compact compact open natural locally compact group topology defined requiring inclusion continuous open see sec remaining article shall interested study certain lattices locally compact groups remarks order lemma let assume lattice lattice normalizes lattice contain lattice necessary contains lattice proof first statement see lem second statement observe lattice intersection lattice since open subgroup compact projection discrete hence lattice recall various notions irreducibility lattice direct product groups general whether notions coincide depends context refer detailed discussions setting wreath products use following terminology definition lattice irreducible lattice projection group definition implies neither finite index subgroups form lemma lemma group contain lattice lattice irreducible proof lattice discrete projection subgroup contains lattice follows intersects subgroup open along lattice amenable urs lattice embeddings remark course admits lattice holds interestingly finite groups fails admit lattice provided infinite instance case element power consequently lattices irreducible lemma arbitrary proof remark claim condition actually implies infinite discrete subgroup every finite write subgroup vanishing assume discrete subgroup finite assumption easily seen imply subgroup intersects therefore finite index finite noted existence irreducible lattice forces however force see interesting examples already arise finite trivial proof theorem let denote set integers set functions finite support support set also write image sometimes use notation function consider graph whose set vertices set pairs belongs edges 
Proof of the theorem. Let X denote the set [...], and for a set write [...] for the set of functions with finite support, the support of f being the set where f differs from the default value; we also write Im(f) for the image, and sometimes use the notation [...] for a function. Consider the graph whose set of vertices is the set of pairs (x, f), where x belongs to [...] and f is such a function. The edges emanating from a vertex (x, f) are of two types:

Type 1: (x, f) is connected to (x', f) when x and x' are neighbours sharing exactly one vertex.

Type 2: (x, f) is connected to (x, f') when f' is obtained from f by changing its value at exactly one vertex of x.

Note that since neighbours [...] have cardinality [...], every vertex has [...] neighbours of type 1 and [...] neighbours of type 2. The graph is almost the wreath product of a complete graph with the tree (see below).

Let D be a group of permutations, and write [...] for its action; the stabilizer is obviously isomorphic to [...], and by abuse of notation we denote it in the same way. In particular, when viewing it as a subgroup we always implicitly mean the subgroup acting on [...].

Definition. We denote by W the wreath product of D with Aut(T); the groups of this form are the ones considered here. We endow W with the topology for which the sets of the given form constitute a basis of neighbourhoods of the identity (respectively of a point of Aut(T)). This defines a totally disconnected, locally compact group topology (see the proposition cited). Note that one case is somehow particular: there the group is a discrete subgroup of the restricted wreath product with Aut(T).

Proposition. The group W acts by automorphisms on the graph, preserving the types of edges. Moreover, the action is faithful, continuous, proper and transitive on the set of vertices.

Proof. The group W is a subgroup of the unrestricted permutational wreath product with Aut(T), and the latter group has a faithful action on the set of functions. Since the group preserves the set of functions fixing almost every point, the projection onto Aut(T) induces an action on the set of pairs, and we consider the diagonal action. Fix a vertex (x, f). If (x', f) is a neighbour of type 1, then x and x' share a vertex, so their images have a vertex in common, and by the formula for the action the image is again an edge of type 1. For type 2, one may write the change of value at one of the two vertices, and it follows that the image is a neighbour of type 2. This shows that the action is by graph automorphisms preserving the types of edges.

Lemma. The stabilizer in W of a vertex of the graph is a compact open subgroup.

Proof. An element fixes (x, f) exactly when its tree part fixes x and it fixes f. The fact that the action is continuous and proper follows from the lemma, and transitivity on the set of vertices is an easy verification.

Consider the free product of two cyclic groups of order [...], acting on its tree with one orbit of edges and two orbits of vertices, and denote by C the cyclic subgroup generated by the cycle.

Remark. The split morphism onto C has a kernel that acts with two orbits of vertices and is free of rank [...]; therefore the group splits as stated.

Lemma. The group acts freely and transitively on the vertices.

Proof. This is clear for the image of a vertex under an element; transitivity and freeness follow from the fact that the two factor actions have these properties.

We now explain how the groups act on these graphs. In the sequel we denote by F and F' two permutation groups such that F' preserves the F-orbits, we denote by [...] the index, and we fix a bijection under which [...] is sent to the class of [...]. The action on the coset space induces a group homomorphism whose image lies inside [...]; we write sigma for the induced map.

Proposition. Let F and F' be as above, inside Sym(Omega), with finite index. The map is a group morphism that is injective and continuous, with closed and cocompact image.

Proof. The map lands where it should because each element has only finitely many non-trivial local permutations. The fact that it is a group morphism follows from the cocycle identity satisfied by the local permutations, recalled below. Injectivity is clear, since the composition with the projection Aut(T) to Aut(T) is injective. The preimage of an open subgroup is an open subgroup by definition of the topology, and it follows that the map is continuous; also, the intersection of the image with [...] is easily checked to be an open subgroup of the latter, and since it is in fact a closed subgroup, it follows that the image is closed. Cocompactness will follow from the next proposition.

In the sequel, for simplicity, we also write G for the image. In particular, when speaking of the action of G on the graph we always refer to the action defined through the proposition (restricted to G, this means that G acts via sigma); this action should not be confused with the standard action coming from the inclusion into Aut(T).

Proposition. Let F and F' be as above, with finite index. Then: (i) the group acts cocompactly on the graph, and if F' is transitive, the group acts transitively on the vertices; (ii) the stabilizer of a vertex is the stabilizer of [...], and in particular the action is proper; (iii) if F acts freely and transitively, the group acts freely and transitively on the vertices.

Proof. For (i) we show that every vertex (x, f) lies in the orbit of a vertex of the given form; since the number of orbits of such vertices is finite, and equal to one when F' is transitive, the statement will follow. We argue by induction on the cardinality of the support of f. If the support is empty there is nothing to show, so assume it is not, and let v maximize the distance to x among the vertices of the support. Let e be the edge emanating from v toward x, let c be its color, and denote by [...] the two half-trees defined by e, the one containing v also containing every other relevant vertex; denote also, for each color, the edge containing v with that color. By assumption, the permutation group F' preserves the F-orbits and the relevant subgroup is transitive, so it follows from the previous decomposition that there exists, for every such vertex, a suitable local permutation. Choose these, and consider the unique element alpha of Aut(T) with exactly these local permutations.
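The cocycle identity invoked in the proof of the proposition above admits the following standard formulation; this is a sketch under the assumption that \(\sigma(g, v)\) denotes the local permutation of \(g\) at the vertex \(v\), which may differ from the article's conventions.

\[
\sigma(gh, v) \;=\; \sigma(g, h\cdot v)\,\sigma(h, v)
\qquad \text{for all } g, h \text{ and every vertex } v,
\]
which is exactly what makes \(g \mapsto (\sigma(g,v))_v\), paired with the tree action, compatible with composition, and hence the map a group morphism.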
It is an easy verification to check that alpha is an automorphism whose local permutations differ from the prescribed ones in possibly one place; note that alpha fixes x by construction. Write g for the corresponding element. We claim that the support of the image of f has smaller cardinality: indeed, since alpha fixes v, [...]; moreover, every other vertex of the support is also handled, and finally, by the choice of the local permutations, every value outside the support is fixed. This proves the claim, and the conclusion follows by induction. Statement (ii) follows from the lemma, and the last statement follows from the fact that if F acts freely then both actions are free. Together, the propositions and the lemma imply the theorem of the introduction.

Note that when the group acts freely and transitively, an explicit description can be given of a generating subset S of the group whose associated Cayley graph is the graph above. Fix an edge e whose color is [...] and whose vertices we denote v and v'; let S be the set of elements fixing e together with every [...] that belongs to [...]. Then S generates the group, and the Cayley graph Cay(G, S) is isomorphic to the graph; moreover, the neighbours of type 1 (resp. type 2) of a vertex are labeled by the elements of [...] (resp. [...]).

We end the article by observing possible variations on the definition of the graph. Write K for the complete graph on [...] vertices, and let [...] be the wreath product of graphs, sometimes also called a lamplighter graph: its vertex set is [...], and two vertices span an edge when they are either adjacent in one coordinate or differ in exactly one index. The group acts on this graph, and given the previous arguments, the properties established for the original graph carry over.

Proposition. Let F and F' be as above, with finite index. Then the group acts properly and cocompactly on the wreath product of graphs.

The reason why we considered the original graph instead is that there we obtain, under the assumption that F acts freely, a free action on the set of vertices, whereas in the present case the stabilizer of a vertex is finite but possibly non-trivial. Note that it might be interesting to investigate whether generalized wreath products of graphs could provide this kind of interesting group of automorphisms. Yet another possibility is to take the same vertex set while declaring that two vertices span an edge whenever they share a vertex; for every d this gives a graph of larger degree, namely [...]. The results proved above remain true in this case, and one may check that the resulting graph may be thought of as a higher-dimensional version of the previous graphs.
| 4 |
Multisensor Poisson Multi-Bernoulli Filtering with Uncertain Sensor States

Markus Fröhle, Christopher Lindberg, Karl Granström, and Henk Wymeersch

Abstract. In a typical multitarget tracking (MTT) scenario, the sensor state is either assumed known, or tracking is performed based on the sensor's (relative) coordinate frame. This assumption becomes violated when the MTT sensor, such as a vehicular radar, is mounted on a vehicle and the target state should be represented in a global (absolute) coordinate frame. Then it is important to consider the uncertain sensor location for MTT. Furthermore, in a multisensor scenario, where multiple sensors observe a common set of targets, the state information from one sensor can be utilized to improve the state of another sensor. In this paper, we present a Poisson multi-Bernoulli MTT filter that models the uncertain sensor state. The multisensor case is addressed in an asynchronous way, where measurements are incorporated sequentially based on the arrival of new sensor measurements. In doing so, targets observed by a well-localized sensor can reduce the state uncertainty of another, poorly localized sensor, provided that a common subset of features is observed. The proposed MTT filter has low computational demands due to its parametric implementation. Numerical results demonstrate the performance benefits of modeling the uncertain sensor state in feature tracking, as well as the reduction of sensor state uncertainty in a multisensor scenario compared to a per-sensor Kalman filter. Scalability results display a linear increase in computation time with the number of sensors and features present.

(Fröhle and Wymeersch are with the Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden; Lindberg is with Zenuity, Gothenburg, Sweden.)

I. Introduction

Intelligent transportation systems in general, and autonomous driving in particular, require accurate position information. Measurements provided by various sensors allow inferring the vehicle state (position, velocity) as well as information about the surrounding environment. For instance, a global navigation satellite system (GNSS) receiver provides absolute position, whereas a radar sensor provides relative position with respect to the sensor origin. Furthermore, vehicles can have access to a local dynamic map (LDM) containing static features (landmarks) and dynamic features (pedestrians, cyclists, etc.); as part of such a map, the system can be fully aware of the surrounding environment. Dynamic features need to be estimated and tracked over time using the vehicles' sensors, thus allowing the vehicle to enrich its LDM. In order to incorporate mobile features into an LDM that contains map features described in a global coordinate frame, the location uncertainty of the sensors used to track the dynamic features needs to be considered, i.e., the vehicle state uncertainty (location, pose). Vehicles can communicate over a wireless channel with other vehicles and with road infrastructure such as a road side unit (RSU), using, e.g., IEEE 802.11p or cellular communication. Through information exchange, vehicles can make local LDM information available to their neighbors, allowing them to enrich their LDMs and situational awareness. LDM information that is shared can also be fused to improve every LDM. In the special case where vehicles observe an overlapping set of dynamic features, information from one vehicle can be utilized to increase the location accuracy of other vehicles, and vice versa. Note that in this context measurements are performed on features, different from the traditional cooperative localization approach. The problem of vehicular localization using locally observed features with unknown observation-to-feature correspondence, aggregated at an RSU, can be interpreted as an MTT problem. In MTT, a varying number of mobile features (targets) is tracked using sensors, for example radars, lidars, and cameras. Thereby, it is typically assumed that the state of the observing sensor is known, although this is not true in general. This assumption is motivated by the fact that the sensor state uncertainty may be negligible in comparison to the sensor's measurement accuracy. When the sensor state uncertainty is significant, it needs to be modeled in MTT in order not to negatively impact feature tracking performance. In this paper, we consider the case of MTT with an uncertain sensor state, potentially with multiple sensors of varying sensor state uncertainty. To enable accurate feature tracking, we model the sensor state uncertainty in the MTT filter. Our main contributions are:
- an asynchronous, parametric tracking filter with uncertain sensor state;
- fusion of tracking information with local sensor tracking information;
- numerical simulation results demonstrating the performance of the filter in a multisensor vehicular scenario.

As an application example, we demonstrate that the proposed filter can be used to transfer location information from a well-localized vehicle to a poorly localized vehicle through MTT; hence, the positioning accuracy of the poorly localized vehicle is greatly improved compared to using a local Kalman filter with GNSS measurements alone.

[Fig. 1: Urban scenario with two vehicles cooperating through an RSU, and six mobile features. Legend: cooperative vehicle, mobile feature, RSU, communication link.]

A. Motivation

In this paper, we consider an urban intelligent transportation system scenario consisting of cooperating vehicles, as illustrated in Fig. 1. Each vehicle is equipped with a sensor allowing it to determine its absolute position (e.g., a GNSS receiver) and a sensor to retrieve relative positions of the mobile features present in the environment (e.g., a radar). Absolute position measurements are denoted GNSS measurements, and measurements taken of the features are denoted feature measurements. Due to the nature of the sensor used to obtain feature measurements, it is in general not known which feature gave rise to which measurement. The GNSS and feature measurements of every vehicle are transmitted, in a synchronized manner, to the RSU, where a centralized filter is run to track the feature and vehicle states. This information can be utilized at the RSU and sent back to the vehicles to increase their situational awareness.

B. Related Work

1) MTT with known sensor state: Many MTT filters have been proposed to track mobile features using sensors whose state is known. The multiple hypothesis tracking (MHT) filter builds a growing hypothesis tree for data association that needs to be pruned to limit computational complexity. The joint probabilistic data association (JPDA) filter finds the most likely feature state, where information is reduced in the update step to a single Gaussian per feature. In the last years, MTT filters based on random finite set (RFS) statistics and finite set statistics (FISST), which avoid the inherent data association enumeration, have been developed and have gained much attention. The probability hypothesis density (PHD) filter propagates the first moment of the RFS density over time. The Poisson multi-Bernoulli (PMB) filter approximates the global joint density as a product of local marginal densities, similar to the JPDA filter. A derivation of the PMB filter based on standard single-target measurement models, without using probability generating functionals and functional derivatives, has been presented. Furthermore, a connection to the labeled multi-Bernoulli filter has been shown: that density is a special case of the corresponding density with labeled targets and can therefore be seen as a special case of the PMB-type filter. A Gaussian mixture multi-Bernoulli tracker for known sensor locations was developed and compared to a particle filter implementation on multistatic sonobuoy fields, where the target state update with multiple sensors is achieved through sequential sensor updates. A factor graph based approach using FISST has been proposed as a variant of the JPDA filter for the multiscan scenario; the filter is realized by running loopy belief propagation on a graph containing cycles. An implementation of the PMB filter has been presented and is used in our performance comparison; for a tree-based implementation, it was found that the PMB filter scales exponentially with an increasing number of features. Alternatively, vehicles can perform local feature tracking and send track information over the wireless channel to a central fusion center, where fusion is performed while taking care of measurements that arise from utilizing the shared communication medium; there, the problem arises of deciding which local tracks to fuse, and the Mahalanobis distance can be employed as a metric.

2) MTT with uncertain sensor state: In contrast to MTT, simultaneous localization and mapping (SLAM) based methods determine the sensor state while mapping the features of the environment. Most proposed SLAM methods assume static features such as walls and street signs. SLAM methods excel at maintaining the correlation between features. A reduction in complexity is achieved by FastSLAM, where the feature state, conditioned on the sensor state, is tracked by a Kalman filter and the sensor state by a particle filter. An RFS based approach to the SLAM problem has been proposed where the target state, conditioned on the sensor location, is tracked with a PHD filter.
MTT with an uncertain single sensor location has been derived from the SLAM problem using FISST and point process theory, with simulation results shown for an implementation of the problem. Sensor uncertainty in Bernoulli filtering of one target using a single sensor has also been addressed; since that scenario is restricted to a single sensor, a suboptimal approach was presented where the sensor state is updated with measurements independent of the target state, and target tracking information is not used to update the sensor location. A similar particle-based approach considered an urban scenario where the number of features is assumed a priori known. A variant of SLAM has been considered for indoor environments using radio signal measurements, where the features are the static source locations and the non-line-of-sight signal propagation paths of the transmitted radio wave.

C. Notation and Paper Organization

Scalars are described by plain letters, vectors by bold lowercase letters, matrices by bold uppercase letters, and sets by calligraphic letters. The cardinality of a set is denoted with vertical bars, and the disjoint set union has its own operator. A dedicated letter is reserved for the vehicle state, another for the feature state, and another for measurements; the identity matrix of a given size is denoted accordingly. The remainder of the paper is organized as follows: Section II gives background knowledge on RFSs; Section III introduces the problem formulation and system models; Section IV details the proposed MTT filter with uncertain sensor state; numerical results are given in Section V; and conclusions are drawn in Section VI.

II. Background on RFSs

This section describes useful properties of RFSs pertaining to this work; unless stated otherwise, the source is the FISST literature.

1) Random finite set formulation: RFS based methods were developed to conduct statistical inference in problems where the variables of interest and the observations are in the form of finite sets. An RFS-valued random variable is described by a discrete probability distribution on the cardinality together with a family of joint probability densities, yielding a density that is invariant under permutations of its arguments; the set integral of a function of a set-valued variable is defined as the corresponding sum of ordinary integrals over all cardinalities.

2) Bernoulli process: A Bernoulli process with probability of existence r and probability density function (pdf) p has RFS density equal to 1 - r on the empty set and r times p on singletons, and zero otherwise. A multi-Bernoulli process is the union of independent Bernoulli processes.

3) Poisson point process: A Poisson point process (PPP) with intensity function has RFS density proportional to the exponential of the negated integrated intensity times the product of intensity values over the set elements.

Remark: the density of the union of independent RFSs is given by the convolution formula over disjoint subsets. Note that an RFS that is the union of a PPP and a multi-Bernoulli RFS has what is called a PMB density.

4) State estimation: For an RFS density, a common way to estimate the set of states is, for each Bernoulli process in the density, to compare its probability of existence r with an existence threshold; if r exceeds the threshold, the target is said to exist, and its state is estimated by the mean of its pdf.

III. Problem Formulation and System Models

We first present the problem formulation and the vehicle and feature dynamics, followed by the GNSS and feature measurement models and the communication model.

A. Problem Formulation

The goal of the filter, which runs at the RSU, is to track the features' and vehicles' states at every discrete time step k, with incorporation of the feature measurements and GNSS measurements up to time step k. We are therefore interested in the joint posterior distribution of the feature and vehicle states at every time step.

B. Vehicle and Feature Dynamics

The vehicle state motions follow independent Markovian processes. The vehicle state of vehicle i at time step k is statistically modeled with a linear model, with a state-transition matrix and an error covariance matrix of the process noise. A single feature survives to the next time step following an independent and identically distributed (iid) Markovian process with a survival probability. The feature state motions follow iid Markovian processes, statistically modeled with a linear model analogous to the vehicle model, where the state-transition matrices as well as the error covariance matrices are assumed equal among features. Note that the vehicle and feature state motions are independent. In the following, we drop the subscripts indexing states and measurements whenever the context allows.

C. Measurement Models

At time step k, vehicle i obtains two different kinds of measurements: GNSS measurements of the vehicle state in a global reference frame, and feature measurements of the features from an onboard sensor. Without loss of generality, we assume the sensor's state is equal to the vehicle state.
An uncertain vehicle location thus implies an uncertain location of the sensor.

1) GNSS measurement: The GNSS measurement of vehicle i at time k is statistically modeled by the likelihood function of a linear observation model, with a linear observation matrix and an error covariance matrix of the measurement noise.

2) Feature measurements: The tracking sensor is susceptible to measurement noise, missed detections, and false detections; examples of such sensors include camera, radar, and lidar. We denote the corresponding observation matrices and error covariance matrix analogously. Note that the feature-to-measurement correspondence is in general not known and needs to be inferred from the measurements. The measurement likelihood of a single feature is defined with a normalization constant so that it is a valid pdf; depending on the specific sensor at hand, the field of view (FOV) may be different and the model needs to be adapted.

D. Communication Model

We assume every vehicle is able to communicate its obtained measurements (GNSS and feature measurements) to the RSU instantaneously and without errors. This implies that at any time the number of vehicles communicating with the RSU can vary. Incorporation of a realistic channel model and its performance impact is a point of future work.

IV. Poisson Multi-Bernoulli Filtering with Uncertain Sensor State

In the measurement set, one subset consists of false alarm measurements due to clutter, modeled as a PPP with a clutter intensity, and the other subset consists of measurements of detected features. A feature measurement obtained by the sensor of vehicle i from a feature at time k is modeled by the likelihood function of a linear observation model that depends on both the feature state and the vehicle state. Note that the probability of detection depends on the vehicle state as well as the feature state: for instance, a limited sensor FOV affects the probability of feature detection based on the distance between vehicle and feature.

Remark: in case the sensor is able to detect features within a radius r_max, the probability of detection can be defined as a constant within r_max of the vehicle and zero otherwise.

For the sake of brevity, we present linear system and measurement models. In case the true system dynamics and measurement model are mildly nonlinear, linearization steps can be performed similar to the steps taken in the extended Kalman filter (EKF) or the unscented Kalman filter (UKF), and the proposed filter remains otherwise valid and unaltered. Note that the proposed filter predicts over small horizons, on the order of tens of milliseconds, so the assumption of vehicle and feature states evolving independently is reasonable, as there is little interaction among them within the prediction horizon.

In this section, we formulate the proposed PMB filter with uncertain sensor state. We first consider a tracking scenario subject to a single vehicle, where there may be multiple features; we then proceed to the asynchronous multisensor case, allowing us to track multiple features and vehicles with uncertain vehicle states; finally, a tractable Gaussian density approximation of the proposed filter is given. The vehicle state pdf at a time step is indicated by a subscript; the pdf predicted to the current time step, before updating with a measurement, is indicated by its own subscript, and the posterior pdf is stated without a subscript. Similar definitions hold for the feature RFS density. The proposed filter is developed within the Bayesian framework by alternating prediction and update steps operating on the RFS density: the prior RFS density and the RFS transition density yield the predicted density, and the RFS measurement likelihood of the measurement set enters the update. Following the problem definition stated in Section III, we are interested in the joint posterior density of the vehicle and features considering all measurements up to the current time step.

A. Single Vehicle

We proceed with the development of the proposed filter within the framework of a single vehicle. The prior joint density has the form of a product of the prior pdf of the vehicle state and the prior PMB density of the features. The latter density can be written in terms of the PPP intensity of undetected features, i.e., features that are hypothesized to exist but have never been detected, and the multi-Bernoulli RFS density of detected features. We are interested in a low computational complexity method to compute the posterior joint density at every discrete time step, with incorporation of the sensor measurements, such that the posterior density remains of the same form as the prior joint density.

1) Prediction step: For the vehicle state and the existing feature RFS, the predicted joint density factorizes: the predicted vehicle state pdf is given by the Chapman-Kolmogorov equation with the state transition pdf and the prior pdf, and similarly, the predicted feature state PMB density is calculated from the transition RFS density and the prior PMB density. (A minimal sketch of the corresponding linear-Gaussian vehicle prediction and GNSS update is given below.)
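The following is a minimal numpy sketch of the linear-Gaussian vehicle prediction and GNSS update just described. The names F, Q, Hg, Rg follow our generic notation for the transition, process-noise, observation, and measurement-noise matrices; this is an illustration under those assumptions, not the authors' reference code.

import numpy as np

def predict_vehicle(m, P, F, Q):
    # Chapman-Kolmogorov prediction for a linear-Gaussian motion model:
    # x_k = F x_{k-1} + process noise with covariance Q.
    return F @ m, F @ P @ F.T + Q

def update_vehicle_gnss(m, P, z, Hg, Rg):
    # Kalman update of the vehicle state with a GNSS measurement z.
    S = Hg @ P @ Hg.T + Rg            # innovation covariance
    K = P @ Hg.T @ np.linalg.inv(S)   # Kalman gain
    return m + K @ (z - Hg @ m), P - K @ S @ K.T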
The predicted PPP density of undetected features has an intensity given by the birth intensity plus the prior intensity of undetected features propagated through the feature transition pdf and weighted by the feature survival probability. For the multi-Bernoulli RFS density of detected features, each predicted Bernoulli component keeps its prior probability of existence scaled by the survival probability, and its predicted pdf follows from the prior pdf and the transition pdf.

2) Measurement update steps: Updating the joint density involves two types of different measurements.

a) Update of the vehicle state with a measurement: Let z be a measurement related to the vehicle state and unrelated to the set of features; for example, this could be a GNSS or inertial measurement unit (IMU) measurement. Given the predicted vehicle-feature density, by Bayes' theorem the updated density factorizes: the vehicle state density is updated with the measurement while the feature set density is unaffected by the update, so the independent form is retained. Note that this update step is omitted in the absence of such measurements, as in a pure SLAM application.

b) Update with the cluttered set of feature measurements: Let Z be the set of feature measurements subject to the measurement model of Section III. Let the data association (DA) space describe the predicted assignment of each measurement source to either background clutter, a new feature, or one of the existing features; a DA is therefore a partition of the indices into disjoint subsets called index cells.

Remark: consider, as an example, three measurements and two features. One valid partition among the possible associations means that the first measurement is associated to the first feature, the second feature is not detected, and the remaining measurements are associated to either clutter or new features. Due to the standard MTT assumption that features generate measurements independently, an index cell contains at most one feature index: an association where at least one cell has at least two feature indices has zero likelihood, as it violates the independence assumption. Due to the point feature assumption, a feature generates at most one measurement per time step: an association where at least one cell has at least two measurement indices has zero likelihood, as it violates the point feature assumption. If an index cell contains a feature index, we denote the corresponding feature index accordingly; if it contains a measurement index, we denote the corresponding measurement index accordingly. Measurements not assigned to a feature are associated with the background.

With the help of Bayes' rule, the updated joint density can be written as a weighted sum over associations, with the vehicle state posterior stated in the Appendix and the updated undetected and detected feature densities also stated in the Appendix. This expression does not factorize, so the density does not remain of the form of the prior; on the contrary, there are many dependencies between the feature state RFS and the vehicle state. This means that existing tracking frameworks cannot be applied directly without introducing a significant increase in computational complexity. To overcome this, approximations of the functions on the right-hand side need to be found. Towards this end, we make the following approximations.

First, the vehicle and feature dependent probability of detection is approximated by its expected value under the corresponding state densities. With this approximation, the updated density becomes the expression given in the Appendix; we observe that the undetected feature density now depends on the undetected feature RFS alone, independent of the other stochastic variables, while there remain dependencies between the detected features and the vehicle.

Second, to remove the dependency on the vehicle state in the detected feature density, we map the vehicle state uncertainty onto the measurement uncertainty. This is done by averaging the measurement likelihood over the vehicle state uncertainty, so that the detected feature density per association becomes independent of the vehicle state; for the linear-Gaussian models used here, this average has the closed form sketched below. With this approximation, the approximated updated feature set density is given in the Appendix, and the updated joint density is approximated by a form in which the vehicle state pdf is independent of the feature RFS, which allows us to state the weights as given in the Appendix. Note that the result is a Poisson multi-Bernoulli mixture (PMBM) density, where each global association hypothesis carries a posterior vehicle state and weighted detected feature states. The mixture is reduced to a PMB density using the variational approximation presented in the literature, which is based on the marginal association probabilities.
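For linear-Gaussian models, the averaging of the measurement likelihood over the vehicle state uncertainty mentioned above has a closed form. The following LaTeX sketch uses a generic linear model \(z = H_f\, f + H_v\, x + w\), \(w \sim \mathcal{N}(0, R)\), and a vehicle prior \(\mathcal{N}(x; m, P)\); the symbols are ours, chosen for illustration.

\[
\int \mathcal{N}\!\left(z;\, H_f f + H_v x,\, R\right)\,
      \mathcal{N}\!\left(x;\, m,\, P\right)\, dx
\;=\;
\mathcal{N}\!\left(z;\, H_f f + H_v m,\; R + H_v P H_v^{\top}\right),
\]

so the vehicle state uncertainty simply inflates the effective measurement covariance seen by each detected feature.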
Applying this reduction results in a single hypothesis per detected feature, described by a Bernoulli process, a single pdf for the vehicle, and an intensity of undetected features described by a PPP; the summation over the association space has thus vanished, retaining the form of the prior. In a scenario using multiple vehicles with uncertain states, the expected probability of detection for an undetected feature is computed under the predictive distributions, and likewise for a detected feature; an alternative, stronger approximation would use the estimated feature state, similar to a point discussed above for PMB filtering with a single vehicle of uncertain state.

B. Asynchronous Multisensor Case

So far, GNSS and feature measurements were used to achieve feature tracking as described for a single vehicle. When sensors are mounted on several vehicles, we consider the multisensor case. Furthermore, depending on the infrastructure, sensors may not be time synchronized and may not take measurements at the same time steps. For non-synchronized measurements, each sensor's measurement arrives timestamped with the time the sensor acquired it, independently of the other sensors. Let a single RFS model the features' states, let the vehicles with uncertain vehicle state form a set, with a subset of vehicles taking measurements at time step k, and let a given vehicle provide a vector GNSS measurement and a set of feature measurements.

In the multisensor case, the PMB filter with uncertain sensor state follows the unisensor case proposed above: the joint density is predicted and then updated with the GNSS and feature measurements. Thereby, the predicted density is as before, updating the joint density with a GNSS measurement proceeds as in the unisensor case, and the update with the feature measurements becomes the corresponding multisensor expression. In order to obtain a low-complexity implementation, we describe the vehicle state pdf as a Gaussian pdf with a mean parameter and a covariance matrix; similarly, in the RFS density, the process of each feature is described by a Bernoulli random variable with a Gaussian pdf and corresponding parameters. With the description of the system models in Section III, we can express the prediction and update steps of the proposed filter in closed form with low computational complexity; the steps are described next.

C. Gaussian Density Implementation

1) Prediction step: The predicted vehicle state pdf follows the standard Kalman prediction, which we use before incorporating GNSS measurements and then proceed with incorporating feature measurements. The unisensor case resulted in the approximated joint density; in the multisensor case, the density becomes the corresponding product under the Gaussian density approximation, where the notation means that at time step k the vehicle state pdf of the previous time step is propagated to the predicted state pdf before updating with a measurement. The intensity of undetected features is modeled as consisting of newborn features, with a weight, pdf, birth mean and covariance matrix, plus undetected features that survived to the current time step, whose prior parameters yield the predicted parameters. This involves a marginalization over the vehicle prior pdf containing the vehicles that provide measurements; note that the prior joint density is used here. Furthermore, the association space increases with an increasing number of communicating sensors: the number of predicted terms, and hence the complexity, increases significantly with the increase in possible feature-to-measurement associations.

Remark: several different approaches exist to tackle this problem in a tractable manner, for instance employing sequential measurement updates, performing variational inference, or solving it on a parallel basis. We employ the sequential measurement update strategy to limit the size of the association space; subsequent sensors then benefit from the updated vehicle and feature information of preceding sensors. For the application example below, this means that the update of the joint density with the measurements of a well-localized vehicle (certain vehicle state) results in an improvement in feature tracking performance when the prior information on the features is low, while the update with the measurements of a poorly localized vehicle (uncertain vehicle state) allows reducing that vehicle's state uncertainty when the prior information on the features is high.

The predicted density of detected features is as stated, predicted using the single-feature Bernoulli parameters; the single-feature pdf is calculated similarly.

2) Update step: The joint state density is computed by updating the predicted density with the GNSS measurement via a Kalman update step, with the vehicle state pdf given by the standard equations and the matrices defined in Section III. The updated joint density is then computed by updating the predicted density with the feature measurements. Note that, depending on the
time difference, the GNSS measurements may be incorporated first, so that the GNSS-updated density is used instead of the prior joint density. In order to calculate this update, the vehicle measurement-state likelihood and the feature likelihood are used. The state likelihood needed here is given in closed form, in which a Moore-Penrose pseudo-inverse appears (for the proof, see the Appendix). The vehicle likelihood is obtained by marginalizing over a detected feature or over an undetected feature, and can be written in closed form, with each detected feature described by a Bernoulli component with a Gaussian pdf. (The proof for the feature likelihood is analogous, with the difference that the unknown is the feature instead.)

3) Complexity, memory footprint, and communication demand: The computational complexity is dominated by the matrix inversions needed to update the feature and vehicle densities. Updating the joint vehicle-feature density with a GNSS measurement requires a matrix inversion whose cost scales with the measurement dimension; the update of one Bernoulli component with one feature measurement scales analogously, and consequently, for the whole measurement set, the complexity of the joint vehicle-feature update scales with the product of the number of components and measurements, where the last term comes from the vehicle state update. At each time step, the size of the undetected features' RFS increases with the newborn targets, and the number of existing Bernoulli components increases with the new components spawned per measurement; using one Bernoulli component per track (the existing hypotheses plus one missed-detection hypothesis, computed with the reduction algorithm) and pruning components with a low probability of existence allows keeping the number of components tractable. Regarding memory, each Bernoulli component is described by a probability of existence and a Gaussian pdf, so a fixed number of bytes is needed to store the existence probability, the mean, and the covariance of a feature state; storing the vehicle state requires the bytes of the Gaussian parameters of the vehicle state. Without pruning of Bernoulli components, the memory footprint of the proposed filter grows accordingly. For the multisensor approach, the RSU receives the GNSS measurement and feature measurements of each vehicle and performs the filter update computation; the RSU broadcasts the vehicle state estimates either whenever a new measurement has been processed or based on a fixed schedule. If information on the detected and tracked features is required by the vehicles, the corresponding pdfs need to be transmitted as well.

V. Numerical Results

We consider a scenario similar to the one outlined in the motivation and apply the proposed state tracking filter presented in Section IV.

A. Setup

The state of a vehicle at time step k contains position and velocity. The vehicle dynamics follow a linear constant velocity (CV) model, described with the Kronecker product as sketched below. The state of a feature at time k is denoted similarly, comprised of Cartesian position and velocity as for the vehicle state. Maximally five features are present, if not noted otherwise; the feature dynamics follow the same model, with the same parameters as used for the vehicles. To generate a challenging scenario, we initialize the feature states and run the model forward and backward in time, similar to prior work: the first feature enters the scene first, the others later, and all features stay alive for the remaining simulation time. The vehicle and feature trajectories are shown in Fig. 2. The observation matrix of the GNSS measurement model extracts the position. Vehicle 1 is assumed to have low location uncertainty and vehicle 2 high location uncertainty, corresponding to vehicle 1 having a high-quality GNSS receiver and vehicle 2 a low-quality one. In the single-sensor case, only vehicle 1 is present; in the multisensor case, both vehicles are present, if not noted otherwise. The feature measurement model follows Section III.

[Fig. 2: Vehicle and feature trajectories. Fig. 3: Feature measurements per time step (top panel: first dimension; bottom panel: second dimension), including clutter measurements.]

The following parameters are set: the initial undetected feature intensity is chosen with a diagonal covariance to cover the ranges of interest of the feature state, and the feature birth intensity is set accordingly; false alarms arrive at a set average number per scan with a uniform spatial distribution; the detection radius parameter r_max, the probability of survival, and the probability of detection are fixed. To assess feature tracking performance we use the optimal sub-pattern assignment (OSPA) metric with cutoff and order parameters, and vehicle tracking performance is assessed in terms of the root mean square error (RMSE). (A sketch of the CV model construction used here follows.)
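The following is a minimal sketch of the constant velocity model construction via the Kronecker product mentioned above. The sampling time T and noise intensity q are placeholder values; the paper's exact parameter values are not recoverable here.

import numpy as np

T, q = 0.1, 1.0  # assumed sampling time [s] and process-noise intensity
# Per-axis CV blocks for the state ordering [position, velocity]:
F1 = np.array([[1.0, T], [0.0, 1.0]])
Q1 = q * np.array([[T**3 / 3, T**2 / 2], [T**2 / 2, T]])
# Two Cartesian axes are obtained via the Kronecker product:
F = np.kron(np.eye(2), F1)  # state-transition matrix
Q = np.kron(np.eye(2), Q1)  # process-noise covariance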
B. Discussion

We first discuss the impact of an uncertain vehicle state on feature tracking performance using a single vehicle; we then consider the multisensor case, and finally show scaling results in terms of the numbers of features and vehicles tracked.

1) Impact of uncertain vehicle state on feature tracking performance: The features and the vehicle follow the trajectories outlined in Fig. 2, with the measurements of Fig. 3. In Fig. 4 the feature state OSPA is plotted per time step. We observe peaks of high OSPA value whenever a new feature enters the scene; these peaks are due to the cardinality mismatch between the feature RFS estimate and the true feature set. Furthermore, there is a high OSPA value around the time steps where the features are closely spaced relative to the measurement variance, resulting in a challenging scenario. In Fig. 5 the cardinality of the feature RFS is plotted over time; around some time steps the filter overestimates the feature RFS cardinality, which may be caused by clutter measurements. Note that different realizations produce slightly different outcomes of the feature OSPA value, but the tendency of the behavior at feature appearance and the effect of spatially close features remain; this agrees with findings for a known sensor (vehicle) state. Furthermore, we observe in Fig. 4 that the OSPA is low at time steps where the features are spatially separated and already present in the scene: there, the MTT filter is able to produce feature estimates with low error.

In Fig. 6, the feature state OSPA averaged over the time steps is plotted for different values of the GNSS measurement variance. An increase in the GNSS measurement variance leads to increased vehicle state uncertainty, and its effect is an increase in the average feature OSPA. The OSPA increase consists of two components: first, an increased feature state estimation error due to the higher uncertainty; second, the features stay spatially close together, relative to the combined feature state and measurement uncertainty, for a longer period of time, and hence the scenario is more challenging; the effect is an increased feature OSPA in this regime. In the same figure, the average feature OSPA without modeling the present vehicle state uncertainty, i.e., using the conventional PMB filter, is also plotted; we observe that not modeling the present vehicle state uncertainty has a negative effect on feature tracking performance. In Fig. 7, the average feature state OSPA is plotted for different values of the feature measurement noise variance; we observe that a higher noise variance leads to an increased OSPA value, as the single-feature state estimation error increases and the scenario becomes more challenging. Note that the results of Figs. 6 and 7 are averages over many realizations.

[Fig. 4: Feature OSPA per time step (proposed vs. conventional PMB). Fig. 5: Feature cardinality (estimate and true). Fig. 6: Average feature OSPA for different values of the GNSS measurement noise variance. Fig. 7: Average feature OSPA for different values of the feature measurement variance. Fig. 8: Vehicle state RMSE of vehicle 1 (local KF, central KF with known DA, proposed). Fig. 9: CDF of the vehicle state RMSE. Fig. 10: Vehicle state RMSE of vehicle 2.]

2) Multisensor case: The tracking performance in terms of the RMSE of the vehicle state is plotted per time step in Fig. 8 for vehicle 1 (low location uncertainty) and in Fig. 10 for vehicle 2 (high location uncertainty). As a benchmark, the results of a centralized Kalman filter (central KF) with known data association are plotted, where the well-known augmented state vector contains the vehicle and feature states; furthermore, the tracking performance using a local Kalman filter (local KF) is plotted, which filters each individual vehicle state separately using only the GNSS measurements and does not estimate feature states. Note that the performance of the local KF is considered only for vehicle state estimation, since feature measurements are not considered there.

We observe in Fig. 8 that for vehicle 1, with low GNSS measurement noise, all three filter methods deliver similar performance. The reason is that, due to the high accuracy of the GNSS measurements, not a lot of information to improve the vehicle state is provided by the feature tracking: the feature tracking error is high relative to the vehicle state tracking error of vehicle 1 after updating with a GNSS measurement. In Fig. 9 the cumulative distribution function (CDF) of the RMSE is plotted; the low RMSE of vehicle 1 under the three different filters can be observed there as well. Moving the focus to vehicle 2, we observe that the RMSE of the local KF is much higher compared to the central KF, caused by the high noise of vehicle 2's GNSS measurements. (The OSPA computation used in this evaluation is sketched below.)
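For reference, here is a compact sketch of the OSPA metric used in this evaluation. The cutoff c and order p are the metric's parameters; the specific values used in the experiments are not reproduced here, and the defaults below are placeholders.

import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(X, Y, c=2.0, p=1):
    # X, Y: numpy arrays of shape (m, d) and (n, d) holding two state sets.
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m > n:               # enforce |X| <= |Y|
        X, Y, m, n = Y, X, n, m
    if m == 0:              # pure cardinality error
        return c
    # Cutoff pairwise distances, raised to the order p.
    D = np.minimum(c, np.linalg.norm(X[:, None] - Y[None, :], axis=-1)) ** p
    rows, cols = linear_sum_assignment(D)   # optimal sub-pattern assignment
    cost = D[rows, cols].sum() + (c ** p) * (n - m)  # cardinality penalty
    return (cost / n) ** (1.0 / p)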
Due to the low RMSE of vehicle 1's state, relevant position information is transferred through the system from vehicle 1 to vehicle 2 via the features, by utilizing the feature measurements. In these cases, the RMSE of vehicle 2 with the proposed filter is much lower compared to the local KF. Despite this great improvement, the proposed filter does not achieve the performance of the central KF in RMSE. The reason for this difference is that the central KF has knowledge of the correct data association, knows the true number of features present, ignores clutter measurements, and furthermore tracks the correlations present between features and vehicles, which are not modeled over time in the proposed filter, which instead needs to infer and estimate the number of features currently present and to appropriately handle the cluttered measurement set.

3) Scaling results: In Fig. 11, the average computation time per time step is plotted for a simulation with two vehicles and different numbers of present features; at each time step the vehicles are incorporated either in a synchronous manner, where the sensor measurements are aggregated into one state update, or in a non-synchronous manner, where asynchronous update steps are executed whenever sensor measurements arrive at the central node. We observe that the computation time increases linearly as the number of present features increases. This scaling result is different from the tree-based implementation of the PMB filter for known vehicle state, whose authors reported an exponential increase in computation time. In Fig. 12, the average computation time per time step is plotted for a simulation with five features and different numbers of vehicles; the computation time increases linearly as the number of vehicles increases. Furthermore, we investigated the average computation time per time step for different values of the GNSS measurement variance and of the feature measurement variance, in a simulation scenario with five features and two vehicles: over the GNSS measurement variance the average computation time remained constant, while it increased linearly as the feature measurement variance increased.

[Fig. 11: Average computation time per time step for different numbers of present features (two vehicles). Fig. 12: Average computation time per time step for different numbers of vehicles (five features).]

VI. Conclusions

This paper presented a Poisson multi-Bernoulli filter for multitarget tracking with uncertain sensor states, where two different kinds of measurements, observations of the sensor state and observations of the features, are used to obtain accurate feature and sensor state tracking. The proposed parametric filter implementation scales linearly with the number of features and sensors, and information from multiple sensors is incorporated asynchronously. Simulation results showed that in a unisensor multitarget tracking scenario with known sensor state, the tracking performance, assessed with the OSPA distance metric, is equivalent to Williams' PMB filter. In a scenario with present vehicle state uncertainty, the proposed filter showed superior feature tracking performance over the conventional PMB filter, due to its modeling of this type of additional uncertainty. In a multisensor tracking scenario, feature information from a well-localized vehicle's sensor allows significantly reducing the vehicle state uncertainty of previously poorly localized vehicles; this improvement is made possible by the joint observation of a subset of the present features, as supported by the simulation results.

Appendix

A. Vehicle State Posterior

The vehicle state posterior per association is proportional to the vehicle prior pdf times the measurement likelihood, where the feature uncertainty is mapped onto the measurement likelihood.

B. Updated Undetected and Detected Feature Densities

In the updated joint density, the undetected feature density is an average over the predicted vehicle state, obtained by marginalizing over the subsets, and equals the expression with the Bernoulli parameters given in the corresponding equations. The detected feature density per association consists of three factors: the first product considers the cases where a measurement is associated to an existing feature; the second product considers the cases where an existing feature is not detected; and the last factor considers the case where a measurement is associated to background clutter or an undetected (new) feature.

C. Approximated Updated Feature Density and Weights

Under the approximations of Section IV, the approximated updated undetected and detected feature densities, and the weight of a global association hypothesis, are as stated in the corresponding equations.

D. Proof of the State Likelihood

With the known quantities, define the joint density of the measurement and the state; the state likelihood then follows, and the detected feature density is consequently zero
otherwise. The three products consider similar cases and can be solved with the help of the Gaussian product rule (see the table), yielding the results in the corresponding equations.
| 3 |
An Implementation of the Tetris Model Counter

Jimmy Dobler (University at Buffalo, SUNY; jdobler@buffalo.edu) and Atri Rudra (University at Buffalo, SUNY; atri@buffalo.edu)

Abstract. Solving #SAT problems is an important area of work. In this paper, we discuss implementing Tetris, an algorithm originally designed for handling natural joins, as an exact model counter for the #SAT problem. Tetris uses a simple geometric framework, yet manages to achieve the fractional hypertree-width bound. Its design allows it to handle complex problems involving extremely large numbers of clauses, on which other model counters do not perform well, yet it still performs strongly on standard SAT benchmarks. We achieved the following objectives. First, we found a natural set of model counting benchmarks on which Tetris outperforms other model counters. Second, we constructed a data structure capable of efficiently handling and caching all of the data Tetris needs to work with over the course of the algorithm. Third, we modified Tetris in order to move from the theoretical environment in which it was designed to one that performs well in practice. In particular, we managed to produce results keeping within a single order of magnitude of other solvers on benchmarks they specialize in, while outperforming those solvers by multiple orders of magnitude on others. (This research was supported in part by a grant from the NSF.)

1. Introduction

SAT is the prototypical NP-complete problem, and SAT as well as its cousin #SAT are of great interest in computational complexity. Their completeness in turn makes them great tools to model a wide host of practical problems, which has led to an explosion of SAT (and #SAT) solvers that try to solve practical instances of SAT and #SAT by exploiting the structure of such instances. In this paper we assume the importance of designing SAT and #SAT solvers is a given, and refer the reader to the book chapters of Gomes, Sabharwal, and Selman on SAT solvers and of Gomes et al. on #SAT solvers (also known as model counters) for details. A common technique is the DPLL procedure, a search procedure in which the algorithm makes guesses on the assignments one variable at a time, determines at each stage whether this produces a conflict, and uses this information to learn new clauses and get closer to finding a satisfying assignment.

Recently, in the database literature, the work of Abo Khamis et al. connected the DPLL procedure to computing natural joins. In particular, they presented the Tetris algorithm, which computes the natural join with beyond worst-case theoretical guarantees; as a special case, Tetris also recovers some recent worst-case optimal join results. Abo Khamis et al. showed that Tetris is a DPLL procedure and pointed out that one of the main steps of the algorithm is exactly the resolution step that is ubiquitous in SAT solvers. Given these close ties of Tetris with SAT solvers and DPLL, their work left open the following intriguing possibility: can Tetris be implemented as a SAT solver or model counter that can compete with state-of-the-art solvers?

1.1 Our Contributions

Our main result in this paper is to show that Tetris can indeed be implemented as a model counter that is competitive with state-of-the-art model counters on actual datasets. Tetris as presented by Abo Khamis et al. is a nice geometric framework to reason about algorithms that compute the natural join query; however, out of its simplicity arose inefficiencies when it came to implementing Tetris as a model counter. We now present the issues we tackle, after a quick overview of Tetris.

Tetris's fundamental idea is as follows. Rather than working to create the output of the join directly, it instead attempts to rule out large sections of the cross product of the joined tables. Initially, Tetris is given a set of sets whose union is the set of all incorrect solutions to the problem Tetris is solving; in other words, any solution to the problem must not be a member of this union. By efficiently querying this set of sets, and by adding to it intelligently at various times, including adding a new exclusion whenever an output point is found, Tetris is able to rule out increasingly large sets of potential solutions. Once all potential solutions have been ruled out or identified as actual solutions, it terminates and outputs the list of solutions.

We tackle the following three issues with the theoretical presentation of Tetris. First, at its core, Tetris needs to keep track of the union of the potential solutions it has ruled out so far. Abo Khamis et al. used a simple trie data structure to keep track of this union; however, this loses polynomial factors, which proves detrimental to practical performance. To deal with this, we design a new data structure that essentially compresses multiple consecutive layers of the traditional trie into one mega-layer. This was inspired by the use of SIMD instructions in EmptyHeaded to speed up implementations of worst-case optimal join algorithms, and our set compression is done in a manner that lends itself to speedup via SIMD instructions. Second, the analysis of Tetris in terms of data complexity implies that it could afford to use an algorithm exponential in the size of the join query
in running time to find an appropriate ordering in which to explore the different variables. For SAT instances, we can no longer assume the number of variables is a constant, and hence we cannot obtain the optimal ordering using a brute-force algorithm. We deal with this by designing heuristics that take the structure of Tetris into account. Third, as mentioned earlier, Tetris, like any DPLL procedure, performs a sequence of resolutions. Theoretically, it could store the outcomes of all the resolutions it performs; however, for practical efficiency, we use a heuristic to decide which resolution results to cache and which ones to discard.

Our experimental results are promising. On natural #SAT benchmarks based on counting the number of occurrences of small subgraphs in a large graph, our implementation of Tetris is at least two orders of magnitude, and in some cases three orders of magnitude, faster than the standard model counters sharpSAT, Cachet, and dSharp. We also compared Tetris with these model counters on standard SAT benchmarks, where Tetris is either comparable or slower.

1.2 Theoretical Implications

While this paper deals with the experimental validation of a theoretical result, we believe it highlights certain theoretical questions that are worth investigating by the database community. We highlight our three favorite ones, which correspond to our three main contributions.

Extending Tetris beyond join queries. Our work has shown that Tetris can be used to solve a problem beyond the original natural join computation. Recently, worst-case optimal join algorithms have been shown to be powerful enough to solve problems in a host of areas: CSPs (with #SAT as a prominent example), probabilistic graphical models, and logic; see also followup work that goes beyond these results. So far, this has seemed a theoretical novelty; however, given that this paper demonstrates the viability of Tetris in practice, our work opens up the tantalizing possibility of extending the theoretical results on Tetris to the problems captured by those frameworks, and perhaps even to MaxSAT, which would be of interest in practice.

Computing orderings efficiently. As mentioned earlier, the theoretical results on Tetris assume that the required ordering among variables can be computed in exponential time. However, for applications to SAT, as well as to other areas such as probabilistic graphical models (assuming the question in the previous item is answered), we need to compute orderings that are approximately good in polynomial time. Thus, one avenue of theoretical investigation is to come up with a polynomial-time algorithm to compute an ordering and prove guarantees on the loss in performance compared to the case where Tetris has access to the optimal ordering. The heuristics developed in this paper might prove good starting points for such an investigation. We would like to point out that the importance of efficiently computing variable orderings has been studied a lot in the database literature; recent work on generalized hypertree decompositions, which are well known to be equivalent to variable elimination orderings, could potentially be useful towards this goal.

Tradeoff between time and space. Recent results on worst-case optimal algorithms to compute natural joins, and on computing joins with functional dependencies, focus exclusively on time complexity. However, as highlighted by our work, prudent space usage in fact benefits actual performance. This point is also indirectly highlighted by the result that resolution schemes that cannot cache intermediate results are strictly less powerful in the context of computing the natural join. We believe a systematic theoretical study of the tradeoff between the time and space needed to compute the natural join is an attractive route to pursue.

We begin in Section 2 by introducing the fundamental concepts necessary to understand #SAT problems and the details of Tetris, giving an example of how Tetris would handle a toy input. We move in Section 3 to an analysis of our major contributions; afterwards, we continue with experimental results in Section 4 and discuss related work in the field in Section 5.

2. Background

In this section, we introduce the concepts necessary to understand how Tetris functions, introduce the concept of resolutions, and walk through how Tetris would handle a simple input.

2.1 SAT and Boxes

We begin by defining several key terms and ideas. Recall that a SAT problem consists of a series of boolean variables joined together in a series of clauses. These problems are generally presented in conjunctive normal form (CNF), a simplification wherein the entire formula is written as a series of ANDs over a set of disjunctive clauses; an example would be a conjunction such as (x1 OR NOT x2) AND (x2 OR x3). A solution, or satisfying assignment, to a SAT problem is an assignment
of true and false values to the variables such that the boolean formula is satisfied, i.e., all clauses are satisfied.

Next, we consider the idea of boxes, the means by which our algorithm interprets SAT problems. Let n be the number of variables in the original SAT problem, and define the output space, the set of all potential outputs, to be the n-dimensional hypercube. The elements of this space are described by boxes that exist within the hypercube: along each dimension, a box either takes the value 0, takes the value 1, or extends along the full length of the edge. The reasoning is simple: 0 corresponds to false, 1 to true, and an edge of full length covers both; henceforth, we use lambda to refer to edges of full length. Thus, boxes take the following form.

Definition 2.1 (Box). A box b is a tuple of n values, each of which is 0, 1, or lambda.

Observe that under these definitions we can consider every assignment to be a box; this will be important later. Our goal is to find the set of all points within the output space that are not contained (see Definition 2.2) in any box; such a point is termed an output point, and the goal of the algorithm working on boxes is to find all output points.

Definition 2.2 (Containment). A box b is said to contain another box b' if, at every position, b either equals b' or is lambda; equivalently, b contains b' if all points of b' are points of b.

However, there is one key difference between the two representations. A clause in a CNF formula is essentially a subproblem wherein at least one variable of the assignment must match its value for the clause, and hence possibly the assignment, to be satisfying. With boxes, the exact opposite is true: if an assignment matches the values of a box on all its dimensions, the box rejects the assignment. In other words, if we consider the geometric visualization of the boxes, all assignments that fall within some box are rejected. Hence, our next step is to devise a means to convert a given SAT problem in CNF form into the boxes format Tetris can understand. This is done as follows; the important observation is that each step is simply the negation of a CNF clause, and the rest is bookkeeping. For the exact algorithm, see Algorithm 1.

Algorithm 1 (Conversion from CNF to boxes). For each CNF clause: negate the clause; set each position of the box to the negated value of the corresponding variable present in the clause, and to lambda for each variable not present; insert the box into the database (see Definition 2.3).

Let us consider a toy example of a CNF problem.

Example 2.1. Take a small CNF formula of three clauses over three variables. The first step is to negate each clause, giving a disjunctive normal form (DNF); next, we convert to boxes by replacing each variable with its negation and adding lambda at all missing variables. After conversion, the three clauses become the three boxes shown in Figure 1.

[Figure 1: The starting boxes corresponding to the SAT clauses. The boxes are, technically speaking, strictly corners; we depict some boxes as edges and surfaces, drawn for the purpose of visual clarity.]

At this point in time, we would insert the boxes into our data structure. Let us first list the fundamental operations this data structure must be able to perform.

Definition 2.3 (Tetris data structure). The Tetris data structure shall be able to perform the following operations: Insert, where the input box is inserted; Contains, where, on an input box, if the structure contains a box containing it, some containing box is returned (see Definition 2.2); and GetAllContainingBoxes, where, on an input box, the set of all containing boxes in the structure is returned. We return to the details of the data structure implementation in Section 3.

2.2 Resolution

Now we come to the concept of resolution, a key aspect of both Tetris and SAT solvers in general. Resolution can be defined on both CNF clauses and boxes; let us begin with the former, considering two clauses from our CNF example. When we see two similar clauses that differ only in one term, which is negated in one but not the other, we can resolve the two clauses by removing that term and taking the OR of the remaining variables. We can then remove the original two clauses from the CNF problem and insert the new clause in their place, a significant simplification. More generally, we can resolve any two clauses with exactly one pivot point, by which we mean a variable that appears in both clauses and is negated in exactly one of them. For instance, looking back at the example, we can also resolve a pair of clauses sharing further variables; in this case we would not be able to remove the original two clauses, but we would still have gained information. Let us formally define this process.

Definition 2.4 (Resolution on clauses). Two clauses can be resolved if there exists exactly one variable, the pivot point, that appears negated in one clause and unnegated in the other; the resolution of the two clauses is the OR of both clauses with the pivot removed.

Since boxes are simply another representation of the problem, it follows that resolution can be performed on boxes as well. Here, we first require that there must exist exactly one variable at which one box is true and the other box is false; we call this the pivot variable. The output is a box that is lambda at the pivot; at every other variable, the output takes the value at which either box is set, and lambda where both boxes are lambda. We can also see the two possible kinds of resolution geometrically in our example: the resolution of two coplanar, parallel edges into a square is depicted in Figure 2, and the resolution of askew edges into an edge is depicted in Figure 3. A formal definition of the resolution operator follows below (Definition 2.5).

[Figure 2: Resolution of two parallel edges into a square; this is equivalent to resolving the corresponding clauses.]
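As a concrete companion to Algorithm 1 above, here is a minimal Python sketch of the clause-to-box conversion. We use '*' in place of lambda and DIMACS-style signed integer literals; both representation choices are ours, not the paper's.

WILDCARD = '*'  # stands in for lambda: the box spans this dimension fully

def clause_to_box(clause, n):
    # Negate the clause: a positive literal x_i becomes the rejecting
    # value 0, a negative literal ~x_i becomes 1; variables absent from
    # the clause stay as wildcards.
    box = [WILDCARD] * n
    for lit in clause:
        box[abs(lit) - 1] = 0 if lit > 0 else 1
    return tuple(box)

# e.g. over n = 3 variables, the clause (x1 OR NOT x3) is falsified
# exactly on the box (0, '*', 1):
assert clause_to_box([1, -3], 3) == (0, '*', 1)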
resolved clause exactly analogous requirement resolve pivot point clause version figure resolution vertex edge equivalent resolved clause definition resolution boxes two boxes resolved exists exactly one true false resolved box equal defined follows undefined observe resolution boxes resolution clauses identical lemma resolution boxes additional restriction exactly one variable must true one box false exactly equivalent resolution sat clauses proof let clauses resolving assume wlog pivot point resolved clause boxes equivalent starting clauses otherwise defined similarly respect resolution boxes defined otherwise let calculate box equivalent output resolution shown inspection definition reveals exactly equivalent since problem arbitrary equivalent inputs produced equivalent outputs two operations must equivalent tetris introduces one additional restriction resolution definition resolution boxes tetris two boxes resolved exactly one spot words demand pivot variable final variable therefore tetris perform resolution figure perform resolution figure see ordering variables determines whether resolution even possible makes determining global ordering variables key issue mentioned earlier theoretical implication address later section general tetris performs resolution pairs recently found boxes let location last variable box must either true false false store box future use true take box resolve stored box value whose last variable false guarantee production box last variables value details works along reasoning pairs always found see section tetris let return example last left inserting three clauses data structure loosely defined definition formal definition database details allows set boxes tetris knows quickly efficiently queried see section one simply assume structure additionally resolve first two boxes leaving third untouched figure therefore database contain exactly boxes see figure furthermore prepare empty array boxes size used later purpose array store retrieve boxes wish resolve boxes database established time perform tetris proper basic idea simple pick point output space call probe point recall point box determine whether box database contains point one store box additional data structure referred cache functions identically main database probe new point box contains list point solution furthermore add point cache along way perform resolution order create new larger boxes process continues entirety output space covered single box point must found every output point done algorithm details noted algorithm originally presented recursively present iteratively purposes speed allows backtracking words backtrack one layer time algorithm advance box probe point note global variable contains last variable set variable return previous branching point take right true branch else last variable set variable return recent level branched left set last variable branch right replace variable repeatedly branch left let consider algorithm behaves regards earlier example pick first probe point giving situation illustrated figure first scan local cache boxes contain point however since first probe point cache trivially empty next scan database contains boxes corresponding clauses original sat algorithm general tetris sat establish variable ordering build database using algorithm empty array size array implicit nonempty advance advance see algorithm probe point past else nonempty boxes advance advance probe point past else add output containing box output point advance advance probe point past location 
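The box form of resolution can be written down directly from the definition above. The helper below assumes the same {'0', '1', '*'} tuples as earlier and returns None when no unique pivot exists; Tetris's extra restriction (the pivot must be the last non-'*' coordinate) is noted in a comment rather than enforced.

```python
def resolve(b, c):
    """Resolution on boxes: requires exactly one pivot position where one box
    holds '0' and the other '1'; that position widens to '*', and elsewhere
    a '*' absorbs the other box's value (equal values pass through).
    Tetris additionally demands the pivot be the LAST non-'*' coordinate of
    both boxes before it will perform the resolution."""
    pivot, out = None, []
    for i, (x, y) in enumerate(zip(b, c)):
        if {x, y} == {'0', '1'}:          # complementary values: candidate pivot
            if pivot is not None:
                return None                # more than one pivot -> undefined
            pivot, val = i, '*'
        else:
            val = y if x == '*' else x
        out.append(val)
    return tuple(out) if pivot is not None else None

print(resolve(('0', '0', '0'), ('0', '0', '1')))   # ('0', '0', '*')
```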
last variable variable ordering store recent box given depth else resolve box corresponding box problem database happens contain two containing boxes reasons become clear shortly operation choose output insert box next task advance probe point lies beyond box proceed according algorithm idea think set possible probe points tree performing search representing paths representing rightbranching paths continue along search find point covered recently discovered box takes note database fetched would able advance probe point far finally insert containing box array location since first variable future use know insertion rather trying resolve box value first variable false takes situation depicted figure scan time probe point find containing box scan find containing box insert box cache advance probe point time although containing box features first location determine location insert based location last variable insert time find containing boxes either cache database therefore found output point see figure illustration add output set add box cache marks point found juncture find last variable location time true therefore probe point cache database probe point map figure initial state database cache location first probe point search output space manner tracked using map right simply union boxes created initial sat problem set initial value currently empty probe point cache database probe point map figure state database cache probe point first round algorithm probe point found box added box advanced reached point contained box turned box box corresponding orange vertex map array empty elsewhere extract box stored previously resolve box know legal resolution scanning output space fashion means retreating right branch box containing corresponding left branch must able contain right branch final variable set instead follows final variable must one pivot point two therefore perform resolution example resolved outputs box furthermore store box location since box ends false index continue forth probe points neither found either therefore output points final variables index inserted index recovering box resolve form box time output resolution ends true recover box index take resolution two boxes giving ends resolve box found back beginning waiting slot form box box completely covers output space therefore algorithm knows found possible output points terminates see figure illustration probe point cache database probe point map figure state database cache probe point first output point found getting first found box probed output point finding output point box resolved aforementioned box produce also added note box could produced juncture tetris contains orange dot contains purple dot empty output points cache database probe point map figure state database cache probe point entire output space covered output point discovered boxes added cache produced chain resolutions eventually resulted production box note longer probe point nothing left probe improvements discuss major additions introduced tetris order handle cnf inputs increase practical efficiency include new data structure work heuristically determining global variable ordering selectively caching certain boxes data structure compression data structure original tetris paper simply states trie suffice achieve asymptotic runtime guarantees true simple trie still leaves much desired attempts implement tetris simple matter produced system significantly slower model counters contribution design novel system tries takes advantage nature problem space improve 
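One step worth spelling out is how a containing box lets the algorithm skip probe points. The sketch below computes the lexicographically next 0/1 point outside a given box, treating the probe-point space as the binary search tree described above; it is a reconstruction from the text's description, not the paper's exact routine.

```python
def advance_past(probe, box):
    """Given a probe point (tuple of '0'/'1') contained in `box`, return the
    lexicographically next point that escapes the box, or None when the box
    covers everything from the probe onward. Flipping a '0' at position i
    (and resetting all deeper coordinates to '0') escapes iff the box pins
    '0' there, or pins a '1' somewhere deeper."""
    ones_later = False
    for i in reversed(range(len(probe))):
        if probe[i] == '0' and (box[i] == '0' or ones_later):
            return probe[:i] + ('1',) + ('0',) * (len(probe) - i - 1)
        ones_later = ones_later or box[i] == '1'
    return None

print(advance_past(('0', '0', '0'), ('0', '0', '*')))   # ('0', '1', '0')
```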
runtime memory usage data structure description described database must allow variable store three values false true therefore immediate approach use base data structure however sat instances routinely hundreds variables results extremely deep problem space requires lot time probe next step compress multiple layers single node queried single instruction end first come means enumerate possible boxes definition let bijective function set boxes onto integers example one way follows let assigned false true box numerical value gives bijective ternary numeration observe exactly one box namely empty box three boxes equal nine boxes equal boxes equal boxes equal require single node within trie able record box lengths additionally must able store children possible boxes equal therefore compacting four logical layers single layer within database result trie store possible boxes sum five aforementioned values possible children refer collection variables cluster definition cluster cluster set variables database handles single operation default cluster contains variables course raises new issue checking database contains given input string exist sixteen children must checked since replaced even greater number boxes could contained cluster may contain input string creates need way quickly efficiently determine containing box exists cluster create list children searched trie take inspiration emptyheaded relational database engine utilize simd example let consider simplified version data structure contains two layers suppose known box boxes found necessitate traversing child clusters see depicted figure determine whether data structure contains box since cluster contains two layers clusters depth look first two variables input box determine input therefore let consider using find lookup table two bitstrings corresponding input first lists set boxes exist within cluster would contain second line figure second bitstring much set children truncated two variables contain additionally cluster stores two bitstrings boxes marks boxes contained cluster children marks child nodes cluster boxes bitstring specifically top line figure boxes marked box equivalent children bits marked corresponding box prefixes exactly mapping used implementation follows intersection two pairs bitstrings set boxes child prefixes present data structure contain input single operation suffices calculate cluster cluster layer layer layer layer layer figure simd operation shown associated clusters box marked blue box layer corresponds variable layer empty boxes three variables checkmarks mark boxes found containing boxes central branch created box left branch created child created corresponding box inserted child cluster right branch created similarly create child corresponding inserting child cluster drawn database see single flattened cluster see figure let inspect output particular example find matched box child prefix point faced interesting choice suppose sake argument child eventually leads containing box input check containment algorithm return one box considers best containing box one algorithm choose general choose box example choose box reason simple box space covers space covers output space cover additionally tail end box know immediately advance probe point considerably hand middle algorithm must boxes every time scans point contained within hypothetical box repetition costly would rather avoid let consider algorithm would handle input simply using traditional tries words consider would happen cluster size standard algorithm 
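The bijective ternary numbering can be made explicit. The sketch below assumes the digit assignment '*' -> 0, false -> 1, true -> 2 (the text only fixes that some such bijection is used) and recovers the 1 + 3 + 9 + 27 + 81 = 121 distinct indices of a 4-variable cluster.

```python
from itertools import product

DIGIT = {'*': 0, '0': 1, '1': 2}   # assumed digit assignment for the numeration

def box_index(prefix):
    """Map a box prefix of length k (0 <= k <= 4) over {'0','1','*'} to a
    unique integer. Shorter prefixes occupy the leading block, so the offset
    for length k is 1 + 3 + ... + 3**(k-1) = (3**k - 1) // 2."""
    offset = (3 ** len(prefix) - 1) // 2     # 0, 1, 4, 13, 40 for k = 0..4
    value = 0
    for ch in prefix:
        value = 3 * value + DIGIT[ch]
    return offset + value

assert box_index(()) == 0                          # the empty box
assert len({box_index(p) for k in range(5)
            for p in product('*01', repeat=k)}) == 121
```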
would query input box first value find know could check branch branch since generally preferred would take branch would match value branch proceed third layer would match value branch find containing box return way taken three comparisons find containing box furthermore box found version lower quality would preferred find box instead hand let consider occurs using full clusters figure case single operation immediately finds three containing boxes therefore database boxes input containing boxes figure algorithm takes logical stored bitstrings top bitstring corresponding input row order produce list containing boxes children bottom boxes bitstring contains single mark box second line contains potential boxes would contain addition boxes exist include many output operation bits set corresponding box formed traditional variation terms number comparisons terms quality output summary box stored single bit vector last bits unused record whether given child exists lookup table used find vectors corresponding possible outputs former two concatenated latter two compared using single operation seen output operation must intersection potential containing boxes children found lookup table ones actually exist hence calculating box quickly find box containing exists quickly generate exact list children examine additionally practice turns certain value shows far often sequence notably sequence accepts every single possible input string therefore event layer contains child words fact possible skip entire layer practice produces significant savings computational costs memory usage contributes towards theoretical implication let formally define data structures algorithms used implementation definition data structure data structure trie node trie called cluster top level consists pointer root cluster cluster covering first four variables ordering perform following operations insert algorithm contains algorithm getallcontainingboxes algorithm definition cluster cluster database contains two bitstrings boxes children bitstrings identifying sets boxes children respectively along integer depth informs cluster depth operation bitstring calculates intersection bitstring bitstring retrieved lookup table lists set boxes children contain potentially contain respectively box value setting sets specific bit referring exactly box cluster corresponds four layers standard trie definition index box index box location last variable box cluster figure clusters figure flattened single node four variables cluster node would stored actual database note far compressed drawn trie figure cluster would depth three bits set boxes bits set children example index box algorithm insert given box insert cluster consisting call insertcluster algorithm called root cluster algorithm insertcluster input cluster depth depthto insert box insert cluster consisting location final cluster depth depth nonempty return containing box already data structure stop else set depth else depth create child cluster location depth depth depth set depth call insertcluster cluster indexed depth return insert algorithms algorithms recursively traverse clusters find appropriate location data structure set appropriate bit along way check containing boxes immediately cease operation one found furthermore child cluster contains box inserting contains cluster along path cluster exist create algorithm contains given box find containing box cluster consisting put containscluster algorithm called root cluster return put algorithm containscluster input cluster depth 
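Putting the numbering and the bitstring idea together, a cluster probe reduces to a single AND against a precomputed lookup table. The sketch below reuses `box_index` from the previous fragment and uses Python integers in place of the 121-bit SIMD vectors; the table is built naively here, whereas the implementation would store it once per cluster shape, and the children bitstring is handled analogously with a second table.

```python
from itertools import product

PREFIXES = [p for k in range(5) for p in product('*01', repeat=k)]   # 121 boxes

def covers(a, b):
    """Does stored prefix a contain query prefix b? Prefixes are padded with
    '*' to a common length; a must hold '*' or agree wherever b is pinned."""
    n = max(len(a), len(b))
    a = a + ('*',) * (n - len(a))
    b = b + ('*',) * (n - len(b))
    return all(x == '*' or x == y for x, y in zip(a, b))

# Row idx: bit j is set iff stored box j would contain a query with index idx.
COULD_CONTAIN = [sum(1 << box_index(a) for a in PREFIXES if covers(a, b))
                 for b in PREFIXES]

def probe_cluster(stored_boxes_bits, query_prefix):
    """One AND finds every stored box in this cluster containing the query."""
    return stored_boxes_bits & COULD_CONTAIN[box_index(query_prefix)]
```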
depththat checked containing box box check containment cluster consisting depth nonempty least one box intersection return else depth nonempty least one child intersection children scan order increasing index nonempty return else return contains algorithms algorithms check database see contains box contains box therefore traverse along clusters path first checking see containing boxes exist find one return box immediately cease checking none exists perform search children could potentially contain containing box process continues either containing box found else search space exhausted determined containing box exists algorithm getallcontainingboxes given box find containing boxes cluster consisting put getallcontainingboxescluster algorithm called root cluster return put algorithm getallcontainingboxescluster input cluster depth depth checked box find containing boxes cluster consisting depth nonempty least one box intersection else depth nonempty least one child intersection children else return getallcontainingboxes algorithms algorithms similar contains algorithms however two key differences first contains terminates soon found single containing box getallcontainingboxes continue secondly returns set containing boxes rather one hence name regards behaves exactly contains global variable ordering point simply assuming boxes must order variables exactly order appear original sat formulas words box must correspond must correspond however need case reorder variables greatly improve runtime system original tetris paper cites importance variable ordering however assumes exists exponential time algorithm compute optimal variable ordering justifiable context join problems small compared size database sat problems unacceptable furthermore computing optimal ordering hope improve upon result indeed even approximating ordering intractable nevertheless initial experimentation tetris made clear impactful choice slight variation ordering result large difference runtime thus turned various heuristics intuitions order find quick effective means generate ordering works well practice thereby contribute theoretical implication first let define terms use discussion various ordering strategies definition degree degree variable number clauses variable part example example degree degree degree definition closeness variables said close exists clause includes fewer terms clause closer two variables said specifically closeness two variables equal divided size smallest clause containing variables minus example clause would closeness clause would closeness clauses part sat problem would still closeness first clause smaller size second definition interconnectedness interconnectedness cluster sum ness values pairs variables cluster example using example compose cluster interconnectedness would since general note two strategies found improve performance given global variable ordering first tetris handling variables early tends improve performance see reason must consider nature algorithm scattering variables throughout ordering forces algorithm branch frequently means testing inclusion containing boxes algorithm must scan possible branches highly inefficient instead focus branches much possible beginning hope algorithm progresses layers divergent choices true inclusion check handled quickly let proceed see first introduce example placing variables early ordering proves effective let consider following sample problem example consider sat formula equivalent box problem using ordering would begin database containing 
boxes degree degree degree let consider algorithm would attack problem used naive first ordering would immediately note two things first boxes final variable means algorithm never find box allows skip multiple probe points unless use resolution create new box happens property fact occur additionally consider possible probe points track many comparisons algorithm would need make point assuming cluster covers single variable instead four see algorithm always calculate set intersection cluster depth perform set intersection cluster depth probe points perform set intersection cluster depth probe points let contrast ordering moves front back give ordering database contains boxes time box back specifically therefore probe point finds box algorithm advance past entirely additionally still perform set intersection cluster depth time time cluster depth perform set intersection depth time numbers ignore skipped one probe points entirely course simple example illustrates principles cause strategy effective larger datasets local interconnectedness direct result system described section box multiple variables within cluster recovered single operation therefore maximizing interconnectedness blocks provides advantage illustrate let consider following example example cluster containing variables rather use naive strategy keeping variables ordered first cluster contains second cluster contains therefore clusters interconnectedness find boxes transcend cluster boundary words always take least two comparisons find either boxes however gone ordering instead box corresponding would entirely contained within first cluster box corresponding would entirely contained within second cluster therefore box would entirely contained within cluster cluster would interconnectedness box could recovered single comparison saves large number comparisons long term ordering algorithms two major methods employed order achieve aims first descending degree sort directly achieved goal practice acceptable job goal additionally constructed three variations method first naive degree descent simply order variables according degree using algorithm algorithm naive degree descent ordering given set variables variable degree sort descending return second optimally grouped degree descent forms possible groups four variables finds greatest possible interconnectivity among groups selects groups greatest interconnectivity basis combined degree group four using algorithm proved effective slow algorithm runtime algorithm optimally grouped degree descent ordering given set variables variable degree possible sets four variables nonempty max max determine maximum possible interconnectedness remaining groups max groups max interconnectedness select group sum degrees variables greatest append group ordering remove grouping contains one variables group selected return necessitated creation third subtype heuristically grouped degree descent ordering ordering works groups four creating group first node chosen highestdegree remaining variable remaining three variables algorithm picks variable highest interconnectedness nodes already chosen group four breaking inevitable ties based degree result algorithm algorithm compute ordering significantly faster optimal ordering tetris run ordering runs competitive optimal ordering algorithm heuristically grouped degree descent ordering given set variables variable degree nonempty max else max max calculate variable best interconnectedness already chosen variables max break ties based degree return additionally 
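A compact sketch of the heuristically grouped degree descent just listed is given below, assuming a clause list in the same DIMACS style as earlier. The helper implements the degree and closeness definitions from the previous section (closeness of a pair is one over the size of the smallest clause containing both, minus one), and fills clusters of four greedily.

```python
from collections import defaultdict
from itertools import combinations

def build_stats(clauses):
    deg = defaultdict(int)
    close = defaultdict(float)                 # closeness per unordered pair
    for clause in clauses:
        vs = sorted({abs(l) for l in clause})
        for v in vs:
            deg[v] += 1
        for pair in combinations(vs, 2):
            # closeness = 1 / (|smallest clause containing both| - 1)
            close[pair] = max(close[pair], 1.0 / (len(vs) - 1))
    return deg, close

def grouped_degree_descent(clauses):
    deg, close = build_stats(clauses)
    remaining, order = set(deg), []
    while remaining:
        group = [max(remaining, key=lambda v: deg[v])]   # seed: highest degree
        remaining.discard(group[0])
        while remaining and len(group) < 4:              # fill the 4-var cluster
            best = max(remaining, key=lambda v: (
                sum(close[tuple(sorted((v, g)))] for g in group),  # interconnectedness
                deg[v]))                                           # tie-break: degree
            group.append(best)
            remaining.discard(best)
        order.extend(group)
    return order
```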
employ treewidth tree decomposition introduced essence idea minimize width search tree domain corresponds increasing locality local interconnectedness variables naturally good job interconnectivity decent job placing variables early also experimented minfill ordering described ordering sets elimination order node eliminated node whose removal makes smallest impact overall graph ordering proved effective similar applications found perform poorly tetris table see various orderings performed practice representative graphbased benchmarks instance treewidth sort outperformed others dataset wikivotes dataset created using snap dataset see section ordering caused tetris timeout notably see heuristically grouped degree descent takes slightly longer process input compared naive degree descent significantly less time optimally grouped degree descent however runtime suffer significantly going optimal ordering heuristic one selective insertion original tetris paper calls insertion every box created resolution process inserted database proved inefficient practice frequently result huge increase number branches algorithm must scan trying find output point without notably improving quality containing boxes found therefore insert boxes contain suitably high percentage best results generally come requiring slightly less percent layers composed entirely table posted runtime dataset showing relationship number require storing box runtime tetris performance suffers extreme dataset wikivotes runtime ifferent rderings ordering load time seconds naive degree descent heuristically grouped degree descent treewidth minfill optimally grouped degree descent naive degree descent heuristically grouped degree descent treewidth minfill optimally grouped degree descent runtime seconds timeout table performance various ordering schemes two datasets one see ordering best datasets indeed best ordering one worst insertion ratio see section set tests settings optimal performance resulting insertion ratio close hence regard theoretical implication find decreasing space complexity furthermore improve runtime experimental results compare cnftetris tetris designed solve cnf problems model counters order compare contrast ability tackle model counting problems model counting problem simply put given cnf formula output number satisfying solutions formula since tetris originally designed handle database joins natural problems algorithm solve corresponding sat problems simply determine whether solution exists model counting problem allows solver simply find number solutions without finding solutions cnftetris fact output solutions admittedly poses disadvantage compared solvers comparing datasets cnftetris runs faster spite compare results sharpsat dsharp cachet model counters due recognition model counters tests performed using single thread processor ram additionally include two types datasets first derived join problems graphs sort problems tetris originally designed solve cnftetris good job second set selection standard model counting benchmarks various competitions held past several years model counters trained solve problems serve apt second set benchmarks cnftetris compete nsertion atio runtime ratio time seconds table comparison insertion ratios time solve dataset tests used treewidth ordering similar behavior observed ordering strategies graph results compare contrast various solvers performed model counting problems created graphs dataset generation cnf graph datasets created using publicly available snap datasets graph 
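The selective-insertion rule discussed above reduces to a one-line predicate on resolvents; the threshold used below is an illustrative stand-in for the tuned value reported in the tables.

```python
def should_cache(box, ratio=0.75):
    """Selective insertion: keep a resolvent only when a high fraction of its
    coordinates are already the wildcard '*', i.e. the box is large enough to
    prune many future probe points. `ratio` is an assumed tuning knob."""
    return box.count('*') / len(box) >= ratio
```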
datasets consists set vertices set edges connecting vertices datasets natural problem arose social networks others anonymized data corners internet use data run various queries instance determine many triangles exist graph goal convert problems equivalent cnf problem use cnftetris model counters solve vertex first assigned unique binary encoding using log bits furthermore increase number bits log bits times size data structure looked graph instance performing triangle query dataset log bits used encoding henceforth let represent size query bits correspond variable cnf encoding problem essence repetitions runtime query base graph wikivotes facebook wikivotes facebook firstr cnftetris various atasets sharpsat dsharp cachet loadtime runtime runtime timeout timeout timeout timeout speedup runtime timeout timeout timeout timeout speedup runtime timeout timeout timeout timeout speedup table table shows comparative results various solvers cnf datasets created using various snap graphical datasets sat datasets runtimes seconds timeout set seconds tests used insertion ratio heuristic degree descent ordering cnftetris wikivote contains variables clauses facebook contains variables clauses contains variables clauses clause data approximately number clauses sat datasets variables clauses variables clauses variables clauses variables clauses variables clauses variables clauses cnftetris loadtime refers time determine variable ordering insert boxes database runtime time find satisfying solutions represents vertex triangle next encode absent edge pair vertices edge edge set graph total boolean formulas query formulas query formulas corresponds one three edges triangle observe possible satisfying solution sat problem select edge exist therefore assignment matches one formulas variables must rejected equivalently accepting assignment must match least one variable inverse naturally leads cnf definition create repeat encoding possible set vertex pairings vertex always written vertex adding additional clauses reject edges would vertex vertex simplifying resolutions also performed possible therefore created cnf problem output query original problem exists solution use problem instance input tetris model counters let examine example instance consider simple example graph depicted figure using triangle query order encode must first calculate binary encodings vertices respectively flip bits giving construct three cnf clauses corresponds first second third edge triangle first corresponding corresponding similarly second third additionally insert clauses forbidding bad orderings points words making sure count separate triangles tetris run cnf input attempts recover number triangles uses probe point assuming naive ordering probe point corresponds inverted binary representations three vertices triangle let consider happens since selected edges correspond missing edge know three clauses must satisfied since edge index order know additional clauses added also accept input therefore tetris add probe point output list continues probe points tetris found triangles figure sample graph four vertices used example encode sat formula along additional clauses ensure count triangles allow run query sat problem results analysis seen table solvers find problems difficult cnftetris solves quickly queries take seconds cnftetris wind taking hours competition cnftetris running nearly thousand times faster problems largely due extremely high number clauses relative number variables along fact clauses contain large number variables 
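The triangle-query encoding sketched above can be reproduced as follows, assuming an undirected edge list over vertices 0..n-1 and DIMACS-style output clauses. Each triangle slot gets ceil(log2 n) boolean variables; every non-edge contributes, per triangle edge, one clause rejecting the matching bit pattern. The additional ordering clauses (vertex1 < vertex2 < vertex3) that prevent double counting are omitted here for brevity.

```python
import math
from itertools import combinations

def encode_triangle_query(n, edges):
    bits = max(1, math.ceil(math.log2(n)))
    edge_set = {tuple(sorted(e)) for e in edges}

    def vertex_lits(slot, v):
        """Literals asserting 'triangle slot holds vertex v', bit by bit;
        slot s uses variables s*bits+1 .. (s+1)*bits (1-indexed)."""
        lits = []
        for b in range(bits):
            var = slot * bits + b + 1
            lits.append(var if (v >> (bits - 1 - b)) & 1 else -var)
        return lits

    clauses = []
    for s1, s2 in [(0, 1), (1, 2), (0, 2)]:        # the three triangle edges
        for u, v in combinations(range(n), 2):
            if (u, v) not in edge_set:             # reject every absent edge
                for a, b in ((u, v), (v, u)):
                    pattern = vertex_lits(s1, a) + vertex_lits(s2, b)
                    clauses.append([-l for l in pattern])
    return 3 * bits, clauses
```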
factors present many standard sat benchmarks instance average clause many sat benchmarks contains two three variables average clause thirty sat benchmarks rarely ten times many clauses variables system forced tackle environment number clauses exponentially larger number variables note solvers comparing use unit propagation techniques order count models see section details increased number clauses directly corresponds increased work solvers nongraph results section discuss cnftetris performed compared solvers standard model counting benchmarks datasets datasets combination datasets satlib datasets samplecount benchmarks model counting taken international joint conference artificial intelligence chose use ais datasets several reasons first datasets terminates reasonable amount time solvers allowing find interesting comparisons secondly due existence versions dataset use insight whether tetris scaling efficiently size dataset additionally featured datasets gave insights implementation strengths weaknesses results analysis table shows tetris competitive dsharp cachet many datasets indeed factor separates either solver ais datasets difference engineering work alone easily overcome significantly space sharpsat factor average believe distance insurmountable largest gaps exist datasets cnftetris roughly factor worst competition factor compared sharpsat reason simple datasets contain pure variables words exist variables clauses never appears important piece information one must utilized cnftetris current state know however since know problem expect able quickly efficiently attack issue related work work builds tetris developed abo khamis work authors introduced tetris algorithm geometrically solving database join problem turn built work minesweeper nprr leapfrog algorithms tetris generalization furthermore tetris considered version dpll algorithm clause learning dpll evolution earlier algorithm variable chosen every stage assigned either true false algorithm uses unit propagation order simplify clauses assumptions techniques solver assigns value variable every clause inspected see assignment creates unit clause clause one variable see resolutions performed process continues conflicting clause clause violated assignments found point algorithm forced backtrack clause learning versions introduced solver takes opportunity determines went astray adds new clause cache negation errant assignment backtracks took place proceeds opposite direction reasoning cnftetris form algorithm follows aforementioned method converting sat clauses boxes see algorithm since two representations exactly equivalent operation performed one representation translated operation hence every single operation tetris performs boxes execution must correspond exactly set operations original clauses instance contains operation matches idea conflicting clause containing box found consider finding box rejects current probe point tential output point meanwhile conflicting clause rejects potential satisfying assignment much way furthermore course tetris algorithm tentatively assigns variable either true false proceeds along assumption contradiction found learning additional clauses possible resolution process containing box found synthesized resolution advance probe point accordingly essence backtracking earliest decision point choosing opposite direction dpll clause learning therefore exactly dpll algorithm clause learning added restriction fixed global variable ordering tetris additionally utilizes logic system similar systems utilized 
database schemes zaniolo systems three values true false unknown however three values considering summarized true false causes number key differences instance true unknown equivalent unknown true equivalent true similarly true unknown equivalent true true equivalent much work done creating sat solvers let briefly discuss solvers comparing work first let consider cachet solver originally released minor compatibility updates continuing recent version came next sharpsat first released sharpsat significantly eclipsed contemporary solvers sharpsat maintained time recent release finally come dsharp recently released three competitors dsharpwas introduced order efficiently compile cnf problems decomposable negation normal form language work allowed function model counter utilize version use released solvers common including cnftetris core form dpll algorithm clause learning indeed almost modern sat sat solvers differences come terms efficiency solver uses different array techniques order effectively cache recover learned clauses determine variable ordering identify clause conflicts cachet authors focused adding component caching capabilities top existing sat solver zchaff theoretical grounds introduced caching involved storing subproblems local cache clauses would cachet later juncture thereby reducing redundant calculations course algorithm viewed analogous cnftetris stores learned boxes local cache checks containing boxes examining original database subproblem meanwhile could thought box high percentage variables set however one key difference nature cached components cachet due algorithm functions must regularly prune cache siblings would otherwise cause undercount number models cnftetris contrast needs perform pruning naturally determine exact number models without additional work sharpsat built work cachet adding new ideas boolean constraint propagation also known failed literal rule unit propagation heuristics used sharpsat identify failed literals greater efficiency done cachet however fixing variable order cnftetris simplifies process ultimately means finds conflicting boxes fundamentally different manner sharpsat provides room cnftetris outperform sharpsat dsharp much like sharpsat built cachet uses sharpsat core component authors perform dnnf translation use properties decomposability determinism perform model counting though differences allow outperform pure dpllbased solvers benchmarks since system still uses sharpsat core component still shares many advantages disadvantages comparison cnftetris seen competing solvers viewed evolutions along single line cnftetris throw baby bathwater cnftetris still continues implement classic dpll algorithm represent distinct deviation line challenging assumptions necessity allowing global variable ordering much complex data storage scheme necessary order accommodate necessitated much work order implement also shown vast promise acknowledgments would like thank mahmoud abo khamis hung ngo christopher zhang helpful discussions references cachet http cessed international joint conference artificial intelligence dataset collection http cessed series problems http accessed sharpsat marc thurley https accessed christopher aberger susan kunle olukotun christopher emptyheaded relational engine graph processing proceedings international conference management data sigmod conference san francisco usa june july pages mahmoud abo khamis hung ngo christopher atri rudra joins via geometric resolutions beyond proceedings acm symposium principles database systems 
pods pages new york usa acm mahmoud abo khamis hung ngo atri rudra faq questions asked frequently proceedings acm symposium principles database systems pods san francisco usa june july pages mahmoud abo khamis hung ngo dan suciu computing join queries functional dependencies proceedings acm symposium principles database systems pods san francisco usa june july pages armin biere marijn heule hans van maaren clause learning sat solvers pages martin davis george logemann donald loveland machine program commun acm july martin davis hilary putnam computing procedure quantification theory journal acm jacm rina dechter constraint processing morgan kaufmann publishers san francisco usa fischl gottlob pichler general fractional hypertree decompositions hard easy cases arxiv november carla gomes henry kautz ashish sabharwal bart selman satisfiability solvers handbook knowledge representation pages carla gomes ashish sabharwal bart selman model counting handbook satisfiability pages rudolf halin graphs journal geometry federico heras javier larrosa albert oliveras minimaxsat efficient weighted solver artif intell res jair manas joglekar rohan puttagunta christopher ajar aggregations joins annotated relations proceedings acm symposium principles database systems pods san francisco usa june july pages jure leskovec andrej krevl snap datasets stanford large network dataset collection http june marx approximating fractional hypertree width acm trans algorithms april matthew moskewicz conor madigan ying zhao lintao zhang sharad malik chaff engineering efficient sat solver proceedings annual design automation conference pages acm christian muise sheila mcilraith christopher beck eric hsu dsharp fast compilation sharpsat canadian conference artificial intelligence pages springer hung ngo dung nguyen christopher atri rudra towards instance optimal join algorithms data indexes corr hung ngo ely porat christopher atri rudra optimal join algorithms extended abstract proceedings acm symposium principles database systems pages acm hung ngo christopher atri rudra skew strikes back new developments theory join algorithms sigmod record tian sang fahiem bacchus paul beame henry kautz toniann pitassi combining component caching clause learning effective model counting marques silva karem sakallah new search algorithm satisfiability proceedings international conference design pages ieee computer society marc thurley models advanced component caching implicit bcp international conference theory applications satisfiability testing pages springer todd veldhuizen leapfrog triejoin simple optimal join algorithm arxiv preprint todd veldhuizen triejoin simple optimal join algorithm proc international conference database theory icdt athens greece march pages carlo zaniolo database relations null values proceedings acm symposium principles database systems pods pages new york usa acm
neural affine grayscale image denoising sep sungmin cha taesup moon college information communication engineering sungkyunkwan university suwon korea tsmoon abstract propose new grayscale image denoiser dubbed neural affine image denoiser neural aide utilizes neural network novel way unlike neural network based image denoising methods typically apply simple supervised learning learn mapping noisy patch clean patch formulate train neural network learn affine mapping gets applied noisy pixel based context formulation enables supervised training network labeled training dataset adaptive network parameters using given noisy image subject denoising key tool devising neural aide devise estimated loss function mse affine mapping solely based noisy data result algorithm outperform recent methods standard benchmark datasets moreover method nicely overcome one drawbacks supervised learning methods image denoising namely supervised trained model mismatched noise variance mostly corrected long matched noise variance step introduction image denoising one oldest problems image processing various denoising methods proposed past several decades wavelet shrinkage field experts based approach wnnm epll csf etc paper propose new image denoiser dubbed neural affine image denoiser neural aide utilizes neural network novel way method inspired recent work discrete denoising novel devised train denoiser solely based noisy data extend approach data case devise novel estimated loss function based noisy data unbiased estimate true mse investigating devised estimated loss function formulate train neural network learn affine mapping gets applied noisy pixel based context formulation enables supervised training network labeled training dataset adaptive network parameters using given noisy image subject denoising experimental results extensively show made subtle design choices developing algorithm furthermore show neural aide significantly outperforms strong baselines standard benchmark test datasets notations problem setting denote clean grascale image pixel corrupted independent additive noise result noisy pixel continuous noise variables independent necessarily identically distributed gaussian moreover standard processing grayscale image denoising normalize treat real numbers importantly following universal setting discrete denoising treat clean image individual image without probabilistic model treat random generally denoiser denoted denoting reconstruction location function noisy image standard loss function used grayscale image denoising measure denoising quality error mse denoted conventionally mse compared using peak psnr defined estimated loss function affine denoiser paper consider denoiser form stands entire noisy image except namely reconstruction location affine function form noisy symbol slope intercept parameters affine function functions surrounding pixels hence separete parameters learned data location presenting concrete form denoiser first consider following lemma lemma consider case suppose denoiser form unbiased estimate notation stands expectation given clean symbol remark note true mse evaluated clean symbol known estimated loss evaluated soley noisy symbol affine mapping noisy variance thus plays key role adaptively learning neural affine denoiser shown next section proof simple algebra following equalities follows follows replacing follows simply rearranging terms thus lemma lemma also show denoisers form exi exi holds since become constant given noise independent exi stands conditional 
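Reconstructed in LaTeX from the surrounding definitions, the identity behind the lemma reads as follows, assuming Z = x + N with E[N] = 0, Var(N) = sigma^2 and the affine reconstruction aZ + b; the cross term contributes (a - 1)sigma^2, which together with E[N^2] = sigma^2 yields the unbiased estimated loss.

```latex
(aZ + b - x)^2 = (aZ + b - Z)^2 + 2N\,(aZ + b - Z) + N^2,
\qquad
\mathbb{E}\!\left[ N\,(aZ + b - Z) \,\middle|\, x \right] = (a - 1)\,\sigma^2,
```

```latex
\mathbb{E}\!\left[ (aZ + b - x)^2 \,\middle|\, x \right]
  = \mathbb{E}\!\left[ (aZ + b - Z)^2 \,\middle|\, x \right] + (2a - 1)\,\sigma^2
\;\Longrightarrow\;
\hat{L}(Z; a, b) = (aZ + b - Z)^2 + 2a\sigma^2 - \sigma^2 .
```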
expectation given clean symbol noisy symbols note estimated loss function similar also used filtering problem neural aide neural affine image denoiser neural affine denoiser proposing neural affine image denoiser neural aide considers denoiser form stands noisy image patch context size surrounding include thus patch hole center define neural network takes context input outputs slope intercept parameters location denote weight parameters neural network learned process described later sections get clear arguments specific form denoiser enables learning parameters supervised learning labelled training data adaptive given noisy image note put constraint slope intercept affine function output network nonnegative constraint would appear apparent experimental results also makes intuitive sense denoiser tries estimate interval hence nonnegative slope intercept parameters suffice nonnegativity constraint realized neural network applying log activation function final output layer neural network rest network architecture ordinary neural network relu activation functions depicted figure two sharp differences neural aide neural network based denoisers first schemes take full noisy image patch including center location input network network trained directly infer corresponding clean image patches contrast neural aide trained first learn affine mapping based figure arthe noisy image patch hole context learned chitecture neural mapping applied obtain recostruction difference aide enables development estimated loss function lemma adaptive training process described next section principle learning mapping first applying mapping noisy symbol denoising filtering utilized second unlike schemes reconstructions somehow aggregated generate final denoised image neural aide simply generates final reconstructions thus need step aggregate multiple number reconstructed patches simplifies denoising step furthermore since neural network neural aide estimate two parameters affine mapping context neural aide make much efficient usage data simpler model compared networks schemes need estimate full adaptive training noisy image first describe network parameters adaptively learned given noisy image without additional labelled training data denoting output element neural network context define objective function neural network minimize ladaptive using estimated loss function defined lemma training process using identical ordinary neural network learning start randomly initiallized use backprogagation variants sgd updating parameters formulation may seem similar training neural network regression problem namely solely obtained noisy image analogously thought label pairs supervised regression unlike regression tries directly learn mapping input target label network learns affine mapping context apply estimate unobserved clean symbol fact depends given noisy image assumed makes learning adaptive rationale behind using following shown estimated loss unbiased estimate true expected given context therefore minimizing may result network produces slope intercept parameters minimize true mse reconstrunctions corresponding affine mappings formulation training neural network parameters solely based noisy data inspired recent work discrete denoising training done denoise noisy image used training applying affine mapping location denoting learned parameter minimizing reconstruction location neural aide becomes neural aide supervised training adaptive formulation gives effective way adaptively training denoiser based given noisy image 
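A minimal Keras sketch of the network just described is given below: the input is the k x k context with the centre pixel removed, and the two outputs are the nonnegative slope and intercept. The depth, width, and the use of softplus as the "positive" output activation are illustrative assumptions; the paper's exact layer sizes and its log-based activation are not fully specified in the text.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_neural_aide(k=7, hidden=512, depth=4):
    """Context (k*k - 1 surrounding pixels) -> (slope a, intercept b) >= 0."""
    ctx = keras.Input(shape=(k * k - 1,))
    h = ctx
    for _ in range(depth):                    # plain fully connected ReLU stack
        h = layers.Dense(hidden, activation='relu')(h)
    ab = layers.Dense(2, activation='softplus')(h)   # enforce a, b >= 0
    return keras.Model(ctx, ab)
```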
specific form denoiser makes possible carry supervised adaptive training step collect abundant clean images various image sources world wide web corrupt assumed additive noise variance generate correspoding noisy images labelled training data size stands noisy image patch size location includes noisy symbol clean symbol correspond subtle point unlike usual supervised learning may directly learn mapping remain using neural network defined learn minimizing lsupervised note training process minimizing done usual backpropagation variants sgd objective function converges sufficient iteration weight updates denote converged parameter given noisy image denoise update adaptively minimizing ladaptive starting adaptively ladaptive converges denoise converged parameter capability adaptively supervised trained weight parameter unique characteristic neural aide differentiates neural denoisers experimental results compared denoising performance proposed neural aide several denoising methods including mlp epll wnnm csf data experimental setup supervised training generated labelled training set using images available public datasets images images taken set berkeley segmentation dataset remaining images taken pascal voc dataset pascal voc images resized match resolution berkeley segmentation dataset corrupted images additive gaussian noise tested multiple noise levels namely built separate training set size noise level total number training data points dataset thus million evaluated performance denoisers standard test images barbara boat couple hill house lena man montage peppers standard berkeley images network fully connected layers nodes layer showed best result among tried models relu used activation functions used adam optimizer train network supervised training trained network epochs halved learning rate every epochs starting adaptive also trained epochs halved learning rate every epochs starting use regularization methods training moreover context data subtracted values make input network get centered around note affine mappping gets applied still original scale experiments used keras version tensorflow version backend nvidia gpu geforce cuda library version training neural aide section systematically show reasoning behind choosing context size empirical justification nonnegative contraint outputs validity combination supervised adaptive adaptive training noisy image first carried adaptive training solely given noisy image described section given noisy image randomly initialized weight parameters neural network trained objective function training image denoised figure shows psnr results standard test images varying values output activation functions linear positive log sigmoid noise level figure see adaptive training alone still result decent denoiser although psnr gap exists compared shown table see tend best context size adaptive training moreover choice output activation functions turns important discussion given activation function next section supervised training adaptive since limitation adaptive training alone apparent carried supervised training section took images berkeley segmentation dataset trained network varying values shown figure denoising noisy image done identically applying learned affine mapping noisy pixel note case carried experiments linear activation function see supervised training result much higher psnr values adaptive training already close also performance seems get saturated around experiments used encouraged result moved adaptively weight parameters minimizing 
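The adaptive objective plugs into the same pipeline as an estimated-loss function; a sketch, assuming the model above and a known noise variance on the normalized pixel scale, is shown below. Supervised pre-training is the identical loop with the plain squared error (aZ + b - x)^2 against clean targets.

```python
import tensorflow as tf

SIGMA2 = (25.0 / 255.0) ** 2   # illustrative: sigma = 25 grey levels, [0,1] scale

def estimated_mse(y_true, y_pred):
    """Lemma-based loss: y_true carries the noisy centre pixel z, shape (N, 1);
    y_pred carries the network's (a, b). No clean pixel is needed."""
    z = y_true[:, 0]
    a, b = y_pred[:, 0], y_pred[:, 1]
    return tf.reduce_mean(tf.square(a * z + b - z) + 2.0 * a * SIGMA2 - SIGMA2)

# model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss=estimated_mse)
# model.fit(contexts, noisy_centres[:, None], epochs=..., batch_size=...)
```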
objective function image initialized parameters learned supervised learning subtle issue regarding activation function describe comes difference among models huge adaptive training random initialization supervised training training images figure adaptive supervised training results standard test images figure trained supervised learning models linear positive output activation functions using images adaptively parameters given noisy image montage image figure show distributions slope intercept paramters model outputs given image shows change psnr value process adaptive figure see trained supervised learning linear output activation function values lie interval however image figure show many negative values produced linear activation readily seen examining form hinder negative values constraint shown figure negative values affine mapping sometime big effect process final denoising performance case psnr increases significantly supervised model however case montage figure suspect negative values sometimes hurt denoising performance greatly contrast put nonnegativitiy contstraint neural network observe stable process observed figure thus results neural aide uses positive activation function figure shows adaptive process standard images supervised model trained full training set images figures see learning done appropriately psnr improve also tested sigmoid activation result less montage montage montage montage psnr values adaptive figure distribution values montage supervised training linear lin positive pos activation functions distributions obtained models epoch psnr values psnr objective function figure psnr objective function value standard images quantitative evaluation standard images table summarizes denoising results compared recent standard images various noise levels show mean standard deviation psnr values baseline methods downloaded codes authors webpages ran code noisy images thus numbers compared fairly mlp could run selected noise levels stands neural aide supervised trained images models supervised learning best model terms epoch chosen based psnr thus practical model chosen heuristic rule stop training loss becomes smaller otherwise epochs table see significantly outperforms baselines average except wnnm difference mean psnr wnnm almost negligible tend smaller variance terms psnr wnnm comparing definitely see adaptive effective also noise level low improvement gets larger furthermore comparing mlp another neural network based denoiser uses much data points million exmample larger model confirm model efficiently uses data psnr mean std mean std mean std mean std mean std mlp epll wnnm table psnr comparsions standard benchmark images figure shows competitive comparison baselines figure plots number images psnr better baseline methods see method mostly outperforms baselines competitively including wnnm one main drawbacks mlp neural networks trained separately noise levels mismatch significantly hurts denoising performance supervised training neural aide also done similar way figure show adaptive effective overcoming limitation figure shows psnr results mismatched models row normalized psnr matched case diagonal element psnr values clearly see sensitivity psnr mismatch values show significant gaps compared diagonal values row hand figure shows psnr values mismatched supervised models adaptively correct clearly see psnr gaps mismatched supervised models significantly closed adaptive gives significant edge mlp competitive comparison psnr psnr figure competitive comparison baselines 
psnr mismatched psnr mismatched correct standard berkeley images table shows psnr results standard berkeley images clear see outperforms baseline methods including wnnm significant margins mlp epll wnnm table psnr comparisons standard berkeley images concluding remarks devised novel neural network based image denoiser neural aide algorithm devised different principle methods result show simple adaptive affine model neural aide learns differently pixel significantly outperform many strong baselines also adaptive neural aide successfully overcome mismatch problem serious drawback neural network based methods future work would like thoroughly carry experiments even noisier regime also since algorithm require noise gaussian additivity noise assumed would try types noise laplacian noise furthermore extending framework noise multiplicative noise would another interesting direction finally theoretical anayses method based information theory learning theory would another direction worth pursuing references dabov foi katkovnik egiazarian image denoising sparse transformdomain collaborative filtering ieee trans image processing simoncelli adelson noise removal via bayesian wavelet coring icip roth black field experts ijcv mairal bach ponce sapiro zisserman sparse models image restoration iccv zhang zuo feng weighted nuclear norm minimization applicaitons image denoising cvpr zoran weiss learning models natural image patches whole image restoration iccv schmidt roth shrinkage fields effective image restoration cvpr moon min lee yoon neural universal discrete denosier nips weissman ordentlich seroussi verdu weinberger universal discrete denoising known channel ieee trans inform theory moon weissman universal fir mmse filtering ieee transactions signal processing burger schuler harmeling image denoising plain neural networks compete cvpr xie chen image denoising inpainting deep neural networks nips weissman ordentlich weinberger merhav universal filtering via prediction ieee trans inform theory martin fowlkes tal malik database human segmented natural images application evaluating segmentation algorithms measuring ecological statistics iccv kingma adam method stochastic optimization iclr
convolutional neural networks histopathology image classification training using networks brady morteza shivam kimia lab university waterloo canada mathematics computer science department amirkabir university technology tehran iran oct bwkieffe tizhoosh explore problem classification within medical image based feature vector extracted deepest layer convolution neural networks used feature vectors several structures including networks transfer learning evaluate performance deep features versus cnns trained specific dataset well impact transfer learning small number samples experiments done kimia dataset consists histopathology training patches tissue texture classes along test patches evaluation result shows networks quite competitive training scratch well seem add tangible improvement justify additional training observed considerable improvement retrieval classification accuracy inception structure image retrieval medical imaging deep learning cnns digital pathology image classification deep features vgg inception ntroduction amid transition traditional pathology digital pathology scanners replacing microscopes rapidly capturing tissue characteristics digital formats opens new horizons diagnosis medicine hand need store thousands thousands specimens large physical archives glass samples relief many hospitals limited space hand acquiring image specimen enables systematic analysis collaborations possibilities last least diagnosis pathology arguable final frontier disease diagnosis however like technology digital pathology comes challenges imaging generally generates gigapixel files also require digital storage easy analyze via computer algorithms detection segmentation identification tissue types huge digital images pixels appears quite daunting task computer vision algorithms looking computer vision community emergence deep learning vast possibilities recognition classification seems lucky coincidence intend address obstacles digital pathology diverse deep architectures trained large set images imagenet project faces wild database perform difficult tasks like object classification face recognition results impressive one may objectively speak computational revolution accuracy numbers mid high become quite common deep networks trained millions images tested recognize unseen samples spite progress one observe applications deep learning digital pathology hast fully started yet major obstacle appears lack large labelled datasets histopathology scans properly train type neural networks requirement may still missing years come hence start designing training deep nets available datasets training scratch artificially increase number images data augmentation certainly obvious action also use nets trained millions images extract deep features last possibility could slightly train finetune nets adjust nature data use feature extractors classifiers paper investigate usage deep networks kimia via training scratch feature extraction results show employing network trained images may viable option background recent years researchers shown interest leveraging techniques digital pathology images images pose unique issues due high variation rich structures large dimensionality lead researchers investigate various image analysis techniques application digital pathology dealing large rich structures within scan researchers attempted segmentation local global scales example researchers conducted works segmentation various structures breast histopathology images using methods thresholding fuzzy clustering adaptive 
thresholding varying levels success applying methods histopathological images often desired computer aided diagnosis cad method adopted use image retrieval cbir system work done propose various cbir systems cad multiple groups recently hashing appear proceedings intern conf image processing theory tools applications ipta nov montreal canada methods employed image retrieval among hashing methods kernelized supervised hashing considered effective recently radon barcodes investigated potential method creating cbir utilized cnns relatively small mammography dataset achieve classification accuracy roc auc whereas handcrafted features able obtain accuracy currently interest using networks accomplish variety tasks outside original domain great interest medical tasks often lack comprehensive labeled data train deep network thus groups leveraged networks trained imagenet database consists million categorized images classes groups reported general success attempting utilize networks medical imaging tasks study explore evaluate performance cnn imaging data specifically used feature extractors without fine tuning digital pathology task process use emulate cases large dataset available besides extensive training may destroy network already learned values patch subsequently normalized patches finally downsized fed cnn architecture following steps first obtained patches scan based purely homogeneity threshold randomly sampled patches class leading much smaller training set patches selection patches training set viewed within fig fig shows testing samples relatively balanced kimia dataset whereas training set rather imbalanced different size frequency specimens main reasons imbalance accuracy calculation accuracy measures used experiments adopted chosen results papers could compared ntot testing patches psj belong sets psi looking set retrieved images experiment accuracy defined iii data data used train test cnns kimia consisting whole scan images wsis manually selected scans depicting diverse body parts distinct texture patterns images captured tissuescope bright field using lens image one determine resolution checking description tag header file instance resolution magnification resolution magnification dataset offers training patches manually selected test patches size locations test patches scans removed whitened mistakenly used training color staining neglected kimia dataset patches saved grayscale images kimia dataset publicly ntot accuracy defined total accuracy defined incorporating accuracy measurements resulting problem becomes much difficult attempting obtain acceptable results ethods experiment run using architecture networks provided keras python package utilizing network analyze effectiveness network using feature extractor transferring network weights medical imaging domain patch selection create kimia dataset scan divided patches pixels size overlap patches background pixels bright pixels set white ignored using homogeneity measure patch homogeneity selection criterion every patch homogeneity less ignored high threshold ascertains patch significant texture pattern ignored set patches scan randomly sampled patches selected used protocols deep network optimal setup varies applications however using network applying domains yielded better performing models decided final convolutional block block within final two inception blocks within would single fully connected layer http http appear proceedings intern conf image processing theory tools applications ipta nov montreal canada fig selection 
patches training scan within kimia dataset patches pixels size top left bottom right appear proceedings intern conf image processing theory tools applications ipta nov montreal canada fig instance distribution training set left testing set right kimia size followed output layer size chosen replace default fully connected layers found give better results optimizer used follows logic learning rate chosen small momentum used large selected ensure drastic changes within weights network training would destroy already learned keras data augmentation api used generate extra training samples network trained total epochs accuracy longer changing batch size softmax classification layer fully connected layers pretrained bottleneck features attached convolutional layers training final two inception blocks performed resulting networks transfer learned used classify test patches class activation mappings cams network randomly selected test patches viewed fig esults results experiments summarized table stated results quite similar training scratch using network feature extractor network delivering comparable results kimia whereas results similar model outperforming feature extractor produced best results minimally updating weights network time consuming task one may prefer utilize however one may prefer using training scratch net requires extra effort produces similar results linear svm cnn feature extractor using provided implementation specified architectures within keras network first used feature extractor without feature extractor last fully connected layer network prior classification used extracted used feature vector networks trained domains different image categories hence used classifier used deep features train linear support vector machine svm classification python package well libsvm used train svm classifiers linear kernel numpy scipy leveraged manipulate store data experiments iscussions surprising find simply using features network trained images see fig deliver results comparable network considerable effort resources trained scratch domain focus histopathology well simpler approach even able achieve noticeable accuracy increase overall performance kimia dataset another surprising effect transfer learning via able provide improvement compared extracting deep features network without change learned weights whereas improvement immediate perhaps obvious reaction finding enough samples millions histopathological images would use proper computational devices efficient training cnn would perhaps deliver cnn classifier proposed network kimia dataset using keras library convolutional layers first separated top fully connected layers training patches fed model create set bottleneck features initially new fullyconnected layers features used initialize weights fully connected mlp consisting one dense relu layer softmax classification layer next fully connected model attached convolutional layers training convolutional block except last block performed adjust classification weights similarily network fully connected layers replaced one dense relu layer appear proceedings intern conf image processing theory tools applications ipta nov montreal canada table comparing results training form scratch reported using deep features via network change classification network best scores highlighted bold scheme train scratch features net features net approach fig sample images imagenet project one may object using features learned images order classify highly sensitive images histopathology medical diagnosis 
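To make the deep-feature pipeline just described concrete, here is a minimal sketch, assuming Keras (TensorFlow backend) with VGG16 and scikit-learn rather than the authors' exact setup; the "fc2" layer matches the stated use of the last fully connected layer before classification, and the data arrays are random placeholders standing in for preprocessed Kimia patches.

import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.models import Model
from sklearn.svm import LinearSVC

# Pretrained VGG16; the 4096-dimensional output of the last fully
# connected layer ("fc2"), just before the softmax, serves as the
# deep-feature vector.
base = VGG16(weights="imagenet", include_top=True)
extractor = Model(inputs=base.input, outputs=base.get_layer("fc2").output)

def deep_features(patches):
    # patches: (n, 224, 224, 3) float32 array; the grayscale Kimia
    # patches are assumed to have been downsized from 1000x1000 and
    # replicated to three channels beforehand.
    return extractor.predict(preprocess_input(patches.copy()), verbose=0)

# Placeholder data only -- in practice these are the training/test patches.
X_train = np.random.rand(8, 224, 224, 3).astype("float32")
y_train = np.arange(8) % 2
X_test = np.random.rand(2, 224, 224, 3).astype("float32")

clf = LinearSVC()                       # linear kernel, as in the paper
clf.fit(deep_features(X_train), y_train)
predicted = clf.predict(deep_features(X_test))

Training the SVM on fixed deep features keeps the network untouched, which is exactly the "feature extractor without fine-tuning" regime compared against transfer learning in the results.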
however experiments kimia dataset shows features extracted images expressive enough compete networks trained histopathology images scratch source http deep network architecture well suited problem overly simplistic fully connected network however previously discussed problem given kimia dataset indeed hard problem likely due high variance different patches within given scan variability validated looking results fig two columns contain patches distinct patterns unique features cam first column shows network responds strongly unique structures within label strongly final patch whereas presented completely different patterns second column network responds strongly areas typically ones embody inner edges within sample shows evidence model least begun learn higher level fig activation maps using randomly selected patches kimia testing data patches within column class labels per column respectively activation maps created using keras visualization toolkit algorithm red areas influence label prediction best results clearly better transfer learning although statement supported comparable empirical evidence remains speculation sensitive field like medical imaging difficult train cnn case likely due number factors relative lack image data effect scaling patch use within appear proceedings intern conf image processing theory tools applications ipta nov montreal canada structures within individual patches investigation different architectures would likely improve upon results would aggressive augmentation sawyer iii dunnmon lam xiao rubin optimizing visualizing deep learning classification breast tumors corr vol online available http pan yang survey transfer learning ieee transactions knowledge data engineering vol oct shin roth gao nogues yao mollura summers deep convolutional neural networks detection cnn architectures dataset characteristics transfer learning ieee transactions medical imaging vol may deng dong socher imagenet hierarchical image database computer vision pattern recognition cvpr ieee conference ieee girshick donahue darrell malik convolutional networks accurate object detection segmentation ieee transactions pattern analysis machine intelligence vol jan bar diamant wolf greenspan deep learning training used chest pathology identification proc spie vol lecun bottou bengio haffner learning applied document recognition proceedings ieee vol nov babaie kalra sriram mitcheltree zhu khatami rahnamayan tizhoosh classification retrieval digital pathology scans new online available http simonyan zisserman deep convolutional networks image recognition corr vol szegedy vanhoucke ioffe shlens wojna rethinking inception architecture computer vision proceedings ieee conference computer vision pattern recognition chollet keras https tajbakhsh shin gurudu hurst kendall gotway liang convolutional neural networks medical image analysis full training fine tuning ieee transactions medical imaging vol may pedregosa varoquaux gramfort michel thirion grisel blondel prettenhofer weiss dubourg vanderplas passos cournapeau brucher perrot duchesnay machine learning python journal machine learning research vol chang lin libsvm library support vector machines acm transactions intelligent systems technology vol software available http van der walt colbert varoquaux numpy array structure efficient numerical computation computing science engineering vol online available http jones oliphant peterson scipy open source scientific tools python online accessed online available http seltzer improved bottleneck features 
using pretrained deep neural networks twelfth annual conference international speech communication association kotikalapudi contributors https selvaraju cogswell das vedantam parikh batra visual explanations deep networks via localization see https zeiler fergus visualizing understanding convolutional networks cham springer international publishing online available http vii onclusions retrieval classification histopathological images useful challenging tasks analysis diagnostic pathology whole scan imaging wsi generates gigapixel images immensely rich details exhibit tremendous interand variance feature extractor transferlearned network able offer increases classification accuracy kimia dataset compared cnn trained scratch comparatively low performance latter could due architecture well suited problem lack sufficient number training images inherent difficulty classification task highly variable histopathology images work would warrant using different architectures comparison aggressive data augmentation potentially increasing size training samples used kimia dataset however feature extractor models able compete methods reported literature therefore show potential improvements acknowledgements authors would like thank huron digital pathology waterloo canada continuing support eferences gurcan boucheron madabhushi rajpoot yener histopathological image analysis review ieee reviews biomedical engineering vol naik doyle feldman tomaszewski madabhushi gland segmentation computerized gleason grading prostate histology integrating domain specific information miaab workshop karvelis fotiadis georgiou syrrou watershed based segmentation method multispectral chromosome images classification engineering medicine biology society embs annual international conference ieee ieee petushi garcia haber katsinis tozeren computations histology images reveal gradedifferentiating parameters breast cancer bmc medical imaging vol zhang liu dundar badve zhang towards largescale histopathological image analysis image retrieval ieee transactions medical imaging vol feb liu wang jiang chang supervised hashing kernels ieee conference computer vision pattern recognition june tizhoosh barcode annotations medical image retrieval preliminary investigation image processing icip ieee international conference ieee tizhoosh zhu chaudhari mehdi minmax radon barcodes medical image retrieval international symposium visual computing springer khatami babaie khosravi tizhoosh salaken nahavandi medical image classification image retrieval ieee canadian conference electrical computer engineering ccece april
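As a companion to the paper above, here is a sketch of the transfer-learning protocol it describes: a single dense ReLU layer plus a softmax output on top of the convolutional base, with only the final convolutional block left trainable and SGD run with a small learning rate and large momentum. This is an illustrative reconstruction, not the authors' code; the dense-layer width, learning rate, and the class count (24 tissue classes in Kimia Path24) are placeholders where the text omits exact values.

from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD

NUM_CLASSES = 24                         # Kimia Path24 tissue classes
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze everything except the final convolutional block (block5);
# for Inception-V3 the final two inception blocks would be left
# trainable instead, as described above.
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")

x = Flatten()(base.output)
x = Dense(256, activation="relu")(x)     # width not given in the text; 256 is a guess
out = Dense(NUM_CLASSES, activation="softmax")(x)
model = Model(base.input, out)

# Small learning rate and large momentum, so that fine-tuning does not
# destroy what the network has already learned.
model.compile(optimizer=SGD(learning_rate=1e-4, momentum=0.9),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(...) would follow, with Keras' data-augmentation API
# supplying extra training samples and the fully connected head
# pre-trained on bottleneck features before unfreezing block5.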
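An editorial note on the paper above: the displayed formulas for its accuracy measures did not survive text extraction. In the Kimia Path24 convention, with \Gamma_s the set of test patches belonging to scan s, R_s the set of patches classified (or retrieved) as scan s, and n_{tot} = \sum_s |\Gamma_s|, they presumably read

\eta_p \;=\; \frac{1}{n_{\mathrm{tot}}}\sum_{s=0}^{23}\lvert R_s \cap \Gamma_s\rvert,
\qquad
\eta_W \;=\; \frac{1}{24}\sum_{s=0}^{23}\frac{\lvert R_s \cap \Gamma_s\rvert}{\lvert \Gamma_s\rvert},
\qquad
\eta_{\mathrm{total}} \;=\; \eta_p \times \eta_W .

Multiplying the patch-level accuracy by the scan-level accuracy is what makes the total measure demanding, which is why, as the text notes, obtaining acceptable results becomes much more difficult under it.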
dec triangulated equivalences reconstruction classifying spaces hiroki matsui abstract algebra algebraic geometry modular representation theory commutative ring theory study algebraic objects associated triangulated categories topological spaces paper consider relationship triangulated categories topological spaces precise explore necessary conditions derived equivalence noetherian schemes stable equivalence finite groups singular equivalence commutative noetherian rings using associated topological spaces introduction common approach many branches algebra including algebraic geometry modular representation theory commutative ring theory assign algebraic object scheme finite group commutative noetherianring triangulated category perfect derived category dperf stable module category mod singularity category dsg topological space underlying topological spaces proj sing studying triangulated category topological space aim grasp structure original algebraic object motivation natural ask kind relationship exists algebraic objects triangulated categories dperf mod dsg topological spaces proj sing paper consider question precisely following question let algebraic objects corresponding triangulated categories corresponding topological spaces respectively implication hold mathematics subject classification key words phrases triangulated category triangulated equivalence classifying space classifying support data scheme finite complete intersection author partly supported jsps fellows hiroki matsui introduce notion classifying space triangulated category see definition prove following result gives machinery answer question theorem theorem let essentially small triangulated categories classifying spaces respectively implication holds key role prove theorem played support theory triangulated categories tensor triangulated categories support theory developed balmer powerful tool show reconstruction theorem since focus triangulated categories without tensor structure need invent support theory without tensor structure algebraic geometry let scheme derived category perfect complexes called perfect derived category denoted dperf case spec affine well known original scheme reconstructed dperf dperf indeed two commutative rings perfect derived categories equivalent isomorphic see ric proposition hence dperf dperf spec spec topological spaces however result longer holds schemes fact exist lot schemes dperf dperf see muk perf perf triangulated equivalence said derived equivalent section shall prove underlying topological spaces certain class schemes reconstructed perfect derived categories theorem theorem let noetherian schemes open subschemes affine schemes implication dperf dperf topological spaces holds theorem recovers noetherian rings affine scheme typical example scheme punctured spectrum local ring application theorem obtain derived equivalence yields equality dimensions modular representation theory modular representation theory finite groups studied various contexts algebraic viewpoint finite group studied group algebra stable module category mod field whose characteristic divides order mod triangulated category consisting finitely generated modulo projectives hand cohomology ring gives approach study finite group topological aspect isomorphic cohomology ring classifying space see ben chapter instance second main result section following triangulated equivalences reconstruction classifying spaces theorem theorem let resp field characteristic resp let resp finite resp implication mod mod proj proj 
topological spaces holds exists triangulated equivalence mod mod say stably equivalent application theorem stable equivalence yields equal commutative ring theory let left noetherian ring singularity category definition verdier quotient dsg modr introduced buchweitz buc mod stands category finitely generated left modr bounded derived category singularity categories deeply investigated motivations che ste tak connected homological mirror symmetry conjecture orlov one important subjects representation theory rings classify rings certain category equivalence example left noetherian rings said morita equivalent mod mod abelian categories derived equivalent mod mod triangulated categories singularly equivalent dsg dsg triangulated categories well known equivalences following relations morita equivalence derived equivalence singular equivalence complete characterizations morita derived equivalence already obtained mor ric singular equivalence quite difficult characterize even case commutative rings indeed examples singular equivalences commutative noetherian rings known furthermore known examples singular loci rings homeomorphic thus natural ask following question question let commutative noetherian rings singular loci homeomorphic singularly equivalent section show question affirmative certain classes commutative noetherian rings precise shall prove following theorem theorem theorem let commutative noetherian local rings locally hypersurfaces punctured spectra assume either complete intersection rings rings maximal ideal implication dsg dsg sing sing topological spaces holds hiroki matsui say ideal commutative ring sequence decomposable moreover prove singular equivalence localizes using homeomorphism organization paper follows section introduce notions support data classifying support data given triangulated category develop support theory without tensor structure finally prove theorem section connect results obtained section support theory tensor triangulated categories study reconstructing topologies balmer spectra without tensor structure using method prove theorem section prove theorem give examples commutative rings singularly equivalent throughout paper categories assumed essentially small two triangulated category resp topological spaces notation resp means equivalent triangulated categories resp homeomorphic unless otherwise specified support theory without tensor structure section discuss support theory triangulated categories without tensor structure throughout section denotes triangulated category shift functor first let recall basic definitions used section definition let topological space triangulated category say sober every irreducible closed subset closure exactly one point say noetherian every descending chain closed subspaces stabilizes say subset closed specialization namely element belongs closure contained note union closed subspaces say additive full subcategory thick satisfies following conditions closed taking shifts closed taking extensions triangle belong iii closed taking direct summands two objects direct sum belongs subcategory denote thickt smallest thick subcategory containing introduce notion support data triangulated category definition let triangulated category support data pair topological space assignment assigns object closed subset satisfying following conditions triangulated equivalences reconstruction classifying spaces triangle support data naturally appear various areas algebras example let commutative noetherian ring dsg define singular support ssupp 
sing dsg sing ssuppr support data dsg indeed follows ail theorem lemma ssuppr closed subset sing ssuppr satisfies condition definition remained conditions clear localization functor dsg dsg exact assume gorenstein denote category maximal cohenmacaulay modules satisfying extir integers recall stable category category whose objects set morphisms given homr homr consists maps factoring free stable category structure triangulated category see hap moreover natural inclusion induces triangle equivalence dsg buc thus obtain support data sing suppr using equivalence supp ssupp sing let noetherian scheme dperf define cohomological support suppx dperf suppx suppx finite union supports coherent modules hence closed subspace moreover suppx support data dperf localization exact details please see tho let field characteristic finite group divides order case gorenstein rings define stable category mod mod also triangulated category denote odd direct sum cohomologies coefficient structure noetherian ring using cup product consider homogeneous prime spectrum proj denote support variety finitely generated closed space proj pair proj becomes support data mod details please refer ben chapter remark actually examples support data satisfy following stronger condition hiroki matsui definition let full subcategory say satisfies remark closed taking direct summands example full subcategory full subcategory test objects see definition let fix following notations notation let triangulated category topological space set thick subcategories thu thick subcategories containing object spcl specialization closed subsets nesc subsets nec closed subsets irr irreducible closed subsets let support data thick subcategory specializationclosed subset one easily check subset thick subcategory therefore obtain two maps respect inclusion relations definition let support data say classifying support data respect noetherian sober space maps restrict mutually inverse bijections thu nesc case say classifying space respect say simply classifying support data resp classifying space mean classifying support data resp classifying space respect remark classifying support data classifies thick subcategories containing indeed map nesc thu injective image thus obtain correspondence spcl particular satisfies condition remark obtain correspondence spcl every classifying support data automatically satisfies following realization property triangulated equivalences reconstruction classifying spaces lemma let classifying support data respect closed subset object proof since noetherian sober space may assume assumption one hence element obtain implies definition classifying support data respect contains object conclude let give two notations definition let say thick subcategory object thickt denote pthu set thick subcategories say thick subcategory thickt pthu implies denote irru set thick subcategories following lemma shows using classifying support data respect also classify thick subcategories thick subcategories lemma let classifying support data respect correspondence thu nesc restricts correspondences pthu irru nec irr proof note thickt therefore injective map thu nesc induces well defined injective map pthu nec surjectivity already shown lemma next show second correspondence thu one thickt hiroki matsui hand nesc one thickt applying equality get thickt let irreducible closed subset assume thickt pthu equality obtain equality thickt since irreducible hence shows conversely take thick subcategory assume closed subsets equality get thickt since 
therefore thus irreducible observations show second correspondence lemma show following uniqueness result classifying support data respect proposition let classifying support data respect homeomorphic proof first note topological space natural map irr bijective sober define maps composites irr irru irr irr irru irr well defined mutually inverse bijections lemma fix one hence particular belongs therefore conversely argument shows applying inclusion obtain therefore thus conclude since noetherian equation means closed map similarly also closed map following theorem main result section theorem consider following setting triangulated categories triangulated equivalences reconstruction classifying spaces respectively classifying support data respect respectively suppose triangle equivalence homeomorphic proof assumption induces correspondence thu thu object set easily verify pair support data furthermore becomes classifying support data respect indeed thu nesc obtain equalities get equalities thus give mutually inverse bijections thu nesc consequently obtain two classifying support data respect hence homeomorphic proposition comparison tensor triangulated structure section discuss relation support theory discussed section support theory tensor triangulated categories recall tensor triangulated category consists triangulated category together symmetric monoidal tensor product unit object compatible triangulated structure precise definition please refer hps appendix example let noetherian scheme dperf tensor triangulated category denotes derived tensor product let field finite group mod tensor triangulated category throughout section fix tensor triangulated category begin recalling basic definitions used support theory tensor triangulated categories definition full subcategory called thick tensor ideal thick subcategory closed action subcategory denote smallest thick tensor ideal containing hiroki matsui thick subcategory define radical denotes tensor product lemma radical thick subcategory always thick tensor ideal thick tensor ideal called radical satisfies thick tensor ideal called prime satisfies denote spc set prime thick tensor ideals balmer support defined sppm spc set spc topological space closed basis sppm call balmer spectrum let topological space say subset thomason subset union closed subsets whose complements denote thom set thomason subsets note thom spcl say support data tensorial satisfies tensorial support data called simply support data radical thick tensor ideal every subset say tensorial support data classifying noetherian sober space correspondence radical thick tensor ideals spcl balmer showed following celebrated result theorem lemma theorem pair spc spp tensorial support data correspondence radical thick tensor ideals fspp gspp thom spc remark topological space noetherian every subset thomason therefore theorem shows spc spp classifying tensorial support data provided spc noetherian recall tensor triangulated category rigid functor right adjoint every object strongly dualizable natural map isomorphism rigid spc spp satisfies stronger condition lemma assume rigid support data spc spp satisfies condition remark triangulated equivalences reconstruction classifying spaces proof take object spp corollary positive integer hand hps lemma belongs thickt positive integer since every object strongly dualizable therefore using induction conclude note tensorial classifying support data classifying tensorial support data indeed tensorial classifying support data obtain equalities 
following lemma gives criterion converse implication fact lemma let classifying tensorial support data suppose rigid following equivalent correspondence spcl classifying support data every thick subcategory thick thickt proof lemma theorem theorem satisfies condition remark therefore means conditions remark assumption every thick subcategory form subset hand radical thick tensorial support data assumption thick subcategory thickt thick tensor ideal thus belongs thickt note strongly dualizable family strongly dualizable objects forms thick subcategory hps theorem therefore every object thickt strongly dualizable thus object belongs hps lemma proposition shows every thick tensor ideal radical hand thick subcategory one easily verify subcategory thick containing thus obtain thickt hence thick discussion conclude every thick subcategory radical thick shows implication following corollaries direct consequences lemma proposition theorem corollary let rigid tensor triangulated category assume balmer spectrum spc noetherian thickt classifying support data homeomorphic spc corollary let rigid tensor triangulated categories hiroki matsui spc spc noetherian generated unit objects equivalent triangulated categories spc spc homeomorphic next consider applications corollaries tensor triangulated categories appeared example thomason showed following classification theorem thick tensor ideas dperf theorem tho theorem let noetherian scheme suppx classifying tensorial support data dperf application corollary reconstruct underlying topological spaces certain class schemes perfect derived categories without tensor structure theorem let noetherian schemes open subschemes affine schemes derived equivalent homeomorphic particular topologically determined properties dimensions numbers irreducible components noetherian schemes preserved derived equivalences proof first let remark functor dperf dperf right adjoint rhomox dperf dperf dperf moreover dperf rigid note scheme structure sheaf ample thus every thick subcategory dperf thick tensor ideal tho proposition applying corollary obtain result remark let noetherian schemes already remarked introduction affine derived equivalence dperf dperf implies isomorphic schemes theorem dperf dperf equivalent tensor triangulated categories isomorphic schemes next consider stable module categories group rings finite groups case following classification theorem given algebraically closed field general theorem bcr bik let field characteristic finite group divides order support data proj classifying tensorial support data mod applying corollary classifying tensorial support data obtain following result theorem let resp field characteristic resp resp finite resp stably equivalent proj proj homeomorphic proof mod functor mod mod right adjoint homk mod mod addition mod rigid moreover pgroup one simple module therefore mod thickmod applying corollary done triangulated equivalences reconstruction classifying spaces recall finite group definition sup quillen qui showed dimension cohomology ring equal thus invariant stable equivalences corollary let theorem assume stable equivalence remark let field characteristic lin corollary exists stable equivalence lin corollary exists stable equivalence morita type necessary condition singular equivalences recall commutative noetherian rings said singularly equivalent singularity categories equivalent triangulated categories known examples singular equivalences following example dsg dsg regular dsg dsg periodicity yos chapter let algebraically 
closed field characteristic set dsg dsg remark singular equivalences singular loci sing sing homeomorphic fact cases clear consider case sing spec spec spec sing first last equalities known jacobian criterion let give definitions appearing statement main theorem section definition let commutative noetherian local ring say ideal sequence decomposable local ring said complete intersection regular local ring sequence completion isomorphic say hypersurface take sequence length local ring said locally hypersurface punctured spectrum hypersurface every prime ideal hiroki matsui following theorem main result section theorem let commutative noetherian local rings locally hypersurfaces punctured spectra assume either complete intersection rings rings maximal ideal singularly equivalent sing sing homeomorphic ring satisfying condition theorem theorem shows sing ssuppr classifying support data dsg therefore statement theorem follows theorem therefore problem case ring satisfying condition theorem takahashi tak classified thick subcategories dsg containing residue field using singular locus sing singular support ssuppr would like apply theorem also case problem whether condition containing residue field preserved stable equivalences show later condition actually preserved singular equivalences local complete intersection rings discuss replacing residue field categorically defined object first let recall notion test module definition let noetherian ring say finitely generated test module finitely generated torr pdr example noetherian local ring syzygy residue field test module commutative noetherian rings admitting dualizing complexes gorenstein rings another characterization test modules theorem cdt theorem let commutative noetherian ring admitting dualizing complex test modules nothing finitely generated satisfying following condition finitely generated extnr idr motivated theorem introduce following notion definition let triangulated category say test object object homt denote full subcategory consisting test objects following lemma shows consider notion test object generalization notion test module lemma let gorenstein ring one test module triangulated equivalences reconstruction classifying spaces proof theorem show mod extr satisfy idr fix maximal finitely generated since gorenstein maximal one therefore get isomorphisms exti positive integer therefore get isomorphisms hom extn denotes dimension thus done since free finite injective dimension let recall several classes subcategories modules definition additive subcategory mod called resolving satisfies following conditions closed extensions exact sequence mod belong closed kernels epimorphisms exact sequence mod belong iii contains projective finitely generated denote resr smallest resolving subcategory mod containing additive subcategory mod called thick satisfies property exact sequence mod belong third finitely generated denote thickr smallest thick subcategory mod containing lemma let triangulated category object thickt contains test object also test object proof take object homt set homt one easily verify thick subcategory assumption contains test object contains thus must zero hence test object next proposition plays key role prove main theorem proposition let local complete intersection ring finitely generated following equivalent test module resr thickr thickdb mod thickdsg thickcm hiroki matsui proof notice resr resr thickr thickr thickdb mod thickdb mod thickdsg thickdsg test module hence may assume maximal resr thickr thickdb mod mod 
first inclusion directly follows definition second equality given theorem moreover composition functor mod dsg sends inverse image thickcm thickdb mod therefore implications hold true furthermore using lemma lemma implication follows thus remains show implication assume test module recall complexity cxr finitely generated dimension support variety associated see details cdt proposition maximal complexity namely cxr codim thanks prime avoidance lemma take sequence length set artinian complete intersection ring cxr cxr codimr codim moreover one acka denotes algebraic closure follows fact closed subvarieties affine space acka hence theorem belongs thickdb mod result get thickdb mod mod thickdb mod mod thickr second equality uses theorem since thickr resr corollary deduce resr using tak lemma gathering tak theorem theorem lemma proposition obtain following proposition proposition let noetherian local ring satisfies condition theorem sing ssuppr classifying support data dsg respect dsg satisfies condition theorem sing ssuppr classifying support data dsg proof theorem almost done proof theorem use proposition theorem let remark test objects preserved singular equivalences remark hypersurface ring triangulated category dsg becomes pseudo tensor triangulated category tensor triangulated category without unit shown implicitly paper two hypersurfaces singular equivalence preserves tensor products sing sing homeomorphic indeed sing reconstructed dsg using pseudo tensor triangulated structure triangulated equivalences reconstruction classifying spaces since theorem gives necessary condition singular equivalences generate many pairs rings singularly equivalent let start following lemma lemma let local complete intersection ring isolated singularity integer ring local complete intersection ring locally hypersurface punctured spectrum sing homeomorphic spec proof course local complete intersection ring natural inclusion induces homeomorphism spec spec one easily check spec therefore locally hypersurface punctured spectrum sing spec corollary let local complete intersection rings isolated singularities assume spec spec homeomorphic integers one dsg dsg particular dsg dsg denotes trivial extension ring commutative ring proof lemma obtain satisfies condition theorem sing spec homeomorphic spec sing thus conclude dsg dsg theorem second statement follows isomorphism following corollary says equivalence fails ring corollary let regular local ring assume isolated singularity one dsg dsg proof sing spec spec sing different dimensions hence homeomorphic last paper show singular equivalence localizes lemma let gorenstein local ring prime ideal full subcategory dsg dsg thick triangle equivalence dsg dsg proof using triangle equivalence dsg may show triangle equivalence note localization functor triangulated since ker thick subcategory induces triangulated hiroki matsui functor thus verify dense fully faithful dense let take finite free presentation rpn rpm viewed entries write aij aij cokernel coker aij finitely generated since maximal obtain isomorphisms shows functor dense faithful let morphism given fraction morphisms mapping cone belongs assume homrp isomorphism homr homrp homr since isomorphism mapping cone morphism belongs thus shows faithful iii full let morphism isomorphism homr homrp morphism since mapping cone obtain morphism shows full corollary let complete intersection rings locally hypersurfaces punctured spectra singularly equivalent homeomorphism sing sing singularly equivalent sing proof lemma may 
consider category let triangle equivalence take homeomorphism sing sing given proposition theorem construction satisfies supps suppr sing moreover following diagram commutative tht tht fsupp nesc sing nesc sing map defined respectively let element sing set sing specializationclosed subset sing establish two claims triangulated equivalences reconstruction classifying spaces claim gsuppr proof claim let since one suppr thus suppr hence gsuppr next take gsuppr suppr means belong suppr therefore hence claim sing proof claim one easily check order isomorphism respect inclusion relations since sing unique maximal element sing sing also unique maximal element shows two claims obtain gsuppr gsupps gsupps second equality comes commutative diagram last equality shown proof claim consequently triangle equivalence induces triangle equivalences acknowledgments author grateful supervisor ryo takahashi many supports helpful comments references avramov buchweitz support varieties cohomology complete intersections invent math ail avramov iyengar lipman reflexivity rigidity complexes commutative rings algebra number theory balmer presheaves triangulated categories reconstruction schemes math ann balmer spectrum prime ideals tensor triangulated categories reine angew math bass murthy grothendieck groups picard groups abelian group rings ann math ben benson representations cohomology cohomology groups modules cambridge stud adv math cambridge university press bik benson iyengar krause stratifying modular representations finite groups ann math bcr benson carlson rickard thick subcategories stable module category fund math buc buchweitz maximal modules gorenstein rings unpublished manuscript http carlson iyengar thick subcategories bounded derived category finite group trans amer math soc cdt celikbas dao takahashi modules detect finite homological dimensions kyoto math che chen singularity category algebra radical square zero doc math dao takahashi radius subcategory modules algebra number theory hiroki matsui hap happel triangulated categories representation theory finite dimensional algebras london math soc lecture note series cambridge university press hps hovey palmieri strickland axiomatic stable homotopy theory mem amer math soc iyama wemyss singular derived categories terminalizations maximal modification algebras adv math krause stevenson note thick subcategories stable derived categories nagoya math lin linckelmann stable equivalences morita type selfinjective algebras math zeit mor morita duality modules applications theory rings minimum condition sci tokyo kyoiku daigaku sect muk mukai duality application picard sheaves nagoya math nasseh takahashi local rings maximal ideal preprint orlov equivalences derived categories surfaces math sci olrov triangulated categories singularities model proc steklov inst math qui quillen spectrum equivariant cohomology ring ann math ric rickard morita theory derived categories london math soc ste stevenson subcategories singularity categories via tensor actions compos math tak takahashi classifying thick subcategories stable category modules adv math tho thomason classification triangulated subcategories compos math yos yoshino modules rings london mathematical society lecture note series cambridge university press cambridge triangular spectrum matrix factorizations singular locus proc amer math soc graduate school mathematics nagoya university furocho chikusaku nagoya aichi japan address
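An editorial note on the paper above: the list of conditions in its definition of a support data was lost in extraction. Following Balmer's (non-tensorial) axioms, which the paper adopts, a support data on a triangulated category T is presumably a pair (X, \sigma) with X a topological space and \sigma(M) \subseteq X closed for every object M, such that

(1)\ \sigma(0)=\emptyset; \qquad
(2)\ \sigma(\Sigma M)=\sigma(M); \qquad
(3)\ \sigma(M\oplus N)=\sigma(M)\cup\sigma(N);
(4)\ \sigma(M)\subseteq\sigma(L)\cup\sigma(N)\ \text{for every exact triangle}\ L\to M\to N\to\Sigma L .

Such a support data is classifying when X is a noetherian sober space and the assignments W \mapsto \{M \mid \sigma(M)\subseteq W\} and X' \mapsto \bigcup_{M\in X'} \sigma(M) restrict to mutually inverse bijections between the specialization-closed subsets of X and the relevant thick subcategories of T, which is the correspondence the paper exploits throughout.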
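Similarly, the displayed definitions of the singularity category and the singular support in the example above were garbled; following Buchweitz and Avramov–Iyengar–Lipman (cited as [Buc] and [AIL]) they presumably read

\mathrm{D_{sg}}(R)\;:=\;\mathrm{D^b}(\operatorname{mod}R)\,/\,\mathrm{D^{perf}}(R),
\qquad
\operatorname{SSupp}_R(M)\;:=\;\{\mathfrak p\in\operatorname{Sing}R \mid M_{\mathfrak p}\not\cong 0\ \text{in}\ \mathrm{D_{sg}}(R_{\mathfrak p})\},

so that \operatorname{SSupp}_R(M) is a closed subset of \operatorname{Sing}R and (\operatorname{Sing}R, \operatorname{SSupp}_R) is the support data for \mathrm{D_{sg}}(R) used in the reconstruction theorems.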
milp based heuristics eternity puzzle oct fabio salassaa wim vancroonenburgb tony wautersb federico della crocea greet vanden bergheb politecnico torino digep corso duca degli abruzzi torino italy leuven department computer science codes gebroeders smetstraat gent belgium abstract present paper considers hybrid local search approach eternity puzzle unsigned rectangular edge matching puzzles general original linear programming milp formulation novel formulation presented problem although presented formulations remain computationally intractable medium large sized instances serve basis developing heuristic decompositions large scale neighbourhoods side product formulation new instances published academic research community two reasonably well performing constructive methods presented used determining initial solution local search approach experimental results confirm local search improve results obtained constructive heuristics quite competitive state art procedures keywords edge matching puzzle hybrid approach local search introduction eternity puzzle eii commercial edge matching puzzle square tiles four coloured edges must arranged grid corresponding author email address tony wauters preprint submitted october tile edges matched addition complete solution requires grey patterns appear subset tiles matched outer edges grid illustration complete solution small size puzzle provided figure figure solution eternity edge matching puzzle size image generated eternity editor http accessed january eii puzzle created christopher monckton released toy distributor tomy july along puzzle release large cash prize million usd announced awarded first person could solve puzzle expected competition attracted considerable attention many efforts made tackle challenging problem yielding interesting approaches results however complete solution ever generated meanwhile final scrutiny date cash price december passed leaving large money prize unclaimed eii puzzle belongs general class edge matching puzzles shown many approaches edge matching puzzles available literature constraint programming approaches developed addition metaheuristics backtracking evolutionary methods methods translate problem sat formulation solve sat solvers extensive literature overview topic provided provides survey complexity puzzles present paper introduces novel linear programming milp model novel based formulation puzzles size formulations serve components heuristic decompositions used local search approach remainder paper structured follows section presents milp maxclique formulations section several hybrid heuristic approaches introduced computational results presented section final conclusions drawn section problem formulations mixed integer linear programming formulation novel linear programming model developed eii puzzle problem following notation used puzzle consists square onto tiles need placed index used refer tiles indices denote rows resp columns puzzle board index refers rotation tile means rotated means rotated clockwise etc coefficient ctt resp equal tile colour top resp bottom left right position rotated decision variables milp model defined follows tile placed row column rotation otherwise right edge position unmatched otherwise bottom edge position unmatched otherwise model defined follows min crt clt crt clt cbt ctt cbt ctt ctt cbt clt crt objective function expression minimises number unmatched edges inner region puzzle constraints indicate tile must assigned exactly one position one rotation constraints require 
exactly one tile must assigned position edge constraints force variables take value tiles positions unmatched similarly constraints vertical edge variables finally constraints ensure border edges matched gray frame colour point constraining objective function zero unmatched edges allowed turns model feasibility problem every feasible solution also optimal however preliminary testing showed latter model relevant small size problem instances milp solver needs stopped prematurely feasibility model solution returned clique formulation eii puzzle decision problem modelled reduces well known decision version clique problem follows given parameter undirected graph clique problem calls finding subset pairwise adjacent nodes called clique cardinality greater equal let nodes graph correspond variables formulation introduced section node thus represents tile given position puzzle given rotation nodes connected iff conflict nodes puzzle possible causes conflicts unmatching colors adjacent positions tile assigned different positions tile assigned position different rotations different tiles assigned position objective find clique size size puzzle puzzle size optimal solution clique number nodes number edges graph density milp number variables number constraints thread time threads time table results obtained maximum clique formulation solved algorithm milp formulation solved cplex small size edge matching puzzle instances comparison milp model clique model applicability milp model clique model investigated follows initial testing performed set small puzzle instances ranging refer section information instances milp model implemented using cplex state art heuristic used solving maximum clique problem kindly provided authors heuristic one parameter number selections computing time algorithm linear respect tested heuristic milp model max clique heuristic tested modern desktop table shows results obtained max clique formulation milp formulation instance report clique formulation number nodes number edges optimal solution namely max number matching edges density best number matching edges average computing time seconds runs number variables constraints reported milp formulation solutions depicted bold optimal results show instances size easily solved using state art maximum clique intel core cpu algorithm instances size could solved completely even algorithm executed higher values runs milp also able solve size however clique formulation significantly faster size upwards note edge matching puzzles correspond large difficult clique instances current max clique solvers able find optimal solution provide corresponding max clique instances instances academic larger size instances hard manage graph file example larger solution approaches milp model clique formulation presented previous section proved computationally intractable medium sized instances size appears restricted execution time limited one day true eii puzzle instance still far beyond grasp models however models serve basis well performing heuristics presented following paragraphs greedy heuristic greedy constructive heuristic developed problem studied heuristic based subproblem optimisation puzzle divided regions considering individual rectangular regions regions consecutively constructed employing variant milp model presented section first introduce notion partial solution subset tiles assigned subset positions given partial solution model modified considers positions region aims assign tiles tiles assigned elsewhere addition restrict rectangular 
region denoted rmin rmax positions region instances downloaded https generator instances available upon request authors figure illustrates model modified solve region given partial solution example required select available tiles tiles already assigned region way unmatched edges minimised hence order consider region objective function modified follows min remaining tiles must selected assigned region therefore constraints modified follows note inequality indicates tiles selected similarly constraints also suitably modified order take account specific region considered note edge constraints forcing values variables also hold rows columns matching boundaries previously solved regions enables building solution unmatched edges region boundaries partial optimization model applied solve regions sequentially thus constructing final complete solution initially optimised region disjoint region optimised variables corresponding region optimally assigned milp solver algorithm presents pseudocode approach algorithm greedy heuristic require decomposition regions initial partial solution tiles assigned apply milp model region given get new partial solution tiles assigned end return puzzle size differently sized subsets tiles tested assess quality approach preliminary tests regions varying partial solution hrc vrc empty region figure illustration model modified considers positions region given partial solution region tiles size tiles size performed eii puzzle instance preliminary analysis revealed cpu time required iteration greedy heuristic limits subset size tiles roughly corresponds milp variables first region real eii puzzle clearly increased number tiles leads better results however cpu time needed compute optimal solution limiting use hybrid framework backtracking constructive heuristic backtracking version greedy heuristic also developed main idea namely building complete solution constructing optimal regions greedy heuristic backtracking version however restricts optimal value subproblem zero tiles region match internally respect tiles outside region whenever subproblem determined infeasible completely edge matching region constructed procedure backtracks previous region order find new assignment region may afterwards enable constructing feasible assignment next region process repeated backtracks sufficient find complete solution model suitably modified used build partial solutions let current region considered procedure related partial solution corresponding milp model solved whenever lower bound milp model related region detected greater zero optimisation region stopped instead previous region reconsidered order obtain new partial solution value let set variables value solution previous partial solution must cut searching solution following new constraint added model rationale force least one variables set equal zero solution previous region lead zero lower bound current region procedure backtracks searches new solution region due enumerative nature procedure lead incomplete solutions despite long computation times decided limit backtracking procedure fixed time limit greedy heuristic continues complete solution generated backtracking heuristic sketched algorithm recursive method backt rackin heu rist obtained attempts solve current region given partial solution previous region lower bound current region greater method backtracks previous level however lower bound still perfectly matched assignment found heuristic attempts solve next region continue calling recursively puzzle solved shown 
infeasible given current assignments latter case current partial solution excluded new partial solution constructed different previously excluded partial solution timeout reached method continue best partial solution solve remaining regions greedy heuristic discussed previous section local search approach local search approach developed improve solutions generated constructive heuristics random solution key idea test initial complete solution generated heuristics whether neighbourhood still improve current solution local search method steepest descent search tries improve solution following neighbourhoods border optimisation region optimisation tile assignment tiles swap rotation refer figure illustration regions considered neighbourhoods border optimisation neighbourhood considers placing tiles border tiles inner part fixed decomposition tries find optimal border terms matching edges also considering fixed tiles adjacent inner part correspondingly model modified way inner variables fixed current value border variables change value subproblem corresponds edgematching problem preliminary computational tests indicated related milp model could always solved solutions largest instances algorithm backt rackin heu rist require decomposition regions require current recursive level require partial solution previous level initial empty partial solution excludedsolutions partial solutions leading feasible solutions timeout excludedsolutions ize region return feasible solution current level backtrack else backt rackin heu rist lead feasible solution excludedsolutions excludedsolutions else complete solution return end end end greedy return original eii puzzle generated within little computation time neighborhood considered corresponding milp model solved returning solution least good current solution consisting optimal border respect inner region region optimisation neighbourhood relates optimisation smaller region inside puzzle considers tiles region puzzle correspondingly given current solution model suitably modified way variables outside region fixed current value region variables change value neighborhood also tackled means formulation generating graph containing nodes corresponding assignments specified region however feasible assignments considered nodes conflicting assignments adjacent outside region added graph recall purpose model find complete assignments without unmatched edges however given tiles considered region may feasible find solution case holes left region remaining unassigned tiles assigned related milp region model solved assigned variables fixed value determined maxclique solver neighborhood considered local search procedure samples regions fixed size current solution consideration small sizes model heuristically solved faster milp model therefore neighborhood always addressed means formulation milp formulation used completing solution whenever holes left region tile assignment neighbourhood tiles removed positions diagonally adjacent allowed optimally reinserted thereby minimising number unmatched edges related subproblem corresponds pure bipartite weighted matching problem optimally solvable hungarian algorithm neighbourhood first introduced schaus deville called large neighbourhood wauters developed probabilistic version neighbourhood sets higher probability selecting tiles many unmatched edges latter variant applied present paper separates inner border moves prohibited reassign border pieces inner region vice versa extention neighbourhood also tested particular checkers 
configuration selected tiles studied tiles board diagonally adjacent denote extension black white local search procedure iterates neighbourhood iteratively changing black white positions solving related bipartite weighted matching problem improvements found finally tiles swap rotation tsr neighbourhood standard local search swap operator case swapping assignment two tiles trying possible rotations well local search procedure exhaustively searches neighbourhood local optimum reached computational results section provides computational results obtained local search approach eternity puzzle well instances used meta eii latter instances serve interesting test set comparison due availability results contest addition best knowledge complete solutions instances publicly available instances used section also originate set tests performed cores intel nehalem cluster ram core running ghz cache computational resources provided dauin hpc cluster used solve different parallel order reduce total time required run tests individual test run single processing core thus parallelism employed algorithms milp models solved using cplex international conference metaheuristics nature inspired computing djerba island tunisia october eternity contest http details see http figure illustration regions involved neighbourhood operations optimizing border optimizing rectangular region optimizing nonadjacent tile assignments optimizing diagonally adjacent tile assignments checkers fashion swapping two tiles possibly rotating tsr parameter name value tiles cols rows table parameter settings table summarizes parameter settings local search approach local search procedure starts either random solution solution obtained constructive heuristics algorithm cycles proposed neighbourhoods following sequence iterations sample size one iteration till local optimum tsr till local optimum finally means formulation iterations rectangular sample size sequence determined experimentally though difference performance sequences limited end step final solution local minimum respect considered neighbourhoods table shows results twenty runs greedy heuristic backtracking heuristic different region sizes problem instances timeout seconds set backtracking heuristic instance seconds instance seconds instance seconds eii instances greedy heuristic executed backtracking heuristic executed regions solved table also shows results constructive methods subsequent optimisation local search heuristic general constructive heuristics generate better results larger regions used clearly affects cpu time needed compute optimal solutions region comparing results two heuristics without local search phase seems backtracking procedure strongly dominate maximum average minimum values greedy one consuming available time dominance tends evident small puzzles region sizes larger instances solutions generated larger regions gap becomes smaller almost cases local search procedure manages improve results constructive heuristics several units indicating initial solutions local optima respect considered neighbourhoods conclude many neighbourhoods complex structure effective improving greedy constructive solutions table shows performance local search procedure starting random initial solution poor quality procedure achieve good quality results instance larger instances easily related size neighborhood quite large respect puzzle size instance able optimize large part puzzle however ratio becomes smaller thus less effective larger instances finally table compares best 
Finally, a table compares the best published results with the results obtained by the hybrid local search procedure; the CPU times refer to the considered time limits. The table also reports a large test of our procedure in which the best performing configuration was run repeatedly within a doubled execution time limit: larger execution times, using the CPLEX ILP solver, induce improvements in the results. Note that some entries of the table are missing because many approaches deal only with a subset of the considered instances; only three studies report results on all the instances tested in this paper, and only some approaches were applied to the real EII game puzzle. One reported algorithm was executed on an Intel Xeon CPU with a computation time of hours; its best score over the runs equals the one we obtained as our best score, but no indication is provided on the computer used or the required CPU time. Another algorithm was run on a Pentium Core 2 Quad at GHz with RAM and considered EII-style problems of the real puzzle sizes, with corresponding time limits in seconds; its entries in the table report the best solution obtained over the runs. A further algorithm addressed the instances with the stated time limits and number of runs and was tested on a personal computer (CPU and RAM as reported); the time limits and number of runs of another set of tests, performed on an Intel Core 2 Duo with RAM, are reported as well. One algorithm was tested on the instances and also on the EII real game puzzle; its entry reports the best result obtained over the runs within the stated time limit in seconds for EII. Finally, one algorithm was tested on the EII real game puzzle running on a grid computing system over an extended period, as explicitly indicated by the authors. The results show that our algorithm is competitive with the state of the art, obtaining top results on one instance in a similar time frame to the other algorithms. It is interesting that the initial solution constructed by the greedy and backtracking heuristics is already of high quality, leaving a limited gap to the optimal solution; we therefore expect that these methods may serve as a basis for reaching new top results. The best result on the official EII puzzle instance was obtained using a slipping-tile scanrow backtracking algorithm and still stands; however, that algorithm is highly tailored to the EII puzzle instances, uses precomputed sequences, and was run over a long period (see the referenced URL), so a direct comparison with the approach presented here is partially misleading. Among the existing approaches it shows slightly superior results to our approach; however, our approach can become competitive along with the expected performance improvement of MILP solvers over the years. Clearly, solving larger subregions in the constructive heuristics (greedy and backtracking) would lead to better initial solutions; in addition, the effectiveness of the local search neighbourhoods is expected to improve when larger regions can be solved. If such performance improvements allow ILP solvers to address instances of the size of EII in a reasonable amount of time, it may safely be assumed that the proposed approach will lead to improved results, competitive with the state-of-the-art approaches.

Conclusions. The present work introduced a hybrid approach to the Eternity II puzzle. A MILP formulation related to the optimisation version of the puzzle, in which the total number of unmatched edges is minimised, was presented, and it was shown that the Eternity II puzzle can also be modelled as a max-clique problem, providing as a byproduct of this work new hard instances for the maximum clique problem community. Preliminary testing revealed that, while clearly neither of the models can handle large size instances (the original EII puzzle quickly becomes computationally intractable), the models can be used as the basis of heuristic decompositions and could therefore be used in a hybrid approach. A greedy and a backtracking constructive heuristic were designed that strongly rely on the capability of optimally solving a specific region of the puzzle within a reasonable time limit; high quality solutions can be generated using these heuristics. A local search approach was also proposed, applying a set of different neighbourhoods; the local search procedure manages to improve upon the initial solutions generated by the constructive heuristics and reaches solutions competitive with the best available results. The results confirm that, with a novel and clever use of mathematical models, large size problems that cannot be solved directly by a MILP solver can be tackled effectively. We believe that hybridizing local search approaches with mathematical programming techniques in a matheuristic context is a key to breaking the intractability of hard problems such as the EII puzzle.

References

Mateu et al. Edge matching puzzles as hard benchmarks. In Proceedings of the
International Conference on Principles and Practice of Constraint Programming.
Benoist, Bourreau. Fast global filtering for Eternity II. Constraint Programming Letters.
Bomze, Budinich, Pardalos, Pelillo. The maximum clique problem. In Pardalos et al. (eds.), Handbook of Combinatorial Optimization, Kluwer Academic, Dordrecht.
Coelho, Coelho, Coelho, Haddad, Souza, Ochi. A general variable neighborhood search approach for the resolution of the Eternity II puzzle. In Proceedings of the International Conference on Metaheuristics and Nature Inspired Computing (META).
Demaine, Demaine. Jigsaw puzzles, edge matching, and polyomino packing: connections and complexity. Graphs and Combinatorics.
Grosso, Locatelli, Pullan. Simple ingredients leading to very efficient heuristics for the maximum clique problem. Journal of Heuristics.
Heule, M.J.H. Solving edge-matching problems with satisfiability solvers. In Proceedings of the Second International Workshop on Logic and Search (LaSh).
Kendall, Parkes, Spoerer. A survey of puzzles. International Computer Games Association Journal.
Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly.
Gutierrez, Sanchis. Evolutionary genetic algorithms for a constraint satisfaction problem: puzzle Eternity. In Cabestany, Sandoval, Prieto, Corchado (eds.), Bio-Inspired Systems: Computational and Ambient Intelligence, LNCS.
Schaus, Deville. Hybridization of CP and VLNS for Eternity II. In JFPC, Journées Francophones de Programmation par Contraintes.
Verhaard. Eternity II (URL as in the source).
Wang, Chiang. Solving Eternity-II puzzles with a tabu search algorithm. In Proceedings of the International Conference on Metaheuristics and Nature Inspired Computing (META).
Wauters, Vancroonenburg, Vanden Berghe. An approach to the Eternity II puzzle. Journal of Mathematical Modelling and Algorithms.

Table: results of the greedy and backtracking heuristics, with and without local search, using different region sizes. Columns: instance, start (greedy, greedy with local search, backtracking, backtracking with local search), region size, max, avg, min, avg time.

Table: results of the local search procedure with all neighbourhoods, starting from a random solution. Columns: obj max, obj avg, obj min, optimum, per EII instance.

Table: comparison of the best results with the approaches available in the literature (present paper in two run configurations, Wang and Chiang, Coelho et al., Schaus and Deville, Wauters et al., Verhaard, and the optimum), per EII instance; execution times are presented within parentheses.
| 8 |
mapping images scene graphs structured prediction roei herzig moshiko raboh gal chechik jonathan berant amir globerson feb abstract structured prediction concerned predicting multiple labels simultaneously classical methods like crf achieve maximizing score function set possible label assignments recent extensions use neural networks either implement score function maximization current paper takes alternative approach using neural network generate structured output directly without going score function take axiomatic perspective derive desired properties invariances network certain input permutations presenting structural characterization provably necessary sufficient discuss invariant gpi architectures satisfy characterization explain used deep structured prediction evaluate approach challenging problem inferring scene graph image namely predicting entities relations image obtain results challenging visual genome benchmark outperforming recent approaches introduction structured prediction addresses problem classification label space contains multiple labels example semantic segmentation image pixel assigned label considering labels nearby pixels similar problem task recognizing multiple entities relations image recognizing one entity affects recognition others structured prediction attracted considerable attention applies many learning problems poses unique theoretical applied challenges see taskar chen belanger equal contribution university israel google brain gonda brain research institute university israel correspondence roei herzig roeiherzig moshiko raboh shikorab gal chechick jonathan berant joberant amir globerson typically structured prediction models define score function quantifies well label assignment compatible consistent input setup inference task amounts finding label maximizes compatibility score arg maxy approach separates scoring component implemented parametric model optimization component aimed finding label maximizes score unfortunately general scoring function space possible label assignments grows exponentially input size instance set possible pixel label assignments large even small images thus inferring label assignment maximizes scoring function computationally hard general case alternative approach methods map input structured output black box neural network without explicitly defining score function raises natural question properties invariances must satisfied network take axiomatic approach argue one important property invariance particular type input permutation prove invariance equivalent imposing certain structural constraints architecture network describe architectures satisfy constraints significantly extending expressive power current structured prediction approaches argue respecting permutation invariance important otherwise model would spend capacity learning invariance training time conceptually approach motivated recent work eep ets zaheer asked similar question functions sets evaluate approach tackle challenging task mapping image scene graph describes entities image relations describe model satisfies permutation invariance property show achieves results competitive visual genome benchmark krishna demonstrating power new design principle summary novel contributions paper first derive sufficient necessary conditions deep structured prediction architecture second improve approach challenging problem large dataset complex visual scenes mapping images scene graphs structured prediction structured prediction methods structured prediction define score 
function reflects degree compatible infer label solving arg maxy see lafferty taskar meshi chen belanger score functions previously used decompose sum simpler functions solving maxy performed local maximization forms basic building block algorithms approximately maximizing one way achieve restrict depend small subset variables renewed interest deep learning led efforts integrate deep networks structured prediction including modeling functions deep networks context score functions singleton pairwise fij initial work used architecture learning local scores independently structured prediction goal chen farabet later works considered architectures inference algorithm part computation graph chen pei schwing urtasun zheng studies used standard inference algorithms loopy belief propagation mean field methods gradient descent belanger methods provide several advantages first allow intuitive specification local dependencies labels like pairwise dependencies translate global dependencies second score function linear parameters linear learning problem natural convex surrogates logloss crf making learning efficient third inference large label spaces often possible via exact combinatorial algorithms empirically accurate approximations however advent deep scoring functions learning longer convex thus worthwhile rethink architecture structured prediction models consider models map inputs outputs directly without explicit score function want models enjoy expressivity predictive power neural networks maintaining ability specify local dependencies labels flexible manner next section present approach consider natural question properties deep neural network used structured prediction term energy function also used precisely many message passing algorithms require functions maximized efficiently permutation invariant structured prediction begin notation focusing structures consists pairwise interactions simpler terms notation sufficient describing structure many problems denote structured label entries approach score defined via set singleton scores pairwise scores fij overall score sum singleton pair scores brevity also denote fij fij inference algorithm takes input set local scores fij outputs assignment maximizing therefore abstractly view inference algorithm blackbox takes input set inputs local scores fij returns label even without explicit score function numerous inference algorithms exist setup including belief propagation mean field aim develop framework deep learning labeling algorithm avoid term inference since algorithm explicitly maximize score function algorithm functions input labels output next ask architecture algorithm follow several definitions graph labeling function function whose input ordered set node features ordered set edge features example array values table values simplicity assume output set labels thought labeling nodes thus inference algorithms like graph labeling functions since take input output set labels however graph labeling functions need correspond inference algorithm algorithm maximizes score function natural requirement algorithm produces result given score function example consider label space containing three variables assume inference algorithm takes input outputs label algorithm given input permuted consistent way defines exactly score function first scenario hence would expect output label permuted namely output inference algorithms including mean field satisfy symmetry requirement mapping images scene graphs structured prediction characterizing permutation 
invariance figure graph permutation invariance structured prediction graph labeling function graph permutation invariant gpi permuting names nodes maintains output sign design deep learning hence need guarantee invariance input permutations satisfy invariance waste capacity learning training time follows use denote joint set node edge features thought container elements next consider happens graph labeling function graph variables permuted permutation importantly edges case also permuted way consistent node permutation definition let set node edge features given permutation denote new set node edge features given elements node elements permuted according edge elements permuted accordingly follows use notation namely applied set labels yields labels permuted next comes key definition function whose output invariant permutations input graph definition graph labeling function said invariant gpi permutations satisfies figure illustrates desired invariance property says long input describes node edge properties labeling output indeed property would like thus turn characterizing necessary sufficient structure achieving motivated discussion ask structure necessary sufficient guarantee graphpermutation invariant note function takes input ordered set therefore output could certainly differ output achieve permutation invariance intuitively contain certain symmetries example one permutation invariant architecture define function characterization restrictive cover permutation invariant functions next theorem provides complete characterization figure shows corresponding architecture theorem let graph labeling function invariant exist functions proof first show satisfying conditions theorem gpi namely permutation see write using definition second argument clearly invariant sum considers index indices hence elements covered permutation expression therefore equals equality follows thus proved implies graph permutation invariance next prove invariant function expressed namely show define implement permutation invariant function key idea construct second argument contains information graph features including edges originated function consists application black box representation followed extracting label simplify notation assume edge features scalar extension vector case simple involves indexing also assume uniquely identifies node two nodes share node mapping images scene graphs structured prediction figure schematic representation gpi architecture theorem singleton features omitted simplicity first features processed next summed create vector concatenated third representation entire graph created applying times summing created vector graph representation finally processed together feature achieved adding index another feature finally assume function pairwise features achieved adding singleton features pairwise ones let hash function buckets mapping node features index bucket assume perfect achieved large enough define map pairwise features vector size let vector dimension one coordinate recall consider scalar indeed define stores unique bucket node let second argument since distinct stores pairwise features neighbors unique positions within coordinates since contains feature whereas contains feature simply sum since would lose information edges features originated instead define map feature mapped distinct location formally sti outputs matrix zeros except features correspondingp node stored row matrix namely second argument matrix edge features graph including graph structure figure illustration proof 
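Before the construction is completed, it may help to see the target form of the theorem as code. This is a minimal sketch, entirely our own: all names, shapes and the one-layer stand-ins for the learned networks are assumptions, and the form implemented is, in our reading of the theorem, F_k(z) = rho(z_k, sum_i alpha(z_i, sum_{j != i} phi(z_i, z_ij, z_j))). The nested sums are what make the composition invariant to a consistent relabeling of the nodes.

    import numpy as np

    def mlp(w, x):
        """One-layer stand-in for a learned network."""
        return np.maximum(w @ x, 0.0)

    def gpi_label(nodes, edges, W_phi, W_alpha, W_rho):
        """nodes: (n, d_n); edges: (n, n, d_e). One label vector per node."""
        n = nodes.shape[0]
        # Inner sum: s_i = sum_{j != i} phi(z_i, z_ij, z_j)
        s = np.stack([
            sum((mlp(W_phi, np.concatenate([nodes[i], edges[i, j], nodes[j]]))
                 for j in range(n) if j != i), np.zeros(W_phi.shape[0]))
            for i in range(n)])
        # Graph representation: g = sum_i alpha(z_i, s_i), order-independent
        g = sum((mlp(W_alpha, np.concatenate([nodes[i], s[i]]))
                 for i in range(n)), np.zeros(W_alpha.shape[0]))
        # Per-node output: F_k = rho(z_k, g)
        return np.stack([mlp(W_rho, np.concatenate([nodes[k], g]))
                         for k in range(n)])

The invariance admits a direct sanity check: permute the node order, permute both axes of the edge tensor consistently, and verify that the outputs are the same labels permuted, i.e. F(sigma(z)) = sigma(F(z)). The dimensions below are arbitrary.

    def permute(nodes, edges, perm):
        """Apply one relabeling to the nodes and to both edge axes."""
        return nodes[perm], edges[np.ix_(perm, perm)]

    n, d_n, d_e, h = 5, 4, 3, 16
    rng = np.random.default_rng(1)
    nodes = rng.normal(size=(n, d_n))
    edges = rng.normal(size=(n, n, d_e))
    W_phi = rng.normal(size=(h, 2 * d_n + d_e))
    W_alpha = rng.normal(size=(h, d_n + h))
    W_rho = rng.normal(size=(2, d_n + h))

    perm = rng.permutation(n)
    out = gpi_label(nodes, edges, W_phi, W_alpha, W_rho)
    out_p = gpi_label(*permute(nodes, edges, perm), W_phi, W_alpha, W_rho)
    assert np.allclose(out[perm], out_p)   # sigma(F(z)) == F(sigma(z))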
construction theorem hash function size input graph pairwise features purple applied application yields vector three dark yellow columns correspond vectors summed obtain three vectors blue matrices outer product see resulting matrix zeros except one row dark blue matrix corresponds summed matrix isomorphic original matrix complete construction set outcome first discard rows columns correspond original nodes reducing dimension use reduced matrix input given assume simplicity need let output set since invariant permutations indeed returns output original input general graphs far discussed complete graphs edges correspond valid feature pairs many graphs however may sparse certain structures example chain graph sequence labeling edges sparse graphs input would pairs rather features corresponding valid edges graph interested invariances preserve graph structure namely automorphisms graph thus desired invariance automorphisms graph easy see theorem holds case one replaces sum neighbors node graph merely introduces another indexing step mapping images scene graphs structured prediction deep graph prediction theorem provides general requirements designing architecture structured prediction given problem one choose specific architecture parameterization instance interesting consider algorithm like belief propagation implemented framework following proof theorem one would use aggregate features would apply features architecture course general construction example could use sketch input graph labeling performed reduced representation survey certain architectures consistent theorem discuss expressive power introducing attention attention powerful architectural component deep learning bahdanau inference algorithms use attention show attention introduced framework intuitively attention means instead aggregating features neighbors node weighs neighbors based relevance example label entity image may depend strongly entities spatially closer implement attention architecture formally learn attention weights neighbors node scale features neighbor also learn different attention weights individual features neighbor similar way let attention mask specifying weight node gives node function arguments dot product standard attention models introduce attention wish form weighting neighboring feature vectors namely figure image top scene graph bottom visual genome dataset krishna scene graph captures entities image nodes blue circles pairwise relations edges red circles example relationships graph include hat dog dog motorcycle using rnns components theorem allows arbitrary functions except input dimensionality specifically functions involve highly expressive recursive computation simulate existing message passing algorithms new algorithms learned data course extended elaborate structures like lstms hochreiter schmidhuber neural turing machines graves leave future work theorem suggests function form graph permutation invariant easy show composing two functions gpi also gpi therefore run iteratively providing output one step part input next step maintain invariance results recurrent architecture employ next section obtain performance scene graph prediction application scene graph classification demonstrate benefits axiomatic approach task inferring scene graphs images problem input image annotated set rectangles bound entities image known bounding boxes goal label bounding box correct entity category every pair entities relation form coherent graph known scene graph scene graph nodes correspond bounding boxes labeled 
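As just discussed, for sparse graphs the inner aggregation runs only over the neighbours of each node, so that the output is preserved exactly under automorphisms of the graph. A one-line variant of the earlier sketch, with adjacency passed as neighbour lists (nbrs is our name, and numpy, mlp and the weight shapes are reused from above):

    def gpi_label_sparse(nodes, edges, nbrs, W_phi, W_alpha, W_rho):
        """Like gpi_label, but phi is summed only over graph neighbours."""
        n = nodes.shape[0]
        s = np.stack([
            sum((mlp(W_phi, np.concatenate([nodes[i], edges[i, j], nodes[j]]))
                 for j in nbrs[i]), np.zeros(W_phi.shape[0]))
            for i in range(n)])
        g = sum((mlp(W_alpha, np.concatenate([nodes[i], s[i]]))
                 for i in range(n)), np.zeros(W_alpha.shape[0]))
        return np.stack([mlp(W_rho, np.concatenate([nodes[k], g]))
                         for k in range(n)])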
entity category edges correspond relations among entities could spatial functional wearing thus image bounding boxes output variables similar approach applied model attention outputs well graph nodes concept illustrated figure showing image dog motorcycle top corresponding scene achieve form extend single entry defining namely set first elements keep definition next define substitute obtain desired form attention weights neighboring feature vectors mapping images scene graphs structured prediction graph pink box image labeled motorcycle white box labeled dog two boxes correspond two nodes light blue circles figure bottom relation corresponds edge red circle labeled scene graphs typically sparse one view scene graph complete pair unrelated entities connected null edge scene graph represented collection triplets relation two entities like dog motorcycle model model two components label predictor takes input image bounding boxes outputs distribution labels entity relation scene graph predictor sgp takes label distributions predicts consistent label distributions jointly entities relations label prediction module figure receives input image set bounding boxes bbn corresponding image entities figure outputs set entity label probabilities yient bbi box set candidate entity labels set rel relation probabilities bbi bbj another predefined set relation labels unary pairwise potentials later fed sgp module predict entity labels used taking input patch cropped full image according ith bounding box figure used second predict relations using tensor input three channels rgb image two channels binary masks subject entity object entity bounding boxes figure image patch provided network cropped covers subject entity object entity providing two binary masks breaks symmetry subject entity object entity allow network discriminate triplets like man wearing shirt shirt man scene graph prediction module described trivially gpi output variables rel yient predicted independently constructing gpi architecture scene graph predictor harder outline construction entity classification module gpi following theorem features every bounding box features box pairs classify relations added function reuses gpi representation created entity classification input gpi representation easy show entire network gpi let concatenation features spatial features current label probability entity logits figure label predictor entity recognition network network takes image patch cropped based bounding box outputs classification probabilities per label relation recognition network network takes input tensor containing rgb image first channels two binary masks subject object entities remaining two channels final softmax layer spatial bounding box given width height addition used confidences relation logits final softmax layer step sgp apply function receives entity features relation features output updated confidences entities relations composing gpi functions gpi sgp module gpi describe implementation three components network two receives subject features relations features entity features outputs vector size next entity aggregate using attention mechanism described section calculate weights implement layer receives input outputs scalar two network receiving entity features context features outputs aggregated similar attention mechanism entities resulting vector representing entire graph consists classifies entities classifies relations three network size receives input outputs vector one scalar per entity class unlike theorem allow direct 
access maintains gpi property improved learning practice final output confidence linear interpolation current confidence eatures new confidence controlled learned forget gate output forget eatures relation classifier analogous entity classifier receiving input relation features graph representation mapping images scene graphs structured prediction also explored concatenating word embeddings probable entity class word vectors learned lov pennington captions visual genome krishna experimental setup dataset evaluated approach visual genome dataset krishna consists images annotated bounding boxes entities relations distribution entity classes relations total unique entity classes unique relations allow comparison previous studies dataset newell deng zellers used preprocessed data including train test splits provided dataset average entities relations per image evaluation used entity categories relations newell deng zellers tune also split training data two randomly selecting examples resulting final split sets training procedure trained networks using adam kingma input images resized conform architecture first trained module trained sgp module using best model follows particular chosen values tuned validation set trained relation network loss ratio positive refers labeled relation negative unlabeled performed epochs chose batch size also used data augmentation techniques translation rotation improve results loss function sgp sum cross entropy losses entities relations image loss penalized entities times strongly relations penalized negative relations times weakly positive relations used batch size epochs recurrent application performed steps evaluation defined three different subtasks inferring scene graphs focus two sgcls given bounding boxes entities predict entity categories relations categories predcls given bounding boxes annotated entity labels predict relations following used recall evaluation metric measures fraction correct triplets appear within confident triplets proposed model two evaluation protocols used literature differ whether enforce graph constraints model predictions first protocol requires triplets assign one consistent class per entity relation rules putting one triplet pair bounding boxes also rules inconsistent assignment like bounding box labeled one entity one triplet another entity another triplet second evaluation protocol enforce constraints models baselines compare four variants gpi approach reported results four baselines currently various scene graph subtasks models use data split work leverages word embeddings likelihood predicted relations model passes messages entities relations iteratively refines feature map used prediction ewell eng ixel raph model uses associative embeddings newell produce full graph image ellers eural otif method encodes global context capturing highorder motifs scene graphs gpi attention gpi model attention mechanism instead following theorem simply sum features gpi eighbor attention gpi model using attention neighbors described section gpi ulti attention gpi model except learn different attention weights per feature gpi inguistic gpi ulti attention also concatenating word embedding vector probable entity label see sec results table lists recall recall four variants approach compared three baselines evaluating graph constraints gpi approach performs well inguistic outperforms baselines predcls sgcls table provides similar comparison evaluating without graph constraints inguistic performs best details supplemental material figure 
illustrates model behavior predicting isolated labels column mislabels several entities corrected joint prediction column column shows system learned attend nearby entities window building closer tree column shows stronger attention learned classes bird presumably usually informative common classes like tree mapping images scene graphs structured prediction figure input image bounding boxes scene graph fails recognize entities building tree relations front instead looking gpi inguistic fixes incorrect predictions window significant neighbor tree entity bird receives substantial attention tree building less informative table test set results evaluation sgc eural otifs attention eighbor atten ulti attention inguistic red table test set results unconstrained evaluation sgc ixel raph attention eighbor atten ulti attention inguistic red related work significant recent interest extending deep learning structured prediction much work semantic segmentation convolutional networks shelhamer became standard approach obtaining singleton scores various approaches proposed adding structure top approaches used variants message passing algorithms unrolled computation graph studies parameterized parts message passing algorithm learned parameters lin recently gradient descent also used maximizing score functions belanger gygli alternative approach deep structured prediction via greedy decoding one label inferred time based previous labels popular applications like dependency parsing chen manning works rely sequential structure table recall predcls relations ranked frequency elation wearing near holding behind sitting front attached hanging riding inguis tic input bilstms effectively applied concept architectural invariance recently proposed eep ets zaheer invariance consider much less restrictive need invariant permutations singleton pairwise features consistent graph hence results substantially different set architectures extracting scene graphs images provides semantic representation later used reasoning question answering image retrieval johnson raposo forefront machine vision research integrating challenges like object detection action recognition detection interactions liao plummer mapping images scene graphs structured prediction conclusion presented deep learning approach structured prediction constrains architecture invariant structurally identical inputs methods approach relies pairwise features capable describing correlations thus inheriting intuitive aspect approaches however instead maximizing score function leads computationallyhard inference directly produce output invariant equivalent representations pairwise terms axiomatic approach extended many ways image labeling geometric invariances shift rotation may desired cases invariance feature permutations may desirable leave derivation corresponding architectures future work finally may cases invariant structure unknown discovered data related work lifting graphical models bui would interesting explore algorithms discover use symmetries deep structured prediction references bahdanau cho bengio neural machine translation jointly learning align translate international conference learning representations iclr belanger david yang bishan mccallum andrew learning structured prediction energy networks precup doina teh yee whye eds proceedings international conference machine learning volume pmlr bui hung hai huynh tuyen riedel sebastian automorphism groups graphical models lifted variational inference proceedings twentyninth conference uncertainty 
artificial intelligence uai arlington virginia united states auai press url http chen danqi manning christopher fast accurate dependency parser using neural networks proceedings conference empirical methods natural language processing emnlp chen liang chieh papandreou george kokkinos iasonas murphy kevin yuille alan semantic image segmentation deep convolutional nets fully connected crfs proceedings second international conference learning representations chen liang chieh schwing alexander yuille alan urtasun raquel learning deep structured models proc icml farabet clement couprie camille najman laurent lecun yann learning hierarchical features scene labeling ieee transactions pattern analysis machine intelligence graves alex wayne greg danihelka ivo neural turing machines arxiv preprint gygli michael norouzi mohammad angelova anelia deep value networks learn evaluate iteratively refine structured outputs precup doina teh yee whye eds proceedings international conference machine learning volume proceedings machine learning research international convention centre sydney australia pmlr kaiming zhang xiangyu ren shaoqing sun jian deep residual learning image recognition ieee conference computer vision pattern recognition cvpr las vegas usa june kaiming zhang xiangyu ren shaoqing sun jian identity mappings deep residual networks eccv volume lecture notes computer science springer hochreiter schmidhuber long memory neural computation johnson justin krishna ranjay stark michael lijia shamma david bernstein michael image retrieval using scene graphs ieee conference computer vision pattern recognition cvpr kingma diederik jimmy adam method stochastic optimization arxiv preprint arxiv url http krishna ranjay zhu yuke groth oliver johnson justin hata kenji kravitz joshua chen stephanie kalantidis yannis shamma david visual genome connecting language vision using crowdsourced dense image annotations international journal computer vision lafferty mccallum pereira conditional random fields probabilistic models segmenting labeling sequence data proceedings international conference machine learning mapping images scene graphs structured prediction liao wentong yang michael ying ackermann hanno rosenhahn bodo support relations semantic scene graphs arxiv preprint lin guosheng shen chunhua reid ian van den hengel anton deeply learning messages message passing inference advances neural information processing systems cewu krishna ranjay bernstein michael visual relationship detection language priors european conference computer vision meshi sontag jaakkola globerson learning efficiently approximate inference via dual losses proceedings international conference machine learning new york usa acm newell alejandro deng jia pixels graphs associative embedding advances neural information processing systems appear curran associates newell alejandro huang zhiao deng jia associative embedding learning joint detection grouping advances neural information processing systems curran associates pei wenzhe tao chang baobao effective neural network model dependency parsing proceedings annual meeting association computationa linguistics pennington jeffrey socher richard manning christopher glove global vectors word representation empirical methods natural language processing emnlp url http plummer bryan mallya arun cervantes christopher hockenmaier julia lazebnik svetlana phrase localization visual relationship detection comprehensive cues iccv raposo david santoro adam barrett david pascanu razvan lillicrap timothy 
battaglia peter discovering objects relations entangled scene representations arxiv preprint schwing alexander urtasun raquel fully connected deep structured networks arxiv shelhamer evan long jonathan darrell trevor fully convolutional networks semantic segmentation ieee conference computer vision pattern recognition cvpr taskar guestrin koller max margin markov networks thrun saul eds advances neural information processing systems mit press cambridge zhu choy scene graph generation iterative message passing ieee conference computer vision pattern recognition zaheer manzil kottur satwik ravanbakhsh siamak poczos barnabas salakhutdinov ruslan smola alexander deep sets guyon luxburg bengio wallach fergus vishwanathan garnett eds advances neural information processing systems curran associates zellers rowan yatskar mark thomson sam choi yejin neural motifs scene graph parsing global context arxiv preprint url http zheng shuai jayasumana sadeep bernardino vineet vibhav zhizhong dalong huang chang torr philip conditional random fields recurrent neural networks proceedings ieee international conference computer vision
| 1 |
improved linear time algorithms classical graph problems sankardeep seungbum srinivasa rao dec institute mathematical sciences hbni chennai india sankardeep university siegen siegen germany seoul national university seoul south korea ssrao provide linear time algorithms computing bridges topological sorting strongly connected components improving several recent results elmasry stacs banerjee cocoon chakraborty isaac route also provide another dfs implementation weaker input graph representation assumption without compromising time space bounds earlier results banerjee cocoon kammer mfcs introduction since early days designing graph algorithms researchers developed several approaches testing whether given undirected directed graph vertices edges strongly connected biconnected connected finding cut vertices bridges methods use search dfs backbone design main algorithm classical linear time algorithms due tarjan computes values defined terms every vertex checks conditions using determine whether desired property linear time algorithms well problems see references therein classical algorithms take time words model computation standard word ram model word size bits space aim improve space bounds algorithms without increasing running time motivation related work motivated mainly big data phenomenon among others recently surge interest improving space complexity fundamental linear time graph algorithms paying little penalty running time reducing working space classical graph algorithms generally take bits bits without compromising time towards elmasry gave among others implementation dfs taking time bits space sparse graphs time space bits dfs testing biconnectivity reporting cut vertices testing connectivity reporting bridges paper topological sort paper paper testing strong connectivity paper paper table summary results space bound improved bits keeping linear time banerjee gave among others space efficient implementation performing bfs using bits space linear time improving upon result algorithms graph problems also considered recently results assume input graph represented using adjacency array represented array length entry stores pointer array stores neighbors vertex given memory limited working memory output count space terms number bits workspace used algorithms main goal improve space bounds classical fundamental graph algorithms summarize main results table paper basically complete full spectrum results regarding space bounds problems keeping running time linear algorithms recent space efficient graph algorithm literature due lack space provide sketches proofs testing connectivity finding bridges undirected graph bridge edge removed without removing vertices graph creates components previously graph connected graph least two vertices bridge let denote dfs tree following kammer call tree edge parent full marked back edge descendant strict ancestor half marked full marked exists back edge descendant unmarked otherwise use definition prove following every vertex except root cut vertex exactly least one edges one children either unmarked edge half marked edge root cut vertex exactly least two children based characterization gave time bits algorithm cut vertex main observation give similar characterization bridges essentially using similar implementation also obtain time bits algorithms testing connectivity reporting bridges start following lemma lemma tree edge bridge unmarked proof sketch unmarked descendants reaches strict ancestor deleting would result disconnected graph thus bridge 
For the other direction, it is easy to see that if e is a bridge then it is an unmarked edge. We can now state the theorem.

Theorem. Given an undirected graph G, in O(m + n) time and using O(n) bits of space we can determine whether G is 2-edge-connected; if it is not, in the same amount of time and space we can compute and output all the bridges of G.

Proof sketch. Using the lemma above and a similar implementation (using the stack compression and other tools of the algorithm provided in Section 3 of Kammer et al.) with suitable modifications, we can prove the theorem.

Note that the space bound of the theorem improves the earlier results for sufficiently dense graphs while keeping the linear runtime; see the table.

DFS without cross pointers. Banerjee et al., and subsequently Kammer et al., gave O(m + n)-bit, linear-time implementations of DFS, improving the earlier bounds for sparse graphs. These DFS implementations assume that the input graph is represented using the adjacency array along with cross pointers: for undirected graphs, every neighbour in the adjacency array of a vertex stores a pointer to the position of that vertex in the neighbour's adjacency array (see the original papers for the detailed definitions for directed graphs). We emphasize that this input assumption can double the space usage compared with the raw adjacency array in the worst case. In what follows we provide a proof sketch of a DFS implementation attaining the same time and space bounds without using the cross pointers. Our main theorem follows.

Theorem. Given a directed or undirected graph represented as an adjacency array, we can perform a DFS traversal using O(m + n) bits and O(m + n) time.

Proof sketch. We essentially modify the earlier proof, which uses a bitvector of length O(m + n), in one-to-one mapping with the unary encoding of the degree sequence, to mark the tree edges, and subsequently uses the cross pointers to find the parent of a vertex during backtracking, as well as for starting with the next unvisited vertex after backtracking. Note that we can represent the parents of the vertices in another bitvector of the same length in parallel, and to perform the backtracking efficiently we could use a constant-time append structure, as well as the constant-time structure of Grossi et al., along with this array. With these modifications we can get rid of the cross pointers without compromising the running time and space bounds of the earlier algorithms.

Testing strong connectivity and topological sorting. Towards giving improved space-efficient algorithms for strong connectivity and topological sorting, we first improve a lemma of Elmasry et al., which says the following: in a DFS of a directed graph, the vertices can be output in reverse postorder of the DFS tree within the same time and space bounds as their DFS. Combining this lemma with the classical algorithms, they obtained linear-time algorithms for both problems within those space bounds. We improve this by showing the following.

Theorem. A DFS of a directed graph can output the vertices in reverse postorder with respect to the DFS forest in O(m + n) time using O(m + n) bits of space; as a result, we can also solve strong connectivity and topological sorting in the same time and space.

Proof sketch. We use the DFS algorithm of the previous theorem. First we mark all the tree edges in an array; then, starting from the rightmost leaf vertex of the DFS tree, we use the operations defined in the previous proof to carefully traverse the tree in the reverse direction of the standard DFS (backtracking and so on) and generate the reverse postorder sequence. Using this as the backbone of the classical algorithms, we obtain linear-time algorithms within the same bit bounds; this improves the earlier result for sparse graphs. If we instead use the DFS algorithm of Chakraborty et al. and modify it suitably to perform the traversal of the DFS tree in reverse, we obtain the following result.

Theorem. A DFS of a directed graph can output the vertices in reverse postorder with respect to the DFS forest within the same time and space bounds as the DFS of Chakraborty et al.; as a result, strong connectivity and topological sorting can also be solved in that time and space.
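As a plain illustration of the reverse-postorder primitive that the last two theorems compute space-efficiently, the following sketch (ours, using ordinary pointer-sized Python structures rather than the compressed bitvectors of the proofs) performs an iterative DFS and emits the vertices in reverse postorder, the order consumed by the classical strong-connectivity and topological-sorting algorithms.

    def reverse_postorder(adj):
        """adj: adjacency lists of a digraph on vertices 0..n-1."""
        n = len(adj)
        visited = [False] * n
        order = []
        for root in range(n):
            if visited[root]:
                continue
            visited[root] = True
            stack = [(root, 0)]      # (vertex, index of next neighbour to try)
            while stack:
                v, i = stack.pop()
                if i < len(adj[v]):
                    stack.append((v, i + 1))   # come back to v later
                    w = adj[v][i]
                    if not visited[w]:
                        visited[w] = True
                        stack.append((w, 0))
                else:
                    order.append(v)            # postorder: all edges explored
        order.reverse()                         # reverse postorder
        return order

    # For a DAG, reverse postorder is a topological sort, e.g.
    # reverse_postorder([[1, 2], [3], [3], []]) == [0, 2, 1, 3].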
References

Banerjee, Chakraborty, Raman. Improved space efficient algorithms for BFS, DFS and applications. In COCOON, LNCS, Springer.
Banerjee, Chakraborty, Raman, Roy, Saurabh. Time-space tradeoffs for dynamic programming in trees and bounded treewidth graphs. In COCOON, LNCS, Springer.
Chakraborty, Raman, Satti. Biconnectivity, chain decomposition and st-numbering using O(n) bits. In ISAAC, LIPIcs, Schloss Dagstuhl, Leibniz-Zentrum fuer Informatik.
Chakraborty, Satti. Space-efficient algorithms for maximum cardinality search, stack BFS, queue BFS and applications. In COCOON, Hong Kong, China, August.
Cormen, Leiserson, Rivest, Stein. Introduction to Algorithms.
Chakraborty, Raman, Satti. Biconnectivity and other applications of DFS using O(n) bits. J. Comput. Syst. Sci.
Elmasry, Hagerup, Kammer. Space-efficient basic graph algorithms. In STACS.
Grossi, Ottaviano. The wavelet trie: maintaining an indexed sequence of strings in compressed space. In PODS.
Kammer, Kratsch, Laudahn. Space-efficient biconnected components and recognition of outerplanar graphs. In MFCS.
Schmidt. A simple test on 2-vertex- and 2-edge-connectivity. Inf. Process. Lett.
Tarjan. Depth-first search and linear graph algorithms. SIAM J. Comput.
Tarjan. A note on finding the bridges of a graph. Inf. Process. Lett.
| 8 |
apr moments properties markov process fabio gobbi sabrina mulinacci university bologna department statistics april abstract paper provides conditions markov process introduce particular case gaussian markov process generalizes standard random walk allowing increments dependent jel classification mathematics subject classification keywords markov process copula gaussian process introduction paper analyze temporal dependence properties satisfied discrete times nonstationary markov process temporal dependence relevant since permits verify well theoretical models explain temporal persistency observed financial data moreover also useful tool establish large sample properties estimators dynamic models particular paper analyze property give sufficient conditions ensure property satisfied copula approach univariate time series modelling finite dimensional distributions generate copulas darsow provide necessary sufficient conditions time series markov process recent literature topic mainly focused stationary case chen fan introduce strictly stationary first order markov process generated invariant distribution parametric copula authors show temporal dependence measure purely determined properties copulas present sufficient conditions ensure process based gaussian efgm copulas geometric beare shows markov models generated via symmetric copulas positive square integrable densities geometric many commonly used bivariate copulas without tail dependence gaussian efgm frank copulas satisfy condition chen show clayton gumbel student copula based markov models geometrically ergodic stronger condition geometric paper focus markov processes dependence state variable increment allowed modeled copula particular introduce gaussian markov process generalizes classical gaussian random walk study related moments properties provide conditions wich process paper organized follows section presents general result properties satisfied markov processes section restricts study gaussian case section concludes markov processes properties throughout paper discrete time markov process thanks seminal paper darsow markovianity stochastic process characterized specific requirement copulas representing dependence structure finite dimensional distributions induced stochastic process detailed discussion copulas see nelsen joe cherubini durante sempi must satisfy particular darsow proved equations transition probabilities equivalent requirement copula associated vector consequence since discrete times markov process assume set bivariate copulas representing dependence structure stochastic process two adjacent times given necessarily remind associative notice stationary case considered beare therefore bivariate copulas functions copula lag time paper extend study general case particular analyze temporal dependence problem special attention mixing properties notion introduced volkonskii rozanov attribute kolmogorov given necessarily stationary sequence random variables let ftl ftl let sup second supremum taken finite partitions define following dependence coefficient sup say sequence absolutely regular next theorem give conditions set copulas order guarantee markov resulting process conditions based specific requirements maximal correlation coefficients copulas remind maximal correlation copula given sup refer beare details theorem let markov process let copula associated vector assume absolutely continuous symmetric density uniformly bounded maximal correlation coefficients satisfy sup proof proof follows theorem beare proves 
similar result stationary markov processes first since stochastic process markovian rewritten terms cumulative distribution functions respectively total variation norm see bradley applying sklar theorem write follows bivariate copulas type absolutely continuous let denote density sup since symmetric joint density uniform margins admits following series expansion terms complete orthonormal sequence eigenvalues form sequence nonnegative real numbers notice proved lancaster max applying get using get therefore sup since uniformly bounded tends zero gaussian markov process assume markov process obtained sequence identically distributed random variables dependent dependence structure modelled copula function process defined stationary however determine distribution thanks operator denoted introduced cherubini tool recover distribution sum two dependent random variables shown cherubini technique may used construction dependent increments stochastic processes like precisely cumulative distribution function may recover cumulative distribution function iterating copula associated equations provide ingredients construct discrete times markov processes according darsow model sort modified version random walk process independence assumption innovations longer required however weakness cases distribution function expressed closed form may evaluated numerically assume innovations gaussian identically distributed zero mean standard deviation copula stationary gaussian copula constant parameter way distribution gaussian specifically section cherubini shown since assumption moreover copula gaussian parameters since limiting behavior standard deviation also analyzed cherubini proved lim otherwise notice case negative correlation increments standard deviation levels explode following restrict analysis case moments autocorrelation function subsection study behavior moments autocorrelation functions process case recall standard random walk model order autocorrelation function tends lag general setting longer true limit order autocorrelation function function following proposition shows proposition let order autocorrelation function tends proof proved section cherubini using fact two gaussian copulas parameter given product parameters copulas involved copula gaussian parameter therefore since easily get result hand innovations longer serially independent random walk case order autocorrelation function approaches limit depends proposition let order autocorrelation function tends proof compute first autocovariance order since fixed get moreover immediate find statement proposition since corr properties gaussian framework density gaussian copula well known maximal correlation coefficient equal absolute value simple correlation coefficient see lancaster therefore according notation theorem following results application theorem holds corollary markov process defined proof firstly notice fact equivalent always verified since assumption thanks since bounded constant smaller satisfied furthermore hard prove thus theorem applies concluding remarks paper provide conditions markov process results represent generalization beare author considers stationary case analysis focused particular case gaussian markov process dependent increments represents generalization standard gaussian random walk particular setting proved order autocorrelation function process converge random walk case quantity depends lag correlation state variable innovation assumed additionally proved process satisfies conditions required references beare 
copulas temporal dependence econometrica bradley introduction strong mixing conditions vols kendrick press herber city chen fan estimation semiparametric time series models journal econometrics chen efficient estimation semiparametric markov models annals statistics cherubini gobbi mulinacci convolution copula econometrics springerbriefs statistics cherubini gobbi mulinacci romagnoli dynamic copula methods finance john wiley sons cherubini mulinacci romagnoli model speculative price dynamics discrete time journal multivariate analysis darsow nguyen olsen copulas markov processes illinois journal mathematics durante sempi principles copula theory boca raton chapman joe multivariate models dependence concepts chapman hall london lancaster properties bivariate normal distribution considered form contingency table biometrika lancaster structure bivariate distributions annals mathematical statistics nelsen introduction copulas springer renyi measures dependence acta mathematica academiae scientiarum hungaricae volkonskii rozanov limit theorems random functions theor probab volkonskii rozanov limit theorems random functions theor probab
| 10 |
value alignment feb jaime fisac monica gates jessica hamrick chang liu dylan malayandi palaniappan dhruv malik shankar sastry thomas griffiths anca dragan abstract intelligent systems gain autonomy capability becomes vital ensure objectives match human users known problem robotics value alignment key design collaborative robots integrate human workflows successfully inferring adapting users objectives argue meaningful solution value alignment must combine decision theory rich mathematical models human cognition enabling robots tap people natural collaborative capabilities present solution cooperative inverse reinforcement learning cirl dynamic game based cognitive models decision making theory mind solution captures key reciprocity relation human plan actions isolation rather reason pedagogically robot might learn robot turn anticipate interpret human actions pragmatically knowledge work constitutes first formal analysis value alignment grounded empirically validated cognitive models key words value alignment interaction dynamic game theory introduction accelerating progress artificial intelligence robotics bound substantial impact society simultaneously unlocking new potential augmenting transcending human capabilities also posing significant challenges safe effective interaction short term integrating robotic systems environments require assess intentions authors university california berkeley jfisac mgates jhamrick changliu dhm malayandi dhruvmalik anca fisac gates hamrick liu preferences users order assist effectively avoiding failures due poor coordination long term ensuring advanced highly autonomous systems beneficial individuals society hinge ability correctly assimilate human values objectives envision challenges inherently coupled predict improving ability robots understand coordinate human users inform solutions general problem successful value alignment requires moving typical formulations robots account second determines objective words value alignment fundamentally problem cooperative inverse reinforcement learning cirl formulates value alignment game human robot share common reward function human knowledge reward practice solving cirl game requires decision theory dealing system system poses unique challenge humans behave like idealized rational agents however humans excel social interaction extremely perceptive mental states others naturally project mental states beliefs intentions onto robotic collaborators becoming invaluable allies robots quest value alignment coming decades tackling problem crucial building collaborative robots know human users want paper show value alignment possible theory also practice introduce solution cirl based model human agent grounded cognitive science findings regarding human decision making pedagogical reasoning solution leverages two closely related insights facilitate value alignment first extent improving collaborator understanding goals may conducive success people tend behave pedagogically deliberately choosing actions informative goals second robot anticipate pedagogical reasoning interpreting actions human users akin pragmatic listener interprets speaker utterance natural language jointly pedagogical actions pragmatic interpretations enable stronger faster inferences among people result suggests possible robots partake equilibrium ultimately becoming perceptive competent collaborators solving value alignment using cognitive models cooperative inverse reinforcement learning cirl cooperative inverse reinforcement learning cirl 
formalizes value alignment game briefly present consider two agents human value alignment robot engaged dynamic collaborative task involving possibly infinite sequence steps goal agents achieve best possible outcome according objective however objective known order contribute objective need make inferences actions inverse reinforcement learning irl problem incentive behave informatively becomes helpful hence term cooperative irl formally cirl game dynamic markov game two players described tuple set possible states world sets actions available respectively discrete transition next state conditioned previous state actions set possible objectives cumulative reward function assigning real value every tuple state actions given objective probability measure initial state objective geometric time discount factor making future rewards gradually less valuable pragmatic robots pedagogic humans asymmetric information structures games even static ones generally induce infinite hierarchy beliefs robot need maintain bayesian belief human objectives decide actions reason robot decisions human would principle need maintain belief robot belief turn inform decisions thereby requiring robot maintain belief human belief belief shown optimal pair strategies found cirl game solving partially observed markov decision process pomdp avoids bottomless recursion long agents rational coordinate perfectly start game unfortunately dealing human agents rationality prior coordination nontrivial assumptions finding equivalent tractability result realistic human models therefore crucial using cirl formulation solve problems discover key insight cognitive studies human pedagogical reasoning teacher chooses actions utterances influence beliefs learner aware teacher intention teacher exploit fact learner interpret utterances pragmatically infinite recursion averted finding relation teacher best utterance learner best interpretation exploiting common modeling assumption bayesian theory mind learner models teacher noisily rational decision maker likelier choose utterances note theoretical formulation easily extended arbitrary measurable sets limit analysis finite state objective sets computational tractability clarity exposition fisac gates hamrick liu causing learner place high posterior belief correct hypothesis given learner current belief reality teacher exactly compute learner belief model supposes estimates learner previous responses utterances introduces noise decisions capture estimation inaccuracies framework predict complex behaviors observed human interactions pedagogical utterances pragmatic interpretations permit efficient communication adopt analogous modeling framework value alignment critical difference ultimate objective human explicitly improve robot understanding true objective optimize team expected performance towards objective pedagogic behavior thus emerges implicitly extent robot becomes better collaborator equilibrium solution cirl robot access true objective rather estimates belief assume belief expressed parametrically always true finite set define corresponding finitedimensional parameter space denoting belief reality human directly observe assume compute infer robot behavior model estimation inaccuracies noise policy let represent value function cirl game given objective seeking compute true objective known represents best performance team expect achieve chooses chooses state current belief order solve seek establish appropriate dynamic programming relation game given information structure model 
human decision making since typically possible people predict robot next action see beginning assume observe turn committing model human decision making psychology econometrics luce choice rule models people decisions probabilistically making choices likely lower utility particular employ common case luce choice rule boltzmann noisy rationality model probability choice decays exponentially utility decreases comparison competing options relevant utility metric case sought captures best expected outcome available actions therefore probability choose action form exp value alignment termed rationality coefficient quantifies concentration choices around optimum becomes perfect rational agent becomes indifferent expression interpreted likelihood action given particular evolution belief given deterministically bayesian update jointly define equation analogous one states pragmatically update based noisily rational pedagogic amounts deterministic transition function belief crucially however relation derived involves yet compute unlike modeled rational agent however knowing true best expectation based current arg max combining state transition measure define bellman equation noisily rational policy given note next action implicitly depends action next turn substituting obtain sought dynamic programming relation cirl problem noisily human pragmatic robot human pedagogic takes actions according takes account actions influence robot belief objective robot pragmatic assumes human actively aware actions convey objective interprets accordingly resulting problem similar pomdp case formulated beliefstate mdp form important difference belief transition depends value function spite complication problem solved backward time dynamic programming bellman update based fixed point encodes equilibrium function therefore policy choosing action belief transition rule interpreting actions evidence suggests people proficient finding equilibria even though uniqueness guaranteed general study disambiguation open research direction assume simplicity optimum unique disambiguation rule exists note imply certainty equivalence assume separation estimation control fully reasoning actions may affect future beliefs fisac gates hamrick liu introduce benchmark domain chefworld household collaboration setting human seeks prepare meal help intelligent robotic manipulator multiple possible meals may want prepare using available ingredients know beforehand one chosen assume tell explicitly team obtains reward intended recipe successfully cooked aware uncertainty take actions give actionable information particularly information expects allow helpful possible task progresses problem ingredients states spinach absent chopped tomatoes absent chopped bread absent sliced toasted recipes correspond joint target states food soup requires tomatoes chopped bread sliced toasted spinach salad requires spinach tomatoes chopped bread sliced toasted slice chop foods tomatoes toast bread simple scenario two recipes solved using discretized beliefstate value iteration presented illustrative example fig wrong initial belief intended recipe standard irl fails communicate recipe pragmatic pedagogic able change belief successfully collaborate make meal addition computed solution games recipes modification pomdp value iteration table cirl equilibrium successfully cook correct recipe time whereas standard irl framework acting expert disregarding inferences succeed half often fig simple collaborative scenario possible objectives human wants soup robot 
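The noisily rational (Luce / Boltzmann) choice model and the pragmatic Bayesian belief update described above admit a compact sketch; `beta` plays the role of the rationality coefficient (beta going to infinity recovers a perfectly rational agent, beta = 0 an indifferent one), and all names are illustrative:

```python
import numpy as np

# P(a | theta) proportional to exp(beta * Q(a; theta)): the Boltzmann
# noisy-rationality model derived from the Luce choice rule discussed above.
def boltzmann_policy(q_values: np.ndarray, beta: float) -> np.ndarray:
    z = beta * q_values
    z = z - z.max()                    # stabilize the exponentials
    p = np.exp(z)
    return p / p.sum()

# Deterministic (pragmatic) belief update over objectives after observing a
# human action a_h: b'(theta) proportional to b(theta) * P(a_h | theta).
def update_belief(belief: np.ndarray, likelihoods: np.ndarray) -> np.ndarray:
    posterior = belief * likelihoods   # likelihoods[i] = P(a_h | theta_i)
    return posterior / posterior.sum()
```

Note that in the equilibrium described above the likelihood itself depends on the value function, which is why the belief transition and the Bellman backup have to be computed jointly rather than in separate estimation and control stages.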
initially believes goal salad even full pomdp formulation reasons literally actions using standard irl assuming behaves knew true objective fails infer correct objective conversely cirl equilibrium views incentivized choose pedagogic actions fix belief needed pragmatic interpretation wait action turn instead adding spinach would preferred pedagogic wanting salad indicates wants soup actions solutions pragmatic achieves value alignment completes recipe value alignment irl cirl boltzmann boltzmann boltzmann rational table comparison expected value equivalently probability success achieved cirl irl chefworld domain four recipes robot begins uniform belief set recipes ran algorithm across different models human behavior namely rational model model various values higher corresponds rational human human highly irrational cirl irl unsurprisingly perform rather poorly however human becomes less noisy cirl outperforms irl significant margin fact pragmaticpedagogic cirl strategy human performs comparably even substantially outperforms irl result human perfectly rational discussion presented analysis value alignment problem incorporates model human decision making theory mind framework cooperative inverse reinforcement learning cirl using analysis derive bellman backup allows solving dynamic game dynamic programming every instant backup rule based equilibrium robot human robot uncertain objective therefore incentivized learn human whereas human incentive help robot infer objective become helpful note type equilibrium recently studied cognitive science literature human teaching learning may unique general may exist two actions two corresponding interpretations leading different fixed points example could press blue red button could interpret asking pick blue red object although might feel intuitive pairing valid well thinks interpret pressing blue button asking red object certainly incentivized press blue wants red case policy consistently pick red object upon press blue button multiple conventions possible human beings tend naturally disambiguate converging salient equilibria focal points accounting phenomenon likely instrumental developing competent robots hand important point although computationally simpler general planning problems pomdps still reducing equilibrium computation solving modified pomdp falls short rendering problem tractable general however finding bellman backup open door efficient cirl solution methods leverage benefit extensive research practical algorithms approximate planning large pomdps references find results work promising two reasons first provide insight cirl games theoretically formulated also practically solved second demonstrate first time formal solutions value alignment depart ideal assumption rational human agent instead benefit modern studies human cognition predict developing efficient solution approaches incorporating realistic human models constitute important fruitful research directions value alignment acknowledgements work supported onr embedded humans muri afosr implicit communication center references amodei steinhardt man christiano concrete problems safety arxiv preprint dragan abbeel russell cooperative inverse reinforcement learning nips tversky kahneman judgment uncertainty heuristics biases science heider simmel experimental study apparent behavior american journal psychology meltzoff understanding intentions others intended acts dev psych baker tenenbaum modeling human plan recognition using bayesian theory mind plan activity intent recognition 
shafto goodman griffiths rational account pedagogical reasoning teaching learning examples cog psych
zamir bayesian games games incomplete information computational complexity theory techniques applications
luce individual choice behavior theoretical analysis john wiley sons
dragan srinivasa integrating human observer inferences robot motion planning autonomous robots
schelling strategy conflict harvard university press
mundhenk goldsmith lusena allender complexity markov decision process problems acm
silver veness monte carlo planning large pomdps nips
| 2 |
jun generating massive complex networks hyperbolic geometry faster practice moritz von looz mustafa safa karlsruhe institute technology kit germany email istanbul technical university turkey email ozdayi laue henning meyerhenke friedrich schiller university jena germany email karlsruhe institute technology kit germany email meyerhenke network models play important role algorithm development scaling studies network analysis realistic system benchmarks graph data sets commonly used benchmark model drawbacks concerning realism scaling behavior network properties complex network model gaining considerable popularity builds random hyperbolic graphs generated distributing points within disk hyperbolic plane adding edges points whose hyperbolic distance threshold present paper fast generation algorithm graphs experiments show new generator achieves speedup factors best previous implementation one billion edges generated one minute workstation furthermore present dynamic extension model gradual network change preserving step point position probabilities introduction relational data complex relationships often take form complex networks graphs heterogeneous often hierarchical structure low diameter high clustering degree distribution examples include social networks graph hyperlinks websites protein interaction networks infrastructure routing networks autonomous system level frequently found properties generative models complex network clustering ratio triangles triads community structure heavytailed degree distribution benchmarks developed evaluate system respect floating point operations represent requirements graph algorithms especially heterogeneous datasets complex networks benchmark addresses gap graph benchmark computing uses recursive matrix model generate synthetic networks benchmark instances graphs model efficiently computable suffer drawbacks terms realism example even fixed parameters clustering coefficient shrinks graph size number connected components increases problematic scaling studies interesting model without problem random hyperbolic graphs rhg family geometric graphs hyperbolic plane krioukov introduced graph model showed structure complex networks naturally develops properties hyperbolic geometry generate rhg one randomly samples node positions hyperbolic disk connects two nodes edge probability depending hyperbolic distance special case model edge two nodes added exactly distance threshold subset rhg sometimes called threshold random hyperbolic graphs theoretically could considered unitdisk graphs hyperbolic space resulting graphs show degree distribution adjustable exponent provably high clustering small diameter motivation outline contribution fast generator implementation scales large graph sizes provides sufficient realism necessary create meaningful graph benchmark instances acceptable time previous work able improve quadratic time complexity pairwise probing approach threshold rhgs still superlinear time complexity therefore provide faster generation algorithm paper threshold random hyperbolic graphs section using new spatial data structure key idea divide relevant part hyperbolic plane slabs use bound coordinates possible neighbors slab experiments section show network million vertices edges generated parallel implementation one minute yielding speedup factor best previous implementation graph nodes edges measurements suggest log time hyperbolic plane distance given hyperbolic law cosines complexity proof algorithm optimal expected linear time complexity suggested 
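For reference, the two formulas this model relies on (both garbled by the text extraction) can be written out in the standard curvature -1 formulation: the hyperbolic law of cosines for the distance between points $p=(\phi_p, r_p)$ and $q=(\phi_q, r_q)$ in polar coordinates, and the radial density with dispersion parameter $\alpha$:

$$\cosh \operatorname{dist}(p,q) \;=\; \cosh r_p \cosh r_q \;-\; \sinh r_p \sinh r_q \cos(\phi_p - \phi_q),$$

$$f(r) \;=\; \frac{\alpha \sinh(\alpha r)}{\cosh(\alpha R) - 1}, \qquad 0 \le r \le R,$$

with an edge between $p$ and $q$ in the threshold model exactly when $\operatorname{dist}(p,q) \le R$; for $\alpha \ge 1/2$ the resulting degree distribution follows a power law with exponent $\gamma = 2\alpha + 1$.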
theoretical paper present work provides fastest implementation date generator code publicly available network analysis toolkit networkit cosh dist cosh cosh sinh cos mentioned briefly section important special case edge added node pair exactly hyperbolic distance points threshold graph family sometimes called threshold random hyperbolic graphs hyperbolic graphs slightly confusingly random hyperbolic graphs consider hyperbolic graphs precise stick threshold random hyperbolic graphs avoid name proliferation many theoretical results special case related work generative models due growing interest complex networks numerous generators exist comprehensive overview would outside scope paper refer interested reader goldenberg survey none models suitable use cases mentioned recursive matrix model received particular attention hpc community due use benchmark rhg generation algorithms previous generators random hyperbolic graphs exist general special case aldecoa present generator general case quadratic time complexity calculating distances sampling edges node pairs von looz use polar quadtrees generate threshold rhgs time complexity log high probability recently von looz meyerhenke extended approach generate general rhgs time complexity bringmann propose geometric inhomogeneous random graphs generalization rhgs describe generation algorithm expected linear time complexity knowledge implementation algorithm available hyperbolic geometry hyperbolic space one three isotropic spaces two common euclidean space spherical space contrast flat euclidean geometry positively curved spherical geometry hyperbolic geometry negative curvature among interesting properties hyperbolic geometry shows exponential expansion space area euclidean circle grows quadratically circle radius area circle hyperbolic plane grows exponentially radius balanced trees number nodes certain distance root also grows exponentially said distance leading suggestion hierarchical complex networks structures might easily embeddable hyperbolic space indeed demonstrate connection hyperbolic geometry complex networks embedding autonomous system internet graph hyperbolic plane enabling locally greedy routing generative model krioukov introduced random hyperbolic graphs generate graph points first distributed randomly within disk radius hyperbolic plane probability density functions point distributions given polar coordinates angular coordinate distributed uniformly radial coordinate given sinh cosh algorithm main idea partition hyperbolic plane concentric slabs section use limit number necessary distance calculations edge creation algorithm point positions sampled sorted angular coordinates stored appropriate slab determined radial coordinates gather neighborhood point iterate slabs examine possible neighbors within since slab limits radial coordinates points contains use also bound angular coordinates possible neighbors slab thus reducing number comparisons running time parameter governs node dispersion determines exponent resulting degree distribution sampling point positions edges added node pair probability given depending hyperbolic distance parametrized temperature data structure let cmax set log ordered radial boundaries cmax define slab area enclosed point contained slab exactly since slabs partition hyperbolic disk resulting degree distribution follows power law exponent given two points polar coordinates log algorithm graph generation min figure graph hyperbolic geometry neighborhood neighbors bold blue vertex hyperbolic circle marked 
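A sketch of the ingredients just described, assuming the formulas above: inverse-CDF sampling of point positions, the hyperbolic distance, the angular bound behind getMinMaxPhi (using a slab's lower radial limit), and, for reference only, a naive quadratic threshold generator that the slab data structure is designed to beat. All names are illustrative:

```python
import math
import random

def sample_point(R: float, alpha: float):
    """Angle uniform in [0, 2*pi); radius by inverting the CDF
    F(r) = (cosh(alpha*r) - 1) / (cosh(alpha*R) - 1) of the density above."""
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.acosh(1.0 + random.random() * (math.cosh(alpha * R) - 1.0)) / alpha
    return (phi, r)

def hyperbolic_distance(p, q):
    """Hyperbolic law of cosines, clamped for numerical safety."""
    (phi1, r1), (phi2, r2) = p, q
    c = (math.cosh(r1) * math.cosh(r2)
         - math.sinh(r1) * math.sinh(r2) * math.cos(phi1 - phi2))
    return math.acosh(max(c, 1.0))

def max_angular_deviation(r_p: float, band_min_r: float, R: float) -> float:
    """Angular bound in the spirit of getMinMaxPhi: the largest |phi_p - phi_q|
    for which a point q with radius >= band_min_r can lie within distance R
    of p, from cos(dphi) >= (cosh r_p cosh r_b - cosh R)/(sinh r_p sinh r_b)."""
    if r_p <= 0.0 or band_min_r <= 0.0:
        return math.pi
    val = ((math.cosh(r_p) * math.cosh(band_min_r) - math.cosh(R))
           / (math.sinh(r_p) * math.sinh(band_min_r)))
    return math.pi if val <= -1.0 else (0.0 if val >= 1.0 else math.acos(val))

def naive_threshold_rhg(n: int, R: float, alpha: float):
    """Reference O(n^2) threshold generator; the slab structure exists
    precisely to avoid this all-pairs scan."""
    pts = [sample_point(R, alpha) for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if hyperbolic_distance(pts[i], pts[j]) <= R]
    return pts, edges
```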
blue choice radial boundaries important tuning parameter experimenting different divisions settled geometric sequence ratio relationship successive boundary values cmax derive value log log plog remaining values follow geometrically figure shows example graph hyperbolic plane together slab neighbors bold blue vertex within hyperbolic circle radius visualization marked blue area considering nodes possible neighbors algorithm needs examine nodes whose angular coordinate input number vertices average degree exponent output gettargetradius vertices cmax set log ordered radial coordinates cmax bmax set log empty sets vertex parallel draw draw density sinh cosh insert suitable end parallel sort points angular coordinates end vertex parallel band getminmaxphi vertex disth add end end end end return algorithm algorithm shows generation average degree exponent first radius hyperbolic disk calculated according desired graph size density line value fixed retaining degrees freedom model thus assume use binary search fixed desired find gives close approximation desired average degree note equation approximation might give wrong results extreme values implementation could easily adapted skip step accept commonly used parameter even accept directly increased usability accept average degree parameter default version gettargetradius function unchanged previous work given values approximation expected average degree given notation vertex positions bands settling disk boundary radial boundaries calculated line defined disk thus partitioned log slabs slab set stores vertices located area sets initially empty line vertex positions sampled randomly polar coordinates lines stored corresponding set vertex put set iff line within set vertices sorted respect angular coordinates lines generation algorithm outside hyperbolic disk case movement inverted node bounces boundary different probability densities center disk outer regions translated movement speed node less likely center thus needs spend less time traversing resulting higher speed implement movement two phases initialization step values assigned node according desired movement movement step node consists rotation radial movement rotation step straightforward addition angular coordinates rotated mod radial movement described algorithm visualization shown figure getminmaxphi neighbors given vertex whose hyperbolic distance let slab neighbor since hyperbolic law cosines conclude cosh cosh cosh sinh sinh cos cosh cosh cosh sinh sinh cos cosh cosh cosh cos sinh sinh cosh cosh cosh cos sinh sinh algorithm radial movement dynamic model input output rnew sinh asinh return gather neighborhood vertex iterate slabs compute slab far angular coordinate possible neighbor deviate line call vertices whose angular coordinates within bounds neighbor candidates since points sorted according angular coordinates quickly find leftmost rightmost neighbor candidate slab using binary search need check neighbor candidate line compute hyperbolic distance add edge distance lines since edges found ends need iterate slabs one direction choose outward implementation line process repeated every vertex line surprisingly running time algorithm dominated range queries lines experiments section suggest running time log complete algorithm seen empirical observation leave mathematical proof future work new node position would outside boundary origin movement reflected set theorem let probability density point positions given polar coordinates let move movement step node movement preserves 
distribution angular radial distributions move proof since distributions angular radial coordinates independent consider separately introduced radial coordinate sampled distribution density sinh cosh introduce random variables step algorithm denoted upper case letter equivalent additional random variable denotes radial coordinate variables defined sinh asinh let denote density functions variables sinh cosh asinh cosh cosh sinh sinh cosh cosh dynamic model model gradual change networks design implement dynamic version node movement deleting nodes inserting random positions suitable dynamic behavior modeling internet infrastructure sudden site failures additions change social networks happens gradually suitable node movement model needs consistent moving node network may change properties stay expectation since properties emerge node positions probability distribution node positions needs preserved implementation movement happens discrete time steps choose movement directed node moves certain direction time move direction except new position would graphs nodes edges roughly experimental running times fit complexity log running times faster generator appear grow steeply increasing edge count artifact logarithmic plot constant increase relatively larger compared smaller running time thus appears larger logarithmic drawing sinh impl impl impl impl impl impl theoretical fit theoretical fit theoretical fit figure movement step radial coordinates mapped interval sinh coordinate distribution uniform adding transforming coordinates back results correctly scaled movements distributions differ constant addition cosh every cosh steps radial movement reaches limit reflected causing multiplied average thus zero similar argument works rotational step rotational direction unchanged change coordinates balanced addition subtraction whenever interval left leading average zero terms change running time seconds experimental evaluation setup generation algorithm implemented parallelized openmp running time measurements made server ram intel xeon cores ghz hyperthreading enabled use threads memory allocations use malloc implementation intel threading building blocks library code included network analysis toolkit networkit compare performance generate graphs nodes average degrees algorithm presented work implementation von looz validate distribution generated graphs compare implementation implementation aldecoa generate graphs nodes combination parameters calculate several network analytic characteristics averaging runs dynamic model measure time required movement step compare distributions network analytic properties edges figure comparison running times generate networks vertices varying circles represent running times implementation diamonds running times implementation running times fitted equation seconds scaling behavior threads cores shown figure considering edge sampling alone shows strong scaling number physical cores speedup threads hyperthreading speedup increases combining edge lists later networkit graph data structure however requires coordination proves bottleneck parallel edge lists required final step omitted done example benchmark running time figure shows running times generate graphs nodes edges speedup previously fastest implementation increases graph size sparsity reaching distribution generated graphs average degree assortativity degeneracy clustering coefficient size diameter largest components generator total edge sampling speedup factor bader berry kahan murphy riedy willcock graph 
benchmark search version graph tech
chakrabarti zhan faloutsos recursive model graph mining proc siam intl conf data mining sdm orlando siam apr
kolda pinar plantenga seshadhri scalable generative graph model community structure siam scientific computing vol sep
krioukov papadopoulos kitsak vahdat hyperbolic geometry complex networks physical review vol sep online available http
bode fountoulakis probability hyperbolic random graph connected random structures algorithms appear preprint available http
gugelmann panagiotou peter random hyperbolic graphs degree sequence clustering extended abstract automata languages programming international colloquium icalp proceedings part ser lecture notes computer science czumaj mehlhorn pitts wattenhofer vol springer online available http
bonato survey models web graph combinatorial algorithmic aspects networking ser lecture notes computer science springer berlin heidelberg vol online available http
figure speedup curves threads machine physical cores marked vertical line hyperthreading averaged runs
one aldecoa shown plots appendix averaged runs network analytic properties show close match distributions two generation algorithms
von looz prutkin meyerhenke generating random hyperbolic graphs subquadratic time isaac proc int symp algorithms computation
dynamic model implementation allows updating graph without rebuilding scratch moving nodes updating existing graph still faster new static generation distribution generated graphs indistinguishable static model appendix
aldecoa orsini krioukov hyperbolic graph generator computer physics communications online available http
bringmann keusch lengler geometric inhomogeneous random graphs arxiv preprint
conclusions
staudt sazonovs meyerhenke networkit tool suite complex network analysis network science appear
provided fastest implementation far generate massive complex networks based threshold random hyperbolic graphs running time improvement particularly large graphs realistic densities also presented model extension cover gradual node movement proved consistency regarding probability densities vertex positions static dynamic model serve complex network generators reasonable realism fast generation times even massive networks
goldenberg zheng fienberg airoldi survey statistical network models foundations trends machine learning vol
anderson hyperbolic geometry ser springer undergraduate mathematics series berlin springer
papadopoulos krioukov sustaining internet hyperbolic mapping nature communications september online available http
von looz meyerhenke querying probabilistic neighborhoods spatial data sets efficiently arxiv preprint
acknowledgements work partially supported german research foundation dfg grant finca grant within priority programme algorithms big data
kiwi mitsche bound diameter random hyperbolic graphs proceedings twelfth workshop analytic algorithmics combinatorics analco siam jan online available http
references
newman networks introduction oxford university press
chakrabarti faloutsos graph mining laws generators algorithms acm computing surveys csur vol
appendix comparison distribution previous implementation
figure comparison degree assortativity degeneracy implementation left implementation right
degree assortativity describes whether vertices neighbors similar degree value near signifies subgraphs equal degree value structures turn generalization connected components result iteratively peeling away vertices degree assigning vertex core number innermost core contained degeneracy refers largest core number values averaged runs
figure comparison clustering coefficients size largest component diameter largest components implementation left implementation right values averaged runs
appendix consistency dynamic model
figure comparison degree assortativity degeneracy graphs nodes one movement step nodes moved sampled randomly distribution graphs node movement shown left node movement right values averaged runs
figure comparison clustering coefficients size largest component diameter largest components graphs nodes one movement step nodes moved sampled randomly distribution graphs node movement shown left node movement right values averaged runs
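The consistency figures above compare network properties before and after movement steps. One way to realize the distribution-preserving radial move proved correct earlier is to map r to y = cosh(alpha*r), which is uniformly distributed under the radial density, advance y by a constant, reflect at the interval ends, and map back. The paper states its movement algorithm via sinh/asinh; the CDF-based variant below is a sketch of the same idea, not the exact published transformation, and the step size is illustrative:

```python
import math

def radial_move(r: float, R: float, alpha: float, step: float) -> float:
    """One radial movement step preserving the sampling distribution:
    y = cosh(alpha*r) is uniform on [1, cosh(alpha*R)], so a constant shift
    with reflection at both ends keeps it uniform."""
    y_max = math.cosh(alpha * R)
    y = math.cosh(alpha * r) + step
    while y > y_max or y < 1.0:        # bounce off the boundary and the origin
        if y > y_max:
            y = 2.0 * y_max - y
        if y < 1.0:
            y = 2.0 - y
    return math.acosh(y) / alpha
```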
| 8 |
problem plane concerning distance constraints aug lin lee institute information science academia sinica nankang taipei taiwan herbert kero dtlee abstract drezner proposed problem plane two players called leader follower open facilities provide service customers competitive manner leader opens first facility follower opens second customer patronize facility closest ties broken favor first one thereby decides market share two facilities goal find best position leader facility market share maximized best algorithm problem log parametric search approach searches space market share values paper drezner also proposed general version centroid problem introducing minimal distance constraint follower facility allowed located within distance leader proposed log algorithm general version identifying points candidates optimal solution checking market share paper develop new parametric search approach searching candidate points present log algorithm general version thereby close gap two bounds keywords competitive facility euclidean plane parametric search introduction economist hotelling introduced first competitive location problem seminal paper since subject competitive facility location extensively studied researchers fields spatial economics social political sciences operations research spawned hundreds contributions literature interested reader referred following survey papers hakimi drezner individually proposed series competitive location problems framework framework briefly described follows customers market endowed research supported grants lin lee certain buying power two players called leader follower sequentially open facilities attract buying power customers first leader opens facilities follower opens another facilities customer patronize closest facility buying power ties broken favor leader ones thereby decides market share two players since players ask market share maximization two competitive facility location problems defined framework given leader locates facilities set points follower wants locate facilities order attract buying power called problem hand knowing follower react maximization strategy leader wants locate facilities order retain buying power competition called problem drezner first proposed study two competitive facility location problems euclidean plane since many related results obtained different values due page limit introduce previous results case problem drezner showed exists optimal solution arbitrarily close solved problem log time sweeping technique later lee obtained log lower bound problem thus proved optimality result problem drezner developed parametric search based approach searches space possible market share values along test procedure constructing solving linear program constraints thereby gave log algorithm improving test procedure via megiddo result solving linear programs hakimi reduced time complexity log drezner also proposed general setting framework introducing minimal distance constraint medianoid problem problem follower facility allowed located within distance leader augmented problems respectively called problem problem paper drezner showed medianoid problem also solved log time using nearly proof technique problem however problem argued hard generalize approach problem solve general version due change problem properties gave log algorithm identifying candidate points plane contain least one optimal solution performing medianoid computation far bound gap two centroid problems remains unclosed paper propose log algorithm centroid problem 
euclidean plane thereby close gap last decades instead searching market share values develop new approach based parametric search technique searching candidate points problem plane mentioned made possible making critical observation distribution optimal solutions problem given provides useful tool prune candidate points respect extend usage tool design key procedure prune candidates respect given vertical line rest paper organized follows section gives formal problem definitions describes previous results section make observation problem make use find local centroid given line result extended new pruning procedure respect given line section utilized parametric search approach problem finally section give concluding remarks notations preliminary results let set points euclidean plane representatives customers point assigned positive weight representing buying power simplify algorithm description assume points general position three points collinear two points share common let denote euclidean distance two points set points plane define suppose leader located facility shortened simplicity due minimal distance constraint mentioned point infeasible follower choice follower locates facility feasible point set customers patronizing instead defined total buying power largest market share follower capture denoted function max called weight loss given point problem find denotes feasible point maximizing weight loss contrast leader tries minimize weight loss facility finding point point problem find denotes point minimizing weight loss note two problems degenerate problems lin lee previous approaches subsection briefly review previous results problems derive basic properties essential approach let arbitrary line partitions euclidean plane two halfplanes point define close including open including two distinct points let denote perpendicular bisector line segment given arbitrary point first describe algorithm finding let arbitrary point point open line segment see implies fact shows moving toward diminish weight capture thereby follows lemma lemma exists point let circles centered radii respectively lemma finding reduced searching point maximizing since perpendicular bisector point tangent line circle searching equivalent finding tangent line partitions weight latter problem solved log time follows outside calculate two tangent lines sorting tangent lines according polar angles corresponding tangent points respect use angle sweeping technique check much weight partition theorem given point problem solved log time next describe algorithm problem let subset define set circles convex hull circles easy see following lemma let subset point outside positive number let intersection convex hulls lemma lemma let positive real number point problem plane proof consider first case definition intersects every subset let subsets since point feasible must exist point implying feasible point acquire buying power customers follows feasible point acquire buying power larger equal must exist subset lemma drezner argued set equivalent intersection smallest possible slightly strengthen argument let following lemma obtained lemma let smallest number null point proof let wop weight loss first show null wop suppose contrary null exists point lemma wop contradicts optimality moreover since null wop show point wop lemma hand wop since definition see thus lemma although hard compute find vertices solutions problem let set outer tangent lines pairs circles subset boundary formed segments lines arcs circles since intersection 
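To make the medianoid computation above concrete: an optimal follower position lies at distance exactly r from the leader x, so candidate positions reduce to directions theta, and a customer v with d_v = |v - x| > r/2 is captured exactly for theta in an open arc of half-width arccos(r / (2 d_v)) around the direction of v. This arc formula is derived here from the bisector condition behind the tangent-line reduction sketched above; the sweep over arc endpoints is the angle-sweeping technique. A sketch assuming Euclidean distances and ties broken in favor of the leader; all names are illustrative:

```python
import math

def medianoid_weight_loss(x, customers, weights, r):
    """Largest buying power the follower can capture against a leader at x,
    via a sweep over arc endpoints on the circle of directions. O(n log n)."""
    TWO_PI = 2.0 * math.pi
    events, current = [], 0.0
    for (vx, vy), w in zip(customers, weights):
        dx, dy = vx - x[0], vy - x[1]
        d = math.hypot(dx, dy)
        if d <= r / 2.0:               # no bisector can separate v from x
            continue
        phi = math.atan2(dy, dx) % TWO_PI
        half = math.acos(r / (2.0 * d))
        if dx > r / 2.0:               # the open arc covers direction theta = 0
            current += w
        events.append(((phi - half) % TWO_PI, 0, +w))   # arc opens
        events.append(((phi + half) % TWO_PI, -1, -w))  # arc closes first on ties
    best = current
    for _, _, delta in sorted(events):
        current += delta
        best = max(best, current)
    return best
```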
convex hulls vertices must fall within set intersection points lines circles one line one circle let denote three sets intersection points respectively lemma lemma exists obviously intersection points viewed candidates drezner thus gave algorithm evaluating weight loss candidate theorem theorem problem solved log time remark degenerates convex polygon given null drezner proved lin lee case equivalent intersection thus whether null determined constructing solving linear program constraints takes time megiddo result since lemma problem solved log time applying parametric search unfortunately hard generalize idea case motivating develop different approach local within line section analyze properties given point subsection derive procedure prunes candidate points respect applying procedure study restricted version centroid problem subsection leader choice limited given line obtain algorithm algorithm extended basis test procedure parametric search approach section pruning respect point given point angle let point polar angle respect define set angles maximizing see figure observed sufficiently small belong intersect definition implies angles form open angle interval length simplify terms let remaining section also let line passing parallel following lemma provides basis pruning lemma let arbitrary point angle point proof since definition bisectors distance less implies therefore derive following inequality assume polar angle measured counterclockwise positive problem plane fig black arcs represent intervals angles whereas open circles represent open ends intervals completes proof lemma tells given point angle points ignored finding weight losses less lemma also prove weight loss function convex along line plane shown lemma let two arbitrary distinct points given line point max proof suppose contradiction point since lemma exists angle included however since locate different sides follows outside lemma contradicts assumption thus lemma holds investigate distribution angles let minimal angle interval covering angles see figure angle span radians mentioned consists open angle interval length implies open interval moreover derive following lemma proof prove lemma showing let arbitrary point polar angle respect obviously angle satisfying open interval angle span equal since lin lee edge fig edge definition exists angle thus lemma thereby proves lemma call point satisfying lemma strong since discovery gives immediate solution problem note problem instances strong exist suppose point let edge denote wedge defined intersection two beginning ending angles respectively illustrated figure edge infinite region lying two extending including two halflines defined called boundaries counterclockwise ccw angle two boundaries denoted edge since edge edge emphasized edge computational byproduct strong words every point wedge therefore make following assumption restriction order avoid misuse edge assumption whenever edge mentioned point found strong either computation properties equivalently following essential lemma makes edge main tool note proof trivially derived lemma since definition belong open intervals lemma let arbitrary point point edge proof symmetry suppose divide position two cases problem plane consider case two assumptions ensure exists angle passes obviously angle satisfies definition must exist angle infinitely close belongs thus lemma case angle since lemma finally consider computation edge lemma given point edge computed log time proof theorem first compute ordered tangent lines log time 
performing angle sweeping around identify time open intervals angles consists sweeping around obtained time find strong checking problem solved algorithm terminated otherwise edge constructed time searching line although computing wedges used prune candidate points serve stable tool since wedges different points indefinite angle intervals spans however assumption makes work fine lines show use wedges compute local optimal point given line point point line let arbitrary line assumed ease discussion point compute edge make use pruning purposes defining direction respect since edge definition three categories directions according intersection edge upward intersection including downward intersection including sideward intersection edge sideward local optimal point since lemma otherwise either edge upward downward points opposite half pruned lemma shows computing wedges acts predictable tool pruning next list sets breakpoints local optimal point locates recall set outer tangent lines pairs circles define set intersection points lines set intersection points circles following lemmas breakpoints lin lee lemma let two distinct points exists least breakpoint segment proof let arbitrary angle subset located definition outside convex hull hand since assumption inside lemma thus segment intersects boundary since boundary consists segments lines arcs circles intersection point either thereby proves lemma lemma exists local optimal point also breakpoint proof let local optimal point point adjacent note local optimal point exists every point must weight loss local optimal lemma holds trivially exist lemma breakpoint thus lemma holds remark outer tangent lines parallel exceptional cases considering breakpoints line parallel either intersect coincide either case irrelevant finding local optimal points counted defining lemma breakpoints sorted decreasing order local optimal point found performing binary search using wedges obviously sorted sequence obtained log time since however order speed computations local optimal points multiple lines alternatively propose log preprocessing local optimal point given line computed time preprocessing simple point compute sequence consisting points sorted increasing order polar angles respect computation takes log time total besides outer tangent lines computed time show given line sorted sequences obtained sequences log time used replace sorted sequence process binary search two points let outer tangent line right line similarly let outer tangent line left see figure moreover let trl tll points intersect respectively partition sets consider corresponding independently symmetry discuss case problem plane fig outer tangent lines lemma compute sequences satisfy following conditions sequence length obtained log time breakpoints sequence sorted decreasing union breakpoints sequences form proof without loss generality suppose either strictly right note point corresponds exactly one outer tangent line thereby exactly one breakpoint trl correspondence easily done time therefore equivalently computing sequences points instead breakpoints following consider two cases relative position intersects zero one point intersects two points case let angle upward direction along see figure classify points polar angles respect let denote sequence points polar angles interval sorted ccw order similarly let sequence points polar angles sorted ccw order obviously together satisfy condition note points polar angles ignored since correspond outer tangent lines parallel general position assumption 
observe two distinct points trl strictly trl precedes thus ordering points implicitly describes ordering corresponding breakpoints decreasing similarly ordering implies ordering corresponding breakpoints decreasing follows satisfy condition condition length definition also since sequence points lin lee intersection two intersection points fig two subcases intersects sorted ccw order implicitly represented concatenations subsequences done log time searching foremost elements polar angles larger respectively case suppose two intersection points let respectively polar angles respect see figure assumption implies divide points four sequences polar angles respect consists points polar angles sorted ccw order follows four sequences satisfy conditions condition hold similar discussion however two distinct points observe trl strictly trl precedes similarly argument holds thus satisfy condition actually reverse sequences also obtained log time satisfying condition lemma searching equivalent searching sequences breakpoints computed efficiently obvious way besides also obtain symmetrical lemma constructing sequences following show perform binary search within sequences lemma log preprocessing given arbitrary line local optimal point computed time problem plane proof lemma searching done within divided lemma sets replaced sorted sequences breakpoints besides consists breakpoints computed arranged sorted sequence decreasing ycoordinates therefore construct sequences breakpoints length sorted decreasing searching sorted sequences done performing parametric search parallel binary searches introduced technique used similar algorithm uses different weighting scheme sorted sequence first obtain middle element associate weight equal number elements compute weighted defined middle elements median ment finally apply lemma point strong centroid course local optimal assumption holds edge computed edge sideward local optimal point directly found otherwise edge either upward downward thus breakpoints opposite half pruned lemma pruning makes portion sequences possesses half total breakpoints definition weighted median lose least quarter elements hence least breakpoints pruned repeating process find log iterations time complexity finding analyzed follows lemma constructing sorted sequences takes log time computing sorting also takes log time log iterations pruning process iteration middle elements weighted median obtained time weighted selection algorithm computation edge takes log time lemma finally pruning sequences done time summary searching requires log log log time remark lemma easy obtain intermediate result problem plane lemma exists centroid applying lemma lines local optimum among intersection points obtained time applying theorem intersection points local optimum among obtained log time thus find time nearly improvement log bound plane section study problem propose improved algorithm time complexity log algorithm efficient lin lee algorithm problem based completely different approach subsection extend algorithm lemma develop procedure allowing prune candidate points respect given vertical line subsection show compute log time based pruning procedure pruning respect vertical line let arbitrary vertical line plane call strictly left left plane one strictly right right plane sideward wedge point said rightward leftward intersects right left plane observe point edge rightward every point left plane pruned since lemma similarly edge leftward points right plane pruned although power wedges fully exerted way 
pruning via vertical lines sideward wedges superior directly via wedges due predictable pruning regions therefore subsection describe design procedure enables prune either left right plane given vertical line mentioned key point searching sideward wedges achieved carrying three conditional phases first phase try find proper breakpoints sideward wedges failed pick representative point second phase check wedge determine whether sideward wedges exist finally case nonexistence show functional alternative computed called pseudo wedge still allows prune left right plane following develop series lemmas demonstrate details three phases property given point possible direction edge corresponding satisfies following conditions upward downward rightward leftward proof edge upward definition beginning angle ending angle must satisfy include follows thus recall case edge downward proved symmetric way edge rightward see must contain thus similar arguments therefore counterclockwise covering angles must include angle case edge leftward symmetrically proved problem plane lemma let two points strictly angle symmetrically proof angle observe since strictly follows second claim also holds symmetric arguments lemma let arbitrary point edge either upward downward point edge edge direction edge proof symmetry prove edge upward edge also upward every strictly property fact edge upward means thus let point strictly lemma follows edge upward well following lemma exist two arbitrary points wedges downward upward respectively derive must strictly points sideward wedges even strong locate thus find sideward wedges specified downward upward wedges let lowermost breakpoint wedge downward uppermost breakpoint wedge upward gdu open segment ease discussion assume exist show resolve assumption later constructing bounded box strictly also following corollary definitions corollary exist breakpoints segment gdu breakpoint either strong edge sideward given first phase thus done checking whether exist breakpoints gdu picking exist supposing picked one strong sideward wedge found corollary used pruning notice two breakpoints one may question whether wedges direction different directions result inconsistent pruning results following lemma answers question positive lemma let two distinct points strictly none strong edge edge sideward either rightward leftward proof prove lemma contradiction symmetry suppose case edge rightward edge leftward case divided two subcases whether intersect lin lee fig intersects consider first intersect edge rightward property thus exists angle since strictly lemma furthermore since edge leftward see edge therefore lemma follows thus definition implies intersect contradicting subcase assumption intersects intersection must completely included either due assumption symmetry assume latter subcase using similar arguments find angle contradiction since since subcases hold lemma proved second phase deals case breakpoint exists determining wedge direction arbitrary inner point gdu begin several auxiliary lemmas lemma let two distinct points strictly exists least one breakpoint segment intersects intersects proof symmetry show correctness condition assumption exists angle let definition implies strictly see figure first claim intersects must exist angle problem plane definition since thus contradicts condition intersect thus claim holds intersects locates either inside outside since locates outside former case boundary intersects forms breakpoint thereby proves condition hand outside exists angle similar 
arguments show assumption must belong implies strictly since strictly mentioned intersection point inner therefore lemma holds lemma let line segment connecting two consecutive breakpoints two distinct points inner proof suppose contrary lemma exists least one breakpoint contradicts definition thus lemma holds lemma breakpoint two distinct points gdu wedge direction strong centroids proof suppose contradiction directions wedges different lemmas two possible cases edge downward edge either sideward upward edge sideward edge upward following show cases hold case edge downward property thus intersect hand whether edge sideward upward see intersect property since lemma status two points satisfies condition lemma least one breakpoint exists definitions breakpoint inner gdu thereby contradicts assumption therefore case hold case proof case symmetric case condition lemma applied similarly show existence least one breakpoint contradiction combining discussions prove wedges direction thereby completes proof lemma lin lee lemma enables pick arbitrary point gdu bisector point representative inner points gdu strong edge sideward second phase finishes sideward wedge found otherwise edge downward upward derive following invoke third phase lemma breakpoint edge sideward exist neither strong points sideward wedges proof lemma lemma holds points gdu without loss generality suppose edge downward points gdu lemma holds lemma consider arbitrary point first show strong suppose contrary really definition thus intersect hand intersect due downward edge property since lemma applying condition lemma shows least one breakpoint exists contradicts assumption strong must downward wedge lemma therefore lemma holds points satisfies lemma consists points downward upward wedges said obviously pruning strategy via sideward wedges could apply lines third phase overcomes obstacle constructing functional alternative sideward wedges called pseudo wedge either pruning respect still achievable start auxiliary lemmas lemma following statements hold max points gdu proof prove correctness statement contradiction suppose besides fact implies breakpoint exists gdu lemmas wedges points gdu direction either downward upward suppose downward case symmetry pick arbitrary point gdu say since edge downward intersect oppositely definition edge upward included strictly according condition lemma exists least one breakpoint contradiction therefore statement holds proof statement also done contradiction symmetry assume statement consider arbitrary point problem plane gdu lemma max suppose equality hold lemma least one breakpoint exists segment contradicting fact thus statement holds let max going define pseudo wedge either depending one smaller weight loss consider first case obtain following lemma exists one angle proof first show exists least subset locates upper boundary let point strictly arbitrarily close lemma hence case assumption follows edge lemma edge must downward property thus exists angle let since inside lemma oppositely definition outside convex hull implies topmost intersection point hence upper boundary possible locates leftmost rightmost point claimed angle obtained follows since boundary point exists line passing tangent let angle satisfying obviously thus let arbitrary angle satisfying conditions lemma apply line trimming region edge sideward wedge obtained let called pseudo wedge denote intersection edge deriving three facts edge upward edge observe either intersects one right left plane two circumstances said null sideward 
respectively pseudo wedge similar functionality wedges shown following corollary corollary point proof edge lemma directly holds lemma otherwise thus contains lemma thereby completes proof lin lee lemma found sideward points opposite respect pruned null becomes another kind strong meaning also immediate solution problem without confusion call conditional latter case hand considering reverse case also obtain angle pseudo wedge symmetric arguments either sideward opposite side pruned conditional thus third phase solves problem nonexistence sideward wedges recall three phases searching sideward wedges based existence guaranteed show constructing appropriate border lines guarantee existence searching border lines bounding box defined smallest rectangle encloses circles obviously point outside box satisfies must thus given vertical line intersecting box pruned trivially decided moreover let ttop tbtm two arbitrary horizontal lines strictly bounding box respectively obtain following lemma let arbitrary vertical line intersecting bounding box denote intersection points ttop tbtm respectively edge downward edge upward proof consider case edge described know fact let arbitrary angle observe contain circles implies therefore edge downward property similar arguments show edge upward thus lemma holds according lemma inserting ttop tbtm existence enforced vertical line intersecting bounding box besides obvious see insertion affect correctness lemmas developed far summarizing discussion whole picture desired pruning procedure described follows beginning perform preprocessing obtain bounding box add ttop tbtm given vertical line whether prune left right plane determined following steps intersect bounding box prune containing box compute problem plane find sideward wedge pseudo wedge via three forementioned phases terminate whenever strong conditional found breakpoints exist pick check breakpoint decide whether checking compute depending smaller weight loss prune right left plane according direction sideward wedge pseudo wedge correctness procedure follows developed lemmas vertical line intersecting bounding box trivially dealt step due property box intersects box lemma certainly found step three step correspond three searching phases sideward wedge found either breakpoint step corollary step lemma otherwise according lemma symmetric version pseudo wedge built step respectively finally step whether prune left right plane determined via sideward wedge pseudo wedge respectively lemma corollary time complexity procedure analyzed follows preprocessing computing bounding box trivially takes time step vertical line intersecting box identified dealt time finding step requires help algorithm developed although algorithm designed find local optimal point easily observe slightly modifying objective makes applicable purpose without changing time complexity thus step done time lemma step breakpoints found log time follows done lemma first list breakpoints sorted sequences length takes log time performing binary search find within sequence breakpoints log time step checking picked point done computing requires log time lemma compute pseudo wedge step angle satisfying lemma symmetrically computed log time sweeping technique lemma thus computed log time finally pruning decision step takes time summarizing steps require time total since invocation lemma needs additional log preprocessing following result lemma log preprocessing whether prune right left plane given vertical line determined time lin lee searching euclidean 
plane subsection come back problem recall lemma least one found three sets intersection points consist total points let denote set vertical lines passing intersection points definition exists vertical line local optimal point conceptually help lemma derived applying approach pick vertical line median determine lemma whether right left plane pruned discard lines pruned repeat two vertical lines left obviously costs much approach carried explicitly generating sorting lines however separately dealing three sets implicitly maintain sorted sequences lines apply approach let sets vertical lines passing intersection points respectively local optimal line vertical line local optimal point weight loss larger points local optimal lines similarly defined respectively adopt different techniques find local optimal lines three sets shown following lemmas lemma local optimal line found log time proof let definition intersection points vertical lines efficiently searching within vertical lines apply ingenious idea parametric search via parallel sorting algorithms proposed megiddo consider two arbitrary lines parallel let tgh intersection point lgh vertical line passing tgh suppose left plane lgh applying lemma lgh prunes right plane remained left plane hand left plane lgh pruned remained right plane therefore lgh treated comparison sense applying lemma lgh determines ordering remained also decides ordering intersection points undetermined local optimal line since pruning ensures local optimal line stays remained follows resolving comparisons process pruning vertical lines find reduced problem determining ordering intersection points lines say sorting intersection points resolving comparisons sorting process simultaneously maintain remained two vertical lines boundaries thus resolving comparisons one two boundaries must local optimal line know efficient way problem plane obtain ordering apply optimal sorting algorithm needs resolve log comparisons instead comparisons since resolving comparison takes time lemma sorting done time finding however megiddo observed multiple comparisons indirectly resolved batch simulating parallel sorting algorithms sequential way naturally provides scheme batching comparisons thereby outperform case applying let arbitrary parallel sorting algorithm runs log steps processors parallel merge sort using sort lines takes log parallel steps parallel step comparisons resolved select one median among supposed applying lemma prunes left plane comparison left ordering corresponding lines remained right plane directly known thus comparisons left indirectly resolved time otherwise right plane pruned comparisons right resolved time repeating process selecting medians pruning remaining elements log times comparisons resolved takes log time therefore going log parallel steps requires log log time determines ordering lines also computes local optimal line lemma local optimal line found log time proof deal set use ideas similar proofs lemmas order divide sorted sequences points given fixed circle point show intersection points grouped sequences length sorted increasing summarizing circles total sequences length maps sequence vertical lines sorted increasing finding local optimal line done performing sequences vertical lines via parallel binary searches details steps described follows first discuss way grouping intersection points fixed point represented subsequences symmetry considered similar lemma actually computing sequences points corresponding intersection points outer tangent line may 
intersect two one zero point let denote first second points respectively intersects along direction note intersects less two points null following consider sequence computation two cases relationship lin lee intersection two intersection points fig two subcases intersects case since coincides set tangent points easy see angular sorted sequence directly corresponds sorted sequence tangent points ccw order partitioned two consist points polar angles respect intervals respectively since sorted ccw order intersection points corresponding reverse sorted increasing required obviously length obtained log time case suppose without loss generality locates lower left quadrant respect let polar angle respect case divided two subcases whether intersects less two points consider first subcase intersect none one point see figure let angles inner tangent note two circles intersect one point intersect polar angle respect neither implicitly obtain two subsequences consisting points polar angles respectively observed sequence points listed corresponds sequence intersection points listed clockwise order moreover sequence listed ccw order symmetrically sequence points corresponds sequence ccw order sequence order four implicit sequences intersection points partitioned horizontal line passing problem plane center resulted sequences naturally sorted either increasing decreasing therefore implicitly obtain eight sorted sequences length replace appropriately partitioning log time consider intersects two points upper right see figure let angles tangent respectively implicitly partitioned three subsequences consists points polar angles respectively similar observations corresponds two sequences intersection points listed ccw order respectively corresponds two sequences listed ccw order respectively however sequence points corresponds sequences listed ccw order sequences also partitioned sequences sorted follows implicitly obtain twelve sorted sequences length replace log time according discussion two points divided sequences log time consists intersection points sorted increasing xcoordinates thus sorted sequences length log time correspond sorted sequences vertical lines perform parametric search parallel binary search sequences vertical lines similar techniques used lemma sequences middle element first obtained assigned weight equal sequence length time weighted median elements computed time applying lemma time least total elements pruned sequences taking another time therefore single iteration pruning requires time log iterations local optimal line found total log time thereby proves lemma lemma local optimal line found log time proof points thus obtained sorted according log time simply performing binary search lemma local optimal line easily found log iterations pruning require total time summary computation takes log time lemma holds definition found among computed log time lemmas respectively centroid computed local optimal point time lemma combining log preprocessing computing angular sorted sequence bounding box enclosing following theorem lin lee theorem problem solved log time concluding remarks paper revisited problem euclidean plane consideration minimal distance constraint facilities proposed log algorithm close bound gap problem unconstrained version starting critical observation medianoid solutions developed pruning tool indefinite region remained pruning made use via structured parametric search approach quite different previous approach considering distance constraint facilities various 
competitive facility location models theoretical interest practical importance however similar constraints rarely seen literature would good starting points introducing constraint facilities players problems maybe even facilities player references cole slowing sorting networks obtain faster sorting algorithms journal acm vol cole parallel merge sort siam journal computing vol dasci conditional location problems networks plane eiselt marianov eds foundations location analysis springer new york davydov kochetov plyasunov complexity centroid problem plane top vol drezner competitive location strategies two facilities regional science urban economics vol drezner eitan competitive location plane annals operations research vol eiselt laporte sequential location problems european journal operational research vol eiselt laporte thisse competitive location models framework bibliography transportation science vol eiselt marianov vladimir drezner tammy competitive location models laporte nickel saldanha gama eds location science springer international publishing hakimi locating new facilities competitive environment european journal operational research vol hakimi locations spatial interactions competitive locations games mirchandani francis eds discrete location theory wiley new york hansen thisse wendell equilibrium analysis voting competitive location problems mirchandani francis eds discrete location theory wiley new york problem plane hotelling stability competition economic journal vol lee geometric complexity location problems algorithmica vol megiddo applying parallel computation algorithms design serial algorithms journal acm vol megiddo algorithms linear programming related problems siam journal computing vol plastria static competitive facility location overview optimisation european journal operational research vol reiser linear selection algorithm sets elements weights information processing letters vol location model networks spatial economics vol
| 8 |
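Editor's illustration for the competitive facility location record above. The prune-and-search loop it describes, which picks the median of the remaining candidate vertical lines, asks the pruning lemma which half-plane may be discarded, and drops every candidate line on that side, can be sketched in a few lines of Python. This is a minimal sketch, not the paper's algorithm: decide_prune_side is a hypothetical stand-in for the record's pruning oracle (whatever its true cost after preprocessing), and a plain sort replaces linear-time median selection, so the constants and asymptotics differ from the bounds claimed in the record.

# Minimal prune-and-search sketch over candidate vertical lines.
# decide_prune_side(x) is a hypothetical stand-in for the record's
# oracle: given the vertical line X = x, it reports which half-plane
# ("left" or "right") can be pruned without losing a local optimum.

def prune_and_search(xs, decide_prune_side):
    """Narrow a pool of candidate x-coordinates down to at most two."""
    cand = sorted(xs)  # a linear-time median selection would be used instead
    while len(cand) > 2:
        mid = len(cand) // 2
        x = cand[mid]
        if decide_prune_side(x) == "left":
            cand = cand[mid:]      # optimum lies on or to the right of X = x
        else:
            cand = cand[:mid + 1]  # optimum lies on or to the left of X = x
    return cand

# Toy oracle: pretend the optimum sits near x = 3.7, so the side of the
# query line not containing 3.7 is the one that gets pruned.
oracle = lambda x: "left" if x < 3.7 else "right"
print(prune_and_search(range(10), oracle))  # -> [3, 4], bracketing 3.7

Each round keeps the median line itself (a local optimum may lie on it) and discards half of the remaining candidates, which is where the logarithmic number of oracle calls in the record comes from.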
hybrid fuel cells power long duration robot missions field environments jekanthan danielle daniel steven mechanical engineering department massachusetts institute technology massachusetts cambridge jekan dstrawse dubowsky department mechanical aerospace nuclear engineering rensselaer polytechnic institute jonsson engineering center troy new york mobile robots often needed long duration missions include search rescue sentry repair surveillance entertainment current power supply technology limit walking climbing robots many missions internal combustion engines high noise emit toxic exhaust rechargeable batteries low energy densities high rates theory fuel cells limitations particular proton exchange membrane pems provide high energy densities clean quiet however pem fuel cells found unreliable due performance degradation mitigated protecting fuel cell battery hybrid configuration using filtering electronics ensure fuel cell isolated electrical noise battery isolate power surges simulation results presented hoap humanoid robot suggests fuel cell powered hybrid power supply superior conventional batteries introduction mobile robots including walking robots needed perform long duration missions difficult dangerous tedious include search rescue repair entertainment sentry surveillance applications continuous operation robots lasting days weeks hours would ideal applications typical power demands field robots vary significantly mission often high peak power demands field systems often constraints mass volume noise current power supply technology key limiting factor long duration field robotic applications internal combustion engines provide high power long durations produce toxic exhaust noise strong thermal signatures making inappropriate many important applications current rechargeable batteries low energy densities high rates selfdischarge requiring systems stop recharge every hours making ineffective continuous long duration missions hence significant need power supply provide high total energy required long duration missions quiet clean figure left boston dynamics big dog supply robot right robonaut repair robot fuel cell power mobile robots fuel cells high energy sources power suggested robots promising alternative mobile source power potential overcome limitations current batteries internal combustion engines simple electrochemical devices convert chemical energy electricity figure unlike battery fuel cells require constant supply fuel oxidant produce electricity proton exchange membrane pem fuel cells particularly attractive robotics devices consist simple solid state components sandwiched together shown figure combine hydrogen fuel oxygen breathing air energy releasing reaction known produce electricity water demonstrated pem fuel cells reach higher operating efficiencies room temperature produce clean water exhaust iii challenges fuel cells robots pem fuel cells simple sound great theory three fundamental problems practical robotics applications problems storage hydrogen fuel reliability fuel cells low power hydrogen fuel due high energy content low density difficult store research developed simple innovative hydrogen storage technologies promise energy storage densities better best batteries today figure pem fuel cell consumes reactants hydrogen oxygen produce electricity water heat second pem fuel cells found unreliable studies pem fuel cells show delicate unreliable due degradation components resulting short lives premature failure however physical models experiments suggests pem 
fuel cells controlled operate within narrow operating made robust long lives years high operating efficiencies among factors known degrade fuel cells high operating voltages electrical noise discussed mobile field robots operating unstructured environments subject substantial variation without proper control result fuel cell degradation shortens lives solution problem discussed section third problem fuel cells high energy devices relatively low power problem robotics typical power requirements vary substantially mission rest periods short bursts peak power varying power demands known stress fuel cells resulting short lives solution use hybrid system mobile robots maintains fuel cell optimal operating conditions maximize life efficiency protecting external electrical load variations noises meeting peak power requirement using battery see figure fuel cell hybrid systems subjected meet rapid transient power demands large stationary applications robotics meet power surges however hybrid system designs considered effects fuel cell degradation figure proposed fuel hybrid power supply robots research research presented focused developing hybrid system design concept mobile robots energy densities exceed best battery technology hybrid system designed meet required peak power demands isolate fuel cell degrading stresses high low frequency noises generated conditioning circuits required battery management physical models used simulate expected conditions control systems developed demonstrate concept shown results vast improvement conventional batteries terms life efficiency energy density power density case study power humanoid walking robot hybrid fuel cell power supply humanoid walking robot figure developed fujitsu presented robot maximum rated power contains nickel metal hydride rechargeable battery pack default system contains servo actuators leg arm head one waist robot onboard computer equivalent pentium iii system vision system consisting ccd cameras onboard accelerometers gyroscope pressure sensors feet figure right fujitsu hoap robot left cyberbotics webotstm model simulation model cyberbotics webotstm used power demand calculations robot system consists three different subsystems power calculations namely system computer sensors power system system simulator model provides mechanical power output servo motors servo motors assumed electrical mechanical efficiency computer sensor system assumed always powered consume based specifications power demand profiles robot walking behavior shown figure scenarios alternative power sources compared default nickel metal hydride battery packs weigh fuel cell hybrid system fuel cell hybrid system consists fuel cell stack provides steady power source rechargeable lithium ion nanophosphate battery meets peak power demands fuel cells within stack operated constant operating voltage providing operating efficiency research degradation fuel cells based models experimental results shows increased operating voltages exponentially decreases life fuel cell operating fuel cell constant voltage less ensures providing sufficiently high operating efficiency fuel cell trickle charges battery idle times ensuring battery fully charged meet power peaks nanophosphatetm battery handles peak demands better handle deep battery discharges compared conventional lithium ion batteries ensuring battery nearly fully charged maximizes life battery hybrid system sized based specific power density meet maximum possible power requirements robot oscillation suppression circuit 
interfaces fuel cell power management system consisting power switching circuitry convertor interface circuit effectively extracts energy fuel cell transfers battery oscillation suppression circuits prevents voltage oscillation electrical circuits particularly convertors noticed fuel cell ensures fuel cell operates steady operating voltage without electrical load oscillations figure power demand robot walking vii hybrid system sizing based power demand profiles figure fuel cell provide constant steady source power power peaks handled battery steady supply power fuel cell stack weighing required peak required robot lithium ion nanophosphatetm battery required specific power cells another allocated mass power electronics items leaving lithium hydride fuel supply energy density viii power system comparison four power supply configurations compared including nickel metal hydride lithium ion battery system fuel cell system fuel cell hybrid system see table table power supply comparison humanoid robot power supply stack fuel energy system runmass mass density life time nimh battery year hours ion battery year hours fuel cell days hours fuel cell hybrid years hours nickel metal hydride lithium ion batteries lowest energy densities thus provide short requiring recharging system life batteries computed based expected lifetime multiplied hours direct fuel cell system longest runtime however life system based degradation models expected last days making option impractical fuel cell hybrid system offers good system life summary conclusions based results fuel cell hybrid system concept offers high energy density would meet required peak power demands battery key hybrid system concept effective control design fuel cell battery optimally sized minimize stress fuel cell enabling battery meet power demand peaks minimizing stresses fuel cell system operated high operating efficiencies references contribution world robotics technical report european robotics network asada roadmap robotics internet robotics technical report editor computing community research association rubio urquia dormida diagnosis performance degradation phenomena pem fuel cells international journal hydrogen energy thangavelautham dubowsky catalytic degradation fuel cell power supplies mobile field sensors fuel cells vol thangavelautham strawser dubowsky lithium hydride powered pem fuel cells small mobile robotic missions ieee international conference robots automation michel cyberbotics webotstm professional mobile robot simulation international journal advanced robotic systems vol kesner plante boston fabian dubowsky mobility power feasibility microbot team system extraterrestrial cave exploration proceedings ieee international conference robotics automation rome italy april joh direct methanol fuel cell system power humanoid robot journal power sources vol barbir pem fuel cells theory practice academic press
| 3 |
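Editor's illustration for the fuel-cell hybrid power-supply record above. Its sizing argument, a stack run at a constant operating point near the average demand while a battery absorbs the walking peaks and is trickle-charged during rest, is at bottom an energy-balance computation, sketched below. Every number here (stack power, battery capacity, fuel energy, the demand profile) is a hypothetical placeholder, since the record's actual wattages and masses were lost in extraction; the sketch shows the bookkeeping, not the paper's data.

# Energy-balance sketch for a fuel-cell/battery hybrid supply.
# All parameter values are invented placeholders.

FC_POWER_W     = 11.0    # steady stack output, sized just above average demand
BATTERY_WH     = 5.0     # battery capacity that must cover demand peaks
FUEL_ENERGY_WH = 500.0   # energy content of the stored hydrogen fuel

def runtime_hours(demand_w, dt_h=1.0 / 3600.0):
    """Walk a cyclic power-demand profile (one sample per dt_h hours).

    The stack runs at constant power; any surplus trickle-charges the
    battery, any deficit is drawn from it. Returns the runtime in hours,
    or None if some peak empties the battery (an undersized design).
    """
    fuel = FUEL_ENERGY_WH
    soc = BATTERY_WH                 # battery state of charge
    t = 0.0
    while fuel > 0.0:
        for p in demand_w:
            soc = min(BATTERY_WH, soc + (FC_POWER_W - p) * dt_h)
            if soc < 0.0:
                return None
            fuel -= FC_POWER_W * dt_h
            t += dt_h
            if fuel <= 0.0:
                break
    return t

# Rest phases at 8 W with short 20 W walking bursts (average 10 W).
profile = [8.0] * 50 + [20.0] * 10
print("runtime (h):", runtime_hours(profile))  # about 45 h with these numbers

Holding the stack at one operating point is exactly the design goal the record motivates: the battery, not the fuel cell, sees the load transients that are said to degrade PEM membranes.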
using hierarchy domain specific languages complex software systems design lugovsky vslougovski arxiv sep february abstract new design methodology introduced examples building domain specific languages hierarchy top scheme introduction nized true macros hide access host language situation looks like paradox one hand industry uses metaprogramming ideas tools easy imagine would suffer without hand industry want hear anything related metaprogramming want people inventing new programming languages plenty industry coders barely use one language managers believe without reason taught use industry prefers wheel express sort complexity form libraries static steady languages strange reason learning complicated libraries language barely fits problem domain needs preferred learning small new language specifically designed paper trying advocate metaprogramming approach major design methodology complex systems sounds like another one silver bullet invention many methodologies claiming solve possible problems mankind rup extreme programming etc need another one simply previous approaches succeed tied particular programming technologies mostly oop varieties definitely silver bullets metaprogramming programs write programs write programs complicated hackers technique applied real world problems exactly industry specialists think metaprogramming completely wrong notion metaprogramming known way reduce complexity significantly areas programs write programs accepted industry due enormous level complexity corresponding handwritten code regular expressions lexers parsers generators name code wizards templates popular integrated development environments also widely used help overall methodology recognition industry beloved widely buzzworded language java even rudimentary preprocessor programmers idea use templates utilize stl without understanding true source power even enlightened world lisp programming misunderstanding surprisingly wide almost lisp dialects scheme implementations problems macros many people using even current scheme standard contains hygienic macros hardly methodology different strongly encourages domain specific languages tricky use possible programming approach describe based metaprogramming techniques requires gies invent impossible ones called core language top build hierarchy domain specific domain specific guages core language possess following properties guages providing outline proposed methodology problem domain best expressed using language mathematical programming natural specially designed cases one entity language every entity problem domain example problem domain recognition syntax constructions characters stream domain specific language contain characters characters sets primary entity automata constructions expressing syntax enough regular expressions language designed hard believe somebody ever invent anything better purpose optimal dsl problem domain already specified algebra even design dsl algebra galvanised underlying computational semantics way sql born problem domain graphics linear algebra stereometry used languages data formats dedicated contain subsets formal theories stated object software architecture minimise semantic distance system specification core language problem convenient language best fits already exist specialized languages common problems none available answer trivial implement implementation true macros must access complete programming language preferably host language different one inside macro definitions macros real programs anything programs written 
host language macros producing code host language form text directly abstract syntax tree true runtime eval programs generated runtime evaluated different language host language better one real programming language equivalent expressive power general purpose languages simplicity extensible core contain unnecessary complexity later added user really needs comprehensive easy use data types system type system well suited expressing possible abstract syntax trees language fits requirement top core language build functionality needed implement programming languages lexing parsing intermediate languages fit well computational models different model core language core language imperative eager functional need graph reduction engine implement lazy functional dsls term unification engine implement logical languages stack machine lower levels core language enriched swiss army knife programming languages development becomes major tool project new methodology development process must fit following chain divide problem possibly using object oriented design techniques whatever fits better formalize scheme example good example practical core language scheme addition common macros uses ast composition natural good enough represent possible ast example xml naturally represented sxml provides true runtime eval hosting language compile time exist practical efficient scheme implementations provide performance acceptable tasks good ffi thus integration legacy libraries implement domain specific language formalization using let start adding functionality core language dsl described scheme first semantics need parsing team members solve problem using best possible fond parentheses implelanguage ment many complicated syntaxes way project grow tree natural way functional programming lanarchy domain specific languages guage implement set parsing combinaguage subset superset another tors building recursive descendant parsers guage hierarchy may mostly fixed limit tion several languages amount lalr automata generators coding new language already deep comprehensive hierarchy quite small development team working within methodology consist least one specialist maintains hierarchy architect formalizes problems number coders specialize particular problem domains even may programmers know well domains operate terms close possible native problem domain terminology example html designer happy operating tags templates jsp custom tags popular mathematician find language modelled standard mathematical notation intuitive reason wolfram mathematica popular among game script writer operate language expressing characters properties action rules stating programming list continued infinitely course use metaprogramming wherever possible parsers functions consume list tokens characters input return result following form result anyresult fail reason input access parsing result provide following macros success caar fail sure result use following macro extract otherwise return fail message result cdar last definition looks surprisingly comin case access rest pact thanks pselect macro stream parsing pass stage power metaprogramming becomes rest obvious cdr reference show macros could also implemented definition choice combinator functions macros available context macro definitions functions let almost parsers fail success end input following safeguard macro extremely useful parser fail empty nested version obvious por pselect por skip rest combinators game becomes interesting definitions show gained handy macro nests example define 
floating point quence applications form number recognizer use definition define pselect por let car pmany pdigit cdr pcharx pselect pmany pdigit sequence parsing combinator two guments declared follows looks like bnf still schemish already domain specific language top scheme conform let perfectionist requirement however use success still perfect parsing engine imple let rest ment intermediate regular expressions lan success guage macro omitting definitions cons cons result show previous recognizer implemented append new way result define result regexp rest cons list fail car pdigit pdigit immediately turned sequence parsing combinator arbitrary new domain specific language number arguments used many ways example build pselect simple infix constants defparsers letrec epr let body regexp num lst aprs epr regexp body scm psym epr list body scm psym epr list body scm psym epr list body scm psym epr list body car result epr languages computational model close model scheme eager dynamically typed functional languages imperative features possible languages providing small intermediate dsls simulate alternative computational models need lowlevel power possible produce intermediate code language example bigloo scheme implementation allows include code compiling backend implementing complicated runtime models easy produce intermediate dsl top scheme use scheme forth metaprogramming powers alternatives make picture complete necesand wherever want calculate sary mention possible choices numerical constant compilation time core language popular programming may use macro language could become core language relatively easily turingthis language look like scheme complete macro system unfortunately featurany even ing language different host lanplementing pascal rlisp language guage one stage preprocessing poson top scheme using regexp sible lacks good type system could macro describe lexer parser simulated top existing lowlevel feaand compile resulting code tures exist implementations recursive descendant parsing combinators underlying scheme boost spirit library implementation functional programming pasqualish boost lambda even lisp compilers top template system runtime function fac evaluation available different ways using begin pluggable scripting languages using interpreter interesting fac approach described else another choice forth powerful end metalanguage core language remains lowlevel unsafe forth often parenthesis frighten choice available embedded systems programmers much even pascal limited resources worth mentioning modern experimengrammers use scheme code samples demonstrate tal extensions strictly typed functional lanof techniques available approach guages template haskell metaocaml complete implementation conform well loaded possible produce core language requirements objective caml also provides metaprogramming ming using sophisticated preprocessing engine ocaml quite good conclusion implementing interpreters using based technique examples found idea metaprogramming something esoteric metaprogramming used widely doubt common lisp would also commercial programmers rebe good platform since shares almost alize methodology proposed paall features scheme exception per attempt uncovering hidsimplicity killing feature common lisp den power metaprogramming techniques advanced runtime compilation available major implementations cmu scheme example presented descendant sbcl good examples part working project already defmacro guaranteed working proved supremacy approach subthe implementations 
available great set domain specific languages hierarchy advantage scheme designed www data acquiring project relatively small projects tcl would shown fig good choice computational model subject discussed requires future reis based rewrites primary data search practical approbation whose final tures strings text result may completely formalized matheders extremely powerful metaprogramming matically strict methodology description tool javascript language also based core language best fit methodthe rewrites semantics could used ology references diomidis spinellis reliable software implementation using domain specific languages kafka editors proceedings esrel tenth european conference safety reliability pages rotterdam september esra vdi tum balkema draft http boost project http bigloo practical scheme implementation http lugovsky dslengine project home http graham language http graham python paradox http lugovsky publications list http cmu common lisp http steel bank common lisp http tcl programming language resource http metaocaml project home http sheard jones template metaprogramming haskell http tempo project home http cint project home http core language parsing combinators stack machine lexer graph machine unification machine templates language parser generator regular expressins data aquision regexps rule engine sql templates figure sample dsls hierarchy subset web crawler project
| 2 |
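Editor's illustration for the DSL-hierarchy record above. Its Scheme parsing-combinator fragments (the success/fail result convention, the por choice combinator, pmany repetition, and the floating-point-number recognizer) are badly scrambled by two-column extraction, so here is a rough Python transliteration of the same idea. The names and the result tuples below are invented for the sketch; the original builds these combinators as Scheme macros, so composition happens at macro-expansion time rather than through run-time closures as here.

# Parser-combinator sketch: a parser maps an input list to
# ("ok", value, rest) on success or ("fail", reason, input) on failure.

def p_pred(pred):
    def parse(inp):
        if inp and pred(inp[0]):
            return ("ok", inp[0], inp[1:])
        return ("fail", "unexpected input", inp)
    return parse

def p_char(c):                 # recognise one fixed character
    return p_pred(lambda x: x == c)

p_digit = p_pred(str.isdigit)  # recognise one decimal digit

def p_or(*parsers):            # ordered choice, cf. the record's por
    def parse(inp):
        for p in parsers:
            r = p(inp)
            if r[0] == "ok":
                return r
        return ("fail", "no alternative matched", inp)
    return parse

def p_and(*parsers):           # sequencing: collect one value per sub-parser
    def parse(inp):
        vals, rest = [], inp
        for p in parsers:
            r = p(rest)
            if r[0] != "ok":
                return r
            vals.append(r[1])
            rest = r[2]
        return ("ok", vals, rest)
    return parse

def p_many(p):                 # zero-or-more repetition, cf. pmany
    def parse(inp):
        vals, rest = [], inp
        while True:
            r = p(rest)
            if r[0] != "ok":
                return ("ok", vals, rest)
            vals.append(r[1])
            rest = r[2]
    return parse

# Floating-point recognizer in the spirit of the record: digits '.' digits.
p_float = p_and(p_many(p_digit), p_char('.'), p_many(p_digit))
print(p_float(list("3.14x")))  # -> ('ok', [['3'], '.', ['1', '4']], ['x'])

p_or is unused by p_float but is included because the record leans on the choice combinator; a fuller float grammar would use it for optional parts such as signs and exponents.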
acd term rewriting arxiv aug gregory duck peter stuckey sebastian brand nicta victoria laboratory department computer science software engineering university melbourne australia abstract paper introduce associative commutative distributive term rewriting acdtr rewriting language rewriting logical formulae acdtr extends term rewriting adding distribution conjunction operators conjunction vital expressive term rewriting systems since allows require multiple conditions hold term rewriting rule used acdtr uses notion conjunctive context conjunction constraints must hold context term enable programmer write expressive targeted rewriting rules acdtr seen general logic programming language extends constraint handling rules term rewriting paper define semantics acdtr describe prototype implementation introduction term rewriting powerful instrument specify computational processes basis functional languages used define semantics languages applied automated theorem proving name application areas one difficulty faced users term rewriting systems term rewrite rules local term rewritten occurs single place means order write precise rewrite rules need gather relevant information single place example imagine wish program overloaded ordering relation integers variables real variables pair variables order write type variable must encoded int int intleq int int real real realleq real real pair pair standard language type information variables information would kept separate looked required operator precedences used throughout paper binds tighter operators bind tighter term rewriting systems constraint handling rules chrs associative commutative term rewriting allow look managed straightforwardly single conjunction example term rewriting example could expressed int int int int intleq real real real real realleq pair pair pair pair rule replaces appropriate specialised version conjunction constraints associativity commutativity used easily collect required type information conjunction one difficulty remains term rewriting chrs look restricted single large conjunction example given term int int pair pair rewriting could rewrite since types appear different level order push type information inside disjunction need distribute conjunction disjunction simply adding distribution rules like solve problem rule creates two copies term increases size term rewritten adding rule counter effect results rewriting system conjunctive context address size explosion problem due distributivity rewrite rules similar way commutativity dealt handling distributivity language level restrict dealing expanding distributivity conjunction operator account idempotence thus concerned distribution rules form means conjunction distributive function presence redundant copy use idempotence simplify rhs derive let introduce conjunctive context term use rewrite rules informally consider term conjunction modulo idempotence would result exhaustive application rule superterm conjunctive context mean conjunction example conjunctive context boxed occurrence term allow rewrite rule refer conjunctive context rule head use following notation facility provides without undesirable effects rule term size example express equality used anywhere scope viewing equality conjunctive context using rule term example results without dissolving disjunction motivation applications constraint model simplification concrete motivation behind associative commutative distributive term rewriting acdtr constraint model mapping part project key aim mapping solver independent 
models efficient solver dependent models see acdtr basis writing mappings since models flat conjunctions constraints need beyond term rewriting chrs example consider following simple constraint model inspired social golfers problem two groups playing week overlap players maxoverlap aim maximise number times overlap two groups less words minimise number times two players play together group constraint maxoverlap maximise holds maxoverlap consider following acdtr program optimising constraint model maxoverlap maxoverlap true holds true holds false first rule removes redundant maxoverlap constraints next two rules implement partial evaluation holds auxiliary function coerces boolean integer representing constraint model giant term optimise model applying acdtr program example consider trivial case one week two groups model becomes maxoverlap maximise holds maxoverlap subterm holds maxoverlap simplifies using conjunctive context maxoverlap clear pure chrs insufficient constraint model mapping least two reasons namely constraint model example typically flattened conjunction rules rewrite functions rules rewriting function holds outside scope chrs rewrite constraints global definitions seen conjunctive context matching provides natural mechanism making global information available constraint model structured data constraint definitions typically global top level access data use defined constraint local type information example another example partial evaluation example solver independent modelling language support arrays take model array given values could represented term array deeper inside model accesses array occur constraint lookup following rules expand array lookup array array lookup index list element array index list element list element list element referring respective array lookup expression via conjunctive context allows ignore direct context lookup concrete constraint expression occurs propagation rules processing logical formula often useful able specify new formula derived existing formula without consuming basic term rewriting obvious rule causes trivial issue recognised chrs provide support inference propagation rules account fact use rules form express circumstances example following classic chr leq program reimplemented acd term rewriting omit basic rules logical connectives leq true leq leq leq leq true leq leq leq reflexivity antisymmetry idempotence transitivity rules almost chr version exception second third rule antisymmetry idempotence generalise original using conjunctive context matching propagation rules also used adding redundant information model mapping rest paper organised follows section covers standard syntax notation term rewriting section defines declarative operational semantics acdtr section describes prototype implementation acdtr part project section compares acdtr related languages finally section conclude preliminaries section briefly introduce notation terminology used paper much borrowed term rewriting use represent set terms constructed set function symbols set variables assumed countably infinite use represent set function symbols arity position string sequence integers uniquely determines subterm term represents empty string define function returns subterm position similarly define function replaces subterm position term define set pos represent set positions subterms identity pair usually written given set identities define set identities closed axioms equational logic symmetry transitivity etc define congruence class set terms equal respect 
finally define function vars return set variables syntax semantics syntax acdtr closely resembles chrs three types rules following form simplification propagation simpagation rule identifier head conjunctive context guard body arbitrary terms rule identifier assumed uniquely determine rule program set rules assume vars vars vars vars vars simpagation rules rule identifier omitted true guard omitted present declarative semantics acdtr based equational logic first define set operators acdtr treats specially definition operators define set associate commutative operators set must satisfy examples assume also treat operator distributive explained acdtr supports simple form guards definition guards guard term denote set true guards guard said hold iff assume true false define declarative semantics acdtr order employ special binary operator explicitly attach conjunctive context term intuitively meaning equivalent provided true otherwise meaning unconstrained boolean expressions useful interpret conjunction therefore identity becomes equivalent advantage distinguishing forced extend definition arbitrary functions denote following set identities true functions definition declarative semantics acdtr declarative semantics acdtr program represented multiset rules given function defined follows guard function guard returns guard rule function maps acdtr rules identities head body terms variables existentially note new identity possible binding guard holds propagation rule equivalent simplification rule introduces head conjunction body rhs analogous propagation rules chrs simpagation rule equivalent simplification rule provided conjunctive context satisfied rules definition contain identities distributing conjunctive context terms operator set also contains identities properties operators example consider following acdtr rule corresponding identity identity using rules show follows operational semantics section describe operational semantics acdtr based theoretical operational semantics chrs includes support identifiers propagation histories conjunctive context matching simpagation rules variables implicitly universally quantified universal quantifiers appear outside existential ones propagation history chr concept propagation history prevents trivial propagation rules needs generalised arbitrary terms acdtr propagation history essentially record propagation rule applications checked ensure propagation rule applied twice sub term chrs constraint associated unique identifier multiple copies constraint appear chr store copy assigned different identifier extend notion identifiers arbitrary terms definition identifiers identifier integer associated sub term use notation indicate term associated identifier term annotated subterms associated identifier also define function ids return set identifiers term return version example annotated term ids term identifiers considered separate term could precise separating two explicitly maintain map pos identifiers use approach space reasons extend overload standard operations terms section annotated terms obvious manner example subterm relation annotated terms returns annotated term position exception elements congruence class formed relation assume satisfies following constraints neglected mention identifiers operators identifiers ignored later leave unconstrained propagation history set entries defined follows definition entries propagation history entry form propagation rule identifier string identifiers define function entry return propagation history entry rule 
annotated term follows entry entry entry entry entry entry entry entry otherwise definition means propagation history entries unaffected associativity effected commutativity example consider annotated term although belong different propagation history entries entry entry sub term rewritten another new term assigned set new unique identifiers define auxiliary function annotate map set identifiers term annotated term ids conditions ensure identifiers new unique rule applied propagation history must updated accordingly reflect terms copied matching example rule essentially clones term matching identifiers however cloned term cloned expect copies inherit propagation history original likewise terms merged merges two instances term matching case propagation histories copies also merged achieve duplicate entries propagation history occurrence variable body also appeared head definition updating history define function update terms annotated terms propagation histories minimal propagation history satisfying following conditions pos set variables pos define identifier renaming identical annotated terms example consider rewriting term propagation history using rule resulting term new propagation history conjunctive context according declarative semantics term conjunctive context represented operationally never explicitly build term containing clause instead use following function compute conjunctive context subterm demand definition conjunctive context given annotated term position pos define function return conjunctive context position follows true states transitions operational semantics defined set transitions execution states definition execution states execution state tuple form term goal propagation history set variables appearing initial goal set identifiers also define initial final states follows definition initial final states given initial goal program initial state hga vars ids annotate final state state rules applicable goal define operational semantics acdtr follows definition operational semantics simplify exists renamed rule exists matching substitution term pos annotate ids update propagate exists renamed rule exists matching substitution term pos entry annotate update entry ids simpagate exists renamed rule exists matching substitution term leq leq leq leq leq leq leq leq leq leq leq leq leq leq leq leq fig example derivation leq program pos annotate update ids example consider leq program example goal leq leq figure shows one possible derivation goal final state representing alse brevity omit fields represent identifiers subscripts also substitute transitivity state soundness result acdtr theorem soundness respect program means algebras satisfy equivalent assignment fresh variables implementation implemented prototype version acdtr part mapping language project called cadmium section give overview implementation details particular focus implementation conjunctive context matching main contribution paper cadmium constructs normalised terms bottom normalised term one reduced application rule given goal first must recursively normalise say attempt find rule applied standard execution algorithm used many trss implementations approach normalising terms bottom complicated consideration conjunctive context matching conjunctive context current term appears higher overall goal term thus conjunctive context must passed top yet normalising bottom means guarantee conjunctive context normalised example consider following acdtr program uses conjunctive context matching var nonvar one one alse 
consider goal one expect normalised alse assume one selected normalisation first conjunctive context one subterm one rule applicable one reduced next subterm one reduced second rule fire resulting new term conjunctive context first term one changed expect rewritten number however one already considered normalisation current cadmium prototype solves problem terms conjunctive context changes example conjunctive context one changes term one renormalised one first rule general execution algorithm cadmium shown figure function normalise takes term substitution conjunctive context boolean value keeps track conjunctive context current subterm changed true assume substitution maps variables normalised terms initial goal assume empty otherwise executing body rule matching substitution operationally normalise splits three cases depending variable conjunctive context changed true longer guaranteed normalised case return result renormalising respect otherwise alse simply return must already normalised conjunction repeatedly call normalise conjunct added conjunctive context repeated fixed point normalisation result either conjunct changing reached return result apply rule discuss fixed point calculation accounts case conjunctive context term changes shown example otherwise term form construct new term normalising argument finally return result apply rule applied function call apply rule attempt apply rule normalised term respect conjunctive context matching rule found normalise var return normalise alse else return else normalise true normalise true return apply rule else normalise normalise return apply rule fig pseudo code cadmium execution algorithm result normalise alse returned renamed rule body matching substitution otherwise simply returned related work acdtr closely related trs chrs section compare three languages term rewriting systems problem dealing associative commutative operators trs well studied popular solution perform rewriting modulo permutation operators although complicates matching algorithm problem trivial continually rewriting respect commutativity solved acdtr subsumes actrs associative commutative trs introduced distributivity via simpagation rules added concepts identifiers propagation rules given actrs program map equivalent acdtr program interpreting actrs rule acdtr rule state theorem relating actrs acdtr theorem let actrs program ground term iff hta ids hsa annotate term chrs acdtr deliberately designed extension chrs several chr concepts propagation rules adapted differences chrs acdtr main difference acdtr underlying solver acdtr constraint programming language however possible encode solvers directly rules simple leq solver example another important difference chrs based predicate logic exists distinction predicate symbols names constraints functions used construct terms acdtr based equational logic terms hence distinction predicates functions predicate boolean function overcome assume existence set pred contains set function symbols boolean functions assume pred mapping chr program acdtr program simply true however assume program restricted follows rules guards apart implicit equality guards constraint true initial goal also restricted must form form pred pos pred conditions disallow predicate symbols appearing arguments chr constraints theorem let chr program initial goal satisfying conditions true true vars theoretical operational semantics chrs iff hga ids hsa acdtr term identifiers believe theorem could extended include chr programs extend underlying solver 
provided rules handling tell constraints added acdtr program example combine rules rational tree unification leq program example get program equivalent traditional leq program chrs acdtr generalises chrs allowing operators besides conjunction inside head body rules one extension chrs studied namely allows disjunction body unlike acdtr one slight difference syntax chrs use represent conjunction whereas acdtr uses manipulates disjunction syntactically typically finds solutions using backtracking search one notable implementation operational semantics described tree rewriting system limited form conjunctive context matching used similar used acdtr based knowledge conjunction distributes disjunction acdtr generalises distributing functions future work conclusions presented powerful new programming language acdtr naturally extends term rewriting chrs main contribution ability match rule conjunctive context sub term taking advantage distributive property conjunction possible functions shown natural way expressing problems building distributive property matching algorithm avoid issues arise naively implementing distribution rewrite rules intend acdtr become theoretical basis cadmium constraint mapping language part project work acdtr cadmium ongoing wide scope future work confluence termination issues references abdennadher operational semantics confluence constraint propagation rules gert smolka editor proceedings third international conference principles practice constraint programming lncs pages abdennadher flexible query language international conference flexible query answering systems number lncs pages roskilde denmark baader nipkow term rewriting cambridge univ press duck stuckey garcia banda holzbaur refined operational semantics constraint handling rules demoen lifschitz editors proceedings international conference logic programming lncs pages september theory practice constraint handling rules journal logic programming menezes vitorino aurelio high performance execution engine second workshop constraint handling rules sitges spain stuckey garcia banda maher marriott slaney somogyi wallace walsh project mapping solver independent models efficient solutions gabrielli gupta editors proceedings international conference logic programming number lncs pages examples motivating examples example conjunctive normal form one roles mapping models convert model written expressive language restricted language easy solve many standard approaches solving propositional formulae require formulae conjunctive normal form cnf disjunction distributive used establish cnf direct way using oriented rule cnf conversion based rule exponentially increase size formula undesirable circumstance means practice cnf conversions preferred replace subformulae new propositional atoms increases formula size linearly let formulate approach rewrite rules keep example simple assume subformula occurs positive context example preprocessing negation normal form replace new atom defined logical implication rewrite rule form unit resolution unit subsumption formalised rewrite rules two versions one using conjunctive context regular one conj context regular true false false furthermore assume rules eliminating logical constants true false conjunctions disjunctions obvious way let contrast two rule sets formula following terminating rewrite history conj context regular true true true obtain simple conjunct using regular rule format rule expressing binary resolution follows would required however rule undesirable would create 
arbitrary binary resolvents increasing formula size moreover superfluous atom remains formula example type remapping one main model mappings interested expressing type variable changed high level type easy modelling low level type easy solve prime example mapping set variable ranging finite subsets fixed set array variables indexed variable example use concrete modelling syntax indicates variable type types interested integers range set set ranging elements array array indexed set elements type use orall sum looping constructs iterate sets expressed acdtr follows set array map map array card sum array array array orall array array array orall array orall card card maxoverlap card typec vsubs card cap cup emptyset card cupl cupr capl capr eql eqr leql leqr maxo constructor adds local conjunctive context arbitrary term like last rules bar move context outwards nearest predicate scope last rule defines maxoverlap predicate used introduce new variables type constraints upon example consider following derivation set set maxoverlap set set card array map set card array map set card array map array map card array map array map card array map array map card array orall array map array map card array orall array map array map card array orall final goal flat conjunction constraints types similarly translated conjunction constraints sent finite domain solver unrolling orall replacing arrays sequences variables example rational tree unification directly express rational tree unification algorithm acd term rewriting system alse split ail split rule must defined constructor fail rule pair different constructors remaining rules var true var nonvar var nonvar size size var var lip tsubs vsubs size size term terms number symbols syntactic identity even though goals single conjunction constraints acd used succinctly expressing vsubs rule replaces one variable another position colmerauer prolog infinite trees logic programming apic studies data processing academic press following derivation illustrates unification process action underlined part show matching elements lip true expanded examples purpose section show example derivations operational semantics acdtr rather descriptions allow shorthand namely identifiers conjunctive context section explain parts derivation example detail initial goal corresponds initial state initial state quadruple contained annotated version goal empty propagation history set variables goal set used identifiers first derivation step simplify transition lip rule replaced annotated subterm flipped operands equality reannotated new term fresh identifiers also added set used identifiers since propagation history empty remains unchanged next derivation step simpagate transition vsubs rule conjunctive context subterm true current goal position first conjunct matches conjunctive context vsubs rule thus subterm replaced identifier added list used identifiers execution proceeds final state true reached annotation goal set identifiers final state rules applicable matching propagation histories consider propagation rule leq program trans leq leq leq initial state hleq leq apply propagate directly without permuting conjunction arrive state leq leq leq trans propagation history prevents rule firing terms however permute terms find new matching namely permute annotated goal call leq leq leq leq leq leq latter element identifiers preserved correct way entry trans propagation history apply propagate arrive leq leq leq leq trans trans propagation history prevents rule trans applied first two 
leq constraints guard also prevents trans rule firing either two new thus reached final state updating propagation histories consider modified version previous example two rules trans leq leq leq first rule enforces idempotence conjunction consider initial state hleq leq leq apply trans rule first two copies leq constraint identifiers hleq leq leq leq trans next apply idempotence leq constraints identifiers hleq leq leq trans trans extra entry trans added propagation history order satisfy requirements definition replaced annotated constraint leq newly annotated term leq defines identifier renaming since trans element propagation history trans must also element hence history expanded without guard acdtr chrs guaranteed terminate
| 6 |
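Editor's illustration for the ACD term rewriting record above. The Cadmium pseudo-code it reproduces normalises terms bottom-up while threading a conjunctive context downward and revisits a conjunct whenever its context changes; the toy Python rewriter below follows that shape on the record's running example, where a sibling equation lets f(x) be rewritten without ever distributing the conjunction. The term encoding and the three hard-wired rules are invented for the sketch; matching substitutions, guards, propagation histories and identifiers from the formal semantics are all omitted.

# Toy bottom-up normaliser with conjunctive-context matching.
# Terms are tuples ("op", arg1, ...) or atoms; lone lower-case strings
# act as variables. Hard-wired rules: an equation X = T found in the
# conjunctive context rewrites X to T (a simpagation-style rule);
# "one" -> 1; ("f", 1) -> "false".

def is_var(t):
    return isinstance(t, str) and t.islower() and t != "false"

def try_rules(t, cc):
    if is_var(t):
        for c in cc:                       # look for X = T among the siblings
            if isinstance(c, tuple) and c[0] == "=" and c[1] == t:
                return c[2]
    if t == "one":
        return 1
    if isinstance(t, tuple) and t[0] == "f" and t[1] == 1:
        return "false"
    return None                            # no rule applies

def normalise(t, cc=frozenset()):
    if isinstance(t, tuple) and t[0] == "and":
        # Each conjunct sees its current siblings as conjunctive context;
        # iterate to a fixed point because rewriting one conjunct can
        # change the context of another (the record's renormalisation).
        args = list(t[1:])
        changed = True
        while changed:
            changed = False
            for i, a in enumerate(args):
                others = frozenset(args[:i] + args[i + 1:]) | cc
                b = normalise(a, others)
                if b != a:
                    args[i], changed = b, True
        t = ("and", *args)
    elif isinstance(t, tuple):
        t = (t[0],) + tuple(normalise(a, cc) for a in t[1:])
    r = try_rules(t, cc)
    return normalise(r, cc) if r is not None else t

goal = ("and", ("=", "x", "one"), ("f", "x"))
print(normalise(goal))  # -> ('and', ('=', 'x', 1), 'false')

Using a set for the sibling context incidentally bakes in the idempotence of conjunction that the record's declarative semantics assumes; the real system additionally records propagation-history entries so that propagation rules fire at most once per matched subterm.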